GenAI's Missing Ingredient: The Emotional Work of Adoption
Secret Cyborgs and Productivity Theater: What's holding back generative AI adoption at work
If generative AI at work were a baseball game, we wouldn’t be out of the first inning.1
I'm skipping the DeepSeek analysis—plenty of others have that covered, including debates about whether it plagiarized the plagiarism engines that preceded it.
Instead, I want to focus on the humans: how they interact—and why they don't interact—with our current tools and those on the horizon. These insights come from recent conversations with leaders wrestling with adoption (even inside tech companies making AI products) and this week's Charter "Leading with AI Summit." Quick plug: if you're not a Charter subscriber, you should be.
Focus matters: shift from broad efficiency to targeted opportunities
The "do more with less" era isn't working out as planned. New BCG research shows companies finding success take on half as many initiatives and spend 80% of their AI budgets on reshaping core functions and inventing new offerings, not on chasing incremental productivity gains everywhere.
Jared Spataro said he’s hearing the same from executives. It’s a change in Microsoft’s narrative that makes sense: even with Copilot being “in the flow of work,” individual adoption still lags expectations. He described one company saving tens of millions by using LLMs to overhaul invoice processing, but substantial transformation stories remain rare outside tool providers. (If you've got one, I'd love to hear it!)
The Secret Cyborg problem
Leaders aren’t giving up on individual efficiency gains. Pushing more of the tooling into mainline workflows might reduce what Kellie Romack called "swivel chair mode": expecting employees to juggle multiple applications mid-workflow. That’s one reason 77% of employees said AI lowered their productivity last year.
Even when tools are used, gains often stay hidden. Sometimes it's what Ethan Mollick calls “secret cyborgs”: people quietly using AI without telling anyone. It's not just fear of job loss: 47% feel like it's cheating.
But there's a deeper issue: most organizations still run on productivity theater, as Rebecca Hinds and Trena Minduri note. We're measuring activity (lines of code, email volume, green "available" lights) instead of outcomes. Atlassian found 65% of office workers prioritize responding to notifications over actual progress.
If employees aren’t measured on outcomes, there’s no way to measure the real impact of generative AI adoption – or even the ROI on training programs. It’s the same issue plaguing the shift to flexible work: leaders focus on visual cues and noise, and people produce sound and fury, signifying nothing.
I’ve heard from a few executives who plan to address this by brute force: raising the load on teams and expecting managers and teams to pick up the tools and figure it out. That drive will undoubtedly work for some, but it will also crank up burnout and may widen the gaps in AI adoption.
Investments needed
For all the executive excitement about AI's potential, there's a stark reality: only 25% of workers have received any AI training. More concerning? Leaders themselves are often too uncertain and burned out to guide their teams through this transformation.
This exposes the core truth about AI adoption: the technology is the easy part. The real work, as Rodney Evans notes, requires “depthfinding”: understanding the human motivations and barriers that shape how people work. Without tackling culture, processes, and incentives head-on, even the best AI tools become expensive shelfware.
Leaders, Charter speakers and recent studies from Asana, BCG, Microsoft, Slack, and Upwork point to common patterns of what works:
Make it permissible; build norms. Don’t just send a memo: only a third of employees are aware of their firm’s policy, and 48% are uncomfortable admitting to their managers that they use AI. Leaders need to make it OK.
Point to growth. You can't fake authentic intent: people know if you're using AI just to cut headcount. But most companies actually want to grow and expand. Frame AI adoption around building new businesses and creating opportunities for development; tap into people's natural desire to learn and advance.
Leaders, get into it. As Helen said, “do AI, don’t talk AI.” I know first-hand how hard it is for us old dogs to learn new tricks, and have watched easier tech adoption issues (hello Slack) run aground at the executive suite. Leaders need to get their hands into the tools with their teams, understand what works and set an example.
Leverage early adopters. The first movers will figure out what works for them. Not just individuals, but teams willing to experiment with new ways of working. Make them champions, and turn what they do into workflows others can reuse, lowering the cost of adoption.
Train as a team. Shift from individuals being expected to learn on their own to hands-on, snippet-based exercises done inside teams. It needs to be both mandatory and a real investment of time, building a culture of experimentation.
Don’t ask people to do it on magic time. None of the above will happen if learning and experimentation are expected to happen on top of a growing workload and the pressure to hit this quarter’s numbers. Short-term efficiency pressure is rampant, but if you want to succeed, make getting your team up to speed one of your team’s four deliverables this quarter.
Agents as teammates?
We're still figuring out what an "Agent" even means: AI tools taking on broader workflows and acting on behalf of people. But more agents might not be the answer. As one CHRO told me last year, "It’s confusing enough that we have 20 apps for employee experience; we don't need 100 agents."
Microsoft and Google think they have the solution. Spataro compares Copilot to the iPhone, with Agents as Apps. (That makes Gemini Android, naturally.) But the bigger debate is whether to "treat Agents as teammates." I get the analogy: successful ChatGPT users treat it like an intern, providing detailed instructions and examples of "good" work. Some love this approach; others hate it.
The anthropomorphism has gotten extreme. One AI company trolled San Francisco with billboards boasting their agents "won't complain about work-life balance." Great for their sales, terrible for broad adoption, and fuel for the "force it down their throats" crowd.
Jared advised against developing emotional attachments to AI, which feels impossible given how we're marketing it. Squaring "here's my AI teammate" with "but don't get emotional about it" fights human nature. I worry about where this leads.
Other quick notes…
Coaching, not just productivity.
Many managers are under-skilled and untrained; AI can help in many ways as a coach. Kevin Delaney noted that he’s fed in transcripts of sessions he’d moderated and gotten back great insights on a skill he’s had for decades.
GenAI can help close the gender gap in interesting ways. As Erin Grau and Karin Klein pointed out in their TIME OpEd, women are 24% less likely to have a senior leader mentor or coach at work. The coaching applications on Coursera’s platform are already used more by women than by men.
Generalists, not specialists.
The rise of the generalist – and the need for EQ over IQ – is becoming more and more apparent. Jared likened the shift to his team moving from a football team, with specialists in many roles, to rugby, where players cover more of the field.
Closing thoughts
We're still in the first inning of this game. The tools will get better; the main takeaway from DeepSeek might end up being that it exposes its chain of thought, making these tools more approachable to many.
But one thing is clear: companies that treat AI adoption as an opportunity to transform work alongside their teams—rather than despite them—will have the advantage. The desire for growth and development is nearly universal; smart organizations will tap into that energy rather than fight it.
Whether your company leads or lags, as individuals we can't sit this one out. The game is on, and the rules are being written in real time.
What's your take—are you seeing organizations get this right? And what's working for you personally?
1 Get used to the occasional baseball analogy. My son’s a research analyst for the Toronto Blue Jays which means he’ll probably meet Max Scherzer, the biggest psychopath in sports.