The Transparency Trap
Why "We Don't Know" Kills Trust in the AI Era
AKA, what I got wrong about AI and transparency
Employees are worried: layoffs keep coming, and headlines flash with AI threats to jobs. It’s a situation where admitting uncertainty backfires: when people look to leadership for reassurance, hearing “We’re figuring it out” only makes things worse.
I thought full transparency was the better answer, and I was wrong. In an era where trust is already in short supply, the transparency I’ve long believed builds trust too often backfires.
This was one of the most striking parts of a recent Charter Forum conversation with Dr. Gabriella Rosen Kellerman, a psychiatrist-turned-organizational behavior expert who’s spent the past decade studying how people and organizations thrive through uncertainty.
Most recently Chief Product Officer and Chief Innovation Officer at BetterUp, and co-author of Tomorrowmind, Gabriella now serves as an Expert Partner and Director at BCG, where she helps organizations navigate the collision of AI, trust, and leadership. (Full disclosure: I’m also a senior advisor at BCG.)
Gabriella Rosen Kellerman
Trust busters
The trust crisis extends far beyond your organization’s walls. Instagram chief Adam Mosseri recently warned that platforms would “get worse at [detecting AI fakes] over time as AI gets better at imitating reality” and people were likely to default to skepticism.
One can only hope: when Casey Newton of Platformer investigated a viral whistleblower story about delivery company abuses, he discovered it was an elaborate AI-generated hoax, complete with fabricated screenshots and fake employee accounts. The manufactured outrage fooled thousands and damaged DoorDash’s reputation and business.
In a world where the US government fabricates a new history for January 6th and people can’t trust what they see online, you can understand why transparent uncertainty might further reduce people’s trust in leadership when it comes to AI’s impact on their jobs.
Trust in business leadership is already dropping as well. The Edelman Trust Barometer recently showed declines for the first time in years. Now AI anxiety is accelerating the erosion.
The transparency trap
Gabriella shared a case study that crystallized the problem: A technology company facing AI-driven workforce reductions struggled with rumors of impending cuts that were eroding engagement. Leadership decided to “lean into transparency,” telling tens of thousands of employees that layoffs were coming but that they didn’t know which roles would be eliminated or when.
“This approach worsened the distrust,” Gabriella explained. The employees felt either information was being withheld, or leaders weren’t doing their jobs. They couldn’t imagine that leadership didn’t know the answers, or at least have a set of scenarios to walk through.
The issue wasn’t honesty; it was abdication of responsibility. In a climate already marked by broken promises and continuous layoffs, “we don’t know” didn’t signal integrity. It signaled that leadership hadn’t done the hard work of scenario planning, hadn’t identified what’s constant amid change, and couldn’t articulate a path forward.
What would have worked? At a minimum, outlining scenarios and giving people both ways to influence those outcomes and support to navigate the change, as Zapier’s Lauren Franklin did with her support team in a similar situation.
Compounding distrust in AI: workslop
Gabriella’s research team at BetterUp coined a term for the AI-created detritus flooding our inboxes: “workslop,” AI-generated content that looks professionally formatted (beautiful hierarchies of information, polished language) but contains little actual substance or insight and often lacks the specific context of the organization or challenge.
40% of professionals report encountering workslop regularly. Among those who’ve seen it, workslop accounts for roughly 15% of peer-to-peer documents. The impact goes beyond wasted time: receiving workslop lowers colleagues’ perceptions of the sender’s capability and, critically, their trustworthiness.
“One of my strong hypotheses here is that it’s really not about the AI,” Gabriella said. “It’s that we haven’t set clear standards, norms, quality bars for what it means to do good work that actually does advance the task.”
The problem mirrors what’s happening in the broader trust landscape. Just as AI fakes erode trust online, AI workslop erodes trust internally. But the root cause is leadership’s failure to establish guardrails, not the tech.
“It’s been a minute since leaders were expected to be the ones to set a quality bar in this sort of rudimentary way,” Gabriella noted. “We have to do that over and over, but I also don’t think we’re necessarily the kinds of leaders ourselves that we’re excited about that.”
The answer involves getting your hands dirty: leaders getting directly into the work with their teams, reviewing content and being explicit about what “good” looks like.
The junior staff competence gap
The workslop problem hits hardest with less experienced employees who lack the judgment to distinguish between AI output and quality work. As Gabriella put it, it’s a manifestation of the Dunning-Kruger effect in the AI era.
“What I’m seeing with more junior colleagues is this overuse,” said one Forum participant. “Rather than treating it as a tool and layering in the human discretion and nuance on top of it, they’re just taking what’s produced and shipping it.”
The issue isn’t just inexperience—it’s that junior employees don’t yet have the expertise to recognize what’s missing. “When you lack a certain amount of competence, you also lack the ability to see where you have gaps because you’re not competent enough to see what you’re missing,” the same leader explained.
This creates a vicious cycle: Organizations push AI adoption to boost productivity, junior staff produce workslop because they lack quality benchmarks, that workslop damages trust, and leadership responds with more mandates rather than clearer standards.
SPECIAL OFFER
Charter’s Leading with AI Summit
Gabriella’s part of a stellar lineup of speakers and panelists, including…
NYC on Feb 10th: Gabriella along with Nickle LaMoreaux (IBM), Sebastian Siemiatkowski (Klarna), Molly Kinder (Brookings), Katy George (Microsoft), Iavor Bojinov (HBS), Brandon Gell (Every), Melanie Rosenwasser (Dropbox), Mary Alice Vuicic (Thomson Reuters), and many others.
SF on Feb 24th: Donna Morris (Walmart), Amjad Masad (Replit), Hannah Prichett (Anthropic), Nick Bloom (Stanford), Fiona Tan (Wayfair), Iain Roberts (Airbnb), Rani Johnson (Workday), Aneesh Raman, Jessica Lessin, Helen Kupp and Brandon Sammut (Zapier) and more.
I’ll be at both events and would love to meet everyone there. Use the code FORUM for half off in-person registration (virtual is free!)
Building trust: coherence over transparency
The answer to both the transparency trap and the workslop problem is the same: build trust by delivering on what you promise, even when what you’re promising is difficult.
“I’ve been thinking a lot about the idea of coherence,” said Kit Krugman, SVP of People at Foursquare. “Sharing a consistent message supported by behaviors you see across the organization. Even if you’re saying: ‘we’re going to have a really hard next six months,’ and then you do, you build trust by closing the delta between what is said and what actually happens.”
This reframes the trust challenge entirely. Leaders don’t need all the answers. They need alignment between their words and their actions, between their strategy and how they treat people.
“We see strategy and vision and where we’re going as one thing,” said Tracy Layney, former CHRO at Levi Strauss and now faculty at Chicago Booth. “And then how we treat people, employee value proposition, empathy as something else, but they’re all part of the same message, especially when you’re talking about building trust. Strategy and empathy are both necessary because they contribute to building trust, and that trust is necessary to achieving results, especially in the long term.”
For AI specifically, this means being explicit about both opportunities and risks, establishing clear quality standards, and demonstrating through behavior—not just words—that the organization values human judgment alongside AI capabilities.
If you like what you’re reading, please like ❤️, subscribe 📨, and share 🔄. Thank you!
What leaders must do differently
Here are three specific actions the moment requires:
First, managers must set explicit quality standards for AI-assisted work. This means moving beyond “use AI to be productive” to “here’s what good work looks like with these tools.” Establish clear expectations: when is AI appropriate? When should it be avoided? What does “AI-assisted but human-owned” actually mean?
Gabriella emphasized this isn’t a one-time exercise: “We have to do that over and over” as the technology and its capabilities evolve. What constituted acceptable AI use six months ago may not be sufficient today.
Second, make employees responsible for their output, regardless of the tools used. The person who ships the work owns the quality, full stop. This means training employees to recognize workslop, to apply critical thinking to AI-generated content, and to understand that using AI doesn’t absolve them of responsibility for the final product.
Third, leaders must engage directly with the work and their teams. This is how you simultaneously set quality standards and build the coherence that creates trust. When leaders are in the work—reviewing documents, providing feedback, discussing decisions—they can spot workslop, model critical thinking, and demonstrate that AI is a tool that amplifies human judgment, not a replacement for it.
Engagement also solves the transparency trap. When leaders are actively involved in the work, they can articulate specific scenarios, identify what’s constant amid change, and demonstrate competence through action rather than vague reassurances.
Of course, that also requires that they have enough bandwidth to be in the work. If instead you’ve “delayered” management, doubled spans and kept them in meetings…well, good luck to you.
The bottom line
Trust gets built through demonstrating competence in navigating uncertainty. That requires leaders who set clear standards, hold people accountable for quality, and roll up their sleeves to get into the work themselves.
As AI anxiety grows and workslop proliferates, the organizations that will thrive are those where leaders move from announcing AI drives to actively shaping how their teams use these tools effectively.
How’s trust in your organization? Are you seeing the relationship between trust, transparency and AI adoption?
Reading, listening, watching…
Fewer rote tasks, more burnout. From the Wall Street Journal:
AI tools that can sort and summarize emails, take meeting notes and file expense reports promise to free us to concentrate on the important stuff.
This sounds great. The catch is that our brains aren’t capable of thinking big thoughts nonstop. And we risk forfeiting the epiphanies that sometimes spring to mind while doing easy, repetitive job functions.
Go (way) deep: Michael Burry, Dwarkesh Patel, Patrick McKenzie, and Jack Clark get into the reality (or lack of it) of AI’s impact on productivity today, the extent of the bubble we’re in, and a LOT more. Worth a half hour of your time going deep.
Riffing with David Green: David and I get into why the patterns behind AI and flexibility failures and successes are so similar, and a lot more, for an early 2026 podcast start! Check out the podcast transcript, or listen on Apple or Spotify.
Eliminating choice reduces your talent pool: my latest for Charter taps new research from Wharton showing that companies in states that enacted trigger laws restricting abortion post-Roe faced a 9% decline in applicants and 10% higher costs to get people to take their jobs—true for men as well as women.

Normally I read articles, but I found this piece easy to listen to as a podcast!
The difficulty is that leaders have long been criticised for pretending they know more than they do, papering over cracks and offering false certainty. In response, many have swung toward radical honesty and transparency. But either way, it’s frontline workers who bear the greatest cost of uncertainty.
AI has already invaded society, and while much of what it produces is bland or “sloppy,” it’s undeniably useful for everyday administrative work. It’s hard to imagine responsible companies not using it. The real issue isn’t AI itself, but how it’s used and judged.
Those who think critically can usually spot AI slop because it lacks a human element. That’s actually encouraging. Writers and professionals who use AI to challenge their thinking and add depth will stand out, while lazy or incompetent use will be exposed. I think of AI slop like solving a maths problem without showing your working: the answer might look right, but without evidence of reasoning, it doesn’t pass.
As for leadership, even if this truly is an unprecedented moment and they don’t have all the answers, it’s still their responsibility to do the hard work by planning scenarios, and finding ways to protect their people’s futures.
It is good that they are taking the transparency route, but these leaders still need to put in some effort and use the skills that got them into those high-ranking, high-earning positions in the first place.