The Tin Man problem
Why transformation efforts fail, and what an octopus can teach us about fixing them.
Stay tuned at the end for hot takes on Jack Dorsey whacking 40% of Block employees, Nick Bloom’s latest insights on AI, and Anthropic’s ethics.
Like Work Forward? Please like ❤️, subscribe 📨, and (most of all) share 🔄
Only 12% of transformations create sustained performance gains. The reason has nothing to do with strategy, and everything to do with culture.
Phil Le-Brun and Jana Werner have sat through more C-suite transformation announcements than they can count. Someone declares the company is going to be "digitally transformed" or "truly agile." The latest version? "We're AI-first."
Le-Brun’s read on the current movement: “It’s like saying I’m going to be an electricity-first company.” Everyone nods. “Yep, yep, great idea boss.” Then they go back to their silos.
Le-Brun and Werner spent years leading and advising on large-scale transformations at McDonald’s, Amazon, DHL and more, before writing The Octopus Organization. They recently joined a Charter Forum session to share what they’ve learned along the way.
The Octopus Organization, Le-Brun and Werner, 2025
When it comes to change management programs, failure is the constant: only 12% of transformations create sustained performance improvement. When I asked Werner why so many fail, she didn't point to strategy or execution. She pointed at what leaders choose to change: artifacts like organization charts, process flows, and approval chains.
“The go-to approach for most leaders is changing artifacts — like using a PowerPoint where you move lines. What’s much harder to change, but what works, is when you get the switch in mindset that then trickles into a different behavior. And that behavior then naturally creates artifacts for the new way of operating.”
The org chart is irrelevant; it's your culture that needs to change.
The comfort of visible change
There’s a reason leaders default to reorganizing. Artifacts are concrete. You can announce them in a town hall, and put them in a deck. Moving boxes feels like doing something. Two years later when it didn’t work, you can do it again. Progress!
The harder work is invisible and slow: changing what gets rewarded, how questions get asked, whether failure is treated as data or embarrassment. Le-Brun described the gravitational pull back to the status quo: “I can stop and just get back to business as normal. That’s how most organizations operate.” It’s the end-state trap: a change process that has a target, which once reached you realize isn’t enough.
That’s what the authors describe as a Tin Man organization: “a rigid and clumsy character, slow to move and react. He could take instructions but showed little initiative.”
Werner and Le-Brun propose a different model: the octopus. It has three hearts, a distributed nervous system, and is so adaptable that it can change its RNA within 12 hours to accommodate new conditions. As Werner put it: “The idea of changing one thing and then having a new normal, especially now in the AI age, doesn’t work anymore. The idea that all the intelligence sits at the top doesn’t work anymore either. There is no way we will keep up with, let alone thrive, in a world where things change at the pace of AI. We need to distribute intelligence.”
On the end-state trap, Le-Brun was blunt: “As soon as you declare, ‘I’ve reached the end state, I am now Octopus,’ you start going backwards. This is about a living transformation. How do you continue to evolve versus trying to reach that end state?”
Authors Jana Werner and Phil Le-Brun
Curiosity gets beaten out of us
Change requires risk tolerance. Continuous change requires a learner’s mindset, and curiosity. Children ask roughly 100 questions an hour. Adults ask very few. In startups, questions get asked repeatedly. In big organizations, they’re more often limited to the backchannels.
Werner’s explanation: “Curiosity is leaving our organizations because trying and experimenting — failure — has no upside in traditional organizations. The idea that you can do something wrong is career-limiting. It’s embarrassing. It even personally feels painful and shameful.”
Leaders rewarded for having answers bring exactly that habit into the C-suite, then wonder why their teams don’t surface problems early, don’t experiment, and don’t push back when something seems wrong. I learned this myself the hard way. Early on, I was told to be “seldom wrong, never in doubt.” That attitude later led to my teams thinking their input wasn’t valued—just do what the boss says. Their curiosity, let alone willingness to raise challenging issues, was squashed.
They noted the breadth of the challenge: 92% of leaders say curiosity is valued, but only 24% of employees agree their organization supports it. That gap becomes visible the moment someone raises a dissenting view in a meeting.
Werner is skeptical of how leaders typically respond to this data. “You can’t as a leader say, ‘we’re all just going to be psychologically safe now.’ You need to give space in the moment. Be comfortable asking, ‘what am I missing?’ and listening to dissenting points of view.” Behavior, not policy.
Ferrari’s CEO Benedetto Vigna understood this. He signed 300 NDAs to talk directly with suppliers and engineers when he joined the company, then went on CNBC to publicly discuss his own failures. He wasn’t doing it for the public. He was signaling to his own people that experimentation was safe. Le-Brun’s framing: “If your hypotheses are always right, they’re not really experiments. You’re not really moving the organization forward.”
Focus, then distribute
Earlier this week, prepping for a corporate workshop, I was asked how to balance greater autonomy with efficiency. How do you make sure you’re not just going to get those “thousand flowers blooming” but headed in a thousand different directions?
Distributing intelligence doesn’t mean distributed priorities. Le-Brun’s story from McDonald’s draws the line clearly. “We had more priorities than restaurants at one point, and we had a lot of restaurants.” When a new CEO came in, he named three things the company would do that year. One was to go from zero restaurants with e-commerce to 20,000. “It was a forcing function,” Le-Brun said. “Everyone was aligned. Did we add a restaurant yesterday or not? It forced a shift in the mental model and a breaking down of barriers.” Cross-functional teams got a real business outcome and genuine authority to hit it.
Most “AI transformation” programs skip this step of creating tangible goals. The declaration of being AI-first substitutes for a forcing function. Without a scoreboard everyone can read, people fill the ambiguity with whatever feels safe: their existing work, done the existing way.
Light a thousand fires
Sometimes you want a thousand flowers to bloom, or a thousand fires to burn away excess bureaucracy and to re-instill a sense of autonomy in far-flung functions and business units.
Le-Brun described a chief transformation officer at a South African mining company, Werner Swanepoel, whose job was to create the conditions where transformation would happen organically, not to drive a specific transformation. His framing: "Just imagine if you had a thousand people in your organization improving what they did every day. The cumulative effect is significant. And it's unstoppable."
AstraZeneca’s “One Million Hour Challenge” ran a similar play: give employees the mandate to collectively find ways to save a million hours of work. The people closest to the work know where the waste is. They don’t need a consultant to diagnose it.
There’s a mistake leaders reliably make once something like this works, and Le-Brun named it: “Don’t immediately take what they’ve done and say, ‘Here’s the best practice, everyone has to implement it,’ because you’ve just robbed everyone else of ownership.” Forcing a local solution onto the broader organization strips away the context that made it work and removes any reason for the people now implementing it to care whether it does.
The manager problem
One symptom of organizations stuck in Tin Man mode: managers keep accumulating while decision-making slows down. A system that rewards headcount and budget will keep producing managers whose job is to justify their own existence.
I've seen this firsthand, more than once. The bigger the organization, the more likely that headcount and budget size determine stature and rewards. When Salesforce bought Slack, the "leveling" arguments were all about how many humans reported to you, not your impact on the business.
BCG and Ipsos research found that only one in ten people want to become a manager. Le-Brun: "Most of them actually want to practice their craft, to do things that feel valuable. They are just working in the system we've created for them."
Werner sees AI as the mechanism that forces this to change. “How do I make decisions quickly? How do I not need all of these levels when we’re hiring bright people who can make the decision right now within guardrails?” Flatter structures, in the Octopus model, are about cutting latency, not cost.
The same logic applies at the team level. Le-Brun: "Most learning happens on the job. How do you turn a meeting or a retrospective into a learning opportunity? Shift a status meeting to 'what questions aren't we asking?'" An SVP at Airbus, Fabrice Valentin, operationalized this: instead of sending people to training, he gives teams a dedicated week to upskill before each new project, with the learning embedded in the actual work. People find out quickly what transfers and what doesn't, and the failed transfers turn out to be as instructive as the ones that work.
The not-so-secret ingredient: trust
Werner’s sharpest observation landed near the end of our conversation. “Trust is really interesting because with people, it starts with layers. You trust someone to get your coffee and not poison you, and eventually you trust them to look after your kids, maybe for a night while you go away. With AI, it’s a black box. You never fully trust it because you always know there’s hallucinations. So trust is something that is really changing the fabric of what we’re doing more than I’ve ever seen.”
Le-Brun’s corollary: “We never talk about it. Trust is never part of a transformation plan.” And yet everything Werner and Le-Brun advocate — distributed decisions, psychological safety, empowering people to improve their own work — runs on it. Start with the artifacts and hope trust follows, and you land in the 88%. Start with trust, and the right artifacts tend to emerge on their own.
Start today
Most of what Werner and Le-Brun are describing is, in their words, common sense, but it's also hard work. Not because it's complex, but because it requires courage and hard decisions: extremely clear priorities, upending traditional power structures, and focusing on building great team dynamics.
Replace one announcement with a forcing function. Pick a single, measurable outcome, concrete enough that anyone can tell on any given day whether you’re on track. Communicate it publicly. Give a cross-functional team the authority and resources to hit it. Watch what breaks down and fix it in real time rather than in the next reorganization.
Change one meeting this week. Pick a recurring status meeting and replace the update agenda with one question: what aren’t we asking? Or close with a short retrospective. The meetings already exist. They just need redirecting.
Create space for dissent in the moment. In your next leadership session, try Werner’s prompt: “What am I missing here?” Then stop, actually listen, and don’t get defensive or even respond in the moment. Psychological safety is built one moment at a time, and it tends to start with whoever has the most power in the room showing that they’re willing to cede the floor to the least powerful.
Audit your transformation for behavior change, not announcements. For each major initiative underway, ask: what is actually different now in how decisions get made, how failure gets treated, what questions people feel safe asking? If the honest answer is “not much,” you’re still moving boxes.
What’s your transformation program horror story?
ICYMI
Block’s layoffs are an outlier. Their influence might not be.
Jack Dorsey announced last Thursday that he was laying off 40% of Block employees, because AI. There are plenty of reasons to be highly skeptical of this move, but it’s also got the potential to accelerate layoffs, especially in tech.
Similar to Elon Musk cutting 80% of Twitter and Andy Jassy demanding all Amazon employees get back in the office five days a week, it's a shift in the Overton window: an extreme move that makes the once-unthinkable seem reasonable.
My OpEd in both Charter and TIME gets into why Dorsey and Block are outliers, but also some data to keep in mind before deciding to follow suit.
AI hasn't moved the needle yet. But the bets being placed are enormous.
Nick Bloom just released one of the most comprehensive surveys of executives and AI to date: 6,000+ leaders across four countries, partnering with the Bank of England and the Federal Reserve.
Bloom finds that AI has had essentially no net impact on employment and only modest productivity gains so far. But dig into the predictions and it gets more interesting, and more concerning.
Get into the data, a video of his talk at Charter’s summit last week, and some practical advice on what to do next in my latest Charter column (which is also in TIME, go figure).
Trying to change government policy by vendor contract
I'm a supporter of the stance Anthropic has taken against (checks notes) mass domestic surveillance and autonomous killing robots—aka, Skynet.
I'd love to see more leaders like Dario Amodei and Daniela Amodei not backing down from principles their company was built upon.
But any stance comes with risk, and here the stakes have become extreme: Anthropic now faces the threat of corporate death at the hands of the US government. That helps you understand why so many execs remain silent, but given that it's a blow against capitalism, you'd hope a few would find their voices.
Ethical decisions come with costs, and one of the people who understands this best is David Schellhase, former general counsel at Slack. He shares insights from decades navigating those waters as well as his take on the current situation.
“Trying to change government policy by vendor contract is a really difficult method,” he told Elizabeth Ralph. “It’s kind of the wrong tool.”
Read David's interview in The San Francisco Standard.
p.s. LinkedIn buried this one; apparently it’s political?
You’ve come this far! Thank you!!
Please like ❤️, subscribe 📨, and (most of all) share 🔄