  1. We fail to understand one another.
    Language is a poor approximation of reality, so you can assume there's some misunderstanding in every conversation. (e.g., "Walk the dog, please." "Sure. Oh, now? A long walk? Then, no. You do it!") This becomes a problem, and a trap, when you want to find alignment or tackle complex challenges.

  2. We play it too safe.
    We debate the actions of industries and generations, drawing credibility from reports that live "out there." We rarely risk touching on the personal, the practical, or local, where we might be judged and held accountable. While this feels reasonable, it also robs us of agency. We become observers debating the movement of forces beyond our control, and retell stories in which we only appear as the helpless victims of other people's bad choices.

  3. We let others determine our options for us.
    This is a natural consequence of the first two traps. The media sets the talking points, and it's easy to start there. The trap: modern news cycles give us so much to talk about that we can spend all our time debating the latest, without ever questioning whether these are the conversations we need to have.

In part 1, I shared examples of how these patterns play out in conversations between individuals. The same patterns play out at scale. Today, we'll see if we can break out of these traps in our conversations about AI.

Six AI Conversations Everyone Is Having

Zoe Scaman is a strategist who works with senior leaders at major companies on emerging technology. She spends her days in the rooms where these conversations happen: keynotes, client away days, boardroom strategy sessions. In her article The Six Loops, she catalogues the six conversations she hears on repeat:

  1. Fear: AI will take our jobs. Adapt or die.

  2. Hype: AI will solve everything. The singularity is near.

  3. Efficiency: We can produce more, faster, and cheaper.

  4. Exceptionalism: We're special. AI can't do what we do.

  5. Tactical: Here's how to prompt better and optimize your workflow.

  6. Minimizing: It's just a tool. Nothing fundamental is changing.

She describes it as a setlist that never changes: the vocabulary gets shinier, but the conversations underneath haven't shifted. Read her piece here.

Here are those conversations placed onto a 5D topic map.


  • (Time: Far future, Scope: Global change)
    The Fear and Hype conversations paint big, fuzzy pictures out there in the future somewhere. One is highly positive, the other negative.

  • (Time: Now, Scope: Tasks)
    The Efficiency & Tactical conversations focus on executing today's tasks differently. The Minimizing conversation suggests that's all there is to it. Tools make tasks go. No need to take the conversation further.

  • (Stakeholder: Creative professionals & super smart folks)
    The Exceptionalism conversation says these people have a uniquely valuable and irreplaceable perspective. Oh dear.

As Ms. Scaman points out, it seems that the hive mind is broadcasting these conversations on repeat everywhere. Fair enough. They're interesting conversations. Lots of implications, and maybe for some groups, these are the conversations they need to have.

But for most, they're a trap. The media, the AI companies, and every fresh-faced hustler promising that you can prompt your way to glory want you to focus here. Do you?

It’s like the AI industry has our flashlight, and now they’re projecting these conversations across our cave wall, expecting the rest of us to fixate on the shadow puppets.

What if we stopped staring where they’re pointing and looked around instead?

If you find that circling these loops is doing nothing to quell your existential dread:
Look at all that empty space!

What aren't we talking about?

When your current path leads in circles, it's time to take a new one.

Step back and consider the map, and you'll realize there are infinite possibilities.

And frankly, that's not useful either. Once we recognize the trap, a single different conversation can break the loop. We need to explore just enough to find one or two promising paths: conversations that look like they might lead somewhere good.
(If they don't, we can simply explore some more!)

Let's walk through how the three traps may be at play in our AI discussions and what opens up when we escape these doom loops.

Breaking Trap 1: See what you mean.

We bust out of this one by getting curious. Find out what people mean, and put your new understanding on the map.

When someone says "AI will destroy society," whose society? How will it be destroyed? What about it will be destroyed specifically?
I'd guess that their predictions for my mom, truck drivers, Wall Street bros, and dentists in Nairobi look pretty different, but I can't know until I ask.

Capturing the specific changes people have in mind fills in a lot of territory. Then, you can decide whether your group wants to explore the implications of those changes further.

Once you gain specificity, disagreements often dissolve or transform into something more productive.
For example, let's say you clarify that you're concerned about the prospects for newly minted software engineers. Follow that thread, and you'll soon uncover ideas for individual adaptation, job skill transfers, community support, and institutional change. That's a lot more to work with than you had at the start.

As an aside, here's a delightful skills transfer: using classic opera to sell cars!


Breaking Trap 2: Step gently outside the safety zone.

Several of these loops come securely swaddled in bubble wrap. While the first trap stems from confusion, we step into this one by avoiding discomfort.

The Tactical, Minimizing, and Efficiency loops are all safe because they carry forward cultural norms regarding the virtues of self-improvement, staying current, and increasing productivity.

You're getting more done, it's better, and now you're the in-demand AI expert everyone needs? Wow! Well done, you!

It's tempting to confront this trap by poking at the values (e.g., is doing more faster always a good thing? Really?), but lots of folks will feel personally attacked by this and shut down.

Instead, see what happens when you sway the conversation a few degrees in a new direction.

Take the Minimizing conversation. The assertion: AI is a tool.
That speaks to AI at the scope level of individual tasks. Now nudge it a bit.

  • Do your options change when you think about different kinds of tasks, say writing a report vs directing air traffic?

  • Do your options change when you think about all the tasks in a system? Is this a tool that you can use throughout a department, business, or school? What about for all the tasks across a city?

  • What's great about its tool-i-ness? When might you want it to be more than a tool?

  • Who is it a tool for? Who shouldn't use this tool? If some people won't use this tool, is that ok?

These questions invite us to expand the scope of our conversation, consider the positives and negatives, and the perspectives of other stakeholders.

They also reveal paths where we have agency, because while most of us don't get a say in what Anthropic or DeepSeek releases, we do get to choose how we use what they ship.

Speaking of what Anthropic and others say: their predictions set up our biggest trap. But we don't have to accept that the options they're giving us are our only choices.

Breaking Trap 3: Find more viable options.

This week, Jack Dorsey laid off 40% of his employees and blamed it on AI. The CEO of Anthropic predicts that 50% of entry-level white-collar jobs will disappear in the next 1 to 5 years. The AI community says they're raising the alarm so all of us "regular people" can prepare for a future with 20+% unemployment.

They paint this future as the inevitable consequence of the Efficiency loop.

AI creates productivity gains →
You can do what you're doing today with fewer people →
You will reduce headcount.

First, we can push back on these claims and evaluate them in our context, just like we should when we suspect Traps 1 and 2.

But let's say they're right! Let’s accept that a lot of the work we do today could be done by machines now or in the near future.

Even when the gains are genuine, layoffs aren't automatic.

What else could you do with a 60% productivity boost?

Perhaps you'd choose to:

Strengthen Foundations

  • Tackle the backlog: technical debt, deferred maintenance, tired messaging (When your team gets more productive, every project in the queue gets cheaper, which may make more of them worth doing.)

  • Aim for 5-year returns instead of 5-month returns

  • Deepen relationships with customers, suppliers, and partners while competitors are distracted by restructuring

  • Build capacity and buffer for whatever comes next (Because you know AI isn’t the only disruption we’re facing.)

Redesign the Work

  • Reduce hours

  • Renegotiate roles, expectations, and compensation

  • Redesign your organization WITH your employees

  • Re-skill and redeploy people into roles that weren't affordable before (As the CEO of ServiceNow promises his employees, "lift them and shift them")

Take On New Challenges

  • Pursue new business lines or markets

  • Get more eyes on strategy, new ways of working, or community building

  • Invest in higher quality while others cut corners

  • Solve the problems you've been saying yes to but never prioritizing

Wait and see. Your favorite AI tool was down last week, after all.

What else? I'd love to see more options in the comments, and I bet some struggling leaders might too.

Leadership teams have so many options that don't get serious consideration in the media.


This is Trap 3 in action: a small group of CEOs and investors set up a one-move game, and everyone else is playing along.

Summing up: when you find yourself stuck in a conversation everyone seems to be having but that isn't getting you anywhere, nudge the group into new territory.

That's nudging, though. Still reactive, still working to shift the frame someone else boxed you into.

With our map in front of us, we don't have to start there. We can see the blank spots and get ahead of the conversations they aren't having yet.

Exploring the Gaps: Can we talk about redesigning institutions?

Three areas fall in the empty space on the map, and they're where I want to go next.

  • Lessons from History:
    Today's conversations focus on the technology and productivity changes, then doom-whisper that we're really facing a civilization shakeup. But what happened before when transformative technologies emerged? The six loops above are all future-fixated or present-obsessed. In the next part, we'll explore the paths other technological breakthroughs took before institutions adapted.

  • All the Players:
    The loops center on tech leaders, executives, and anxious knowledge workers. Missing: communities, tradespeople, local institutions, entire industries outside tech. And those folks are not passive bystanders waiting to be disrupted: they have choices and are already making them. And if 20+% of the population does find itself unemployed? That's a lot of capacity set loose. Do we expect them to cry on the couch?

  • The Institutional Layer:
    Between "learn to vibe code!" and "society will collapse," there's a whole layer of organizations, governments, professional bodies, schools, unions, and communities that set the constraints for large-scale change. That layer is barely in the conversation.

In Part 3, we’ll contrast historical changes with those we see today, highlight who's already working in these gaps, and get those missing conversations started.


Working with Us: We’d love to support your teams! Please check out our services here and get in touch.
