When AI Meets Intuition
Why embodied, intuitive intelligence may be the missing piece in how we understand AI
I'm walking my neighborhood on a cold December morning, earbuds in, listening to Stuart Russell on Diary of a CEO.
He's one of the leading AI researchers in the world—brilliant, articulate, deeply concerned. He's explaining the math: AI companies estimate up to a 25 percent chance of human extinction from their own technology.
The acceptable risk, he argues, should be more like 1-in-100-million—the standard we use for nuclear reactors.
They're off by a factor of tens of millions.
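The gap is easy to check with back-of-the-envelope division, using the two figures he cites (a 25 percent estimate against a 1-in-100-million standard):

$$\frac{0.25}{1/100{,}000{,}000} = 2.5 \times 10^{7} \approx 25 \text{ million}$$

In other words, by their own numbers, the risk being run is roughly 25 million times the threshold we would accept for a reactor.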
The logic is airtight. Nuclear reactor analogies. Probability calculations. The need for proof of safety before deployment.
And yet, as I pass the blue house on the corner where someone's left their holiday lights on all day, something in me isn't accepting it.
Not because he's wrong.
But because the entire framework—the binaries, the calculations, the framing of AI as a problem to solve through control—feels like it's missing something essential.
My intuition is saying: This is important AND insufficient.
Why the AI Conversation Feels Too Small
Later that day, I catch myself doing the thing.
I Google "environmental impacts of AI" because I know it's bad and I want to build my case. I'm looking for evidence I can cite in the next conversation where someone asks my opinion.
I notice myself skimming past nuance, seeking ammunition for a position I haven't even fully formed yet.
I want to be the person who cares about the right things in the right ways. I want to have an informed opinion ready. I want to not be naive.
And maybe you do this too? We scroll through headlines, land on a thread that feels "right," and then reverse-engineer our certainty from there.
The AI discourse rewards this. You're either a doomer (existential risk! regulation! pause!) or an accelerationist (abundance! progress! innovation!).
You're either worried about bias and environmental costs, or you're excited about creative potential and economic transformation.
As if these are opposing positions rather than different facets of the same complex reality.
“What if your inability to choose a side isn’t confusion—it’s intelligence recognizing the available positions are inadequate?”
The Trap of Either/Or Thinking About AI
The pressure to choose a side is enormous—and it's not just personal. Creative businesses feel it too.
Should you declare yourself "AI-free" to appeal to one audience? Embrace AI tools publicly and risk alienating another? Stay silent and seem out of touch?
There's this unspoken demand that everyone—individuals and businesses alike—plant a flag and defend it.
Because uncertainty feels like ignorance. Like weakness. Like not caring enough, or not being smart enough to figure it out.
So we perform certainty—Googling our way to a defensible position—even when our actual lived experience with AI is far more nuanced.
When One Kind of Intelligence Dominates the Conversation
Look, this isn't anyone's fault. Polarized thinking and analytical rigor get funding, airtime, credibility.
We need safety research. We need smart people doing hard math.
This kind of intelligence excels at categorization, risk calculation, breaking complex systems into components.
But when this becomes the ONLY lens, we lose access to other kinds of intelligence we desperately need.
Because here's the thing: when I tune into my experience with AI—not what I think ABOUT it, but what I feel when engaging with it—the signals are far more complex.
What Our Bodies Know Before Our Minds Do
Using AI tools like ChatGPT let me stay in the flow of my manuscript without losing steam to every micro-decision along the way.
The endless small choices that used to derail my writing sessions—a better word here, a sharper transition there—could be handled without breaking my creative momentum. What a joy! I'm on my third book now, something that felt impossible before.
That expansion. That sense of creative flow. That's not a thought—it's a somatic response. Information from my body about what's generative and alive.
And alongside that expansion? Other signals.
Unease about concentrated corporate power. Recognition that we need community and dialogue around this. Awareness of real environmental costs. Concern about creative labor and attribution. A sense that both promise and peril are real AND neither captures the whole truth.
These aren't "just emotions" to be transcended by better analysis. They're sophisticated pattern recognition operating at a different frequency than rational thought.
Our minds are designed to filter sensory information so we can function—helpful, but limited. Our bodies and intuition are processing MUCH more data. They're picking up on complexity, emergence, relationship, and things that haven't been named yet.
When Russell talks about probability and control, he's using one kind of intelligence—analytical, reductive, focused on prediction and certainty.
When I notice expansion while co-creating with AI, I'm using another kind—embodied, relational, attuned to emergence and possibility.
Both matter. But only one is dominating the conversation.
“There is more than one kind of intelligence—and we need all of them right now.”
This Isn’t About Ignoring AI’s Real Risks
Let me be clear about something: I care deeply about the existential risks Russell is naming:
The concentration of power in a few companies developing this technology without transparency.
The environmental costs.
The potential for misuse, manipulation, job displacement, and the erosion of human agency.
This isn't about dismissing analytical thinking or pretending the stakes aren't real.
This is about recognizing that the stakes are so real that we need MORE than polarized thinking to navigate them.
AI Isn’t Binary—It’s Emergence, Paradox, Relationship
Because while analytical intelligence excels at finding certainty and control, it struggles with emergence—things that arise unpredictably from interaction. And with paradox—holding that multiple contradictory things can be true. And with relationship—understanding something through connection rather than dissection.
AI is fundamentally about emergence, paradox, and relationship.
It's a technology AND a mirror. A tool AND a collaborator. It carries real risks AND real possibilities. It requires safety frameworks AND philosophical inquiry. It demands regulation AND experimentation. It needs both caution AND curiosity.
When we only use analytical intelligence, we get trapped in either/or thinking: ban it or accelerate it, fear it or worship it, control it or submit to it.
When we dismiss intuitive and embodied intelligence as "soft" or "woo," we lose access to the very capacities we need to navigate novelty and the unknown.
We can't even agree on what "intelligence" means—and yet we're racing to create artificial versions of it while using a pretty vague definition to guide us.
You Don’t Need a Final Opinion on AI
So here's what I want to offer:
You don't have to have an opinion yet.
You don't have to be a doomer or an optimist. You don't have to land on whether AI will save us or destroy us. You can be genuinely, intelligently unsure—and that might be the most honest response available right now.
What if your inability to choose a side isn't confusion—it's your whole intelligence recognizing that the available positions are inadequate?
What if the pressure you feel to "figure out where you stand" is the culture trying to force unprecedented complexity into comfortable categories?
What if your body's expansion when you use AI AND your unease about corporate control AND your concern about environmental impact AND your curiosity about what's possible are all legitimate intelligence speaking?
What if you don't have to reconcile these into one neat position?
Maybe the wisest thing right now is to stay present to the complexity. To let your experience—your curiosity, your concerns, your joy, your questions—be legitimate intelligence rather than confusion waiting to be resolved.
This doesn't mean passivity. It means experimenting with AI while staying awake to how it affects you and your work. Asking questions your company or business needs answered. Noticing where you feel manipulation versus genuine possibility. Learning what you want to learn. Speaking up about concentrated power while staying curious about creative potential.
Holding both the existential risks AND the human benefits as real.
These aren't soft questions. They're the questions that honor the full scope of what's happening.
A Different Way to Engage With AI
Next time you encounter AI discourse—a podcast, an article, a conversation—pause before trying to form "the right opinion."
Notice: What does your body say? Expansion? Contraction? Curiosity? Dread?
What emotions arise? Where do you feel pressure to choose a side? What questions are nudging forward?
What's your actual lived experience, separate from what you think you should feel?
You don't need the "right" answer. You need your own honest, complex response.
Because the truth is: we're all navigating something unprecedented. The experts don't have it figured out. The tech companies don't have it figured out. The doomers and optimists are both touching part of reality.
The existential stakes are real—which is exactly why we need all of our intelligence, not just the parts that can be measured and calculated.
Your body wisdom matters. Your intuition matters. Your joy matters. Your unease matters. Your questions matter.
There is more than one kind of intelligence. More than one way to be in the world. More than one way to engage with something we've never encountered before.
Maybe the wisest thing we can do right now is honor that.
I don't have this figured out. But I'm paying attention. And if you want to think through these questions with me—about AI, creativity, what "intelligence" even means, and where we're all headed—that's exactly what I explore in my newsletter.
Every month or so, I share what I'm discovering as I write about (and with) AI, navigate the future of creativity, and finish Book 3 of the Game of Paradise series.
Want to dive into the world where these questions began?
Explore the Game of Paradise series, where the NEWRRTH—an AI intelligence guiding humanity's survival—first came to life:
The One Game | The One Exiled | The One Reborn (coming 2026)