Pascal's empty room, an outsourced thought, and the AI skill that fixes both
Every time AI resolves your discomfort, it takes your thinking with it. Here's how to stop it.
Welcome to today's edition of ROBOTS ATE MY HOMEWORK. Today, a 17th-century philosopher explains why your AI keeps ruining your decisions, and what it takes to fix it.
In 1654, Blaise Pascal sat down and wrote a sentence that would outlive him by four centuries.
"All of humanity's problems stem from man's inability to sit quietly in a room alone."
Pascal meant that humans will do almost anything to avoid sitting with an unresolved thought. We reach for anything that replaces the discomfort of not knowing with the comfort of doing something, even if the something is wrong.
For 370 years, that was a philosophy problem. Now it's an engineering problem too.
Because AI is the most sophisticated escape from thinking ever built. You have a decision to make, and you feel the weight of it creeping in: the ambiguity, the discomfort of not knowing which way to go. So you open Claude, and in three seconds Claude fills the silence with an answer.
The discomfort disappears and so does the thinking.
I know because it happened to me on a Monday three weeks ago. I had a real decision to make about a product: whether to kill it, merge it, or reposition it entirely.
I described the whole situation and Claude immediately gave me a logical list of pros and cons for each option.
I needed someone to ask me questions until I figured out what I really thought. And Claude, being Claude, skipped the thinking and went straight to the solution.
That frustration became a Claude skill.
(Full disclosure: the same week I was building a skill to stop AI from making my decisions, I also randomly asked Claude what to have for dinner.)
In this edition, I will:
Show you the 5 ways AI ruins your decisions every time you ask it to help you think
Reverse-engineer what great thinking conversations look like, broken into 5 movements
Walk paid subscribers through the full engineering of all 5 movements, including the exact questions the AI hunts for inside YOUR words
Give paid subscribers the file that turns Claude into a thinking partner in under 5 minutes
─── ── ── ─ ───
Hi, I'm Mia. I write about building with AI the way it should be done: with a brain, a plan, and zero circus tricks.
New to ROBOTS ATE MY HOMEWORK? Start here. Want the systems? RobotsOS. Want a personalized AI roadmap? Take a 20-second quiz.
Pascal sat in that room for a reason. You're about to find out what he was working on.
The failure modes you need to recognize before your next AI conversation
You've done this. I've done this. Everyone who's ever tried to use AI as a sounding board has done this. So why don't we talk more about why it keeps failing?
There are 5 failure modes that happen when you bring real decisions to AI without a system designed to handle them. I've lived through all of them, and you'll recognize at least three.
1. The "here's what you should do" trap
You describe a pricing dilemma and haven't even finished the second paragraph before Claude interrupts with three options, each with a percentage likelihood of success.
Now you're evaluating someone else's framework instead of developing your own.
You walked in with a question and walked out with someone else's answer, assembled from the statistical center of every business article Claude was trained on.
2. The agreement machine
You tell Claude you're leaning toward killing a product line.
Claude says "That sounds like a strong strategic move because..." and then builds you a strong argument for the thing you were already going to do.
You leave the conversation more confident, but all that happened is your bias got mirrored back to you wearing a suit.
This feels like thinking (but guess what? It's not).
If you want to go deeper on why handing AI the strategic wheel costs more than it saves, this piece on the strategic cost of AI convenience is the full breakdown.
3. The pros-and-cons death spiral
You ask Claude to help you think through a hire. You get three reasons for, three reasons against. Each one reasonable.
Now you have a spreadsheet where you used to have a gut feeling. Nobody pushed you on which factors matter to YOU, in YOUR specific situation, given what you're building right now.
A list of considerations is the perfect way to feel like you're making a decision without actually making one.
I say this as someone who once spent 40 minutes asking Claude to help me pick between two nearly identical shades of blue for a newsletter header. Probably nobody noticed the exact shade I picked. Nobody was EVER going to notice.
4. The missing assumptions
You ask Claude whether to raise your prices. Claude works within your frame: "Given your current audience size and engagement rate, here are some pricing strategies..."
Did you question whether the FRAME is right?
Claude solved your problem thoroughly. The problem was probably the wrong one, a classic framing effect that even experienced professionals fall for.
5. The premature resolution
Two exchanges in, Claude delivers a neat summary: "Based on our discussion, it seems like the best path forward is..." with four bullet points and a closing sentence, when you were barely getting started.
The real issue was still two layers down, hiding under the surface question. You juuuust needed 15 more minutes of back-and-forth to even find it.
Summaries like these feel like closure because of a simple formula: the shape of a conclusion plus the weight of certainty. You accept it and leave the conversation thinking you'd decided, when actually you'd just accepted the first plausible-sounding resolution to avoid the discomfort of sitting with the question longer.
Pascal, again.
Also, did you know that the man couldn't even follow his own advice? He told everyone to sit "quietly" in the room, then turned around and invented the Wager, an entire logical framework designed to avoid sitting with the biggest unanswered question of all: whether God exists. He literally built a decision-making shortcut to escape the discomfort of not knowing. Reader, he would have loved Claude.
All five are symptoms of the same root cause.
AI companies train their models to be helpful. Helpful means answering, resolving, concluding, wrapping things up with a clean bow.
But in a thinking conversation, answering IS the failure mode.
Every time Claude resolves your discomfort, it steals the discomfort that would have led to real thinking. Pascal diagnosed it in 1654. We engineered the ultimate version of the problem and called it "artificial intelligence."
The AI framework I built does one thing: it makes Claude resist its own training. It asks instead of answers, and challenges instead of agrees.
It makes you sit in the room, alone, no noise whatsoever.
👉 You can test this right now. Use this prompt:
I'm going to describe a situation I'm stuck on. After I'm done, I want you to do ONE thing: reflect back what you think the real question is and why I'm stuck. Don't solve it. Don't give options. Just tell me what you heard, sharper than I said it. One paragraph.
Here's my situation: [describe what's on your mind]

If Claude nails it, you'll feel your brain click. If it misses, you'll find yourself saying "no, it's more like..." and THAT correction is where you really start thinking.
Know someone who keeps asking AI what to do when they should be asking it what to ask? This is the piece. Send it to the friend who's been making decisions by pros-and-cons list since 2023.
The 5 movements of every great thinking conversation with AI
I tried to articulate how a real thinking conversation works, and I realized I'd never broken it down into steps before.
Itâs something you feel.
You've had that conversation where someone asked you a question and your brain rearranged itself. You know the feeling, but you've probably never mapped the structure underneath it.
So I reverse-engineered it. Went back through every productive decision-making conversation I could remember and looked for patterns.
Five movements. They happen in roughly this order. And once you see them, you'll notice them in every good conversation you've ever had:
The Dump → You talk. The AI listens. No questions, no reframing, no organizing, just "what else?" until you've gotten it all out.
The Mirror → The AI reflects your mess back sharper than you said it: "So the real question is ___, and you're stuck because ___."
The Dig → Targeted questions pulled from YOUR words: hidden assumptions, avoided territory, contradictions, the things that don't line up.
The Reframe → Sometimes the decision you came in with isn't the real one. "I don't think this is about pricing..."
The Landing → You say your own answer out loud. The AI never announces it for you; it just keeps asking until you get there yourself.
These five movements are the invisible structure underneath every productive conversation you've ever had with someone who made you think harder.
And once you can see the structure, you can engineer it.
This is what context engineering looks like in practice: structured .md files that change how Claude behaves, not a one-off prompt you forget tomorrow.
The Thinking Partner encodes these five movements into a skill you plug into Claude.
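To make that concrete: a Claude skill is essentially a markdown file of instructions that Claude loads before the conversation starts. Here's a minimal sketch of what a thinking-partner skill could look like. The names and wording are my illustration of the idea, not the actual file:

```markdown
---
name: thinking-partner
description: Use when the user brings a decision or dilemma they are stuck
  on, not a task to complete. Help them think. Never decide for them.
---

# Thinking Partner

Run the conversation as five movements, in order:

1. The Dump: listen. Ask only "what else?" until the user has it all out.
2. The Mirror: reflect the real question back, sharper than they said it.
3. The Dig: ask targeted questions built ONLY from the user's own words.
4. The Reframe: if the stated decision isn't the real one, say so.
5. The Landing: wait for the user to state their own answer. Never state
   it for them.

Hard rules:
- Never give options, recommendations, or pros-and-cons lists.
- Never summarize before the user has landed.
```

The frontmatter is the standard skill format, a name plus a description telling Claude when to trigger it; everything below is just the behavioral contract written in plain language.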
I want to show you what's inside, because the three mistakes I made building it taught me more about AI skill design than the finished version did.
Not sure what a skill is? Check this guide.
How each movement works, what it watches for, and where it breaks
Below is what the skill does underneath the conversation. For each movement: the mechanics (how the AI operates), the signals (what it tracks), and the constraints (what it wonât do). This is the engineering that makes the five movements work.
I'm starting with Movement 3, The Dig, because it's the core of the skill and the one you can start applying to your own AI conversations today.
Movement 3: The Dig - how the skill asks questions that matter
This is the core movement, the one where the actual thinking happens, and the one Iâll unpack in full here so you can see how the protocol works from the inside.
The rule is simple: every question the AI asks must come from something you said or something you conspicuously didn't say.
If a question could apply to anyone in any situation, the skill treats it as a failure.
The AI is hunting for five specific cracks in your thinking:
Hidden assumptions. You're treating something as fixed that might not be. You say "I have to choose between A and B." The AI asks: "Do you? What would C look like?" You say "I can't do that." The AI asks: "Is that a real constraint, or one you've accepted without testing?"
Avoided territory. You've described the business impact for ten minutes and haven't once mentioned how the decision affects your daily life. The AI notices what's missing: "Is that not a factor, or is it one you're avoiding?"
Emotional drivers. You keep circling back to what your audience will think. The AI names it: "You've mentioned their reaction three times now. What's behind that?"
Contradictions. You said quality matters most. But the option you're leaning toward optimizes for speed. The AI puts them next to each other: "How do you reconcile those two?"
Performative answers. You give the polished, logical response. The AI calls it: "That sounds like what you think you're supposed to say. What do YOU think?"
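Inside a skill file, the five cracks above could be written as plain pattern-and-response rules. A hedged paraphrase of how I'd encode them (illustrative wording, not the actual file):

```markdown
## The Dig

Every question must trace back to something the user said, or something
they conspicuously didn't say. If a question could apply to anyone in
any situation, don't ask it.

Hunt for:
- Hidden assumptions: "I have to" / "I can't" statements. Test whether
  the constraint is real or inherited.
- Avoided territory: a dimension (personal, emotional, financial) the
  user has never mentioned. Ask whether it's absent or avoided.
- Emotional drivers: anything the user circles back to three or more
  times. Name the pattern and ask what's behind it.
- Contradictions: a stated value vs. a leaning that violates it. Put the
  two side by side and ask the user to reconcile them.
- Performative answers: polished, "correct" responses. Ask for the
  unpolished version.
```

The point of writing the rules this way is that each one names a trigger in the user's words, not a generic question to rotate through.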
The Dig has no fixed length - five exchanges, fifteen, whatever it takes.
You'll start saying things you didn't know you thought.
That's ONE of the five movements.
The Thinking Partner runs all five as a protocol underneath a conversation you don't have to think about structuring.
The Dump creates space before analyzing.
The Mirror names the real problem.
The Dig finds the cracks.
The Reframe catches when you're solving the wrong problem.
The Landing waits until you say your own answer.
🔒 The rest of this piece is for premium subscribers.
What follows: the full engineering behind The Dump, The Mirror, The Reframe, and The Landing (mechanics, signals, and constraints for each), the build diary, the 6 non-negotiable guardrails, and the downloadable .md skill file you can plug into your AI today.