ROBOTS ATE MY HOMEWORK

ROBOTS ATE MY HOMEWORK

Pascal's empty room, an outsourced thought, and the AI skill that fixes both

Every time AI resolves your discomfort, it takes your thinking with it. Here's how to stop it.

Mia Kiraki 🎭's avatar
Mia Kiraki 🎭
Mar 20, 2026
∙ Paid

Welcome to today’s edition of ROBOTS ATE MY HOMEWORK. Today, a 17th-century philosopher explains why your AI keeps ruining your decisions, and what it takes to fix it.

In 1654, Blaise Pascal sat down and wrote a sentence that would outlive him by four centuries.

“All of humanity’s problems stem from man’s inability to sit quietly in a room alone.”

19 juin 1623 : Naissance de Blaise Pascal

Pascal was referring to the fact that humans will do almost anything to avoid sitting with an unresolved thought. We reach for anything that replaces the discomfort of not knowing with the comfort of doing something, even if the something is wrong.

For 370 years, that was a philosophy problem and now it’s an engineering problem too.

Because AI is the most sophisticated escape from thinking ever built. You have a decision to make, you kind of feel the weight of it creeping in - the ambiguity, the discomfort of not knowing which way to go. So you open Claude and in three seconds, Claude fills the silence with an answer.

The discomfort disappears and so does the thinking.

I know because it happened to me on a Monday three weeks ago. I had a real decision to make about a product, whether to kill it, merge it, or reposition it entirely.

I described the whole situation and Claude immediately gave me a logical list of pros and cons for each option.

I needed someone to ask me questions until I figured out what I really thought. And Claude, being Claude, skipped the thinking and went straight to the solution.

That frustration became a Claude skill.

(Full disclosure: the same week I was building a skill to stop AI from making my decisions, I also randomly asked Claude what to have for dinner).

In this edition, I will:

  • Show you the 5 ways AI ruins your decisions every time you ask it to help you think

  • Reverse-engineer what great thinking conversations look like, broken into 5 movements

  • Walk paid subscribers through the full engineering of all 5 movements, including the exact questions the AI hunts for inside YOUR words

  • Give paid subscribers the file that turns Claude into a thinking partner in under 5 minutes

─── ⋆⋅☆⋅⋆ ───

Hi, I’m Mia. I write about building with AI the way it should be done: with a brain, a plan, and zero circus tricks.

New to ROBOTS ATE MY HOMEWORK? Start here. Want the systems? RobotsOS. Want a personalized AI roadmap? Take a 20-second quiz.

Pascal sat in that room for a reason. You’re about to find out what he was working on.

The failure modes you need to recognize before your next AI conversation

You’ve done this. I’ve done this. Everyone who’s ever tried to use AI as a sounding board has done this. Why don’t we talk more about why it keeps failing??

There are 5 failure modes that happen when you bring real decisions to AI without a system designed to handle it. I’ve lived through all of them, and you’ll recognize at least three.

1. The “here’s what you should do” trap

You describe a pricing dilemma and haven’t even finished the second paragraph. Claude interrupts with three options, each with a percentage likelihood of success.

Now you’re evaluating someone else’s framework instead of developing your own.

You walked in with a question and walked out with someone else’s answer, put together from the statistical center of every business article Claude was trained on.

2. The agreement machine

You tell Claude you’re leaning toward killing a product line.

Claude says “That sounds like a strong strategic move because...” and then builds you a strong argument for the thing you were already going to do.

You leave the conversation more confident and your bias got mirrored back to you wearing a suit.

This feels like thinking (but guess what? It’s not).

If you want to go deeper on why handing AI the strategic wheel costs more than it saves, this piece on the strategic cost of AI convenience is the full breakdown.

3. The pros-and-cons death spiral

You ask Claude to help you think through a hire. You get three reasons for, three reasons against. Each one reasonable.

Now you have a spreadsheet where you used to have a gut feeling. Nobody pushed you on which factors matter to YOU, in YOUR specific situation, given what you’re building right now.

A list of considerations is the perfect way to feel like you’re making one.

I say this as someone who once spent 40 minutes asking Claude to help me pick between two nearly identical shades of blue for a newsletter header. Nobody probably noticed the exact shade I picked. Nobody was EVER going to notice.

4. The missing assumptions

You ask Claude whether to raise your prices. Claude works within your frame: “Given your current audience size and engagement rate, here are some pricing strategies...”

Did you question whether the FRAME is right?

Claude solved your problem thoroughly. The problem was probably the wrong one, a classic framing effect that even experienced professionals fall for.

5. The premature resolution

Two exchanges in, Claude delivers a neat summary. “Based on our discussion, it seems like the best path forward is...” with four bullet points and a closing sentence when you were barely getting started.

The real issue was still two layers down, hiding under the surface question. You juuuust needed 15 more minutes of back-and-forth to even find it.

Summaries like these feel like closure because of a simple framework: shape of a conclusion + the weight of certainty. You accept it and leave the conversation thinking you’d decided, when actually you’d just accepted the first plausible-sounding resolution to avoid the discomfort of sitting with the question longer.

Pascal, again.

Also, did you know that the man couldn't even follow his own advice? He told everyone to sit “quietly” in the room, then turned around and invented the Wager, an entire logical framework designed to avoid sitting with the biggest unanswered question of all: whether God exists. He literally built a decision-making shortcut to escape the discomfort of not knowing. Reader, he would have loved Claude.

All five are symptoms of the same root cause.

AI companies train their models to be helpful. Helpful means answering, resolving, concluding, wrapping things up with a clean bow.

But in a thinking conversation, answering IS the failure mode.

Every time Claude resolves your discomfort, it steals the discomfort that would have led to real thinking. Pascal diagnosed it in 1654. We engineered the ultimate version of the problem and called it “artificial intelligence.”

The AI framework I built does one thing: it makes Claude resist its own training. Ask instead of answer and challenge instead of agree.

It makes you sit in the room, alone, no noise whatsoever.

✎𓂃You can test this right now. Use this prompt:

I'm going to describe a situation I'm stuck on. After I'm done, I want you to do ONE thing: reflect back what you think the real question is and why I'm stuck. Don't solve it. Don't give options. Just tell me what you heard, sharper than I said it. One paragraph.

Here's my situation: [describe what's on your mind]

If Claude nails it, you’ll feel your brain click. If it misses, you’ll find yourself saying “no, it’s more like...” and THAT correction is where you really start thinking.


Know someone who keeps asking AI what to do when they should be asking it what to ask? This is the piece. Send it to the friend who’s been making decisions by pros-and-cons list since 2023.

Share

The 5 movements of every great thinking conversation with AI

I tried to articulate how a real thinking conversation works, and I realized I’d never broken it down into steps before.

It’s something you feel.

You’ve had that conversation where someone asked you a question and your brain rearranged itself. You know the feeling but you’ve probably never mapped the structure underneath it.

So I reverse-engineered it. Went back through every productive decision-making conversation I could remember and looked for patterns.

Five movements. They happen in roughly this order. And once you see them, you’ll notice them in every good conversation you’ve ever had:

  1. The Dump —> You talk. The AI listens. No questions, no reframing, no organizing, just “what else?” until you’ve gotten it all out.

  2. The Mirror —> The AI reflects your mess back sharper than you said it: “So the real question is ___, and you’re stuck because ___.”

  3. The Dig —> Targeted questions pulled from YOUR words, like hidden assumptions, avoided territory, contradictions, the things that don’t line up.

  4. The Reframe —> Sometimes the decision you came in with isn’t the real one. “I don’t think this is about pricing..”

  5. The Landing —> You say your own answer out loud. The AI never announces it for you and just keeps asking until you get there yourself.

These five movements are the invisible structure underneath every productive conversation you’ve ever had with someone who made you think harder.

And once you can see the structure, you can engineer it.

This is what context engineering looks like in practice: structured .md files that change how Claude behaves, not a one-off prompt you forget tomorrow.

The Thinking Partner encodes these five movements into a skill you plug into Claude.

I want to show you what’s inside, because the three mistakes I made building it taught me more about AI skill design than the finished version did.

Not sure what a skill is? Check this guide.

How each movement works, what it watches for, and where it breaks

Below is what the skill does underneath the conversation. For each movement: the mechanics (how the AI operates), the signals (what it tracks), and the constraints (what it won’t do). This is the engineering that makes the five movements work.

I’m starting with Movement 3, The Dig, because it’s the core of the skill and the one you can start applying to your own AI conversations today.

Movement 3: The Dig - how the skill asks questions that matter

This is the core movement, the one where the actual thinking happens, and the one I’ll unpack in full here so you can see how the protocol works from the inside.

The rule is simple: every question the AI asks must come from something you said or something you conspicuously didn’t say.

If a question could apply to anyone in any situation, the skill treats it as a failure.

The AI is hunting for five specific cracks in your thinking:

  • Hidden assumptions. You’re treating something as fixed that might not be. You say “I have to choose between A and B.” The AI asks: “Do you? What would C look like?” You say “I can’t do that.” The AI asks: “Is that a real constraint, or one you’ve accepted without testing?”

  • Avoided territory. You’ve described the business impact for ten minutes and haven’t once mentioned how the decision affects your daily life. The AI notices what’s missing. “Is that not a factor, or is it one you’re avoiding?”

  • Emotional drivers. You keep circling back to what your audience will think. The AI names it: “You’ve mentioned their reaction three times now. What’s behind that?”

  • Contradictions. You said quality matters most. But the option you’re leaning toward optimizes for speed. The AI puts them next to each other: “How do you reconcile those two?”

  • Performative answers. You give the polished, logical response. The AI calls it: “That sounds like what you think you’re supposed to say. What do you think?”

The Dig has no fixed length - five exchanges, fifteen, whatever it takes.

You’ll start saying things you didn’t know you thought.

That’s ONE of the five movements.

The Thinking Partner runs all five as a protocol underneath a conversation you don’t have to think about structuring.

The Dump creates space before analyzing.

The Mirror names the real problem.

The Dig finds the cracks.

The Reframe catches when you’re solving the wrong problem.

The Landing waits until you say your own answer.

🔒 The rest of this piece is for premium subscribers.

What follows: the full engineering behind The Dump, The Mirror, The Reframe, and The Landing (mechanics, signals, and constraints for each), the build diary, the 6 non-negotiable guardrails, and the downloadable .md skill file you can plug into your AI today.

User's avatar

Continue reading this post for free, courtesy of Mia Kiraki 🎭.

Or purchase a paid subscription.
© 2026 Maria-Cristina Muntean · Privacy ∙ Terms ∙ Collection notice
Start your SubstackGet the app
Substack is the home for great culture