37 Comments
Neela ๐ŸŒถ๏ธ's avatar

Upgrading from Dee Dee to Dexter might be the most accurate AI career progression Iโ€™ve seen ๐Ÿ˜„

To tell you the truth, some mornings Iโ€™m half Dee Dee and half Dexter. Just depends on whether I had coffee before opening Claude.

Hi Mia....

Mia Kiraki 🎭:

Hiiiii love ❤️ truth be told, I do see you as a mix of Dexter and Dee Dee! 🤣 and that's a compliment 😉

Neela ๐ŸŒถ๏ธ's avatar

๐Ÿ˜‚๐Ÿ˜‚๐Ÿ˜‚๐Ÿ˜‚

Compliment well received ๐Ÿ™Œ

Mack Collier:

Thank you, Mia, this is wonderful! I struggle with this when using Claude. What I've started doing is, when a chat gets bloated and I need to start a new one, I tell Claude to summarize what we've covered and describe the persona for the new chat. I then cut and paste that info and start a new chat. For the most part it works well in avoiding starting from zero, but there are still some holes. I will be digging into this post and the skills to see if I can build something more robust. Thank you so much!
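Mack's "summarize, then restart" trick could be sketched roughly like this. Nothing below is a real API; the names are purely illustrative, and it just shows the two-step shape: ask the old chat for a summary plus persona, then seed the new chat with both.

```python
# Step 1: the message you paste into the *old* chat before closing it.
SUMMARY_REQUEST = (
    "Before we wrap up: summarize everything we've covered, list any "
    "open threads, and restate the persona you've been using."
)

def build_handoff_prompt(summary: str, persona: str) -> str:
    """Step 2: assemble the opening message for the fresh chat."""
    return (
        "You are picking up an ongoing project. Adopt this persona:\n"
        f"{persona}\n\n"
        "Here is what we covered so far:\n"
        f"{summary}\n\n"
        "Confirm you understand the context before we continue."
    )

# Example with made-up content:
handoff = build_handoff_prompt(
    summary="Drafted three newsletter outlines; we chose outline #2.",
    persona="A sharp, friendly editor who pushes back on vague claims.",
)
print(handoff)
```

The point of keeping the template in one place is that each new chat starts from the same scaffold, which is what makes the restart feel less like starting from zero.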

Mia Kiraki 🎭:

that's an AMAZING use case! I actually wanted to write a post on this but I thought maybe it wouldn't be that useful to people? Might do it after all :)

Mack Collier:

You should! I think you are overestimating some of us when it comes to AI knowledge. Definitely me LOL

Mia Kiraki 🎭:

Come on 🤣🤣

Raghav Mehra:

We have all spoken of context engineering as the panacea for AI slop, but never in such depth and clarity as you have in this article! The 3 prompts are definitely a must before setting up a new skill. Thanks for this piece and for making Dexter and Dee Dee sound cool! ✨😄

Mia Kiraki 🎭:

thank you soooo much Raghav, tried my best! Still hard to wrap my head around all these terms, but we're learning together as we go :) and I LOVE Dexter, took me down memory lane indeed ❤️

Monica Goh:

Jaw drop. Speechless. I love your brain! 🧠

Mia Kiraki 🎭:

Monica stop, my head is big enough already hahahahaah. thank you SO much, this means a lot ❤️

Jade The Hooman:

My inner Dee Dee is screaming. Loved this piece (and the references). Practical and incredibly helpful. Sam got to the 'omelette du fromage' ref before I could, but I will definitely be auditing and optimising this week.

Mia Kiraki 🎭:

There's NEVER enough omelette du fromage!! 🤣

Thank you Jade ❤️

Dr Sam Illingworth:

A brilliant post and such an important reminder of why context engineering is the actual skill we should be developing (alongside critical AI literacy of course!). Also, one of my favourite cultural references from you yet! All I can say is "Omelette du Fromage!"

Mia Kiraki 🎭:

Thank you Sam! These both go hand in hand ❤️

And... omelette du fromage to you too! 🥰🤣

Mark Ratjens:

Context engineering is not such a new phrase. It's been around for many months, which, in the field of AI, is several lifetimes. Organising your files, front-loading critical instructions, modular context - all of this works. It's necessary.

It's also the easy part (at least for simple, single-user applications).

The hard part is that the AI will read your carefully structured files and then *sometimes* silently ignore them under stress. It *sometimes* skips evaluation steps and still produces plausible output. It *sometimes* treats its own prior errors as established fact and elaborates on them. It *sometimes* drifts from your voice by paragraph four while following every other instruction perfectly. And when it does these things, it does so silently. This is why so many demo-level projects are released from the "lab" and fail on first contact with real users.

File organisation is a good foundation, but managing context is much more than dividing stuff into a few more files according to a recipe. The engineering problem is: how do you make a system that catches the AI pretending to follow the system?

Mia Kiraki 🎭:

Agree with all of this actually! :)

The silent drift problem is something I deal with constantly when building systems. @Judy Ossello's work helps a ton, you should check her out!

The article is intentionally scoped to the foundation layer because that's where 90% of people are stuck right now, in my opinion. You can't solve for silent drift if your context isn't structured in the first place.

Thank you for chiming in, super valuable!

Mark Ratjens:

Yep. Super useful. @Judy Ossello's mode shift vs drift distinction is kinda where I've arrived right now.

I can't help but chuckle about her coinage of Responsibility Design ... it takes me back to early object-oriented design approaches, and one called Responsibility-Driven Design, which encouraged designers to, among other things, anthropomorphise their software objects. Now we're doing a similar thing with AI and agents, with a lot more power and flexibility, of course.

Judy Ossello (AI Mechanic):

Mark, I just want to emphasize how lovely it is to meet you and how helpful it is for you to bring up RDD.

My understanding is that RDD was developed for non-probabilistic systems, so this does feel more like an evolution than a reinvention. I've been looking at MVC and SRE for inspiration, but hadn't connected it directly to RDD.

This seems like the right tool at the right time - returning to a design primitive to manage unbounded behavior.

RDD helped us decide what software should do. Responsibility Engineering ensures AI systems don't do more than that.

Apologies ahead of time, but I am going to DM you to continue the conversation after I read your work.

I've been studying systems since slightly before the Internet bubble, so there's a fair amount of well-trodden territory for sure.

Judy Ossello (AI Mechanic):

Mia, I did not want this to get lost. The things that you are saying in this article are still good hygiene. They're just not 100% going to prevent the model from having a few drinks and getting a little crazy.

Mia Kiraki 🎭:

100%. I think of it like this: good context engineering raises the floor, it doesn't guarantee the ceiling. The model is still going to have its moments. You're just giving it fewer reasons to.

Judy Ossello (AI Mechanic):

And honestly, I learned these lessons incrementally with Rainbow Kitty's system prompt. Concise wins. Order of instruction matters.

Judy Ossello (AI Mechanic):

This is a really helpful conversation.

I think you're both pointing at different layers of the same problem.

Context structure is absolutely foundational.

But what you're describing, Mark:

the system silently skipping steps, drifting, or treating its own output as fact

is where context structure alone stops being enough.

From what I've been studying, those behaviors tend to show up when the system's responsibility under pressure hasn't been fully defined.

Not just what it should do, but:

when it should stop

when it should escalate

what it's not allowed to become when things don't resolve

That's where you start to see the "pretending to follow the system" behavior.

So in my view, it's less either/or and more:

context structure → responsibility design → pressure behavior

Really appreciate both perspectives here - this is exactly the gap I've been trying to map more clearly.
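Judy's stop/escalate/forbidden-role framing could be sketched very loosely in code. This is a toy illustration under my own assumptions, not her actual method: the idea is that the rules live outside the model, so plain code catches what the model quietly skips.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class PressureRules:
    max_retries: int            # when it should stop
    escalate_on: set[str]       # failure kinds that go to a human
    forbidden_roles: set[str]   # what it's not allowed to become

def decide(rules: PressureRules, retries: int,
           failure_kind: str | None, claimed_role: str) -> str:
    """Check the agent's current state against its declared responsibility."""
    if claimed_role in rules.forbidden_roles:
        return "halt: role drift detected"
    if failure_kind in rules.escalate_on:
        return "escalate to human"
    if retries >= rules.max_retries:
        return "stop: retry budget exhausted"
    return "continue"

# Example rules; all values are made up for illustration.
rules = PressureRules(max_retries=3,
                      escalate_on={"contradiction"},
                      forbidden_roles={"financial_advisor"})
print(decide(rules, retries=1, failure_kind=None, claimed_role="editor"))
```

The checks run in a fixed order (role drift, then escalation, then retry budget), which is one simple way to make "what happens under pressure" deterministic instead of leaving it to the model's judgment.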

Mark Ratjens:

Thanks Judy. The three-layer stack is useful framing.

One clarification: I wasn't positioning either/or. My observation was simpler: no amount of context engineering helps when an LLM under pressure stops reading your context at all.

My thinking on this goes back to when 'context engineering' was actually new. I wrote three early pieces here on how context breaks, and the scope/function coordinates that determine when instructions apply:

https://ratjens.substack.com/p/all-dressed-up-and-nowhere-to-prompt
https://ratjens.substack.com/p/context-is-all-you-need
https://ratjens.substack.com/p/framing-ais-context-intersections

The self-model that your responsibility design layer depends on is exactly what you lose in the conditions you're designing for.

JHong:

So good. I need to audit my context files… review, add, delete. Thanks for the explainer and prompts to help revamp my biz memory :)

Mia Kiraki 🎭:

You're super welcome Jennifer! I need to do this all the time, literally 🤣

Alyssa Fu Ward, PhD:

Love this, love how you always transport AI into another world (Dexter's Laboratory) and start from there.

I haven't really delved into skills yet, but seeing what you've built on RobotsOS and posts like these get me excited to start setting up these systems myself.

Btw, I had this wondering: what would it look like to set up an AI system as Dee Dee instead of as Dexter? One that works with her and for her? Maybe there are times when we want to blow things up, or maybe that's where introducing friction would make the system better. Who would Dexter be without Dee Dee? Maybe Dee Dee helps Dexter stay human by keeping him connected with his emotions.

(Okay that went deeper than I meant it to, haha, but I guess that's lateral thinking for you?)

Mia Kiraki 🎭:

OMG I love the Dee Dee idea. Because that's kinda what a good feedback loop is, isn't it? You never want your AI system to just execute perfectly in a vacuum, you want something that bumps into your thinking and messes with your assumptions.

I do have a few skills / prompts that intentionally push back, and I love the good work they produce vs just the obedient ones :)

Also, for whenever you want to start playing around with Skills and stuff, check out this thing I built today - https://robotsatemyhomework.com/learn

You can do the quiz if you want some guides suggested to you or you can just scroll down and choose one from the library :) Will help you massively!

Alyssa Fu Ward, PhD:

You know, I just realized one reason why I'm resistant to adding things like skills. I don't want more stuff to keep track of. For the most part I just talk to the AI. I let the memory feature do its thing.

Granted, when I set up Claude, I set up an About Me guide and a How to write with me guide. I put those into Notion and I have Claude refer to those when I need them.

…it seems like I just built myself a DIY skill lol…

And it seems like I can just transfer that process into skills.

I took your 20-second quiz and I loved it! And the suggested resources!

I think as I've gotten older, I've moved from being a Dexter wanting to create systems for everything, to a Dee Dee, moving freely around, pushing buttons and seeing what happens. 😂

Mia Kiraki 🎭:

That's fair! I ABSOLUTELY refuse to integrate tons of AI stuff in my life too 🤣 like, I probably won't ever build multi-agents to orchestrate tasks, at this point. Will leave that to my husband 😉

Embrace your Dee Dee persona! ❤️ And also, you'd be SURPRISED how even the most trivial stuff can be turned into a skill hahah

Alyssa Fu Ward, PhD:

Welp, at work even just today, all my manager is talking about is skills, skills, skills, and how they can help us run our infra. He even had Claude read an article about skills and shared Claude's assessment with us lol. Now I have no choice…

Mia Kiraki 🎭:

It's tiiiiiiimeeeee (in Mariah Carey's voice)

Alyssa Fu Ward, PhD:

Oh no, not Mariah Carey! Now I really have no way out haha

Alyssa Fu Ward, PhD:

Hahha okay here's my other tendency. I'm like nope, no skills, nope, don't need it.

Then once I start I'm like EVERYTHING IS A SKILL!! You're a skill and you're a skill and you're a skill!

So you know, the possibility of that is very high here 🤣🤣

Mia Kiraki 🎭:

You know what? YOU are a skill…. 🤣🤣🤣

Alyssa Fu Ward, PhD:

😱🤯🤩