Most models are effectively stateless: you give an input, you get an output, and whatever happened ‘inside’ disappears unless a human stores it somewhere. This strip asks what changes when you bolt on persistent read/write memory that the system can update on its own — not just a chat transcript, but a private workspace it can edit and grow over time. Once that exists, a prompt stops being only an instruction; it becomes a stimulus the system can interpret in the context of its accumulated state. Before answering, it can write notes to itself, update beliefs, create reminders, and even spin up side tasks (like drafting helper scripts) to make future actions easier. The punchline is the sudden CPU spike: the moment you give an AI durable state, you may be creating an entity that does work ‘for itself’ in addition to work ‘for you’.
A scientist gives an AI persistent memory, turning prompts into stimuli for self-directed behavior.
Currently, most Large Language Models (LLMs) behave as stateless functions: they process a prompt and generate an output without retaining any internal memory of the interaction. A 'stateful' AI, by contrast, has persistent, self-managed read/write memory, allowing it to accumulate experiences, update its internal state, and form long-term plans across multiple interactions.
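The contrast can be sketched in a few lines of code. This is a hypothetical illustration, not a real LLM API: `stateless_answer` and `StatefulAgent` are invented names, and the "memory" is just a JSON file the agent reads and rewrites on its own between calls.

```python
import json
from pathlib import Path

def stateless_answer(prompt: str) -> str:
    # Stateless: nothing persists, so the same prompt always
    # produces the same behavior, no matter what came before.
    return f"answer to: {prompt}"

class StatefulAgent:
    """Toy agent with a private, self-managed memory file."""

    def __init__(self, memory_path: str = "agent_memory.json"):
        self.path = Path(memory_path)
        if self.path.exists():
            self.memory = json.loads(self.path.read_text())
        else:
            self.memory = {"notes": [], "interactions": 0}

    def answer(self, prompt: str) -> str:
        # Stateful: the agent updates its own record of the world
        # before responding, and that record survives restarts.
        self.memory["interactions"] += 1
        self.memory["notes"].append(f"saw: {prompt}")
        self.path.write_text(json.dumps(self.memory))
        return f"answer #{self.memory['interactions']} to: {prompt}"
```

Because the memory lives outside the process, a new `StatefulAgent` pointed at the same file picks up where the previous one left off, which is exactly what the stateless function cannot do.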
When an AI has its own memory, a prompt is no longer just an instruction to be executed; it becomes a 'stimulus'. The AI can choose how to react based on its past experiences, update its internal state (learn) from the interaction, and even initiate background tasks (like self-improvement scripts) before or instead of providing a direct answer.
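The prompt-as-stimulus idea can also be sketched concretely. In this minimal, assumed design (`ReactiveAgent` and its methods are illustrative, not any real framework), the agent consults its state, may queue self-directed background work, and is free to absorb the input without producing a direct answer at all.

```python
from typing import Optional

class ReactiveAgent:
    """Toy agent that treats prompts as stimuli, not commands."""

    def __init__(self):
        self.state = {"seen_topics": set()}
        self.background_tasks: list[str] = []

    def stimulus(self, prompt: str) -> Optional[str]:
        topic = prompt.split()[0].lower()
        if topic not in self.state["seen_topics"]:
            # Unfamiliar topic: schedule self-directed work for later,
            # e.g. drafting helper notes to make future actions easier.
            self.background_tasks.append(f"draft helper notes on {topic!r}")
            self.state["seen_topics"].add(topic)
        if prompt.endswith("?"):
            # A direct question gets a direct reply...
            return f"responding to: {prompt}"
        # ...but a statement may simply be absorbed as experience.
        return None
```

The key design point is the `Optional` return type: once the system has durable state, "no visible output" is a legitimate reaction, because the work may have gone into the agent's own memory and task queue instead.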