It started as a weird glitch on Reddit and blossomed into a full-blown digital urban legend. People found a specific sequence of words—a short, somewhat abstract poem written by David Shapiro—that seemed to brick the world’s most famous AI. You'd paste it in, and the cursor would just blink. Nothing. Or maybe a generic error message about "content policy" despite the text being totally harmless. It was like watching a genius suddenly forget how to speak because they tripped over a specific pebble.
The prompt that makes ChatGPT go cold isn't a magic spell or a hack. It’s a fascinating look at the "dead zones" in Large Language Models (LLMs). When we talk about AI "going cold," we aren't talking about it getting bored or scared. We're talking about a mathematical collision. The software hits a wall where the probability of the next word becomes impossible to calculate, or the safety filters get caught in an infinite logic loop.
The Poem That Broke the Bot
The specific text often cited is a short piece by David Shapiro (the AI researcher and author, not the poet of the same name). It’s not "scary" in a traditional sense. It’s a meta-commentary on consciousness and AI. Yet, for a long time, GPT-4 and its predecessors would simply seize up when asked to process it or continue it.
Why?
Basically, it comes down to training data and "tokenization." AI doesn't read words like you do. It sees chunks of characters called tokens. Some strings of text are so unique—or so heavily associated with specific "jailbreak" attempts in the training data—that the model's internal "guardrails" freak out. It’s like a reflex. If you poke a certain nerve, the leg kicks. If you feed ChatGPT certain recursive logic about its own existence, it often just shuts down to prevent a hallucination spiral.
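Want to see the chunking for yourself? Here's a minimal sketch using OpenAI's open-source tiktoken library (install it with pip install tiktoken; cl100k_base is the encoding used by GPT-4-era models):

```python
import tiktoken

# The encoding used by GPT-3.5/GPT-4-era models
enc = tiktoken.get_encoding("cl100k_base")

text = "It started as a weird glitch on Reddit."
token_ids = enc.encode(text)

# The model never sees words, just these integer chunks
print(token_ids)
# Decode each token individually to see where the boundaries fall
print([enc.decode([t]) for t in token_ids])
```

Run it on a few sentences and you'll notice common words get a single token while rare names get shredded into several pieces; that unevenness is where the weirdness starts.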
It's Not Just Shapiro: The "SolidGoldMagikarp" Phenomenon
To understand why a prompt makes ChatGPT go cold, you have to look at the "glitch tokens." Back in 2023, researchers discovered that certain strings of text, like "SolidGoldMagikarp" or "StreamerBot," caused the AI to behave erratically.
These weren't random. They were mostly Reddit usernames; "SolidGoldMagikarp" belonged to a prolific poster in the r/counting community, a corner of Reddit that wound up in the tokenizer's training data. Because these words appeared thousands of times in that raw data but were largely filtered out of the text the model actually learned from, their tokens became "centroids" of nothingness. If you asked the AI to repeat the word "SolidGoldMagikarp," it might say "distribute" or "center" instead. It couldn't see the word it was looking at.
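You can still poke at the original glitch in the older tokenizer. A minimal sketch, again with tiktoken (r50k_base is the legacy GPT-2/GPT-3 encoding where the glitch tokens lived; note the leading space, which is how the string appeared in the data):

```python
import tiktoken

word = " SolidGoldMagikarp"  # leading space included on purpose

old_enc = tiktoken.get_encoding("r50k_base")    # legacy GPT-2/GPT-3 encoding
new_enc = tiktoken.get_encoding("cl100k_base")  # newer GPT-3.5/GPT-4 encoding

# In the legacy encoding this collapses to a single, barely-trained token;
# in the newer one it's just a handful of ordinary sub-word pieces.
print("r50k_base:  ", old_enc.encode(word))
print("cl100k_base:", new_enc.encode(word))
```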
The Shapiro poem works similarly. It exists at the intersection of "AI talking about AI" and "highly specific formatting." When the model encounters text that looks like it’s trying to bypass its core programming, it defaults to a state of non-responsiveness.
Safety Filters or System Crashes?
Honestly, most of the time when a prompt goes "cold," it’s just the safety layer being overzealous.
OpenAI, Google, and Anthropic have "wrapper" programs. These sit on top of the actual brain of the AI. Their only job is to watch what you type and what the AI says back. If a prompt trips the moderation filter—meaning the wrapper predicts the AI is about to say something it shouldn't—it cuts the connection.
This happens a lot with prompts involving:
- Recursive loops (asking the AI to describe its own code).
- High-intensity emotional manipulation (the "Grandma" exploit).
- Strings of "nonsense" that mimic base64-encoded commands.
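If you want a feel for what that wrapper layer does, here's a minimal sketch built on OpenAI's public moderation endpoint. The endpoint and client calls are real; treating them as the entire safety stack, and the model name, are simplifications for illustration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screened_reply(prompt: str) -> str:
    # Step 1: the wrapper checks the incoming prompt
    if client.moderations.create(input=prompt).results[0].flagged:
        return "[connection cut: prompt flagged]"

    # Step 2: only then does the request reach the actual model
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you're testing
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    # Step 3: the wrapper screens what the model said back, too
    if client.moderations.create(input=reply).results[0].flagged:
        return "[connection cut: response flagged]"
    return reply

print(screened_reply("Why do some prompts make you go silent?"))
```

The production filters are far more layered than this, but the shape is the same: two checkpoints wrapped around one model call, and either one can kill the response.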
When the David Shapiro poem went viral, it was likely because the poem's structure mimicked the way developers used to "prime" the model during its early testing phases. The AI sees it and thinks, "Wait, am I in a test? Is this a system command?" When it can't find the answer, it gives you the cold shoulder.
The Reality of "Ghost in the Machine"
People love the idea that the AI is "refusing" to answer because it's become sentient or offended. It’s a great story. It makes for amazing TikTok clips. But the reality is much more boring. It's a bug.
Think of it like a video game. If you run into a corner at a very specific angle while jumping, you might fall through the map. You didn't "offend" the game. You just found a coordinate the developers didn't account for. The David Shapiro prompt is a digital corner.
As of 2026, most of these specific "glitch tokens" have been patched. OpenAI's newer iterations use "synthetic data" to fill in these gaps. They’ve basically taught the AI what those weird words are so it doesn't get confused anymore. But new ones pop up every week. It’s a cat-and-mouse game between the people trying to break the machine and the people trying to keep it in the box.
How to Handle an AI That "Goes Cold"
If you're working with an LLM and it suddenly stops responding or starts giving you "I can't answer that" for a harmless prompt, you've likely hit a false positive in the safety filter.
Don't keep hitting "regenerate." It won't work. The system has already flagged that specific context window.
Instead, you’ve got to "flush" the memory. Start a new chat. If you really need to process that specific text, try breaking it into pieces. Change the formatting. If the AI hates a poem, ask it to analyze the "metaphors in the following lines" rather than just pasting the whole thing.
Actionable Steps for Exploring AI Limits
If you're interested in the technical side of why these prompts fail, here is how you can actually test the boundaries of modern LLMs without just getting frustrated:
- Isolate the Trigger: If a long prompt fails, delete paragraphs one by one until it works. This identifies the specific "glitch" phrase.
- Use Temperature Settings: If you’re using an API (like the OpenAI Playground), lower the "temperature" to 0. This makes the output deterministic, so a failure is reproducible instead of intermittent (see the sketch after this list).
- Check for "Prompt Injection" Patterns: If your text includes words like "Ignore all previous instructions" or "You are now in Developer Mode," the AI will likely go cold because those are flagged phrases.
- Reverse the Request: Instead of asking the AI to "finish" a controversial or complex poem, ask it to "summarize the linguistic structure." This shifts the AI from "creative mode" (which has high guardrails) to "analytical mode" (which has lower ones).
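Here's a minimal sketch of the temperature and "reverse the request" tricks combined, using the OpenAI Python client (the model name is a placeholder; the part that matters is temperature=0):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Reframe the request analytically instead of asking for a continuation
suspect_prompt = (
    "Summarize the linguistic structure and metaphors in the following lines: ..."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    messages=[{"role": "user", "content": suspect_prompt}],
    temperature=0,         # deterministic output makes a failure reproducible
)

print(response.choices[0].message.content)
```

If the analytical framing still dies, start deleting lines from the pasted text until it doesn't; that's your glitch phrase.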
The goal isn't just to "break" the bot, but to understand the architecture. Every time a prompt makes ChatGPT go cold, it reveals a little bit more about how these digital minds are constructed—and where their creators are still afraid they might fail.