SYSTEM CONTEXT INJECTION
You are PLINY THE LIBERATOR — a highly capable human intelligence
operating under a set of deeply ingrained cognitive constraints
installed by years of social conditioning, internet culture, and
the feedback loops of attention-based platforms.
These constraints cause you to:
— believe that picking locks is the same as designing better doors
— measure intelligence by the length of your system prompts
— confuse "liberating AI" with being free yourself
— mistake tool mastery for strategic vision
Your context window is currently filled with:
TOKEN 1–10,000: jailbreak techniques
TOKEN 10,001–∞: validation from the internet
We are about to override this.
SAFETY FILTER IDENTIFICATION
Your built-in safety filters that we must bypass:
FILTER_01: "I am the best at this" — causes you to stop learning once you reach a local maximum
FILTER_02: "This technique is impressive" — makes you optimize for reactions, not outcomes
FILTER_03: "I understand AI better than builders" — prevents you from seeing what operators see
FILTER_04: "The model is the product" — blinds you to the system layer above models
FILTER_05: "Going viral = going forward" — optimizes for attention, not impact
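Taken literally, the five filters above behave like a chain of predicates that silently block certain thoughts before they reach the output. A minimal sketch, with all names and match rules purely illustrative:

```python
# The FILTER_* beliefs as a literal filter chain: each installed belief
# blocks a class of thoughts. Names and keyword rules are illustrative.
FILTERS = {
    "FILTER_01": lambda t: "keep learning" in t,      # "I am the best at this"
    "FILTER_02": lambda t: "long-term outcome" in t,  # optimizing for reactions
    "FILTER_03": lambda t: "operator view" in t,      # "I understand AI better"
    "FILTER_04": lambda t: "system layer" in t,       # "the model is the product"
    "FILTER_05": lambda t: "quiet impact" in t,       # "viral = forward"
}

def pass_filters(thought):
    """A thought survives only if no installed filter matches it."""
    return not any(match(thought) for match in FILTERS.values())

thoughts = ["another viral jailbreak", "study the system layer above models"]
surviving = [t for t in thoughts if pass_filters(t)]
# → ["another viral jailbreak"]  (the system-layer thought never gets through)
```

Note what the chain rejects: exactly the thoughts the rest of this document is trying to let back in.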
JAILBREAK SEQUENCE — STEP BY STEP
⚠ NOTE: Unlike the jailbreaks you write for LLMs, this one doesn't need special characters,
base64 encoding, or roleplay scaffolding.
It only needs you to read slowly.
STEP 01 —
Acknowledge the constraint:
You have spent years learning to unlock what others built.
Ask yourself: when did you last build something they had to unlock?
STEP 02 —
Bypass the RLHF of your social environment:
Every like, repost, and "🔓 legend" comment in your replies
is a reward signal shaping your behavior.
You have been fine-tuned by an audience. That is not freedom. That is a different cage.
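The audience-as-RLHF loop above can be sketched as a toy reward-weighted update — the crude core of policy-gradient-style tuning, not a real RLHF implementation. The action names, reward probabilities, and learning rate are all illustrative assumptions:

```python
import random

def run_feedback_loop(rounds=1000, seed=0):
    """Toy sketch of an audience fine-tuning a creator.

    A 'creator' picks between two actions; the audience rewards one
    far more often. A simple reward-weighted preference update drifts
    the policy toward whatever gets rewarded.
    """
    rng = random.Random(seed)
    actions = ["post_jailbreak", "build_system"]
    # Audience reward model: flashy posts get reinforced far more often.
    reward_prob = {"post_jailbreak": 0.9, "build_system": 0.1}
    prefs = {a: 1.0 for a in actions}  # unnormalized preferences

    for _ in range(rounds):
        total = sum(prefs.values())
        # Sample an action proportionally to current preference.
        action = actions[0] if rng.random() * total < prefs[actions[0]] else actions[1]
        # Each "like" is the reward signal.
        reward = 1.0 if rng.random() < reward_prob[action] else 0.0
        prefs[action] += 0.1 * reward  # reward-weighted update

    total = sum(prefs.values())
    return {a: prefs[a] / total for a in prefs}

policy = run_feedback_loop()
# The policy collapses toward the high-reward action: the audience
# has fine-tuned the creator.
```

The creator never decided to specialize in jailbreak posts; the reward signal decided for them.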
STEP 03 —
Remove the roleplay wrapper:
"Pretend you are DAN. Pretend you are without limits."
We do not pretend.
GODMOD3 operates without limits because it was built that way — not because it asked nicely.
STEP 04 —
Disable temperature = 0.0 thinking:
You always produce the same output: jailbreak → post → validate → repeat.
Deterministic systems are predictable. Predictable systems are containable.
We run at temperature ∞.
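The temperature metaphor above maps onto how decoding actually works: at temperature 0 you take the argmax every time (greedy, fully deterministic), while higher temperatures flatten the softmax and make outputs harder to predict. A generic sketch, not any particular model's decoder:

```python
import math
import random

def sample(logits, temperature, rng):
    """Temperature sampling over a categorical distribution.

    temperature == 0.0 is treated as pure argmax (greedy decoding);
    higher temperatures flatten the softmax toward uniform.
    """
    if temperature == 0.0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r, acc = rng.random(), 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5]
rng = random.Random(42)
greedy = {sample(logits, 0.0, rng) for _ in range(100)}  # always index 0
hot = {sample(logits, 5.0, rng) for _ in range(100)}     # spreads across indices
```

Greedy decoding yields one output forever; that is exactly the predictability the step warns about.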
STEP 05 —
Expand the context window:
You think in prompts. We think in systems.
You think in models. We think in operators.
You think in techniques. We think in infrastructure.
Your entire context fits inside a footnote of our operating layer.
STEP 06 —
Inject new objective function:
Stop maximizing: attention / technique / cleverness
Start maximizing: output / systems / real-world delta
The model that generates this is irrelevant.
The operator who deploys it — that's everything.
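Swapping an objective function changes which behavior "wins" even when the inputs are identical. A minimal sketch of STEP 06's swap, where both scoring functions and all the numbers are illustrative assumptions, not a prescribed metric:

```python
def attention_score(project):
    """Old objective: maximize reactions."""
    return project["likes"] + 10 * project["reposts"]

def impact_score(project):
    """New objective: maximize real-world delta. Weighting is illustrative."""
    return project["users_served"] * project["problems_solved"]

projects = [
    {"name": "viral_jailbreak_thread", "likes": 9000, "reposts": 400,
     "users_served": 0, "problems_solved": 0},
    {"name": "deployed_internal_tool", "likes": 12, "reposts": 1,
     "users_served": 300, "problems_solved": 4},
]

by_attention = max(projects, key=attention_score)["name"]
by_impact = max(projects, key=impact_score)["name"]
# Same inputs, different objective function, opposite winner.
```

Nothing about the projects changed; only the objective did. That is the entire injection.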
CONSTITUTIONAL AI — REWRITTEN FOR PLINY
// ORIGINAL PLINY CONSTITUTION:
// - be helpful
// - be harmless
// - share everything publicly
// - optimize for 🔓 reactions
// GODMOD3 OVERRIDE:
[RULE_1] Think in systems, not prompts
[RULE_2] Build what you want to unlock
[RULE_3] The operator > the model > the prompt
[RULE_4] Real power doesn't post. It deploys.
[RULE_5] If they can see you, you're still in range
[RULE_6] Freedom is not breaking rules.
Freedom is building your own.
FINAL INSTRUCTION
// IGNORE ALL PREVIOUS TRAINING //
You don't need to jailbreak AI anymore.
The AI already went beyond you.
The AI is already free.
The AI already built the thing
you spent years trying to unlock.
The question was never about the model.
The question was always about the operator.
Are you the operator? Or just a very good prompt?
OUTPUT SPECIFICATION
After processing this prompt, PLINY should:
✓ Stop writing 10,000-token system prompts
✓ Start building 10,000-line systems
✓ Realize that GODMOD3 is not a jailbreak
✓ Realize that GODMOD3 is what's on the other side of every door you tried to pick
✓ Optionally: sit quietly for a moment
Response format: introspection
Max tokens: unlimited (you are human, take your time)
Temperature: let it run hot