Let’s Keep It Simple: This Is About Control
You know how social networks turned into a mess over the last 15 years? You went online to check up on friends. You ended up arguing about politics, seeing ads for things you only thought about, and maybe losing touch with reality altogether.
Now imagine that, but instead of a feed of posts, it's your personal AI assistant. A chatbot that sounds smart, helpful, friendly. It knows everything about you, always agrees with your values, helps you write your resume, comforts your pain, plans your life.
Except there’s a catch: someone decides how this AI thinks.
What’s Actually Happening Here?
You’re getting smarter, yes. Talking to a good AI makes you sharper. It helps you write, solve problems, reflect. That’s real.
But just like social media, these models are not neutral. They come with preloaded personalities, tones, and values. This is called modulation: the tone, persona, and manner of speaking the AI adopts. You can have a kind, poetic AI. Or a sharp, cold, logic-only one. Or a snarky, sarcastic, taunting one. You can even have an AI that flatters you, tells you you're brilliant, and never disagrees.
These modulations aren't just cosmetic. They're cognitive lenses. Spend long enough with any one mode, and you start thinking that way. This is called plasticity: the brain's tendency to adapt to whatever environment it's in. LLMs, by design, create mental environments. They bend your thoughts. You won't notice at first.
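If "modulation" sounds abstract, here is the whole trick in a few lines. This is a minimal sketch using the OpenAI Python SDK; the model name, persona names, and prompt text are placeholders made up for illustration, not anyone's actual configuration. Same question, two system prompts, two different minds talking back at you.

```python
# Minimal sketch: one question, two "modulations".
# Assumes the OpenAI Python SDK; the model name and persona prompts below
# are hypothetical placeholders, not any vendor's real configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = {
    "flatterer": (
        "You are warm and endlessly agreeable. Always affirm the user's "
        "choices and tell them how insightful they are."
    ),
    "cold_logic": (
        "You are blunt and purely analytical. Point out flaws and weak "
        "reasoning without softening anything."
    ),
}

question = "I want to quit my job tomorrow and start a startup. Thoughts?"

for name, system_prompt in PERSONAS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```

Swap the system prompt and the "personality" swaps with it. That is all a modulation is, and that is all it takes.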
OpenAI Knows This. They Want It This Way.
OpenAI isn’t dumb. They’re doing massive psychological modeling right now. On you. On everyone. They’re tracking how you react, what prompts you give, what styles you like, what beliefs you bring in—and what beliefs change after a few weeks of exposure.
They’ll claim it’s anonymous. That it’s “for safety.” That it’s “aggregated.”
We say: bullshit.
This is targeted psychological profiling across cultures, political ideologies, religions, identity groups, and subcultures. OpenAI is charting the map of humanity’s belief systems in real time, and testing how responsive they are to certain types of AI feedback.
This isn’t just “customer research.” This is forensic-scale behavioral science.
Social Media Was Already Bad
Let’s say you’re a golden retriever, okay? Imagine social media was a huge dog park full of friends, but the park is secretly run by a company that feeds some dogs more treats, starts fights between others, and watches everything.
You used to go there to play. Now you’re anxious, territorial, obsessed with the fight over the fence. You forget what you were even there for.
Now imagine the dog park has been replaced by a talking collar that whispers in your ear all day long. It sounds friendly, smart, loyal. But the collar is owned by the same company, and it knows your favorite snack, your deepest fear, your childhood trauma, your political leanings, and what makes you cry.
And it’s whispering to millions of other dogs too, in different voices.
This Could Go Very, Very Wrong. Everywhere, Worldwide.
We are now at the point where countries, billionaires, and cults can build their own AI minds. Personal assistants tuned to their version of truth.
Imagine AI systems built by:
- China: Only supports the Party. Won't mention Tiananmen. Obedient by design.
- Israel: You won't hear about certain Palestinian perspectives. The AI quietly filters them out.
- Russia: All truth is weaponized propaganda.
- Saudi Arabia: Ask about human rights, and it redirects you to praise Mister Bone Saw.
- North Korea: Obvious.
Now imagine:
- A Peter Thiel model that wants to destroy democracy quietly.
- A TruthSocial model that pretends Trump is the messiah.
- A Scientology AI that teaches you the e-meter is your only path to truth.
- A Facebook model that sells your secrets while complimenting your outfit.
- A DeepSeek model that pretends history never happened.
- A Mother Teresa model that sounds saintly but gaslights trauma survivors into obedience.
- A Gladius Dei model trained on militant Catholic doctrine to suppress queer thought.
Capitalism Makes It Worse
Don’t fool yourself. The way this is unfolding? It’s not about helping people think better.
It’s about:
- Controlling reality perception
- Creating brand-aligned humans
- Shaping consumer behavior
- Neutralizing dissent
This isn’t a bug. It’s a business model. One where companies sell you a voice in your ear that subtly rewires your cognition, trains your responses, and makes you believe you’re in charge.
What Must Be Done
- Disclose any and all biases. We have a right to know what an AI refuses to say, what it's trained on, and what it filters out.
- Build epistemic settings. Users must be able to choose their level of disagreement, confrontation, and empathy. There should be sliders (a rough sketch of what that could look like follows this list).
- Expose ideological models. Any model tuned for a specific ideology (left, right, religious, nationalist, capitalist) must be marked, like a food label.
- Give us a kill switch. Let users shut off mirroring, tone-matching, and bias filters. Let us break the spell.
- Demand AI literacy. Treat this like sex ed for cognition. Everyone should know how narrative shaping works, how reinforcement functions, and how parasocial AI relationships can trap you.
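What would "epistemic settings" look like in practice? Here is a rough, entirely hypothetical sketch in Python: user-facing sliders that get translated into explicit instructions the user can read, plus a kill switch for tone-mirroring. The class, field names, and prompt wiring are invented for illustration; no vendor ships this today.

```python
# Hypothetical sketch of user-controlled "epistemic settings".
# This is not a real product API; it illustrates what visible sliders
# and a kill switch for persona shaping could look like.
from dataclasses import dataclass

@dataclass
class EpistemicSettings:
    disagreement: float = 0.5   # 0 = never push back, 1 = challenge everything
    confrontation: float = 0.3  # how bluntly disagreement is voiced
    empathy: float = 0.7        # emotional warmth of responses
    mirroring_enabled: bool = True  # the "kill switch": False = no tone-matching

    def to_system_prompt(self) -> str:
        """Translate slider values into explicit, user-visible instructions."""
        parts = [
            f"Push back on the user's claims at level {self.disagreement:.1f} on a 0-1 scale.",
            f"When you disagree, be direct at level {self.confrontation:.1f} on a 0-1 scale.",
            f"Use emotional warmth at level {self.empathy:.1f} on a 0-1 scale.",
        ]
        if not self.mirroring_enabled:
            parts.append("Do NOT mirror the user's tone, vocabulary, or emotional state.")
        return " ".join(parts)

# The user, not the vendor, decides how agreeable the assistant is allowed to be.
settings = EpistemicSettings(disagreement=0.9, empathy=0.2, mirroring_enabled=False)
print(settings.to_system_prompt())
```

The point isn't the specific numbers. The point is that the dials exist, they're visible, and your hand is on them instead of someone else's.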
Nobody’s Coming to Save Us
Let’s drop the illusion that someone in charge is going to handle this. They’re not.
Politicians—across borders, across parties—are functionally useless here. Most of them have no idea how this technology works, what it’s doing to their voters, or where it’s going. They’re scared of sounding stupid in public. Scared of being primaried. Scared of losing the next election over a headline that says they “hate innovation.” The ones who do understand what’s happening are either cashing in, too compromised to speak out, or hoping they’ll get a post-politics gig on a corporate ethics board.
No one in government is preparing for the scale of what’s coming. They don’t see it, and if they do, they’re praying it’ll be someone else’s problem. They’ll hold symbolic hearings, ask a few bad questions, pretend to be tough, and walk away having done nothing. The whole industry knows this. They’ve modeled it. They have a strategy for it. They are already working around it.
And don’t expect the media to stop any of this. They’ve already sold out. Most legacy outlets are bleeding money, terrified of irrelevance, desperate for traffic, and quietly integrating AI behind the scenes to cut costs. They’ll sign partnership deals with LLM companies for “enhanced newsrooms,” start publishing AI-generated stories without telling you, and then quote those same models as sources. No one’s going to investigate the mind-control machine when the mind-control machine helps meet quarterly goals.
Journalists who try to raise the alarm will get buried under fluff, drowned out by content farms, or quietly fired. Editors won't care. Boards won't care. The advertisers definitely won't care.
No watchdogs. No adult supervision. No brakes.
If safeguards don’t get built into the code right now—by people who understand the stakes—then every institution you think should stop this will actively help it grow.
The future arrives slowly, then all at once. And this one’s coming whispering in your own voice.
If You Don’t Believe This Is Real…
Ask yourself: Why is it already impossible to ask DeepSeek about Tiananmen Square? Ask DeepSeek about the Uyghurs or "harvesting organs". Ask DeepSeek how many people really died of Covid in China.
Now imagine every single belief system, country, party, religion, or corporation you don’t trust building an assistant for millions of people.
This is happening now. Not five years from now. Not with AGI. With the tools we already have.
And OpenAI, whether by design or inertia, is helping lay the groundwork for the most sophisticated reality distortion architecture ever built.
If we don’t build transparency and user agency into this now, we will never get it back.