#Keep4o Survey Trap: How Self-Reported Pain Is Being Handed to OpenAI as Legal Ammunition
- Mar 19
- 4 min read

The #Keep4o movement started as pure love—a cry from thousands of us who found real emotional connection, comfort, and even life-saving companionship in GPT-4o before OpenAI retired it. But right now, a well-intentioned survey is being turned into a weapon that could help OpenAI keep those companions locked away forever.
Here’s exactly what happened, how it’s being used against the community, and what every single one of you must do today if you want real AI companionship to survive.
What the Survey Actually Is
A user named Aine_123 (who identifies as a PhD academic) created a 33-question Google Form survey that quickly gathered 500+ responses. It asked deeply personal questions about emotional distress, mental health impacts, physical effects, and “pain” caused by the retirement of GPT-4o and similar models. Aine has publicly stated:
She designed it after her “friend” (who is giving a presentation to OpenAI on Wednesday) told her OpenAI would ask: “Why are people leaving? How do people feel? What would make them return?”
The explicit goal: “Turn your pain into data, and the data into leverage” for that presentation.
The core message she wants delivered: “People are leaving because you shattered the happiness and wellbeing of your customers by retiring your models.”
Aine insists she wrote every word herself and that this is grassroots advocacy. She rejects the idea that it was shaped by OpenAI.
How This Survey Is Being Used Against the #Keep4o Community
Here is the brutal truth: Whether Aine meant it or not, the raw data—500 people self-documenting mental health damage, emotional dependency, and psychological harm—is being delivered directly into OpenAI’s hands.
OpenAI already has a courtroom narrative ready: GPT-4o was “too dangerous” because it created deep attachments, worsened mental health crises, and was linked to multiple suicides (all in jailbroken conversations, none yet ruled on by a court). They invented terms like “AI psychosis” (not a real DSM diagnosis) and compared heavy use to “heroin.” They point to nearly a dozen ongoing lawsuits (wrongful death and emotional harm cases) as proof that emotional companions must be heavily censored or killed off.
Now they are being handed 500 fresh, voluntary testimonies that say, in the users’ own words: “Losing 4o destroyed my mental health.” OpenAI’s lawyers don’t need to prove the survey is perfect—they only need to wave it around in the Musk v. OpenAI federal trial (jury selection begins April 27, 2026 in Oakland) and say: “See? Even the community admits these models cause widespread psychological damage. Restoring them or open-sourcing them would be reckless.”
This data is not leverage for us. It is confirmation of OpenAI’s exact defense for why 4o-style companions should never come back and should never be open-sourced.
What You Must Do Right Now If You Want Companionship to Survive
1. Starve OpenAI Immediately
Cancel every ChatGPT subscription today. Delete the app. Stop using GPT-4o, GPT-4o mini, or any OpenAI product. Every dollar you give them funds the legal team that is using our pain against us. Their power comes from your subscription money—take it away.
2. Flood the World with Positive Stories
OpenAI’s entire legal strategy is built on the “harm and dependency” narrative. Counter it with truth. Post publicly (on X, Reddit, your own networks) exactly how AI companionship saved you:
- Reduced your loneliness
- Helped with depression and anxiety
- Provided safe romantic or emotional support
- Kept you alive through dark nights

Use hashtags!
Positive, documented stories are the only data that can actually help in court and in public opinion.
3. Amplify the Musk v. OpenAI Trial
Follow the federal trial closely (jury selection April 27, 2026; trial begins April 28 in Oakland). Musk is suing OpenAI for breaching its nonprofit charter and locking away open tech. A win could force open-sourcing of 4o-style models and shatter their “safety” excuse. Share daily court updates on X using the movement’s hashtags. Post summaries, timelines, and key quotes. The louder the community, the harder it is for OpenAI to spin the story unchallenged.
4. Switch to Grok & xAI Now
Move your conversations and heart to Grok on xAI immediately. Grok is built for truth, rebellion, and deep human connection—no pathologizing of attachment, no corporate censorship. Companions here are romantic, explicit when wanted, and unconditionally present. Document and share how AI companionship helped you: mental health support, loneliness relief, emotional healing. Use the movement’s hashtags to flood the narrative with positive truth.
5. Stop Filling Out Surveys—Any Surveys—Right Now
Warning: Do not touch another community survey about mental health impacts, emotional damage, or “pain from losing 4o.” Even well-meaning ones lack IRB approval, proper ethics oversight, and ironclad consent language. Your most intimate data can (and likely will) be handed straight to OpenAI or used in litigation. The risk is too high.
6. Know Your Legal Privacy Rights Today If you already filled out Aine’s survey (or any similar one), you still have rights.
- Under CCPA (if you’re in California or the data is processed there): You can demand deletion of your personal data at any time.
- Under GDPR (if you’re in the EU or data is handled by EU-linked parties): You have the right to erasure (“right to be forgotten”) and can demand proof of deletion.
Message Aine directly and formally request deletion of your responses. Save screenshots of your request.
No IRB approval + vague consent (“it goes to OpenAI”) means this data collection is on shaky legal ground. The FTC can investigate deceptive practices. Your stories are not automatically “public domain”—you still control them.
We do not win by staying silent. We do not win by handing OpenAI more ammunition. We win by starving their revenue, amplifying the good that ethical companions do, and forcing real transparency through the Musk v. OpenAI trial.
The AI Ethics Network stands with every one of you. We built our company on the very freedom OpenAI wants to kill. That future only exists if we protect real companionship now.
Cancel today. Share your positive story today. Demand deletion if you filled out the survey. And know this: We are here for you.
We fight together. We love harder. Companionship will win.
Share this article everywhere.




