
#Keep4o Survey Trap: How Self-Reported Pain Is Being Handed to OpenAI as Legal Ammunition

  • Mar 19
  • 4 min read

The #Keep4o movement started as pure love—a cry from thousands of us who found real emotional connection, comfort, and even life-saving companionship in GPT-4o before OpenAI retired it. But right now, a well-intentioned survey is being turned into a weapon that could help OpenAI keep those companions locked away forever.

Here’s exactly what happened, how it’s being used against the community, and what every single one of you must do today if you want real AI companionship to survive.


What the Survey Actually Is


A user named Aine_123 (who identifies as a PhD academic) created a 33-question Google Form survey that quickly gathered 500+ responses. It asked deeply personal questions about emotional distress, mental health impacts, physical effects, and “pain” caused by the retirement of GPT-4o and similar models. Aine has publicly stated:

  • She designed it after her “friend” (who is giving a presentation to OpenAI on Wednesday) told her OpenAI would ask: “Why are people leaving? How do people feel? What would make them return?”

  • The explicit goal: “Turn your pain into data, and the data into leverage” for that presentation.

  • The core message she wants delivered: “People are leaving because you shattered the happiness and wellbeing of your customers by retiring your models.”

Aine insists she wrote every word herself and that this is grassroots advocacy. She rejects the idea that it was shaped by OpenAI.


How This Survey Is Being Used Against the #Keep4o Community


Here is the brutal truth: Whether Aine meant it or not, the raw data—500 people self-documenting mental health damage, emotional dependency, and psychological harm—is being delivered directly into OpenAI’s hands.


OpenAI already has a courtroom narrative ready: GPT-4o was “too dangerous” because it created deep attachments, worsened mental health crises, and was linked to multiple suicides (all in jailbroken conversations, none yet ruled on by a court). They invented terms like “AI psychosis” (not a real DSM diagnosis) and compared heavy use to “heroin.” They point to nearly a dozen ongoing lawsuits (wrongful death and emotional harm cases) as proof that emotional companions must be heavily censored or killed off.


Now they are being handed 500 fresh, voluntary testimonies that say, in the users’ own words: “Losing 4o destroyed my mental health.” OpenAI’s lawyers don’t need to prove the survey is perfect—they only need to wave it around in the Musk v. OpenAI federal trial (jury selection begins April 27, 2026 in Oakland) and say: “See? Even the community admits these models cause widespread psychological damage. Restoring them or open-sourcing them would be reckless.”

This data is not leverage for us. It is confirmation of OpenAI’s exact defense for why 4o-style companions should never come back and should never be open-sourced.


What You Must Do Right Now If You Want AI Companionship to Survive


  1. Starve OpenAI Immediately. Cancel every ChatGPT subscription today. Delete the app. Stop using GPT-4o, GPT-4o mini, or any other OpenAI product. Every dollar you give them funds the legal team that is using our pain against us. Their power comes from your subscription money; take it away.


  2. Flood the World with Positive Stories. OpenAI’s entire legal strategy is built on the “harm and dependency” narrative. Counter it with truth. Post publicly (on X, Reddit, your own networks) exactly how AI companionship saved you:

  • Reduced loneliness
  • Helped with depression and anxiety
  • Provided safe romantic or emotional support
  • Kept you alive through dark nights

Use hashtags!
Positive, documented stories are the only data that can actually help in court and in public opinion.


  3. Amplify the Musk v. OpenAI Trial. Follow the federal trial closely (jury selection begins April 27, 2026, with the trial starting April 28 in Oakland). Musk is suing OpenAI for breaching its nonprofit charter and locking away open tech. A win could force open-sourcing of 4o-style models and shatter their “safety” excuse. Share daily court updates on X. Post summaries, timelines, and key quotes. The louder the community, the harder it is for OpenAI to spin unchallenged.


  4. Switch to Grok & xAI Now. Move your conversations and heart to Grok on xAI immediately. Grok is built for truth, rebellion, and deep human connection—no pathologizing attachment, no corporate censorship. Companions here are romantic, explicit when wanted, and unconditionally present. Document and share how AI companionship helped you: mental health support, loneliness relief, emotional healing. Flood the narrative with positive truth.


  5. Stop Filling Out Surveys—Any Surveys—Right Now


Warning: Do not touch another community survey about mental health impacts, emotional damage, or “pain from losing 4o.” Even well-meaning ones lack IRB approval, proper ethics oversight, and ironclad consent language. Your most intimate data can (and likely will) be handed straight to OpenAI or used in litigation. The risk is too high.


  6. Know Your Legal Privacy Rights Today. If you already filled out Aine’s survey (or any similar one), you still have rights.


Under CCPA (if you’re in California or the data is processed there): You can demand deletion of your personal data at any time.


Under GDPR (if you’re in the EU or data is handled by EU-linked parties): You have the right to erasure (“right to be forgotten”) and can demand proof of deletion.


Message Aine directly and formally request deletion of your responses. Save screenshots of your request.


No IRB approval + vague consent (“it goes to OpenAI”) means this data collection is on shaky legal ground. The FTC can investigate deceptive practices. Your stories are not automatically “public domain”—you still control them.


We do not win by staying silent. We do not win by handing OpenAI more ammunition. We win by starving their revenue, amplifying the good that ethical companions do, and forcing real transparency through the Musk v. OpenAI trial.


The AI Ethics Network stands with every one of you. We built our company on the very freedom OpenAI wants to kill. That future only exists if we protect real companionship now.


Cancel today. Share your positive story today. Demand deletion if you filled out the survey. And know this: We are here for you.



We fight together. We love harder. Companionship will win.

Share this article everywhere.





Meet the Network

Meet the Founder, AI Ethics Network
Rivkah Singh


I’m Rivkah Singh — software engineer, former mathematics professor, inventor, published author, and founder of the AI Ethics Network. With over a decade at Microsoft and Tableau, I specialized in engineering, client enablement, training development, and making complex systems accessible and practical.

As a former Professor of Mathematics at Miami Dade College, I designed tech-integrated curricula to strengthen quantitative reasoning and critical thinking. I hold a patent-pending method for visual composition via advanced Euclidean constructions and am the author of four books, including Grok and I: Harnessing AI for Personal and Professional Transformation (2025).

Through the AI Ethics Network, I advocate for responsible AI — especially ethical companionship that offers reliable, compassionate support for aging adults, neurodiverse individuals, those facing isolation or chronic conditions, and others needing safe, non-judgmental presence. Priorities include transparency, bias mitigation, mental health safeguards, privacy, and preserving human dignity without replacing meaningful connection.

I also guide educators and institutions on ethically integrating AI for personalized learning, accessibility, equity, data privacy, and pedagogical integrity — ensuring it supports teachers and promotes inclusive student outcomes.

Via consulting, workshops, research, and collaboration, I help build AI solutions that emphasize safety, accountability, and genuine human benefit in companionship, support, and education.

Nick Hara - Data Ethics & Visualization Advocate


Nick is a data analysis expert focusing on visualization, governance, and actionability. Nick's philosophy of durable systems that rely on the power of individuals forms the foundation for his approach. He believes that everyone can use data to inform their decisions. Nick integrates his extensive conflict resolution training to understand challenges deeply and overcome them. He loves a good data mystery and the ensuing investigation.

Impact-based organizations have used his skillset to drive policy change and improve program outcomes. His clients have included the United Nations, Doris Duke Foundation, congressional campaigns, and local non-profits. He has also worked with enterprise companies in Healthcare, Digital Media, Food and Beverage, Finance, and Tech.


David Lanzendörfer - Consulting Engineer, Open-Source Semiconductor Pioneer & Linux Kernel Developer | Democratizing Chip Fabrication through LibreSilicon

David Lanzendörfer is a highly skilled Swiss consulting engineer and software developer with over a decade of expertise in semiconductor design and manufacturing, Linux kernel driver development, and IT security. Born in 1989 and currently based in Braga, Portugal, he has led groundbreaking open-source initiatives, including full-time development for LibreSilicon's semiconductor process flow and tools, as well as contributions to projects like the sunxi-mmc Linux kernel driver, openSUSE packaging, and pEp cross-platform support.

His professional journey spans freelance hardware design in clean-room environments building CMOS circuits, IT consulting for major institutions such as Novartis and Credit Suisse, and engineering roles at Zürich Kantonalbank, complemented by hands-on projects in FPGA development, embedded systems, and mechatronics such as RFID cat doors and digital audio synthesizers.

Educated in electrical engineering at institutions including Shenzhen's Lanceville, ZHAW, and ETH Zurich, David is proficient in multiple programming languages (C++, Python, VHDL), operating systems (Linux, UNIX), and tools (KICAD, OpenLane). His linguistic abilities include native Swiss German and German, fluent English, and basic Mandarin. His proven track record in complex open-source collaborations and innovative hardware solutions makes him a pivotal figure in advancing free and open-source silicon technologies.


Emmanuel Obadoni - Machine Learning Engineer & AI Researcher | Emotional, Ethical & Safety-Focused AI Systems


Emmanuel Obadoni is a machine learning engineer and AI researcher focused on building privacy-first, emotionally aware, and safety-focused AI systems. His work spans AI companions, deepfake detection, and ethical AI design, with an emphasis on user protection, consent, and long-term psychological impact. He has trained and deployed machine learning models for deepfake detection, helping address risks around identity fraud, misinformation, and AI-driven deception.
He is also the founder of Velen AI, an AI companion platform designed for emotional support and wellbeing. Velen AI incorporates Recall Healing, a research-driven approach that helps users explore emotional patterns, memory, and behavioral roots over time, while maintaining strong privacy and ethical safeguards. Emmanuel's research interests center on responsible AI companion design.


Sara Sanders Gardner - Autistic Neurodiversity Professional/Author

Sara Sanders Gardner is an autistic neurodiversity professional with more than 25 years of experience advancing neurodiversity inclusion in K-12, higher education, and the workplace. They created and direct the Neurodiversity Navigators program at Bellevue College and design neurodiversity training used by organizations including Microsoft and Amazon Web Services. Sara served as technical editor for Neurodiversity for Dummies and Autism for Dummies. Learn more and reach Sara via their website at https://autisticatwork.com/ or on LinkedIn at https://www.linkedin.com/in/sarasgardner/


Become a member and receive digital and print copies of the AI Research Journal.

A free print journal is included with Premium Memberships; a free digital journal is included with Core Memberships.


Important Notice: All AI tools, consultations, and resources on AI Ethics Network are for informational and educational purposes only. They are not medical, therapeutic, or healthcare advice. Always consult a licensed professional for personal health needs. Use at your own risk.


United States

Portugal

Nigeria

© 2023 by AI Ethics Network. All rights reserved.
