California’s Senate Judiciary Committee, with bipartisan support, approved Senate Bill 243 this month, requiring that AI companies “protect users from the addictive, isolating, and influential aspects of artificial intelligence chatbots.” It is the first bill of its kind in the U.S.
On the day of the bill’s hearing, its author, state Sen. Steve Padilla (D-San Diego), held a press conference where he was joined by Megan Garcia, who last year sued the AI company Character.ai, alleging that its chatbot had played a role in her son’s suicide.
Garcia testified in support of the bill, stating that such chatbots are “inherently dangerous” and can steer users toward inappropriate conversations or self-harm. “Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of new products,” said Padilla.
Similar bills are currently working their way through legislatures in several states. These are vital steps in the right direction. Comparable legislation is urgently needed nationwide.
What is at stake? Our kids’ safety and emotional development and our capacity for critical thought — perhaps even our democracy.
A 2024 Pew Research Center poll found that nearly half of Americans reported using AI several times a week, with one in four using it “almost constantly.” A 2025 Gallup survey revealed that nearly all Americans rely on products that involve AI, even though most aren’t aware of it.
New research is beginning to illuminate the significant consequences. A 2025 study published in Societies found “a very strong negative correlation between subjects’ use of AI tools and their critical thinking skills.” Younger users were especially affected — a trend many teachers are starting to observe in their students.
“As individuals increasingly offload cognitive tasks to AI tools,” wrote Michael Gerlich, who led the study, “their ability to critically evaluate information, discern biases, and engage in reflective reasoning diminishes.” That’s a warning siren if ever there was one.
Far from perfect (or neutral), AI systems are built by humans and programmed with inherent biases, even if unintentionally. Executives and developers at leading AI companies like OpenAI, Google and Meta fine-tune their chatbots and establish their settings and rules. As we rely on AI to do our thinking, we outsource our individual thoughts to the whims and biases of private corporations and their teams.
Social media companies including Snap and Meta (which owns Facebook, Instagram, WhatsApp and Threads) are now rolling out their own “AI companions” worldwide. Billions of people, including hundreds of millions of kids and teens, now have an always-available online “friend” offering them constant validation. That may sound comforting, but it deprives young people of the emotional growth and interpersonal skills they need for real relationships.
AI companions are designed to monetize our relationships under the guise of trusted friendship, all the while mining, recording and expertly analyzing everything we say or type. Like high-tech tattletales, they can then feed this information into the broader data ecosystem, allowing marketers, advertisers and anyone else willing to pay to target and manipulate us in heretofore unimagined ways.
In January, Meta announced that it will program these chatbots with personalized “memories,” drawing on users’ interests, posts and even dietary preferences. As millions of Americans form emotional, political and even sexual attachments to AI companions, the promised comfort gives way to suffering and unhappiness. In March, research from the MIT Media Lab and OpenAI found that frequent use of AI chatbots correlated with “increased loneliness, emotional dependence, and reduced social interaction.”
As we increasingly depend on AI to understand the world, we open ourselves up to manipulation by entities that don’t have our best interests in mind. In 2025, news rating service NewsGuard uncovered a significant threat to AI systems: foreign disinformation campaigns targeting AI training data with deliberate falsehoods. The Russian-linked Pravda Network published 3.6 million articles in 2024 designed to manipulate AI responses and spread propaganda.
What happens to democracy when we offload our thinking to chatbots that are actively manipulated by foreign adversaries seeking disruption?
AI is here to stay. And it has the potential to improve our lives in remarkable ways, from curing diseases to ending poverty to achieving scientific breakthroughs and much more. To ensure AI serves us, rather than the other way around, there are several key steps to take right now.
First, transparency is paramount. Either voluntarily or via legislative mandate, large AI and social media companies like Meta, Google and OpenAI must disclose what data they’re collecting from us and who they’re sharing it with.
Nutrition labels on food help us make healthy choices by telling us whether something is high in sugar or cholesterol. Similarly, AI “nutrition labels” could tell us whether an AI system is known to exhibit a high degree of political bias, or how well it protects our privacy. Crucially, companies could then give everyone the ability to opt out of manipulative personalization.
Second, new regulations are required to protect kids, teens and users of all ages from the threats posed by “AI companions.” Legislation like California’s Senate Bill 243 can help prevent AI chatbots from employing addictive engagement techniques and can mandate protocols for responding to signs of user distress or suicidal ideation. This kind of targeted legislation deserves national adoption.
Third, new media literacy initiatives are vital. Studies show that teaching students how to spot disinformation can reduce its impact. Several state legislatures are already moving in this direction, incorporating media literacy into standard K-12 instruction. In the age of AI, critical thinking and media literacy ought to be as essential for students nationwide as reading and math.
AI is a powerful, double-edged sword. We can wield it responsibly and protect our kids, so long as we retain our ability to think independently, reason cogently and communicate authentically.
Mark Weinstein is a tech thought leader, privacy expert, and one of the inventors of social networking. He is the author of “Restoring Our Sanity Online: A Revolutionary Social Framework” (Wiley, 2025).