NAMI is helping bring clarity and safety to AI in mental health by evaluating chatbot responses, risks, and safeguards—guided by science and lived experience.
“My heart is broken,” said Mike, when he lost his friend Anne. “I feel like I’m losing the love of my life.” Mike’s feelings were real, but his companion was not. Anne was a chatbot — an artificial ...
A version of this essay first appeared on the website of the U.S. PIRG Education Fund. With the rise of ChatGPT and social media companies like Snapchat and Instagram integrating AI chatbots into ...
AI chatbots offer a different entry point. They are available at any time, require no appointments, and do not involve direct ...
Dr. McBain studies policies and technologies that serve vulnerable populations. On any given night, countless teenagers confide in artificial intelligence chatbots — sharing their loneliness, anxiety ...
People are reporting psychotic breaks after chatting with A.I. bots. Experts say companies must build guardrails to protect mental health. Users say human-like A.I. chatbots can worsen delusions and ...
In November, the Food and Drug Administration (FDA) held a Digital Health Advisory Committee meeting where it considered treating artificial intelligence mental health chatbots as medical devices. As ...
In the absence of stronger federal regulation, some states have begun regulating apps that offer AI “therapy” as more people turn to artificial intelligence for mental health advice. But the laws, all ...
Schools grappling with teen mental health problems face new challenges keeping their students safe in the age of artificial intelligence (AI). Studies show AI has been giving dangerous advice to ...