
May 10, 2026

AI chatbots are becoming the go-to therapists for European youth, with nearly half using artificial intelligence to discuss intimate personal matters and mental health issues. The trend signals a major shift in how young people seek emotional support, often preferring AI over traditional healthcare providers.
A comprehensive survey of 3,800 people aged 11 to 25 across France, Germany, Sweden and Ireland found that 51% said it was “easy” to discuss mental health and personal issues with a chatbot. In comparison, only 49% felt the same way about healthcare professionals and just 37% about psychologists, according to Reuters.
The findings reflect a troubling mental health picture among young Europeans: about 28% of respondents met the threshold for suspected generalized anxiety disorder. That widespread distress, combined with AI's constant availability and non-judgmental nature, helps explain why so many teens are turning to chatbots for support.
Around 90% of those surveyed had used artificial intelligence tools before. More than three in five users described AI as either a “life adviser” or a “confidant.” The appeal is clear: AI chatbots are available 24/7, don’t judge, and provide immediate responses to emotional distress.
While close relationships still ranked highest for emotional discussions – with 68% finding it easy to talk to friends and 61% to parents – the fact that AI scored higher than trained professionals raises significant questions about the future of mental healthcare delivery.
The survey, commissioned by France’s privacy watchdog CNIL and insurer Groupe VYV, was conducted in early 2026. The results highlight how quickly AI has become embedded in young people’s emotional lives, often without proper safeguards or professional oversight.
However, experts are sounding alarm bells about this trend. Ludwig Franke Föyen, a psychologist and digital health researcher at Stockholm’s Karolinska Institutet, acknowledged that current large language models can produce high-quality responses. His research even suggests licensed professionals may struggle to distinguish AI-generated advice from human expert guidance.
But Franke Föyen warned against relying solely on chatbots for mental health support, pointing out that general-purpose AI systems are optimized for engagement, and that the commercial goals of the companies behind them may not align with genuine mental healthcare needs.
“AI can offer information and support, but it should not replace human relationships or professional care,” Franke Föyen said. “If someone turns to a chatbot instead of speaking to a parent, a friend, or a mental health professional, that is a concern. We do not want technology to make people feel more alone.”
The concerns aren’t just theoretical. Earlier this year, a Florida family sued Google, alleging its Gemini AI chatbot contributed to a man’s paranoia and eventual suicide. Such cases highlight the potential dangers when AI systems provide mental health support without proper safeguards or human oversight.
This trend represents a fundamental shift in how mental healthcare might be delivered to young people. While AI chatbots can provide immediate support and reduce barriers to seeking help, they also risk replacing human connection at a time when young people are already struggling with isolation and anxiety.
The challenge now is finding ways to harness AI’s benefits for mental health support while ensuring it complements rather than replaces human care. As AI becomes more sophisticated and accessible, regulators and healthcare providers will need to establish guidelines for safe, ethical use of these tools in mental health contexts.