Meta introduces parental controls for AI chatbots amid rising concerns over child safety

In a move that underscores growing global concern over the impact of artificial intelligence on young users, Meta has announced the rollout of enhanced parental control features for its AI-powered chatbots, including the widely used Meta AI Chatbot.
The new initiative aims to give parents greater oversight over how children and teenagers engage with AI systems across Meta’s platforms, such as Instagram, Messenger, and WhatsApp.
The announcement follows months of increasing regulatory pressure from the U.S. Federal Trade Commission (FTC) and mounting public debate about the psychological, social, and ethical implications of exposing minors to advanced AI technologies. In recent months, several advocacy groups in the United States and Europe have urged big tech companies to establish clear safety frameworks for AI tools accessible to children.
With AI chatbots increasingly integrated into daily life — from homework help and gaming to mental health advice and digital companionship — younger audiences have become frequent users. However, experts have raised concerns about potential risks, including exposure to inappropriate content, data privacy violations, manipulation, and cyberbullying.
Meta stated that the new system will allow parents to monitor and limit conversations, set time usage restrictions, and filter sensitive topics that may not be suitable for minors. The parental dashboard will also include real-time alerts if the AI chatbot engages in discussions related to personal information or risky behavior.
The FTC has recently intensified investigations into how tech companies use AI with minors, emphasizing transparency and ethical data use. Earlier this year, the agency warned major firms, including Meta, OpenAI, and Google, that AI chatbots should not be marketed to minors without strict age verification and parental consent mechanisms.
In Europe, similar efforts are underway. The EU’s Artificial Intelligence Act, expected to take full effect in 2026, includes specific provisions aimed at protecting children from manipulative AI interactions. Meanwhile, the UK’s Online Safety Act also compels tech companies to shield minors from harmful digital content and provide parents with more control.
Meta’s spokesperson said the new tools are part of a “long-term strategy to build trust, transparency, and responsible AI engagement.” The company confirmed that early trials of the parental controls will begin in the United States and Canada by the end of 2025, followed by expansion to other regions in early 2026.
Experts view the development as a necessary, though overdue, step in safeguarding digital childhoods. As AI continues to evolve and mimic human-like interactions, ensuring safe engagement for younger users remains a central challenge — one that could define the next era of technological regulation. (ILKHA)