WhatsApp bans AI companies from using platform for chatbot training

WhatsApp has introduced a significant policy update to its Business Solution Terms, effectively barring artificial intelligence (AI) companies from using its platform for developing or training AI technologies, including large language models, generative AI systems, and digital assistants.
The new rules, set to take effect on January 15, 2026, aim to curb the growing use of WhatsApp’s infrastructure by AI providers.
The updated terms explicitly prohibit AI developers from accessing or utilizing WhatsApp’s Business Solution tools to power or enhance their technologies. This move primarily targets general-purpose AI chat assistants, such as ChatGPT, which have increasingly leveraged WhatsApp’s Business API. Originally designed to facilitate business-to-customer communication—such as order confirmations, customer support, and service updates—the API has become a critical revenue stream for Meta, WhatsApp’s parent company. However, AI-driven non-business traffic has placed a significant technical burden on the platform without generating corresponding revenue.
Under the new policy, AI companies are forbidden from using WhatsApp data—whether directly or through third parties—for improving chatbots or other AI tools. This restriction extends to anonymized or aggregated data. Businesses employing AI providers for operational purposes are also prohibited from sharing WhatsApp data for AI training. A limited exception allows businesses to use their own WhatsApp data to fine-tune private AI models for internal use, but this data cannot be used to enhance public AI systems.
WhatsApp has warned that violations of these rules could lead to account termination, signaling strict enforcement of the new policy.
The decision comes as concerns mount over the use of user data in AI development. With AI models increasingly reliant on vast amounts of user-generated content, platforms like WhatsApp are taking steps to safeguard user information and maintain control over their ecosystems. Industry experts describe this as a reflection of the growing tension between social platforms, which hold valuable user data, and AI developers seeking access to it. “This change highlights the intensifying debate as AI technology evolves rapidly,” one expert noted.
Meta’s move underscores its commitment to prioritizing business-to-customer communication on WhatsApp while balancing the rise of AI with the protection of its core services. As the AI landscape continues to evolve, this policy marks a significant step in regulating how platforms are used for AI experimentation. (ILKHA)