Meta will soon begin training its AI models with EU users' data
Meta AI will be trained on users' interactions and public content posted on Meta's social platforms
The tech giant resumes its AI training plan after pausing the rollout amid concerns from EU data regulators
Meta has resumed its plan to train its AI models with EU users' data, the company announced on Monday, April 14, 2025.
All public posts and comments shared by adults across Meta's social platforms will soon be used to train Meta AI, along with the interactions users exchange directly with the chatbot.
This comes after the Big Tech giant launched Meta AI in the EU in March, almost a year after the company paused the rollout amid growing concerns from EU data regulators.
“We believe that we have a responsibility to build AI that is not only available to Europeans but is built for them. That is why it is so important for our generative AI models to be trained on a variety of data, so they can understand the incredible and diverse nuances and complexities that make up European communities,” Meta wrote in its official announcement.
This kind of training, the company notes, is not unique to Meta or to Europe. Meta AI collects and processes the same information across all regions where it is available.
As mentioned earlier, Meta AI will be trained on public posts and interaction data from adult users only. Public data from the accounts of people in the EU under the age of 18 will not be used for training purposes.
Meta also promises that people's private messages shared on Messenger and WhatsApp will never be used for AI training purposes.
Starting this week, Meta users in the EU will begin receiving notifications about the new AI training terms, either in-app or via email.
These notifications will include a link to a form where people can object to their data being used to train Meta AI.
“We have made this objection form easy to find, read and use, and we will honor all objection forms we have already received, as well as newly submitted ones,” the company explains.
It is important to understand that once your data is fed into an LLM's training set, you effectively lose control of it, as these systems make it very difficult (if not impossible) to exercise the GDPR's right to be forgotten.
This is why privacy experts such as Proton, the provider behind some of the best VPN and encrypted email apps, encourage people in Europe who are concerned about their privacy to opt out of Meta AI training.
“We recommend filling out this form when it is sent to you to protect your privacy. It is hard to predict what this data could be used for in the future – better safe than sorry,” Proton wrote in a LinkedIn post.
Meta's announcement comes at the same time that the Irish data regulator has opened an investigation into X's Grok AI. Specifically, the inquiry seeks to determine whether Elon Musk's platform uses publicly available X posts to train its generative AI models in accordance with GDPR rules.