- WeTransfer users were furious after an updated terms-of-service clause suggested their data would be used to train AI models.
- The company moved quickly to assure users that it does not use uploaded content for AI training.
- WeTransfer has rewritten the clause in clearer language.
File-sharing platform WeTransfer spent a hectic day reassuring users that it has no intention of using uploaded files to train AI models, after an update to its terms of service suggested that anything sent through the platform could be used to build or improve machine learning tools.
The offending language buried in the ToS said that using WeTransfer gave the company the right to use data "for operation, development, commercialization and improvement of the service or new technologies or services, including to improve the performance of machine learning models that improve our content moderation process, in accordance with privacy and cookie policy."
The machine learning reference and the broadly worded text seemed to suggest that WeTransfer could do whatever it wanted with your data, without specific safeguards or clarifying qualifications to allay suspicion.
Perhaps understandably, many WeTransfer users, a group that includes plenty of creative professionals, were upset at what this seemed to imply. Many posted about plans to switch from WeTransfer to other services in the same vein. Others began warning that people should encrypt files or fall back on old-fashioned physical delivery methods.
"Time to stop using @wetransfer, which from August 8 has decided that they own everything you transfer to power AI" pic.twitter.com/syr1jnmemx — July 15, 2025
WeTransfer noticed the growing furor around the language and rushed to put out the fire. The company rewrote the section of the ToS and shared a blog post explaining the confusion, repeatedly promising that no user data would be used without permission, especially not for AI models.
"From your feedback we understood that it may have been unclear that you maintain ownership and control over your content. We have since updated the terms further to make them easier to understand," WeTransfer wrote in the blog post. "We have also removed mention of machine learning as it is not something WeTransfer uses in connection with customer content and may have caused some fear."
While still retaining a standard license for operating and improving WeTransfer, the new text omits references to machine learning and instead focuses on the familiar scope needed to run and improve the platform.
Clarified privacy
If this feels a bit like déjà vu, it's because something very similar happened a year and a half ago with another file-transfer platform, Dropbox. A change in the company's fine print suggested that Dropbox was taking content uploaded by users to train AI models. Public outcry led Dropbox to apologize for the confusion and fix the offending boilerplate.
The fact that it happened again in such a similar way is interesting not because of the clumsy legal language software companies use, but because it reveals a knee-jerk distrust of these companies to protect your information. Assuming the worst has become the default response to uncertainty, and companies now have to make extra efforts to ease those tensions.
The sensitivity of creative professionals to even the appearance of data misuse is understandable. In an era when tools such as DALL·E, Midjourney, and ChatGPT are trained on the work of artists, writers, and musicians, the stakes are very real. Artists' lawsuits and boycotts over how their creations are used, not to mention suspicion over how companies handle data, mean the kind of assurances WeTransfer offered are likely something tech companies will want in place early, so they don't face misplaced anger from their customers.