
WeTransfer Responds to User Concerns: No AI Training with Your Files!
2025-07-15
Author: Wei Ling
Moving to quell rising user concerns, WeTransfer has confirmed that it does not use files uploaded to its platform to train artificial intelligence (AI) models.
The file-sharing giant faced significant backlash after modifying its terms of service, which many users interpreted as granting the company permission to exploit their uploaded files for AI development. This sparked a wave of criticism across social media.
A spokesperson for WeTransfer clarified to BBC News: 'We don't use machine learning or any form of AI to process content shared via WeTransfer, nor do we sell content or data to any third parties.' This statement seeks to reassure users that their files remain safe and private.
To further ease concerns, WeTransfer has revamped its terms, simplifying the language to eliminate ambiguity. The original terms included a clause suggesting that AI could be used to improve content moderation and identify harmful material; that wording led many to fear their creative work could be sold to AI companies.
Users, particularly those in creative fields, voiced their frustration online, with many threatening to switch to alternative services if their files weren't protected. WeTransfer responded by updating the contested Clause 6.3 of its terms to clarify: 'You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.'
The updated terms take effect on August 8 for existing users, a move aimed at restoring confidence among the company's customer base.
WeTransfer isn't alone: Dropbox faced a similar backlash over its data-usage policies in December 2023.
As tech giants rush to capitalize on the AI boom, users are becoming increasingly wary. Data protection specialist Mona Schroedel cautions that changes to terms of service often carry hidden risks, warning that companies are eager to harvest user data for machine learning under the guise of service improvement.
The trust deficit between tech companies and their users seems to be widening, with consumers now more vigilant than ever about how their data might be used.