WeTransfer recently updated its terms of service, and the new wording suggested that your files could be used to train AI without your explicit consent. This understandably raised concerns. The passage has since been removed following strong feedback from users.
What exactly happened?
It all started with a change to the fine print. The new terms included a clause allowing user data to be used for AI purposes. Not necessarily illegal, but vaguely worded. And when it comes to sensitive data, clarity is crucial.
Backlash and criticism
The update was quickly picked up by tech media and legal experts. They pointed out the implications of vague AI terms for ownership and privacy. Users also voiced their concerns on social media. After all, if you upload your files to send them quickly, you do not expect a back door into an AI model.
Response from WeTransfer
WeTransfer stated that it never intended to simply use customer data for AI training. According to the company, the wording was too broad and confusing. The text has since been updated: the new version makes clear that you remain the owner of your files and that AI is not involved without your consent.
What has been changed now?
The revised terms include an explicit statement: no AI training on files unless you give prior consent. They also reaffirm that all rights to your files remain with you. With this change, the company takes a step towards greater transparency and user trust.
What does this mean for you?
As a user, you do not need to worry about unknowingly contributing to third-party AI models. Moreover, this example highlights the importance of clear communication about digital rights. Anyone working with sensitive documents relies on clear agreements, not vague wording.