- cross-posted to:
- technology@lemmy.ml
Source: https://front-end.social/@fox/110846484782705013
Text in the screenshot from Grammarly says:
We develop data sets to train our algorithms so that we can improve the services we provide to customers like you. We have devoted significant time and resources to developing methods to ensure that these data sets are anonymized and de-identified.
To develop these data sets, we sample snippets of text at random, disassociate them from a user’s account, and then use a variety of different methods to strip the text of identifying information (such as identifiers, contact details, addresses, etc.). Only then do we use the snippets to train our algorithms, and the original text is deleted. In other words, we don’t store any text in a manner that can be associated with your account or used to identify you or anyone else.
We currently offer a feature that permits customers to opt out of this use for Grammarly Business teams of 500 users or more. Please let me know if you might be interested in a license of this size, and I’ll forward your request to the corresponding team.
You raise a very good point: people can’t really be mad that companies use data they made public to train their AI. It’s public; people can do whatever they want with it. We really need to teach people to be more careful with what they post online.
But I’m wondering, is there a default license for data posted on lemmy?
That would imply ownership and agency over the retention of our data, which federation fundamentally cannot guarantee. An instance in the Fediverse can only guarantee the right to be forgotten on its own servers. I could see this becoming a big regulatory problem as the Fediverse grows. We’re already seeing regulatory issues with CSAM, for example.
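To illustrate why deletion can’t be guaranteed across federation: under ActivityPub, the origin instance can only send a `Delete` activity to remote inboxes that previously received the post; nothing in the protocol forces those instances to honor it. A minimal sketch of such a payload (the actor and object URLs are hypothetical):

```python
import json

# Hypothetical ActivityPub Delete activity. The origin instance asks
# remote instances to remove a post, but compliance is voluntary --
# a remote server can simply keep its cached copy.
delete_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Delete",
    "actor": "https://example.social/users/alice",   # hypothetical actor URL
    "object": "https://example.social/posts/12345",  # hypothetical post URL
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
}

# Serialized form, as it would be POSTed to each remote inbox.
payload = json.dumps(delete_activity)
print(payload)
```

The asymmetry is the point: the sender can prove it *asked* for deletion, but it has no way to verify (or compel) that every federated copy is actually gone.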