To address user concerns about how language models use their personal data, Meta is adding new controls that allow users to exclude their information from the training of its AI. By filling out a form, it is now possible to "report problems or send objections" regarding the processing of third-party data "used to train" its generative AI models.
This means that if you fear your information may be at risk, you finally have a way to keep it safe.
Meanwhile, in an effort to address its users' concerns, Meta has added to its Privacy Center a detailed overview of how generative AI models are trained and the role that user data can play in that process. The company discloses that it uses "a combination of sources," including "publicly available information online and licensed information, as well as information related to Meta products and services," to train its large language models. Sensitive information, it says, is safeguarded by "a robust internal privacy review process."
But if Meta's reassurances are not enough to convince users to hand their data over to artificial intelligence, there is now finally a way to prevent it. This is an important update for the company, in line with the European rules established by the Digital Services Act (DSA), which require it to give users greater control over their personal data and how online platforms use it.