LinkedIn Using User Data To Train AI Models Without Clear Consent

Image by Airam Dato-on, from Pexels

In a Rush? Here are the Quick Facts!

  • LinkedIn used U.S. user data for AI without clear notice.
  • An opt-out existed, but LinkedIn didn’t update its privacy policy initially.
  • LinkedIn’s delayed update reflects growing global concerns about AI data use.

LinkedIn, the professional networking platform, has faced criticism for using user data to train its AI models without explicitly informing users beforehand.

LinkedIn users in the U.S. — but not those in the EU, EEA, or Switzerland, likely due to stricter data privacy laws — have an opt-out toggle in their settings that reveals LinkedIn collects personal data to train “content creation AI models.”

While the toggle itself is not new, LinkedIn initially failed to update its privacy policy to reflect this data usage, as first reported by 404 Media.

On a help page, LinkedIn explains that its generative AI models power features such as its writing assistant.

Users can opt out of having their data used for AI training by navigating to the “Data for Generative AI Improvement” section under the Data privacy tab in their account settings.

Turning off the toggle will stop LinkedIn from using personal data for future AI model training, though it does not undo training that has already occurred.

The terms of service have since been updated, as reported by TechCrunch, but such updates typically happen well in advance of significant changes like repurposing user data.

This approach usually allows users to adjust their settings or leave the platform if they disagree with the changes. This time, however, that wasn’t the case.

The episode comes amid intensifying global scrutiny of how AI systems process personal data.

A recent study from MIT revealed that a growing number of websites are restricting the use of their data for AI training.
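
In practice, websites often express such restrictions in a robots.txt file, which tells automated crawlers which paths they may fetch. The sketch below is illustrative only: GPTBot (OpenAI) and CCBot (Common Crawl) are real crawler user agents, but whether the MIT study counted these particular bots is an assumption here.

    # robots.txt — block common AI training crawlers while leaving
    # the site open to other bots (illustrative example)
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

A directive like this asks a crawler not to fetch any page on the site; compliance is voluntary, which is part of why data owners are pushing for stronger controls.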

Additionally, Ireland's Data Protection Commission (DPC) recently concluded legal proceedings against X over its AI tool after the company agreed to comply with earlier restrictions on using EU/EEA user data for AI training.

The incident underscores the growing importance of transparency and user consent in the development of AI technologies.

As AI continues to advance, companies will be expected to be upfront about their data practices and to obtain explicit permission before using personal information for training.

It also points to mounting tension between AI companies and data owners, with more and more organizations demanding greater control over how their data is used.
