Microsoft’s professional networking platform LinkedIn, which has around 15 million users in Australia and approximately a billion globally, has been sued by millions of its Premium customers who allege that the company disclosed their private messages to third parties to train generative AI models without their permission.
A proposed class action lawsuit filed in a California federal court alleges that LinkedIn quietly introduced a privacy setting last August that let users enable or disable the sharing of their personal data, Reuters reported.
LinkedIn then discreetly updated its privacy policy on September 18, 2024, to note that data could be used to train AI models, and said in a linked “Frequently Asked Questions” section that opting out “does not affect training that has already taken place.”
The lawsuit is brought on behalf of LinkedIn Premium customers who sent or received InMail messages and whose private information was allegedly disclosed to third parties for AI training before September 18, 2024.
In a statement, LinkedIn said: “These are false claims with no merit.”
The lawsuit contends that LinkedIn’s September 18 privacy policy update was an attempt to “cover its tracks,” that the company was “fully aware” it had violated customers’ privacy, and that it changed the policy only to minimize public scrutiny and legal fallout.
In a case that echoes some of the concerns raised in the latest lawsuit about LinkedIn’s handling of customer data, the company was fined A$505.8 million only a few months ago by Ireland’s Data Protection Commission (DPC) for illegally processing the personal data of users within the European Union in order to deliver targeted advertising.
The DPC was critical of LinkedIn, saying that the consent to process third-party data of its members “was not freely given, sufficiently informed or specific, or unambiguous.” It also stated that LinkedIn’s interests in processing the first-party personal data of its members “were overridden by the interests and fundamental rights and freedoms of data subjects.”