LinkedIn incorporates UK members’ profiles to enhance AI training


LinkedIn, the Microsoft-owned professional networking platform, will begin using the public-facing profiles, posts, resumes, and activity of its UK members to train its generative AI models starting November 3. The change, disclosed in updated terms of service, excludes private messages but covers most publicly available data on the platform.

The company describes the move as a step to "enhance your experience" and "better connect our members to opportunities." LinkedIn aims to use the data to improve features such as recruitment suggestions and AI-assisted content creation.

Opt-out Option Raises Concerns

LinkedIn users can opt out of this data usage through the platform's "data for generative AI improvement" setting. However, the opt-out applies only to future data: any information already used in training will remain part of the models. Privacy advocates argue that this approach places the burden on users to protect their information, rather than making the process explicitly opt-in.

For users, this change introduces a potential blind spot: professional posts, resumes, and comments, which often contain sensitive personal details, may be incorporated into AI systems without their full awareness. A source close to the matter clarified that data from users under the age of 18, along with private messages, is automatically excluded from this process. Additionally, user feedback, such as thumbs-up or thumbs-down ratings on AI-generated suggestions, will be used to improve accuracy and reduce potential harm.

Balancing Innovation and Privacy

The use of publicly shared data to train AI models reflects a broader trend among tech companies, including Meta, which resumed similar practices in the UK after regulatory pauses earlier this year. While LinkedIn promotes the initiative as a way to enhance professional networking and career opportunities, it also underscores a tension between innovation and privacy.

This development comes amidst a post-Brexit regulatory environment that allows more flexibility compared to the European Union’s stricter General Data Protection Regulation (GDPR). However, some experts caution that the lack of an opt-in approach or proactive notifications could undermine user trust.

Addressing Safety and Transparency

LinkedIn has emphasized its commitment to responsible AI use, highlighting that generative AI models are being developed to improve not only user experience but also platform safety. The company states that AI helps detect harmful content and reduces errors in recommendations.

A source close to the platform emphasized that training methods and feedback loops have been designed to minimize risks to users. However, critics argue that relying on "legitimate interest" as the basis for processing user data feels misaligned with expectations of professional privacy within a social networking context.

Broader Implications

While LinkedIn frames this move as a way to deliver more personalized and efficient tools for users, the broader implications of default data inclusion bring questions of consent, governance, and trust to the forefront. As professional and personal information becomes increasingly integrated into AI systems, the responsibility to ensure clarity and uphold privacy standards remains critical.

For LinkedIn’s UK members, this update marks a significant moment in navigating the intersection of generative AI and professional networking. The platform’s assurances of improved opportunities will undoubtedly be judged against the privacy concerns raised by this shift in practice.
