LinkedIn: If our AI gets something wrong, that's your problem

Artificial intelligence still no substitute for the real thing

The Register

Microsoft's LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that's inaccurate or misleading.

LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon.

LinkedIn, however, has taken its denial of responsibility a step further: it will hold users responsible for sharing any policy-violating misinformation created by its own AI tools.

The relevant passage, which takes effect on November 20, 2024, reads:

Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes. Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.

In short, LinkedIn will provide features that can produce automated content, but that content may be inaccurate. Users are expected to review and correct any false information before sharing it, because LinkedIn disclaims responsibility for the consequences.

The platform's Professional Community Policies direct users to "share information that is real and authentic" – a standard to which LinkedIn is not holding its own tools.

Asked to explain whether the intent of LinkedIn's policy is to hold users responsible for policy-violating content generated with the company's own generative AI tools, a spokesperson chose to address a different question: "We believe that our members should have the ability to exercise control over their data, which is why we are making available an opt-out setting for training AI models used for content generation in the countries where we do this.

"We've always used some form of automation in LinkedIn products, and we've always been clear that users have the choice about how their data is used. The reality of where we're at today is a lot of people are looking for help to get that first draft of that resume, to help write the summary on their LinkedIn profile, to help craft messages to recruiters to get that next career opportunity. At the end of the day, people want that edge in their careers and what our GenAI services do is help give them that assist."

The business-oriented social networking site announced the pending changes on September 18, 2024 – around the time it also disclosed that it had begun harvesting user posts to train AI models, without prior consent.

The fact that LinkedIn began doing so by default – requiring users to opt out of feeding the AI beast – didn't go over well with the UK's Information Commissioner's Office (ICO), which subsequently won a reprieve for those in the UK. A few days later, LinkedIn said it would not enable AI training on member data from the European Economic Area, Switzerland, and the UK until further notice.

In the laissez-faire US, LinkedIn users have had to find the appropriate privacy control to opt out.

The consequences for violating LinkedIn's policies vary with the severity of the infraction. Punishment may involve limiting the visibility of content, labeling it, or removing it. Account suspensions are possible for repeat offenders, and one-shot account removal is reserved for the most egregious violations.

LinkedIn has not specified which of its features might spawn suspect AI content. But prior promotions of its AI-enhanced services may provide some guidance. LinkedIn Recruiter uses AI to generate personalized InMail messages based on candidate profiles. The platform also lets recruiters enhance job descriptions with AI. It provides users with AI writing help for their About and Headline sections. And it attempts to get people to contribute to "Collaborative articles" for free by presenting them with an AI-generated question.

Salespeople also have access to LinkedIn's AI-assisted search and Account IQ, which help them find sales prospects.

Asked to comment on LinkedIn's disavowal of responsibility for its generative AI tools, Kit Walsh, senior staff attorney at the Electronic Frontier Foundation, said, "It's good to see LinkedIn acknowledging that language models are prone to generating falsehoods and repeating misinformation. The fact that these language models are not reliable sources of truth should be front-and-center in the user experience so that people don't make the understandable mistake of relying on them.

"It's generally true that the people choosing to publish a specific statement are responsible for what it says, but you're not wrong to point out the tension between lofty claims of the power of language models versus language like this in user agreements protecting companies from the consequences of how unreliable the tools are when it comes to the truth." ®