LinkedIn warns: you are responsible for sharing inaccurate content created by our AI

The creator takes no responsibility for its creation

TechSpot


A hot potato: Companies that offer generative AI tools tend to warn users that the generated content might be inaccurate. Microsoft's LinkedIn has a similar disclaimer, though it goes slightly further, warning that users who share this misinformation will be held responsible for it.

Microsoft recently updated its Service Agreement with a disclaimer emphasizing that its Assistive AI is not designed, intended, or to be used as a substitute for professional advice.

As reported by The Register, LinkedIn is updating its User Agreement with similar language. In a section that takes effect on November 20, 2024, the platform states that users might interact with features that automate content generation. This content might be inaccurate, incomplete, delayed, misleading, or not suitable for their purposes.

So far, so standard. But the next section is something we don't often see. LinkedIn states that users must review and edit the content its AI generates before sharing it with others. It adds that users are responsible for ensuring this AI-generated content complies with its Professional Community Policies, which include not sharing misleading information.

It seems somewhat hypocritical that LinkedIn strictly enforces policies against sharing fake or inauthentic content that its own tools can generate. Repeat violators of these policies risk account suspension or even termination.


The Reg asked LinkedIn whether it intends to hold users responsible for sharing AI content that violates its policies, even when the content was created by its own tools. A spokesperson sidestepped the question, saying the company is making an opt-out setting available for the training of AI models used for content generation in the countries where this takes place.

"We've always used some form of automation in LinkedIn products, and we've always been clear that users have the choice about how their data is used," the spokesperson continued. "The reality of where we're at today is a lot of people are looking for help to get that first draft of that resume, to help write the summary on their LinkedIn profile, to help craft messages to recruiters to get that next career opportunity. At the end of the day, people want that edge in their careers and what our GenAI services do is help give them that assist."

Another eyebrow-raising element in all this is that LinkedIn announced the upcoming changes on September 18, around the same time the platform revealed it had started harvesting user-generated content to train its AI without first asking people to opt in. The ensuing outcry and investigations led LinkedIn to announce that it would not enable AI training on data from users in the European Economic Area, Switzerland, and the UK until further notice. Those in the US still have to opt out.