New questions about liability for AI creations

(Photo: AI-Generated)
A string of challenges to Section 230 — the law that shields online platforms from liability for user-generated content — has failed in the last several weeks.

Most recently, the Supreme Court declined Tuesday to review a suit about exploitative content on Reddit. But the debate over what responsibility tech companies have for harmful content is far from settled — and generative artificial intelligence tools like the ChatGPT chatbot could open a new line of questions.

Does it apply?
The law’s 1996 drafters told DealBook that it does not. “We set out to protect hosting,” said Sen. Ron Wyden, D-Ore. Platforms are immune only to suits about material created by others, not their own work. “If you are partly complicit in content creation, you don’t get the shield,” agreed Chris Cox, a former Republican representative from California. But they admit that these distinctions, which once seemed simple, are already becoming more difficult to make.

What about AI search engines?
Typically, search engines are considered vehicles for information rather than content creators, and search companies have benefited from Section 230 protection. Chatbots, by contrast, generate content, and they most likely fall outside that protection. But tech giants such as Microsoft and Google are integrating chat and search, complicating matters. “If some search engines start to look more like chat output, the lines will be blurred,” Wyden said.

A deadly recipe?
Generative AI tools have already been used to make intentionally harmful content. And hallucinations — the falsehoods that generative AI tools produce (such as citations to court cases that never existed) — are a significant problem. If a user prompts an AI for cocktail instructions and it offers a poisonous concoction, the algorithm operator’s liability is obvious, said Eric Goldman, a law professor at Santa Clara University and a Section 230 expert.

But most situations won’t be that clear-cut, and that poses a risk, Goldman said. He fears that anger over immunity for social media platforms threatens nuanced debate about the next generation of tech development.

“The blossoming of AI comes at one of the most precarious times amid a maturing tech backlash,” Goldman said. “We need some kind of immunity for people who make the tools,” he added. “Without it, we’re never going to see the full potential of AI.”


Jordan News