
Episode 1842 - May 16 - English - The AI Industry Is Steaming Toward a Legal Iceberg - Vina Technology at AI time - Công nghệ Việt Nam thời AI


The AI Industry Is Steaming Toward A Legal Iceberg

[The term "Legal Iceberg" metaphorically illustrates the hidden risks and potential legal challenges lurking beneath the surface of an industry or a particular situation, much like the majority of an iceberg lies submerged underwater. When applied to the AI industry, it suggests that while there may be visible advancements and progress, there are significant legal implications and complexities that may not be immediately apparent but have the potential to cause serious problems or obstacles.

In the context of the AI industry, this could encompass various legal concerns such as data privacy, intellectual property rights, liability issues, bias and fairness in algorithms, regulatory compliance, and ethical considerations. Just as a ship can collide with an iceberg if its presence is not adequately recognized and navigated around, the AI industry may encounter legal challenges and consequences if these underlying legal issues are not addressed proactively and effectively.]

Legal scholars, lawmakers and at least one Supreme Court justice agree that companies will be liable for the things their AIs say and do—and that the lawsuits are just beginning.

By Christopher Mims. WSJ. March 29, 2024.

If your company uses AI to produce content, make decisions, or influence the lives of others, it’s likely you will be liable for whatever it does—especially when it makes a mistake.

This also applies to big tech companies rolling out chat-based AIs to the public, including Google and Microsoft, as well as well-funded startups like Anthropic and OpenAI.

“If in the coming years we wind up using AI the way most commentators expect, by leaning on it to outsource a lot of our content and judgment calls, I don’t think companies will be able to escape some form of liability,” says Jane Bambauer, a law professor at the University of Florida who has written about these issues.

The implications of this are momentous. Every company that uses generative AI could be responsible under laws that govern liability for harmful speech, and laws governing liability for defective products—since today’s AIs are both creators of speech and products. Some legal experts say this may create a flood of lawsuits for companies of all sizes.

It is already clear that the consequences of artificial intelligence output may go well beyond a threat to companies’ reputations. Concerns about future liability also help explain why companies are manipulating their systems behind the scenes to avoid problematic outputs—for example, when Google’s Gemini came across as too “woke.” It also may be a driver of the industry’s efforts to reduce “hallucinations,” the term for when generative AIs make stuff up.

The legal logic is straightforward. Section 230 of the Communications Decency Act of 1996 has long protected internet platforms from being held liable for the things we say on them. (In short, if you say something defamatory about your neighbor on Facebook, they can sue you, but not Meta.) This law was foundational to the development of the early internet and is, arguably, one reason that many of today’s biggest tech companies grew in the U.S., and not elsewhere.

But Section 230 doesn’t cover speech that a company’s AI generates, says Graham Ryan, a litigator at Jones Walker who will soon be publishing a paper in the Harvard Journal of Law and Technology on the topic. “Generative AI is the wild west when it comes to legal risk for internet technology companies, unlike any other time in the history of the internet since its inception,” he adds.

I spoke with several legal experts across the ideological spectrum, and none expect that Section 230 will protect companies from lawsuits over the outputs of generative AI, which now include not just text but also images, music and video.

And the list of potential defendants is far broader than a handful of big tech companies. Companies t
