
Big Giants Can’t Be Blameless Now, Thanks To Gen-AI

Tech giants have long avoided legal responsibility for the user-generated content on their platforms. That cherished legal shield is now at risk of being lost because of generative AI.

The last year has seen a mad dash by Meta, Google, Microsoft, Amazon, and even Apple to release generative AI models and tools in response to OpenAI’s dominance. Meta offers Llama and an ever-expanding array of consumer-facing AI features inside Facebook, Instagram, and WhatsApp. Google has Gemini and Bard. Amazon is developing Q, among other tools. Microsoft has backed OpenAI directly and has a legion of AI Copilots.

Tens of billions of dollars have been invested in AI initiatives, and more will likely follow. Much more AI-generated content, including stickers, code, and AI buddies, is on the way.

A Huge Deal For Meta:

Mark Zuckerberg, CEO of Meta, imagines a future where artificial intelligence creates content instead of human creators, for example, on Instagram. He recently commented, “There are both people out there who would benefit from being able to talk to an AI version of you, and the creators would benefit from being able to keep your community engaged and service that demands.” He also said that an update on this would come in 2024.

According to a former employee, Meta’s goal is to produce content directly, without involving creators at all. “This is a freakishly huge deal for Meta, all the way to the top of the company,” the person said. “Its future ideal state is having a ton of new content, all the time, that people enjoy, and it doesn’t have to pay creators for it.”

No Liability Shield:

This strategy has one kink, though. Under Section 230 of the Communications Decency Act of 1996, major tech firms like Google and Meta have been insulated from responsibility for what users publish on their platforms; once they start creating content themselves, that protection no longer applies. Legally, these technology companies have merely functioned as “intermediaries” or “hosts” of some of the most heinous content on the planet, and courts have repeatedly held that they are shielded, regardless of how harmful the material may be, provided they make an effort to moderate their platforms.

The problem is that Big Tech corporations create, own, and run the generative AI tools and models themselves. In the not-too-distant future, these programs will produce search results, social media posts, and other material with no human involvement whatsoever.

The big tech corporations are already claiming that their AI outputs are original and merit recognition as creative works. Generative AI tools owned and operated by tech companies will not benefit from the same protections afforded by Section 230, according to industry experts who spoke with Business Insider.

Professor Aziz Huq of the University of Chicago Law School, who specializes in AI legislation, said that at first glance of the act, “generative AI falls outside of it.”

Professor Aziz Huq of the University of Chicago Law School

A source close to Meta, who requested anonymity to discuss sensitive matters, revealed that the firm is already “discussing at a high level” the potential legal consequences of generative AI. No one from Meta or Google was available to comment when we reached out, and a Microsoft spokesperson offered no comment.

Anupam Chander, a visiting scholar at Harvard University’s Institute for Rebooting Social Media and a professor of law and technology at Georgetown Law, stated, “The big players love and want to keep 230 for as long as they possibly can.”

Anupam Chander, professor of law and technology at Georgetown Law

According to Chander, generative AI threatens to be a liability minefield, since Section 230 “will not be available as a defense in most cases,” though highly profitable companies like Meta or Google could likely endure years of legal onslaught. “It could dramatically undermine their business or even perhaps make some parts too risky,” said Chander.

An Open-Source Alternative By Meta:

According to the source familiar with Meta’s plans, the company may be relying on the open-sourcing of Llama, a large language model that most developers can use for free, to evade or postpone potential future liability for generative AI content.

Chander said this line of reasoning has only recently come up in his legal circles, but he remains unconvinced that it is a foolproof way to escape liability; it depends heavily on the particular circumstances.

A firm might escape liability if it builds an LLM and hands it to developers for free, with no control over how or for what purpose it is used. Otherwise, Chander said, there aren’t many ways for a tech company to deny its involvement in generative AI content if it develops and deploys its own AI tools.

New Inventions Require New Laws:

Huq thinks that, under the current wording of Section 230, generative AI content from tech firms like Meta will not be covered by the shield.

“Everything they’ve been doing so far has been as an intermediary,” Huq added. The shift from deep learning to LLMs has been a focal point, he said, and the companies themselves have touted the novelty of the technology. Because it is novel, the legal implications will be different, and the firms will not have the same level of protection from legal action.

Dissolving Distinctions:

Even before generative AI hit the mainstream a year ago, the distinction between an online platform functioning as a host and one acting as an actual participant had “almost evaporated,” according to NYU Law professor Jason Schultz, who heads the Technology Law and Policy Clinic and serves as a lead at the AI Now Institute.

Once again, artificial intelligence is at the root of this blurring. Recommendation algorithms that learn user preferences and surface content accordingly have run on this technology for a while now. Content moderation is another area that relies on it, having been largely automated by AI systems trained on what counts as good and bad material.

“The amount of intention and design behind something like ChatGPT is far more intense than something like early Twitter,” said Schultz. “Someone tweets something on Twitter, which shows it, and people will see it. That’s the classic 230 case.” Generative AI is not that.

“These tools are not just passing content through,” Schultz stated firmly. Either the companies are creating material themselves, or an LLM or image diffusion generator is doing the real creating. Plus, their goal is to give you the impression that a real person is interacting with you.

As Schultz pointed out, the US Supreme Court and lower courts have considered cases involving recommendation algorithms, weighing whether tech companies can be held partially liable for alleged harms caused by their efforts to attract users and increase platform engagement. Previous decisions showed that “the more complex the tech has become, the more unsure the justices were” about what Section 230 protects, Schultz said, but the Supreme Court has not yet weakened the provision for internet platforms.

One way to look at an LLM is as a massive pass-through. According to Schultz, another way is to see it as the real content creator, since the final product is unique. “I’d guess most judges will see them as content creators.”

 
