
Legal Experts Engage in Defense Amidst Surge of AI Lawsuits

Since the launch of OpenAI’s ChatGPT in November 2022, generative artificial intelligence has ignited debate over how the technology should be developed, deployed, and regulated. That debate has, in turn, triggered a surge in litigation and advocacy, thrusting legal expertise to the forefront of the conversation.

Over the past year, writers, musicians, visual artists, and software developers have filed copyright infringement claims and commercial disputes in various courts, targeting not only OpenAI but also rival AI startups.

Comedian Sarah Silverman and novelist John Grisham are among those alleging that OpenAI used copyrighted material to train its language models without permission.

Programmers, similarly, allege that Microsoft, its GitHub subsidiary, and OpenAI released AI coding tools without addressing legal requirements such as attribution, copyright notices, and license terms.

Image: Sarah Silverman

Universal Music, a global music powerhouse, has taken legal action against Anthropic, a rival of OpenAI, asserting that Anthropic’s AI-based platform, Claude, reproduces copyrighted lyrics nearly verbatim.

Visual artists have sued AI ventures including Stability AI, Midjourney, and DeviantArt for alleged copyright infringement, claiming the companies trained AI models on their work and artistic styles without consent, credit, or compensation.

In answering these copyright claims, some AI developers in the US have invoked the fair use doctrine, the same defense Google used successfully in 2015 against the Authors Guild’s claim that its online book-search function infringed copyright.

To keep copyright concerns from discouraging potential business customers, Microsoft has pledged to cover legal costs for commercial clients sued over their use of its AI tools or AI-generated outputs.


Danny Tobey, who leads DLA Piper’s AI-focused practice group, represents developers such as OpenAI before regulators, lawmakers, and courts. Tobey says commitments like Microsoft’s are astute business strategy, building confidence in new technologies to encourage widespread adoption.

In court, Tobey has defended OpenAI in defamation cases, including a lawsuit filed by radio host Mark Walters, who alleges that ChatGPT fabricated a legal complaint falsely accusing him of embezzlement. Walters is represented by John Monroe, a lawyer in Dawsonville, Georgia.

Image credit: https://twitter.com/dannytobey

In the latest defamation suit, aerospace author Jeffrey Battle has targeted Microsoft’s AI-assisted Bing, alleging that it falsely conflated him with a convicted felon. The action seeks to treat Bing as the publisher or speaker of the information it provides, notes Eugene Volokh, a law professor at the University of California, Los Angeles.

Beyond copyright infringement and defamation, legal threats for AI developers and users are expanding to include questions of safety and accountability, according to Tobey. His firm has played a central role in presenting OpenAI’s views to Congress on AI regulation amid the uncertainty surrounding the industry.

Tobey emphasizes the transformative potential of generative AI built on large language models, calling them “the Dictaphone for everything on Earth.” Prompted by voice or text, these tools can handle tasks ranging from planning a holiday to answering health questions, without relying on traditional human gatekeepers.

DLA Piper’s multidisciplinary team, comprising forensic lawyers, data analysts, science experts, and subject matter experts, aids AI-assisted tool developers in testing for accountability, mitigating discrimination and bias risks, and ensuring statutory compliance. The team also collaborates with Fortune 500 companies to establish legal guardrails, encompassing policies, procedures, controls, monitoring, and feedback loops.

Image credit: https://twitter.com/DLA_Piper

Lawyers at Morrison Foerster, another firm deeply involved in AI-related work, expect AI-assisted tools to reshape legal roles. Alongside the potential for disruption, they foresee new challenges, such as deepfakes offered as evidence, which will demand professionals skilled in authenticating it.

David Cohen, chair of Reed Smith’s records and e-discovery group, anticipates discovery battles as plaintiffs seek evidence about the liability of owners or developers of AI-assisted tools in various claims, including car accidents and employment discrimination. He envisions AI transforming e-discovery by allowing litigants to utilize a generative AI system for document analysis, eliminating inefficiencies in the current discovery process.

Image: eDiscovery Leaders Live: David Cohen of Reed Smith (credit: https://resource.revealdata.com/)
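
Cohen’s picture of AI-assisted document review can be imagined with a minimal sketch, shown below. The llm_complete function is a hypothetical placeholder for whatever generative AI service a litigant might use; no particular product or API is implied, and any document flagged this way would still need review by a human attorney.

# A minimal sketch of the first-pass document review Cohen describes:
# a generative model flags documents that look responsive to a discovery
# request. llm_complete is a hypothetical stand-in for an LLM service.

from typing import Dict, List


def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative AI service."""
    raise NotImplementedError("Connect this to an LLM provider of your choice.")


def review_documents(documents: List[Dict[str, str]], request: str) -> List[Dict[str, str]]:
    """Ask the model whether each document is responsive to a discovery request.

    The answer is only a triage signal; flagged documents still require
    human review before production.
    """
    responsive = []
    for doc in documents:
        prompt = (
            f"Discovery request: {request}\n\n"
            f"Document {doc['id']}:\n{doc['text']}\n\n"
            "Answer YES or NO: is this document responsive to the request?"
        )
        if llm_complete(prompt).strip().upper().startswith("YES"):
            responsive.append(doc)
    return responsive


# Illustrative usage:
# hits = review_documents(
#     [{"id": "DOC-001", "text": "..."}],
#     "All communications concerning the 2023 product recall.",
# )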

As the legal landscape evolves, litigators must grapple with the challenges AI tools pose, from defamation lawsuits to evidence authentication, and find innovative ways to adapt to the changing dynamics of the legal profession.

Editorial Staff
Editorial Staff at AI Surge is a dedicated team of experts led by Paul Robins, boasting a combined experience of over 7 years in Computer Science, AI, emerging technologies, and online publishing. Our commitment is to bring you authoritative insights into the forefront of artificial intelligence.