
Meta And IBM Develop ‘AI Alliance’ To Promote Open-Source AI

On Tuesday, Facebook’s parent company, Meta, and IBM launched the AI Alliance, a group advocating an “open-science” approach to AI development, setting it against rivals Google, Microsoft, and ChatGPT-maker OpenAI. Those three companies are not part of the organization, a split that raises its own legal and ethical questions. The two opposing camps, open and closed, disagree over whether AI should be built so that the technology underlying it is widely available. Safety is at the heart of the debate, but so is the question of who gets to profit from AI’s advances.

Darío Gil, senior vice president in charge of IBM’s research division, says open-source advocates favor an approach that is “not proprietary and closed.” “So, it’s not like a thing that is locked in a barrel, and no one knows what they are.”


[Image: Darío Gil, head of IBM Research. Credit: english.elpais.com]

The AI Alliance, led by IBM and Meta, also includes several universities and AI startups. AMD, Intel, Dell, Sony, and other companies are “coming together to articulate, simply put, that the future of AI is going to be built fundamentally on top of the open scientific exchange of ideas and open innovation, including open source and open technologies,” Gil told the Associated Press ahead of the announcement. The group is also likely to press officials to ensure that any legislation they enact works in its favor.

[Image: Meta and IBM create industrywide AI alliance to share technology. Credit: detroitnews.com]

On X, formerly Twitter, Meta’s chief AI scientist, Yann LeCun, expressed worry that fearmongering by his peers about AI “doomsday scenarios” was giving ammunition to those who want to restrict open-source research and development.

“In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them,” LeCun wrote. “Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture.”

For IBM, an early backer of the open-source Linux operating system in the 1990s, the debate is part of a much longer rivalry that predates the AI boom.

“It’s sort of a classic regulatory capture approach of trying to raise fears about open-source innovation,” said Chris Padilla, who leads IBM’s global government affairs team. “I mean, hasn’t this been the Microsoft model for decades? They have long been opposed to open-source apps that compete with Microsoft Windows or Office. They’re using a similar approach here.”

The term “open source” comes from a decades-old software development practice in which a program’s source code is freely available to anyone who wants to examine it, modify it, or build on it. Open-source AI, however, involves more than just code, and computer scientists disagree on how to define it, depending on which of the technology’s components are publicly available and whether there are any restrictions on its use. Some use “open science” to describe the broader philosophy.
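
For a minimal illustration of what that accessibility means in practice (the snippet below is a sketch of ours, not something from the Alliance), Python’s standard inspect module can display the source of any function in an open-source library, here Python’s own json module:

    import inspect
    import json

    # Because the json module is open source, its implementation can be read directly.
    print(inspect.getsource(json.dumps))

    # The same code can be copied, modified, and redistributed under the library's license.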

[Image: How open-source software solutions are profitable. Credit: orangemantra.com]

One source of confusion around open-source AI is that OpenAI, the company behind ChatGPT and the image generator DALL-E, builds AI systems that are decidedly closed, despite its name.

“To state the obvious, there are near-term and commercial incentives against open source,” said OpenAI co-founder and chief scientist Ilya Sutskever in an April video interview at Stanford University. He worried that a future AI system with “mind-bendingly powerful” capabilities would simply be too dangerous to release openly.

The example Sutskever offered of such a risk was an AI system that had figured out “how to construct its own biological laboratory.” Even today’s AI models pose risks, says UC Berkeley’s David Evan Harris, citing as one example their potential to supercharge disinformation campaigns aimed at undermining democratic elections.

“Open source is great in so many dimensions of technology,” Harris noted, “but AI is a little different.”

“Anyone who watched the movie Oppenheimer knows this: when big scientific discoveries are being made, there are lots of reasons to think twice about how broadly to share the details of all of that information in ways that could get into the wrong hands,” Harris said.

One group warning about the risks of open-source or compromised AI models is the Center for Humane Technology, a longtime critic of Meta’s social media practices.

“As long as there aren’t any guardrails in place right now, it’s just completely irresponsible to be deploying these models to the public,” said the group’s Camille Carlton.

A growing public debate has emerged over the benefits and risks of taking an open-source approach to AI development, though it was easy to overlook amid the uproar over Joe Biden’s sweeping executive order on AI.

The US president’s order described open models with the technical term “dual-use foundation models with widely available weights” and said they needed further study. Weights are the numerical values that shape how an AI model performs. When those weights are posted publicly online, the directive says, “there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model.” Biden gave Commerce Secretary Gina Raimondo until July to consult experts and deliver recommendations on how to manage the potential benefits and risks.
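
To make “widely available weights” concrete, here is a small illustrative sketch using PyTorch and a toy model of our own choosing (not any real foundation model): a model’s weights are just arrays of numbers that can be saved to a file, published, and loaded by anyone with the matching architecture.

    import torch
    import torch.nn as nn

    # A toy model: its "weights" are tensors of numbers learned during training.
    model = nn.Linear(4, 2)

    # Inspect the numerical parameters that determine the model's behavior.
    for name, tensor in model.state_dict().items():
        print(name, tuple(tensor.shape))

    # Releasing weights "openly" amounts to publishing a file of these numbers;
    # the filename here is purely illustrative.
    torch.save(model.state_dict(), "open_weights.pt")

    # Anyone who downloads the file can load it into the same architecture.
    restored = nn.Linear(4, 2)
    restored.load_state_dict(torch.load("open_weights.pt"))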

The European Union is taking more time to settle the question. In negotiations running into Wednesday night, officials hoping to clinch approval of world-leading AI rules were still debating a number of proposals, including one that could exempt certain “free and open-source AI components” from rules that apply to commercial models.

Editorial Staff
Editorial Staff at AI Surge is a dedicated team of experts led by Paul Robins, boasting a combined experience of over 7 years in Computer Science, AI, emerging technologies, and online publishing. Our commitment is to bring you authoritative insights into the forefront of artificial intelligence.