Wednesday, July 24, 2024

The UK Government Will Publish ‘Tests’ for New AI Laws

Leading AI companies agreed in November to allow the UK’s AI Safety Institute to assess the safety of advanced models underpinning products like ChatGPT before distributing them to businesses and consumers.


While continuing to resist a tighter regulatory environment for the rapidly emerging technology, the UK government is preparing to publish the conditions that would have to be met before it introduces new legislation on artificial intelligence.

According to several people familiar with the planned move, British officials are set to publish guidelines in the coming weeks setting out the circumstances under which they would impose limits on powerful AI models developed by leading companies such as OpenAI and Google.

One of the “key tests” that would trigger action is if the systems put in place by the UK’s new AI Safety Institute — a government body staffed by academics and machine learning experts — failed to uncover risks associated with the technology. Another trigger for regulation would be AI companies failing to honour their voluntary commitments to avoid harm.

The UK’s cautious approach to regulating the sector contrasts with developments elsewhere. The EU has agreed a sweeping “AI Act” that imposes strict new constraints on prominent AI companies developing “high-risk” technology.


US President Joe Biden has issued an executive order requiring AI companies to disclose how they are addressing risks to national security and consumer privacy. Meanwhile, China has issued detailed rules on AI development, with an emphasis on content control.

The United Kingdom has said it will refrain from introducing dedicated AI legislation in the near term in favour of a light-touch approach, citing concerns that strict regulation would stifle industry growth.

The new tests will be included in the government’s response to a consultation on its AI white paper, published in March, which proposed dividing responsibility for AI regulation among existing agencies such as Ofcom and the Financial Conduct Authority.


At the first global AI Safety Summit, hosted by the UK government in November, leading AI companies including OpenAI, Google DeepMind, Microsoft, and Meta signed a series of voluntary agreements on the safety of their products.

The companies agreed to let the UK’s AI Safety Institute review the safety of the powerful models underpinning products like ChatGPT before they are distributed to businesses and consumers. Model evaluation is believed to be under way, but it remains unclear how it will be carried out or whether AI companies will grant extensive access.

“We’re currently lucky because we’re reliant on goodwill on both sides, and we have that, but it could always break down,” a government source said. “It’s very character-dependent and CEO-dependent.”

Some AI experts, however, feel that the UK’s dependence on voluntary commitment is insufficient. “The concern is that the government is setting up the capabilities to assess and monitor the risks of AI through the institute but leaving itself powerless to do anything about those risks,” said Michael Birtwistle, associate director of the Ada Lovelace Institute, an independent research agency.

“The economic stakes are so high in AI, and, without strong regulatory incentives, you can’t expect companies to stick to voluntary commitments once their market incentives move in a different direction,” he said.

According to a person familiar with its contents, the government’s response to its AI white paper will include a condition that, before legislation is tabled, there must be evidence that such a move would reduce the risks of AI without stifling innovation.


Another trigger for new regulation would be resistance from AI companies to future revisions of their voluntary agreements, such as requests for access to model code or the introduction of new testing criteria.

The UK government said it would “not speculate on what may or may not be included” in its response to the white paper consultation, but added that it is “working closely with regulators to make sure we have the necessary guardrails in place, many of whom have started to proactively take action in line with our proposed framework.”

Editorial Staff
Editorial Staff at AI Surge is a dedicated team of experts led by Paul Robins, boasting a combined experience of over 7 years in Computer Science, AI, emerging technologies, and online publishing. Our commitment is to bring you authoritative insights into the forefront of artificial intelligence.