Wednesday, April 24, 2024

Labour Pushes for Mandatory Sharing of AI Test Data

The party intends to replace voluntary agreements with statutory regulations “so we can see where this is taking us.” Labour has proposed that AI companies be compelled to publish the results of road-testing their technology. The party argues the move is necessary after politicians and regulators failed in their earlier attempts to rein in social media platforms.

In place of the current voluntary testing agreement between tech companies and the government, the party would introduce a legal framework obliging AI companies to share test results with government authorities.

The shadow technology secretary, Peter Kyle, said Labour would ensure that artificial intelligence (AI) did not repeat the mistakes lawmakers and regulators made with social media, where they had been “behind the curve.”

Responding to the murder of Brianna Ghey, he called for greater transparency from tech corporations, saying a Labour administration would demand more openness from those developing artificial intelligence, the term for computer systems that perform tasks normally associated with human intelligence.

Speaking on BBC One’s Sunday with Laura Kuenssberg, Kyle said the party would move from a voluntary code to a statutory one. Companies engaged in that kind of research and development would have to release all of their test data and disclose what they are testing for, allowing the government to analyse precisely what is happening and where the technology is heading.

At the first-ever global AI safety summit in November, leading AI companies, including Google and OpenAI, the developer of ChatGPT, reached a voluntary agreement with Rishi Sunak to collaborate on testing cutting-edge AI models both before and after deployment. Labour’s proposals would require AI companies to carry out safety testing with “independent oversight” and to notify the government, on a statutory basis, if they planned to develop AI systems above a particular level of capability.


The US, UK, Japan, France, Germany, and ten other countries supported the summit’s testing pact. Tech companies including Google, OpenAI, Amazon, Microsoft, and Mark Zuckerberg’s Meta also backed the agreement and committed to testing their models.

While visiting lawmakers and tech executives in Washington, DC, Kyle said the recently established UK AI Safety Institute would benefit from the test results, enabling it to assure the public that the government is independently examining developments at the genuine cutting edge of artificial intelligence. Some of this technology, he noted, will have a significant impact on the workplace, society, and culture. He concluded that the government must also ensure this progress is carried out safely.


Editorial Staff
Editorial Staff at AI Surge is a dedicated team of experts led by Paul Robins, boasting a combined experience of over 7 years in Computer Science, AI, emerging technologies, and online publishing. Our commitment is to bring you authoritative insights into the forefront of artificial intelligence.

