
The Biden Administration Has Taken the First Step Toward Developing Critical AI Standards

On Tuesday, the Biden administration announced the first step toward developing key guidelines and recommendations for the safe deployment of generative artificial intelligence and for testing and safeguarding these systems.

The National Institute of Standards and Technology (NIST), part of the Commerce Department, said it is seeking public input by February 2 to inform key testing essential to ensuring the safety of AI systems.


Commerce Secretary Gina Raimondo said the effort was prompted by President Joe Biden’s executive order on AI and aims to develop “industry standards around AI safety, security, and trust that will enable America to continue leading the world in the responsible development and use of this rapidly evolving technology.”


The agency is developing guidelines for evaluating AI, encouraging the creation of standards, and providing testing environments for AI systems. The request seeks input from AI companies and the general public on managing the risks of generative AI and on reducing the threat of AI-generated misinformation.

In recent months, generative AI, which can create text, photos, and videos in response to open-ended prompts, has stirred both enthusiasm and concern: the technology could render some industries obsolete, upend elections, and potentially overpower humans. Biden’s executive order directs agencies to set standards for such testing and to address the related cyber, biological, radiological, and nuclear risks.

NIST is developing standards for that testing, including guidelines for when and how to use so-called “red-teaming” to evaluate and manage the risks of artificial intelligence.


The phrase “external red-teaming” traces back to American Cold War simulations, in which the adversary was designated the “red team,” and has since been adopted in cybersecurity to describe exercises that probe systems for vulnerabilities. In August, Humane Intelligence, AI Village, and SeedAI hosted the first public red-teaming assessment of AI systems in the United States as part of a major cybersecurity conference.

Thousands of people took part in the exercise, attempting to see whether they “could make the systems produce undesirable outputs or otherwise fail, with the goal of better understanding the risks these systems present,” according to the White House.

 

Editorial Staff
Editorial Staff at AI Surge is a dedicated team of experts led by Paul Robins, boasting a combined experience of over 7 years in Computer Science, AI, emerging technologies, and online publishing. Our commitment is to bring you authoritative insights into the forefront of artificial intelligence.