
White House Sets Limits on Government AI Usage 

Vice President Kamala Harris says the new rules for federal artificial intelligence deployments will put the public's welfare first, including a requirement that algorithms be tested for bias. On Thursday, the United States government announced new rules requiring greater oversight and transparency from federal agencies that use artificial intelligence, measures the administration says are needed to protect citizens as the technology develops rapidly. The policy also includes provisions meant to encourage agencies to develop AI where the technology can be used for the public's benefit.

With the newly announced policy, the United States is positioning itself as a global leader on government use of artificial intelligence. Ahead of the release, Harris told reporters that the administration intends the rules to serve as a model for international action, saying that when it comes to the government's use of AI, the United States will continue to urge other countries to follow its example and put the needs of the public first.


The federal government's use of artificial intelligence will be governed by a new policy from the White House Office of Management and Budget. It requires agencies to be more transparent about how they apply AI while also encouraging them to develop the technology. With the policy, the administration is trying to strike a balance between using AI to address major problems such as disease and climate change and guarding against the still-uncertain risks of deploying it more broadly.

The announcement is the latest in a series of Biden administration initiatives to both promote and limit AI. In October, President Joe Biden signed a sweeping executive order on artificial intelligence that encourages government agencies to make broader use of the technology and requires developers of large AI models to report details of their work to the government on national security grounds.

 

In November, the United States, the United Kingdom, China, and a number of European nations signed a joint declaration encouraging international cooperation and acknowledging the risks posed by the rapid advancement of AI. The following week, Harris announced a nonbinding declaration on the military use of artificial intelligence, backed by 31 countries, which proposes basic safeguards and calls for disabling systems that exhibit unintended behavior.

Under the new policy on government use of AI released on Thursday, agencies must take several steps to guard against unintended consequences of AI deployments. First, they must verify that the AI tools they use do not endanger Americans' safety. For example, the Department of Veterans Affairs must confirm that an AI system deployed in its hospitals does not produce recommendations biased against any racial group. Research has shown that AI systems and other algorithms used to guide diagnoses or determine which patients receive care can perpetuate long-standing patterns of discrimination.


If an agency cannot guarantee these protections, it must stop using the AI system or justify its continued use. Federal agencies must comply with the new requirements by December 1.

The policy also calls for greater transparency around government AI systems. It requires agencies to release government-owned AI models, data, and code, provided that doing so does not pose a risk to the public or to government operations. Each year, agencies must publicly report how they are using AI, the potential risks those systems pose, and the steps being taken to mitigate them.

The rules also require agencies to build up their in-house AI expertise and to designate a chief AI officer to oversee the agency's adoption of the technology, a role that involves encouraging AI innovation while watching for its risks.

Officials say the changes will also lower certain barriers to using AI within government agencies, which could encourage more responsible experimentation with the technology. Agencies could use AI to track disease outbreaks, forecast severe weather, assess damage after natural disasters, and manage air traffic.

Countries around the world are moving to regulate artificial intelligence. Earlier this month, the European Union formally adopted the Artificial Intelligence Act, which governs the development and use of AI technologies, after voting in favor of the legislation in December. China is also developing its own comprehensive AI regulatory framework.

 

 

 
