An annual inventory of all artificial intelligence systems used by government agencies is also required under the latest guidance issued by the Office of Management and Budget.
To keep the government's use of artificial intelligence (AI) safe and in the public interest, every federal agency in the United States must now designate a senior leader to oversee the AI systems it operates.
Vice President Kamala Harris announced the new Office of Management and Budget (OMB) guidance during a press briefing. She said each agency must set up an AI governance board to oversee how AI is used within the department. Agencies must also submit an annual report to the OMB detailing all of the AI systems they use, the risks those systems may pose, and how they plan to mitigate those risks.
To ensure that AI is used responsibly, Harris told reporters, every federal agency has been directed to appoint a chief AI officer with the knowledge, experience, and authority needed to oversee the AI technologies that agency uses. Senior leaders across the government, she added, must be specifically tasked with overseeing AI adoption and use.
Whether the chief AI officer needs to be a political appointee depends on how each federal agency is structured. The governance boards must be established by the summer.
The guidance builds on the Biden administration’s executive order on AI, which required federal agencies to develop safety standards and increase the number of AI experts working in government.
Some agencies had already begun hiring chief AI officers before the announcement. The Department of Justice named Jonathan Mayer its first CAIO in February; he will lead a team of cybersecurity specialists in determining how best to apply AI within the justice system.
OMB Director Shalanda Young said the US government plans to hire 100 AI professionals by the summer.
Among the duties of agency AI officers and governance committees is regularly monitoring their agency’s AI systems. Young said agencies must submit an inventory of the AI products they use. If an agency leaves an AI system off the list because it is deemed too “sensitive” to disclose, it must publicly explain the reasoning behind that omission. Agencies are also required to independently assess the safety risks of their own AI platforms.
Federal agencies must also ensure that any AI system they deploy meets safeguards that reduce the risk of algorithmic discrimination and provide the public with transparency into how the government uses AI. The OMB fact sheet offers several examples, including:
- Travellers will be able to continue to opt out of TSA facial recognition at airports without being delayed or losing their place in line.
- When AI is used in the federal healthcare system to support critical diagnostic decisions, a human oversees the process, verifying the tools’ results and helping avoid disparities in access to care.
- When AI is used to detect fraud in government services, a human remains responsible for consequential decisions, and people harmed by the AI can seek remedies.
According to the fact sheet, if an agency cannot apply these safeguards, it must stop using the AI system, unless agency leadership can justify why doing so would increase risks to safety or rights overall or would unacceptably impede critical agency operations.
Under the new guidance, government-owned AI models, code, and data should also be released to the public, unless doing so would pose a risk to government operations.
The United States currently has no laws regulating artificial intelligence. The executive order only provides rules for how executive-branch agencies should handle the technology. Although lawmakers have introduced numerous bills to regulate various aspects of AI, none has advanced significantly.
Correction 29 March 2024, 11:50AM ET: Following publication, the White House updated its fact sheet to clarify that travellers will be able to “continue to” opt out of TSA facial recognition.