In a significant development, financial regulators in the United States have for the first time formally acknowledged artificial intelligence (AI) as a potential threat to the financial system. In its most recent annual report, the Financial Stability Oversight Council (FSOC) identified the expanding use of AI in financial services as a “vulnerability” requiring close monitoring.
While acknowledging AI’s potential benefits, including cost reduction, enhanced efficiency, identification of complex relationships, and improved performance and accuracy, the FSOC stressed that the technology also introduces specific risks. Among these are safety and soundness concerns, including cyber and model risks.
The FSOC, formed in the aftermath of the 2008 financial crisis, is tasked with identifying excessive risks within the financial system. The council’s annual report, released Thursday, emphasizes the importance of monitoring AI developments so that oversight mechanisms can address emerging risks while promoting innovation and efficiency.
US Treasury Secretary Janet Yellen, who chairs the FSOC, emphasized the council’s responsibility to monitor “emerging risks” as the financial sector adopts new technologies such as AI. She noted the potential advantages of responsible innovation in AI while stressing the importance of applying established principles and rules for risk management.
In October, US President Joe Biden issued an executive order on AI that focused predominantly on the technology’s implications for national security and discrimination. The financial regulators’ action reflects a growing recognition of the need to confront the risks posed by the rapid advancement and integration of AI within the financial industry.
Governments and academics around the world have echoed concerns about the ethical ramifications of AI development, placing significant emphasis on the need for responsible and ethical use. Despite public commitments to prioritize safety, a recent survey by Stanford University researchers found that tech workers engaged in AI research are concerned about the absence of ethical safeguards at their employers.