
Microsoft Alerts on AI-Powered Hacking by North Korea and Iran

US tech firm Microsoft says it has identified attempts by foreign states to use generative AI for hacking.

Microsoft said on Wednesday that US adversaries, chiefly North Korea and Iran and, to a lesser extent, China and Russia, are beginning to use generative artificial intelligence to mount or organize offensive cyber operations.

Working with its business partner OpenAI, Microsoft said it had detected and neutralized numerous threats that used or attempted to exploit the AI technology the two companies had developed.

Although the company described the techniques as “early-stage” and not “particularly novel or unique,” it said it was important to expose them publicly, given that US rivals are using large language models to expand their ability to breach networks and conduct influence operations.

Cybersecurity firms have long used machine learning on defense, chiefly to detect anomalous behavior in networks. But malicious hackers and criminals use it too, and the introduction of large language models, led by OpenAI’s ChatGPT, has intensified that game of cat and mouse.
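To illustrate the defensive side, here is a minimal sketch of machine-learning anomaly detection over network flows, using scikit-learn’s IsolationForest. The feature set (bytes transferred, duration, destination port) and all the values are illustrative assumptions for demonstration, not details from Microsoft’s report.

```python
# Minimal sketch: flagging anomalous network flows with an isolation forest.
# Features and values are illustrative assumptions, not from the report.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows: [bytes transferred, duration (s), destination port]
normal_flows = np.column_stack([
    rng.normal(50_000, 10_000, 1_000),  # typical payload sizes
    rng.normal(2.0, 0.5, 1_000),        # typical connection durations
    rng.choice([80, 443], 1_000),       # common web ports
])

# A few suspicious flows: huge transfers, long-lived, unusual ports
suspicious_flows = np.array([
    [5_000_000, 600.0, 4444],
    [3_200_000, 480.0, 8081],
])

# Train only on traffic assumed to be normal; the forest isolates outliers.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# predict() returns 1 for inliers and -1 for anomalies.
for flow, label in zip(suspicious_flows, model.predict(suspicious_flows)):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{status}: bytes={flow[0]:.0f}, duration={flow[1]:.0f}s, port={int(flow[2])}")
```

An isolation forest is a common choice for this kind of task because it learns the shape of “normal” traffic without requiring labeled attack data, which is scarce in practice.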

Impact of Generative AI on Cybersecurity: Opportunities, Challenges, and the Road Ahead

Microsoft, which has invested billions of dollars in OpenAI, said in a report released Wednesday that generative AI is likely to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. That compounds a misinformation problem already threatening democratic processes in a year when more than 50 countries will hold elections.

Microsoft offered a few examples. In each case, it said, all generative AI accounts and assets belonging to the named groups were disabled.

Kimsuky, a North Korean cyber-espionage group, has used the models to research foreign think tanks that study the country and to generate content that could be used in spear-phishing campaigns.


The Iranian Revolutionary Guard has used large language models to assist with social engineering, with troubleshooting software errors, and even with studying how intruders might evade detection in a compromised network. That includes generating phishing emails, one posing as an international development organization and another attempting to lure prominent feminists to an attacker-built website on feminism. AI accelerates and improves the production of such emails.

Fancy Bear, the Russian GRU military intelligence unit, has used the models to research satellite and radar technologies that may be relevant to the war in Ukraine.

The Chinese cyber-espionage group Aquatic Panda, which targets a wide range of industries, higher education, and governments from France to Malaysia, has interacted with the models in ways that “suggest a limited exploration of how LLMs can augment their technical operations.”

The Chinese group Maverick Panda, which has targeted US defense contractors among other sectors for more than a decade, interacted with the models in ways that suggested it was evaluating their effectiveness as a source of information “on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs.”

In a blog post published Wednesday, OpenAI said its GPT-4 chatbot currently offers “only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.” Cybersecurity researchers expect that to change.


Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, told Congress last April that there are “two epoch-defining threats and challenges”: artificial intelligence and China.

The US, Easterly said at the time, needs to ensure AI is built with security in mind.

Critics argue that the November 2022 public release of ChatGPT, and the subsequent releases by rivals such as Google and Meta, were unduly rushed, with security treated largely as an afterthought during development.

Bad actors adopting large language models was the inevitable result of opening Pandora’s box, said Amit Yoran, CEO of the cybersecurity firm Tenable.

Some cybersecurity experts complain that Microsoft is building and selling tools to patch weaknesses in large language models when it could more responsibly focus on making the models more secure in the first place.

Gary McGraw, a veteran of computer security and a co-founder of the Berryville Institute of Machine Learning, asked why Microsoft does not build more secure black-box LLM foundation models rather than sell defensive measures for a problem it helps create.

The use of AI and large language models may not pose an immediately obvious threat, said Edward Amoroso, an NYU professor and former AT&T chief security officer, but they will eventually be among the most powerful weapons in every nation-state’s military arsenal.


Editorial Staff
Editorial Staff at AI Surge is a dedicated team of experts led by Paul Robins, boasting a combined experience of over 7 years in Computer Science, AI, emerging technologies, and online publishing. Our commitment is to bring you authoritative insights into the forefront of artificial intelligence.