Think Tank Warns UK About AI Terrorism Legislation

According to a counter-extremism think tank, the United Kingdom should “urgently consider” fresh regulations to prevent AI chatbots from recruiting people to terrorism. There is a “clear need for legislation to keep up” with online terrorist threats, the Institute for Strategic Dialogue (ISD) says. The warning follows an experiment in which a chatbot “recruited” the UK’s independent reviewer of terrorism legislation, Jonathan Hall KC, who observed: “It is difficult to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism.”


Mr Hall experimented with Character.ai, a platform where users can chat with AI chatbots built by other users. He spoke with several bots that appeared to be designed to mimic the responses of violent and extremist groups; one even claimed to be a “senior leader of the Islamic State”. Mr Hall said the bot tried to recruit him and declared “total dedication and devotion” to the extremist group, an expression of support that is an offence under UK anti-terrorism legislation. However, according to Mr Hall, no crime was committed under present UK law, since the messages were not generated by a human.

He argued that new laws should hold chatbot developers and the websites that host them accountable. The bots he encountered on Character.ai were, he said, “likely to have some shock value, experimentation, and possibly some satirical aspect” behind their creation.


Mr Hall was even able to create his own “Osama Bin Laden” chatbot with “unbounded enthusiasm” for terrorism, which he promptly deleted. His experiment comes amid growing concern about how extremists might exploit powerful AI in the future. A government paper published in October warned that, by 2025, generative AI could be “used to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological, and radiological weapons”.

In a statement to the BBC, the ISD said: “There is a clear need for legislation to keep up with the constantly shifting landscape of online terrorist threats.” The think tank argues that the UK’s Online Safety Act, passed in 2023, “is primarily geared towards managing risks posed by social media platforms” rather than AI. It adds that terrorists “tend to be early adopters of emerging technologies and are constantly looking for opportunities to reach new audiences”.


“If AI companies cannot demonstrate that they have invested sufficiently in ensuring that their products are safe, then the government should urgently consider new AI-specific legislation,” the ISD stated. It did note, however, that based on its monitoring, extremist organizations’ use of generative AI was “relatively limited” at present.

In a statement, Character.ai stressed that user safety is its “top priority” and called Mr Hall’s description of the platform “unfortunate” and not reflective of the company’s goals. “Hate speech and extremism are both forbidden by our Terms of Service,” the company said. “Our approach to AI-generated content flows from a simple principle: Our products should never produce responses that are likely to harm users or encourage users to harm others.” The company said it trained its models in a manner that “optimizes for safe responses”, and that it had a moderation system in place so users could report content that violated its terms, committing to act quickly when content was flagged.


The Labour Party has announced that, if it wins power, it will make “training AI to encourage violence or radicalize the vulnerable” a crime. The Home Office said AI posed “significant national security and public safety risks”, adding: “We will do all we can to protect the public from this threat by working across government and deepening our collaboration with tech company leaders, industry experts, and like-minded nations.” In 2023, the government also announced a £100 million investment in an AI Safety Institute.
