
Tech Giants Commit to Tackle Deceptive AI in Elections

Amazon, Google, Microsoft, and other major tech corporations have committed to tackling the deceptive use of artificial intelligence (AI) in elections.

The twenty companies have agreed to take action against content designed to mislead voters, saying they will use technology to detect and counter such material.

However, the voluntary agreement will “do little to prevent harmful content being posted,” according to one industry expert.

At the Munich Security Conference on Friday, the Tech Accord to Combat Deceptive Use of AI in 2024 Elections was unveiled.


The problem has gained significant attention, with an estimated four billion people expected to cast ballots this year in nations including the US, UK, and India.

Among the accord's pledges are developing technology to "mitigate risks" from deceptive AI-generated election content and being transparent with the public about the steps the companies have taken.

Other commitments include educating the public on how to spot manipulated content and sharing best practices with one another.

Participants include Adobe, Meta (Facebook, Instagram, WhatsApp), Snapchat, and social networking platform X, formerly known as Twitter.


Dr Deepak Padmanabhan, a computer scientist at Queen's University Belfast and co-author of a paper on elections and AI, says the agreement has several drawbacks.

He said it is encouraging to see companies acknowledge the range of challenges that artificial intelligence poses.

However, he argued that they should be more "proactive", rather than waiting for content to be posted before trying to remove it.

He said this could mean that more realistic, and therefore more harmful, AI-generated content stays online longer than cruder fakes, which are easier to identify and take down.

Dr Padmanabhan also said the accord's usefulness is undermined by its failure to define what counts as harmful content.

He cited the example of jailed Pakistani politician Imran Khan, who used AI to generate his election speeches.

He asked whether such content should also be removed.

The signatories say they will target content that deceptively fakes or alters the appearance, voice, or actions of prominent political figures.

The accord also covers audio, images, and video that mislead voters about where, when, and how to cast their ballots.

Microsoft president Brad Smith said the industry must prevent these tools from being weaponized in elections.


On Wednesday, US Deputy Attorney General Lisa Monaco told the BBC that artificial intelligence could "supercharge" misinformation during elections.

Google and Meta have previously set out policies on AI-generated images and video in political advertising, requiring advertisers to disclose when they use deepfakes or other AI-manipulated content.

 

Editorial Staff
Editorial Staff at AI Surge is a dedicated team of experts led by Paul Robins, boasting a combined experience of over 7 years in Computer Science, AI, emerging technologies, and online publishing. Our commitment is to bring you authoritative insights into the forefront of artificial intelligence.