Disinformation generated by artificial intelligence is a growing problem ahead of the South Asian nation’s elections in January. Policymakers around the world are concerned that AI-generated disinformation could be used to mislead voters and deepen divisions ahead of several major elections next year.
Bangladesh is one country where this is already happening. The South Asian nation of 170 million people will go to the polls in early January, with sitting Prime Minister Sheikh Hasina and her opponents, the Bangladesh Nationalist Party, locked in a bitter and polarizing power struggle.
In recent months, pro-government media in Bangladesh have spread false material generated with low-cost tools offered by AI start-ups. In one purported news clip, an AI-generated anchor criticizes the United States, a country that Sheikh Hasina’s government has chastised ahead of the election. A related deepfake video, which was later removed, depicted an opposition leader equivocating over support for Gazans, a potentially ruinous stance in the Muslim-majority country, where sympathy for Palestinians runs deep.
Public pressure is mounting on internet platforms to clamp down on false AI content ahead of several major elections in 2024, including those in the United States, the United Kingdom, India, and Indonesia. In response, both Meta and Google have introduced policies requiring campaigns to disclose whether political advertisements have been digitally altered.
However, the examples from Bangladesh demonstrate how these AI tools might be used in elections, and how difficult policing their use can be in smaller markets that risk being overlooked by American tech firms. According to Miraj Ahmed Chowdhury, managing director of the Bangladesh-based media research organization Digitally Right, AI-generated disinformation there is “still at an experimentation level,” with most of it created using ordinary photo- or video-editing platforms. But, he acknowledged, these early examples demonstrate the technology’s potential to gain traction.
“When they have technologies and tools like AI, which allow them to produce misinformation and disinformation at a mass scale, then you can imagine how big that threat is,” he said, adding that “a platform’s attention to a certain jurisdiction depends on how important it is as a market.”
Over the past year, the advent of sophisticated tools such as OpenAI’s ChatGPT and AI video generators has heightened global awareness of how AI can be used to produce misleading or fabricated political content.
Earlier this year, the Republican National Committee in the United States published an attack ad that used AI-generated visuals to depict an uncertain future under President Joe Biden. In Venezuela, YouTube terminated multiple accounts that used AI-generated news anchors to spread falsehoods favorable to President Nicolás Maduro’s dictatorship.
Disinformation is fueling an already heated political climate in Bangladesh ahead of the early-January elections, during which Sheikh Hasina has cracked down on the opposition. U.S. officials have publicly pressed her administration to guarantee free and fair elections after the detention of thousands of opposition activists and leaders, which many saw as an attempt to manipulate the result.
In September, an online news outlet called BD Politico published a video on X featuring a TV anchor from “World News” discussing political violence in Bangladesh and alleged U.S. involvement in the country’s elections, intercut with footage of rioting.
The video was created using HeyGen, a Los Angeles-based AI video generator that lets users produce clips featuring AI avatars for as little as $24 per month. The same anchor, dubbed “Edward,” appears in HeyGen’s promotional material as one of several avatars, generated from real actors, that are available to platform users. X, BD Politico, and HeyGen did not respond to requests for comment.
In other instances, anti-opposition deepfake videos have been shared on Meta’s Facebook platform. One purported to show exiled BNP leader Tarique Rahman advising the party to “keep quiet” about Gaza to avoid upsetting the U.S. The media non-profit Witness and the tech-focused think tank Tech Global Institute both reached the same conclusion: the video was probably AI-generated.
According to AKM Wahiduzzaman, a BNP spokesman, his party requested that such content be removed, but “most of the time, they don’t bother to reply.” After being contacted for comment by the Financial Times, Meta withdrew the video.
Another deepfake video, made with Tel Aviv-based AI video service D-ID, shows the BNP’s youth wing leader, Rashed Iqbal Khan, lying about his age in an apparent attempt to discredit him, according to the Tech Global Institute.
D-ID said it was “taking measures to block the user from our platform as they breached our terms of use” and added that it had “strict guidelines that stipulate that our technology is not to be used to spread disinformation or other unethical usage.”
Sabhanaz Rashid Diya, founder of the Tech Global Institute and a former executive at Meta, stated that the absence of trustworthy AI-detection techniques was a significant obstacle in the fight against disinformation, especially when it came to non-English language content.
She added that the remedies proposed by global tech companies, which have focused on regulating AI in political advertisements, would have little effect in countries such as Bangladesh, where ads make up a smaller share of political communication.
“The solutions coming out to address this onslaught of AI misinformation are very Western-centric,” she said. Tech platforms “are not taking this as seriously in other parts of the world.”
The problem is exacerbated by a lack of regulation, or its limited enforcement, by authorities. Bangladesh’s Cyber Security Act, for instance, has been criticized for granting the government sweeping powers to suppress online criticism. Bangladesh’s internet regulator did not respond to a request for comment on what it is doing to combat online disinformation.
Diya suggested that the mere idea of deepfakes could pose a greater threat than AI-generated content itself. In neighboring India, for instance, a politician responded to a leaked recording, in which he allegedly discussed party corruption, by claiming it was fake; fact-checkers later debunked that claim.
“It’s easy for a politician to say, ‘This is deepfake,’ or ‘This is AI-generated’, and sow a sense of confusion,” she remarked. “The challenge for the global south . . . is going to be how the idea of AI-generated content is being weaponized to erode what people believe to be true versus false.”