AI risks in Bangladesh’s elections: A reality check

In recent days, my social media feeds have been filled with high-definition images of women in saris. At first, they seemed ordinary, but my journalistic instinct said otherwise. Most of these women were not the "dolled-up" type, nor was it a festive season. As an AI researcher, I recognised the trend as synthetic. The images looked strikingly real, and what surprised me most was how quickly prompts spread through comments and captions—AI knowledge shared in real time.
This comes just as the Election Commission, in early September, issued guidelines ahead of February's parliamentary polls, which include a ban on the misuse of AI in campaigns and a ban on posters. Together, the two measures signal that social media will be the main political arena, raising the question: what happens when AI becomes the weapon of choice inside it?
September also marks the 13th anniversary of the Ramu attack. On September 29, 2012, a fake Facebook post triggered violence in Cox's Bazar, where 12 Buddhist temples and more than 50 homes, mostly belonging to the Buddhist community, were destroyed. Thirteen years later, justice has still not been served. That tragedy began with a single fabricated image, long before AI was part of the story. Today, the risks are far greater. In 2012, Bangladesh had 30 million internet users. Now, according to the Bangladesh Telecommunication Regulatory Commission (BTRC), the number has soared to nearly 136 million. With smartphones everywhere, it takes only a few prompts to generate photos, videos, or audio, and spreading them is effortless.
Synthetic content at scale, detection in doubt
Last summer, at the New York University Journalism Institute, I mapped market-available AI tools. Many cost as little as $20 to $40 a month, and many more can be accessed for free simply by rotating email addresses. I identified 40 dedicated audio tools alone—text-to-audio, audio-to-audio, voice cloning, background noise removal, even noise replacement. And that was just audio. Add to that the countless tools for generating photos, videos and avatars, some of which let you upload a photo and prompt it to speak any words you want. The barrier is so low that even my ten-year-old can prompt with ease and get results.
But recognising AI-generated content is no one-click fix. We spent three months testing detection tools. Whether for audio, video or photos, the results were inconclusive. Even faculty at the Poynter Institute, when consulted, could not offer much optimism.
Detection tools do exist, but none are foolproof. For instance, Google's SynthID uses an invisible watermarking system to help identify AI-generated content, embedding imperceptible signals directly into the material itself, whether the pixels of an image, the waveform of audio or video, or even generated text, in a way that survives cropping and compression. Yet SynthID is not widely available; applying for access can mean waiting up to a year.
Other industry efforts have also emerged. In 2019, Adobe, The New York Times Company, and Twitter launched the Content Authenticity Initiative. By 2021, it had evolved into the Coalition for Content Provenance and Authenticity (C2PA), joined by Microsoft, the BBC, Intel, and others. Their aim is not to "detect" AI but to make provenance transparent—embedding metadata into files to show who created them, how they were edited, and whether AI was involved. If such metadata were made mandatory, audiences could more easily identify synthetic content.
For now, however, these "content credentials" remain optional. No country has yet mandated their use across all forms of content. The European Union has taken a first step with its new AI Act, which requires AI-generated or manipulated content to be clearly labelled and obliges providers of general-purpose AI models to publish summaries of the data used to train them. This makes the EU the first major jurisdiction to begin turning provenance and transparency standards into legal obligations. Still, consumers cannot rely on provenance as a guaranteed safeguard, since metadata can be removed or altered.
In early September, the Bangladesh Election Commission banned the misuse of AI in the February polls and tightened penalties for online defamation. Candidates must now submit their names, account IDs and other identifying information for their campaign and party-related social media accounts. Yet platform algorithms are not so easily policed, and AI's deep roots in the creator economy make regulation even harder.
The Ramu example remains instructive. A fake profile was enough to incite communal violence. Creating one requires little more than an email. Authorities have minimal control. Removing incendiary content is slow and tangled. By May 2025, Facebook had over 67 million users in Bangladesh—an enormous challenge for any commission that hopes to monitor content in real time.
Social media algorithms also do not surface these risks quickly. A 2023 Cornell University study found that harmful posts often circulate for long stretches inside echo chambers before reaching opposing audiences. By then, the momentum is already set, sometimes enough to spill into the streets.
Meanwhile, the creator economy thrives on AI. Platforms need scale, and AI delivers scale instantly. Algorithms are tuned not to suppress but to amplify. Every reaction, like or dislike, boosts visibility. More reactions mean more views, and more views mean more revenue. The creator economy is worth about $250 billion today, more than half of Bangladesh's 2024 GDP of around $450 billion, and is projected to reach $480 billion by 2027. For the sake of this business, platforms have shown little willingness to restrain creators—not for the US, and certainly not for Bangladesh.
What Bangladesh can learn from other countries
Before we consider what Bangladesh can learn from others, it is important to note what it cannot. In the US, lawmakers recently passed the Take It Down Act, 2025, requiring platforms to remove non-consensual intimate images, including AI-generated deepfakes, within 48 hours. This was achievable largely because most major platforms are headquartered in the US and fall under its jurisdiction. Bangladesh is not in that position. Past attempts to pressure platforms here have instead resulted in temporary shutdowns of Facebook or the wider internet—moves that backfired politically and eroded public trust. Clearly, Bangladesh cannot simply replicate another country's solution.
Bangladesh is in a unique political position right now, unlike in any previous election year. Past contests were dominated by one party or alliance; this time, the field is fragmented. In such a volatile environment, even a single provocative post could ignite conflict, making the Election Commission's hope of controlling candidates' social media footprint unrealistic. If AI is misused, it will spread far beyond central leaders, weaponised by party actors across the country, even against rivals within the same party. The fallout could be sweeping and unpredictable.
Amid these circumstances, Chief Election Commissioner AMM Nasir Uddin has said Bangladesh is seeking Canada's help in curbing AI misuse, which is a commendable step. Canada's approach combines public awareness, clear voter guidelines, MoUs with platforms such as Meta and Google, and technical monitoring cells. Whether such a broad effort can be replicated in Bangladesh is uncertain. However, if ethics education, awareness and law enforcement can be brought together, the country may yet steer through this perilous election with its democracy intact. Why not begin now, by preparing citizens to question and spot a fake before the damage is already done?
Maksuda Aziz is a journalist and media trainer exploring the future of AI in journalism.
Views expressed in this article are the author's own.