16 days of activism: AI fuelling cyber violence against women
Niva (not her real name), 26, is an IT specialist who has spent years training students at a reputed Narayanganj institute to navigate technology.
Yet it was technology that became the source of her trauma and humiliation.
It began in September after a minor argument with students during a routine training session. A few days later, she discovered photos of her and her husband, previously posted on Facebook, digitally altered with obscenities and circulated across multiple groups.
Contacted by The Daily Star, Niva said, "What I saw left me shaken. I had no idea how to react."
After confiding in her husband and family, she approached the police, who found several of her students -- all minors -- involved. They were counselled and handed over to their parents. Police assured her the doctored images had been removed.
But the trauma stayed. "Even now, I live in constant fear. The social humiliation hasn't gone away."
Niva's experience reflects a rising trend where AI-generated deepfakes and digitally manipulated sexual content are weaponised against women and girls, undermining dignity and affecting victims physically, psychologically, and professionally.
From media personalities and political leaders to activists, young professionals, and ordinary women, no one is exempt.
Last December, a fabricated image involving Environment Adviser Syeda Rizwana Hasan and actress Mehazabien Chowdhury was widely circulated. The AI-generated photocard falsely depicted Mehazabien in an "environment-friendly condom dress", with a caption implying the adviser endorsed it.
According to a UN Women report, AI is creating new forms of abuse and amplifying existing ones at alarming rates. Studies show that technology-facilitated violence against women affects between 16 and 58 percent of women globally.
The National Violence Against Women (VAW) Survey-2024, conducted by the Bangladesh Bureau of Statistics (BBS) with UNFPA assistance, showed that 8.3 percent of women -- especially younger, digitally connected ones in urban areas -- experience technology-facilitated gender-based violence (TFGBV), including unwanted sexual communication, blackmail, image-based abuse, or other forms of online control.
A report by Cyber Support for Women and Children (CSWC), a coalition of 14 rights groups led by BLAST, found that offenders are increasingly weaponising AI, making crimes more layered and harder to trace.
Supreme Court lawyer Barrister Priya Ahsan Chowdhury said, "With AI, it has become very easy to create fake or sexually suggestive images -- even if offenders don't have any personal photos. Previously, people would cut and paste faces. Now it takes a single image and a free AI tool."
Cybercrime consultant Gazi Mahfuz Ul Kabir said, "In the past, leaking private photos required technical effort. Now anyone can create explicit or misleading content simply by uploading one image. This no-skills process is extremely dangerous."
PROBING TFGBV CHALLENGING
AHM Shahadat Hossain, assistant inspector general (Media) at the Police Headquarters, said social stigma remains a key barrier. "Victims feel shame and fear the consequences. Many don't want to be exposed. When we advise them to lodge a general diary or file a case, they often withdraw or negotiate with perpetrators due to pressure, preventing police from pursuing offenders."
Gazi Mahfuz Ul said offenders usually share files in closed groups on platforms such as Telegram or Terabox, leaving police without direct links for removal. "Major tech companies, operating outside Bangladesh, follow global rather than local policies, slowing down takedown responses…. When an image is repeatedly shared, its metadata changes, making it nearly impossible to trace the original uploader."
The VAW-2024 survey noted that police and judges require specialised training on digital violence. It recorded instances where officers downplayed image-based abuse or focused only on Facebook, leaving platforms like Snapchat, Telegram, and TikTok largely unmonitored.
AIG Shahadat said police also face technological limitations. "Investigative tools are often misunderstood as intrusive surveillance, but they're used solely to identify offenders and ensure justice."
JUSTICE SYSTEM ILL-EQUIPPED
The new Cyber Protection Ordinance-2025 criminalises harmful or intimidating materials generated or edited using AI.
However, Women and Children Affairs Adviser Sharmeen S Murshid said on November 25, "While Bangladesh has strong laws to combat cybercrime, weak enforcement is allowing offenders to act with increasing impunity."
Barrister Priya said the justice system is "not prepared at all". "While cyber units and helplines exist, survivors often receive little meaningful assistance."
She cited cases where police discouraged victims from filing complaints. In one instance, a survivor was warned that a case could be filed against her as well, prompting her to withdraw. In another, officers were unaware of the cyber protection laws.
"Such gaps leave survivors exposed from the outset, and the absence of victim or witness protection mechanisms discourages many from reporting."
Priya also pointed to a critical gap: forensic evidence remains optional, weakening the legal framework for digital crimes. "Courts currently accept screenshots as evidence, although these can be manipulated and lack context. More advanced recording methods used abroad have not been adopted."
A 2024 study by the Media, Law, and Digital Space Cohort said Bangladesh urgently needs more digital forensic experts, as collaboration between police and specialists remains limited, and cases rarely receive technical support.
"Digital evidence and forensic reports are highly technical, and justice sector actors -- including judges -- require further training. Mandatory forensic evidence, improved lab capacity, and technical expertise are essential," Priya said.
NATION LACKS DIGITAL READINESS
Kazi Mustafiz, president of the Cyber Crime Awareness Foundation, said most incidents stem from poor digital literacy and weak understanding of data protection. "Despite having multiple digital forensic labs, services remain inaccessible to ordinary citizens. Increasing capacity and skilled manpower is essential."
He stressed that careless sharing of personal data continues to put individuals at risk. "Widespread awareness of AI-related offences is essential, especially among political leaders."
Gazi Mahfuz Ul urged the government to establish a one-stop cyber crisis centre for rapid takedowns, victim support, and direct liaison with global tech companies.
He also called for large-scale digital literacy programmes in schools, universities, and communities. "One simple rule is never to share unnecessary personal photos or videos. Once online, they can be misused to create AI-generated content. In rural areas especially, the consequences can be devastating."