DATA PRIVACY DAY

Don’t the poor have a right to privacy?

Sabhanaz Rashid Diya

In Bangladesh, a common joke revolves around the absence of a Bangla equivalent for "privacy." Goponiyota suggests confidentiality, and ekaante thaka, a verb phrase, implies being left alone. Some argue that the lack of a specific term in our vocabulary suggests a perceived absence of the right to privacy. There are TikTok videos humorously referencing the lack of privacy in joint families, how parents pry into their teenagers' business, or how even a newlywed couple can't catch a break in a house full of guests.

Do these real-world privacy norms, or the lack thereof, carry over to the digital ecosystem? Research attributes the sharing of mobile devices, a common behaviour across South Asia, to economic and cultural factors. If people are comfortable sharing devices, does that indicate they do not care about their privacy? By the same logic, is it fair to infer that the public has no reasonable expectation of privacy when handing over personal data to buy a SIM card or to use digital platforms?

Several months ago, while travelling in southern Bangladesh, I met with women in savings circles. Some had their own devices, while others shared a mobile phone with their spouses or parents. When asked about their concerns, almost everyone indicated that they wanted more privacy. This ranged from owning their own devices, to finding more secure ways to send mobile payments, to safely accessing their social media accounts. An overwhelming majority had accounts on Facebook, TikTok, or Imo, where they feared not only "abusive" content attacking them but also the possibility that reporting it to law enforcement would grant unauthorised access to all of their data. For a community facing long-standing societal discrimination, these women—housewives, small business owners, farmers, and garment workers—were well attuned to their expectations around privacy.

This shouldn't come as a surprise, because women and minority communities worldwide bear the brunt of "digital abuse." They face a disproportionate risk of privacy erosion, including invasion of their personal spaces, non-consensual sharing of visual content, and the use of personal data for surveillance and blackmail. These risks are more acute for communities in low- and middle-income countries (referred to as the Global Majority), which lack the institutional safeguards that wealthier Western democracies can sometimes take for granted. Moreover, the rights of the poor are frequently undermined by the promise of techno-solutionism, the idea that the "right" technologies can solve society's problems.

A year ago, Marium Akter (pseudonym) received her smart national identification (NID) card. While collecting her personal information and biometric data, the "officer" promised that the card would make receiving her social safety benefits easier and protect her against fraud when accessing any device or online service. Weeks after signing up, Marium started receiving strange phone calls at midnight. The caller claimed to have access to her NID information, accurately shared some of it, and blackmailed her for money in exchange for not leaking her information online. The caller also threatened to feed false criminal allegations about her to the local police, suggesting these would affect her government benefits. The threats made it appear that the police and local government officers might be involved in the scam, leaving her unsure of whom to approach. She eventually disconnected her device out of fear.

Marium's case may seem anecdotal, but last year, TechCrunch, along with multiple national dailies, reported that a Bangladeshi government website had leaked the personal information of more than five lakh citizens. To grasp the scale and severity of the breach, it's worth noting that personal information in the government's NID database is tied to an individual's birth certificate, SIM card registration, bank accounts, passport, voter card, and pretty much every service imaginable. At the time, the government acknowledged the breach and attributed it to "weak web applications" and "poor security features" of "some government organisations." A few months later, NID data was available on Telegram, easily accessible and searchable using a bot. The then system manager of the NID wing of the Election Commission confirmed that 174 organisations had access to the NID server; a security compromise at any one of them could expose citizens' data.

In the months preceding the leaks, the then Home Minister Asaduzzaman Khan told the press that a process was underway to shift the central NID database from the EC to the Ministry of Home Affairs, noting that most countries maintain their citizen records more securely under the executive branch. The National Identification Registration Act, passed in September last year, confirmed the move. In November, a Wired story found that millions of NID records, along with other sensitive personal information, had been left exposed online by the National Telecommunications Monitoring Center, a national intelligence outfit under the home ministry.

But that's just the tip of the iceberg.

For nearly two decades, social media companies have collected vast amounts of personal data, extending from activities on the platforms themselves to third-party websites, browsers, and devices. The data is used not only to micro-target ads, but also to decide what appears on someone's feed and which product features they can access, to recommend "friends," and to shape the entirety of their online experience. Although increasing public and regulatory pressure to protect user data has led to some product changes globally, these have little to no impact on communities outside of Western democracies. Privacy policies are not written for the average non-native English speaker and, even with translations, are framed in ways that are incongruous with Global Majority behaviours. Similarly, transparency features like "Why Am I Seeing This Ad?" and standard privacy controls are opaque, contextually inappropriate, and do not address the needs of non-Western communities. Eighty-nine percent of social media users in 19 surveyed countries, including Bangladesh, indicated they do not understand platform privacy policies or product features, according to a study conducted by the Tech Global Institute.

And if large platforms are one side of the dystopian coin, the other side belongs to a plethora of app-based startups. Women's health apps (a category of mHealth) are increasingly popular in low- and middle-income countries. But research on 23 of the most popular mHealth apps has found that all of them allow behavioural tracking. Sixty-one percent of the apps also allow location tracking, and 87 percent share data with third parties. A separate study of 224 fintech and loan apps targeting African and Asian customers found that 72 percent carried some level of cybersecurity risk, exposing sensitive personal and financial data and sharing that data with third parties without explicit consent.

Where, then, does the individual citizen turn? Neither the government nor private entities can be trusted to safeguard their privacy.

In an ideal system, legislative action would be the way forward to hold both the public and private sectors accountable. The draft Personal Data Protection Act, having received in-principle approval from the Cabinet Division, should have been a step in the right direction. Instead, it became a concoction of provisions drawn from the EU's General Data Protection Regulation, India's Digital Personal Data Protection Act, and Singapore's Personal Data Protection Act, retrofitted into Bangladesh's legacy institutional frameworks. In simpler terms, the draft act consists of arbitrary consent mechanisms, undue compliance burdens, and weak grievance redressal systems, combined with data access obligations without procedural safeguards, similar to requirements under the Cyber Security Act and the Bangladesh Telecommunication Regulatory Act. Read together, Sections 33 and 34 of the draft act imply that government institutions do not owe the same duty of care as private entities when safeguarding personal data.

In a nutshell, by replicating existing frameworks, the draft Personal Data Protection Act misses critical local nuances and is likely to prove ineffective in addressing privacy concerns.

An alternative approach would have been for the draft act, and other data protection and privacy interventions, to mandate product and policy changes that meet people's privacy expectations. First, it could have required tech companies and digital products to simplify their terms and privacy policies, including by providing visual cues and modularising consent, and to ensure they can be easily understood by all communities. Second, the draft act could have instituted a robust grievance redressal mechanism within tech companies and government agencies, with clear timelines for resolution, usable by anyone irrespective of their digital literacy skills.
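To make "modularising consent" concrete, here is a minimal sketch of the idea in Kotlin. It is an illustration only, built on the assumption that consent is recorded separately per purpose; the type and method names are hypothetical, not drawn from the draft act or any existing law.

```kotlin
import java.time.Instant

// Hypothetical sketch of modular consent: instead of one blanket
// "I agree", each purpose of data processing is a separate,
// independently revocable choice.
data class ConsentChoice(
    val purpose: String,      // e.g. "personalised ads", "location"
    val granted: Boolean,
    val grantedAt: Instant?   // null when consent is absent or revoked
)

class ConsentLedger {
    private val choices = mutableMapOf<String, ConsentChoice>()

    // Grant or revoke consent for a single purpose without
    // touching any other purpose.
    fun set(purpose: String, granted: Boolean) {
        choices[purpose] = ConsentChoice(
            purpose, granted, if (granted) Instant.now() else null
        )
    }

    // Data may be processed for a purpose only if that purpose was
    // explicitly granted; the absence of a record means "no".
    fun allows(purpose: String): Boolean =
        choices[purpose]?.granted == true
}

fun main() {
    val ledger = ConsentLedger()
    ledger.set("mobile payments", granted = true)
    println(ledger.allows("mobile payments"))   // true
    println(ledger.allows("personalised ads"))  // false: never granted
}
```

The design choice that matters here is the default: a purpose that was never explicitly granted is treated as refused, rather than bundled into a single take-it-or-leave-it agreement.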

These changes, however, are not about one piece of legislation or one lever. Fundamentally, privacy as a practice within digital ecosystems has never been properly investigated in Global Majority contexts. It is still largely seen through either a legacy or an imperialistic lens, resulting in weak regulatory interventions and performative safeguards that risk undermining fundamental rights. For decades, people in poor countries have been made to believe they must choose between using a great product and expecting it to protect their privacy, keep them safe, and respect human values. And that it is their fault, their lack of knowledge, that makes technologies difficult, intimidating, and harmful. More often than not, mitigation approaches try to change the behaviours of the end consumer, rather than centring design, development, and governance around what works for people.

Research indicates that in collectivist societies like Bangladesh, equipping mobile devices with multiple user profiles, akin to those on Windows or Mac operating systems, offers better privacy safeguards than attempting to alter device-sharing behaviour. While there are recent efforts to incorporate human-centred design into pro-poor technology solutions, these are built on economic values rather than human rights. And perhaps this is the fundamental frame-shifting that we need: to begin respecting the rights of the poor on par with meeting their economic aspirations, instead of believing in the fallacy of a zero-sum game.
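For the device-sharing point above, a minimal Kotlin sketch shows how an Android app could check whether a handset supports this kind of profile separation. It relies on Android's real UserManager API (the multi-user check is public from API level 24); the function name and messages are illustrative assumptions, not a prescribed design.

```kotlin
import android.content.Context
import android.os.UserManager

// Minimal sketch, assuming an Android handset: multi-user support
// means each person on a shared phone can keep apps, accounts, and
// data walled off in a separate profile. Ordinary apps can query
// this, but new profiles are created through the device settings.
fun multiUserStatus(context: Context): String {
    val userManager =
        context.getSystemService(Context.USER_SERVICE) as UserManager
    val profiles = userManager.userProfiles // profiles visible to this user
    return if (UserManager.supportsMultipleUsers()) {
        "Device supports separate user profiles " +
            "(visible to this user: ${profiles.size})."
    } else {
        "Device exposes a single shared profile to everyone."
    }
}
```

The point is not this particular API but the design stance: the safeguard adapts to how people actually share devices, instead of demanding that they stop sharing.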


Sabhanaz Rashid Diya is founding board director at Tech Global Institute.


Views expressed in this article are the author's own.

