National AI Policy must address vulnerabilities of artificial intelligence

Last year, the Bangladesh government released a draft of the National Artificial Intelligence (AI) Policy, which, given the changing political situation and a general lack of public interest, did not receive much attention. The draft projects a lofty vision of catapulting Bangladesh into an era of AI innovation and adoption by harnessing the technology for the well-being of citizens, economic prosperity, and sustainable development. Yet it falls short in several respects, especially when compared with more detailed regulatory frameworks such as the European Union's Artificial Intelligence Act and China's AI regulations.
One of the vital goals of any national AI policy should be ensuring that AI systems are legally required to be reasonably safe, secure, reliable, and protected against errors and biases. That is precisely one of the areas where our policy could be more comprehensive.
One of the major risks of AI lies in its susceptibility to errors and biases, which raises a wide range of ethical concerns. Unlike conventional computer programmes, AI systems powered by machine learning can learn from data without explicit instructions from human operators. While this gives these systems transformative potential, such as operating self-driving cars or creating art and poetry, it also leaves them prone to errors.
The first type of error comes from the data used to train the AI. A few examples make this easier to explain. In 2018, Amazon, the world's largest e-commerce retailer, used an AI recruitment tool to help screen job applications. The tool had been trained on CVs submitted to the company over the previous 10 years, most of which came from men. Consequently, the algorithm learnt to favour male candidates and to downgrade CVs that included the word "women." After the issue was discovered, Amazon scrapped the tool. Similarly, in 2020, Twitter faced backlash when users discovered that its image-cropping algorithm favoured white faces over black ones. The algorithm had been trained on datasets that did not adequately represent different skin tones, leading to biased decisions that favoured lighter-skinned individuals. This is why diverse and representative datasets are critical when training machine learning models: they help avoid bias and ensure fair and accurate performance.
The second source of errors is the algorithms themselves. For instance, in 2023, the autonomous vehicle company Cruise had to recall its entire fleet of self-driving cars in the US after one of its vehicles struck a pedestrian, causing severe injuries. These cars rely on deep learning, a class of machine learning that powers today's most advanced AI systems, but even this technology is not yet capable of anticipating incidents like this. Regardless of the volume of training data, it is impossible to prepare these systems for every conceivable scenario, so the potential for such errors always exists.
Finally, the third source of errors is that machines lack morality and an ethical point of view. In many fields, AI is being used to make decisions with moral implications, often without human oversight. For example, AI systems can identify and track individuals, potentially leading to biased or unjust actions. In some countries, AI algorithms are already used in the justice system to assess the likelihood of a defendant committing future crimes, influencing decisions on bail, sentencing, and parole. These algorithms analyse various factors, from a person's criminal history to their socioeconomic background, to predict the risk they pose. Without human supervision, AI can misread the nuances of criminal cases and produce flawed judgements. Even with human oversight, an AI system's output can skew a judge's own decision-making.
An examination of Bangladesh's draft AI policy shows that, even though it addresses several potential sources of error, some parts require further fine-tuning and expansion. For example, while the policy emphasises the need to prevent prejudice, bias, and discrimination in AI (Section 6.1.4), it lacks specific guidelines on ensuring diversity in training datasets, which is crucial for such prevention. Even though the legal and regulatory framework is meant to be established separately in the National Strategy for AI, a companion to the National AI Policy, the policy would benefit from more explicit instructions on data diversity, algorithmic oversight, and ethical considerations.
The issue of data diversity is probably the most crucial for Bangladesh. Currently, most AI systems are trained on datasets from developed countries, which can have profound negative implications for developing countries like ours. For example, in the near future, when AI becomes more commonplace, state-of-the-art algorithms used in medical diagnostics may not work well for patients in our country if the training data does not include enough information about our climate, food habits, and genetic makeup, all of which are pertinent to our health. Therefore, while data privacy and security are crucial, developing a representative demographic dataset under government supervision is also essential.
Bangladesh must ensure its AI systems are fair, secure, and beneficial for all. While the draft National AI Policy marks a first step in that direction, gaps in data diversity, algorithmic oversight, and ethical considerations must be addressed more comprehensively in future revisions.
Amio Galib Chowdhury is a graduate research student at the McCoy College of Business, Texas State University, US.
Views expressed in this article are the author's own.