AI governance and ethical use: Skill or mere abuse in higher education?
The emergence of advanced generative AI tools such as ChatGPT and Gemini has had a significant impact on higher education. Early concerns centred on plagiarism and the potential collapse of writing and critical thinking. But by concentrating solely on these immediate issues, higher education overlooks a deeper and more persistent challenge: the pressing need to build strong governance and ethical structures for AI within our institutions.
The discussion in the faculty lounge has stalled, caught between calls to ban AI outright and awkward attempts to fold it into existing practice. This represents a significant failure of vision. Universities actively shape the technological landscape; they are the crucial platforms where future innovators, analysts, and informed citizens are cultivated. It is our ethical and intellectual responsibility to take the lead.
Platforms such as ChatGPT and Gemini offer significant benefits to students and educators alike. They provide real-time academic assistance, accelerate the drafting of literature reviews, automate presentation preparation, support language editing and proofreading, and help in brainstorming intricate research designs. For institutions facing resource constraints, AI can be a powerful equaliser, lowering barriers for students who might otherwise struggle to access educational resources or keep pace with their learning. Implemented effectively, AI has the potential to reinvent higher education as a more inclusive, personalised, and efficient setting for students of all abilities.
Yet in the absence of careful AI governance, the drawbacks may outweigh the benefits. When academic writing is polished by algorithms, detecting academic dishonesty becomes far harder. Algorithms can carry bias, shaping knowledge in ways that quietly reinforce societal disparities. Data is another serious concern: when students engage with AI, what information is being collected, and how will it be used in future? Without clear guidelines, students and faculty are left in a vulnerable position.
The underlying concern is governance. Who decides how these tools are applied? What principles guide their use in classroom assignments, research, and the evaluation of students? Most universities have no definitive answers.
This piecemeal approach is bound to lead to trouble. It fosters disparity, as tech-savvy students use AI to their advantage while others fall behind, and it forces educators to improvise their own varied, frequently uninformed, strategies. Universities face significant risks: data privacy and security breaches when confidential research data or faculty and student information is fed into large-scale AI platforms, and threats to academic integrity stemming from our failure to define ethical AI usage.
This is where our unique role becomes evident. Higher education exists to search for truth, cultivate critical thinking, and serve the community's interests. We must move beyond panic and build the ethics and governance of AI on three pillars.
First, institutional governance, not individual caution. Every university should establish a task force, sanctioned by the senate or academic council, that includes faculty from disciplines such as Computer Science, Business, Philosophy, Psychology, Law, and the Humanities, alongside students and administrative representatives, to develop clear and principled guidelines. These policies should cover permissible use, data security, privacy, and governance, academic integrity, and accessibility, ensuring clear communication and uniformity for all parties involved.
Second, skills, not a mere ban. The AI usage policy must be woven into foundational curricula, with courses and modules on AI ethics, algorithmic bias, and societal implications across fields of study. A business student, for example, should grasp how large language models (LLMs) can identify customer sentiment; a pharmacy student should be able to evaluate an algorithm for racial bias. This is fundamental expertise and a crucial skill for the 21st century.
Third, critical engagement, not uncritical adoption. Higher-education institutions should actively scrutinise this technology and serve as its moral compass. That means rigorous investigation of AI's limitations and its detrimental effects on the learning environment. The actions we take today will shape tomorrow's leaders. Will they remain passive consumers of opaque technology, or will they become thoughtful professionals who shape their own future?
The question is no longer whether AI has a place in higher education. It is whether we possess what is necessary to lead it effectively.
Dr Najmul Hasan is an assistant professor in Information Systems at BRAC Business School, BRAC University.