Channarong Intahchomphoo is an adjunct professor at the School of Engineering Design and Teaching Innovation at the University of Ottawa. His research explores the real-world impact of artificial intelligence (AI) and robotics. He was recently invited to the United Nations to give a presentation on racism and racial bias in AI. We discussed the presentation, titled AI Fairness, Bias and Discrimination, which sheds light on the vital need for equitable AI practices in our increasingly digital world.
Question: How do you define bias and discrimination in the context of AI, and what are their implications in the development and deployment of AI systems?
Intahchomphoo: First and foremost, I define bias as a problem because it is unfair: it favours particular groups and does not treat people equally, which leads to discrimination. In the context of AI, based on my research with various underserved communities on emerging technologies, I have learned that AI can have both positive and negative impacts.
AI has the potential to unintentionally reinforce unfairness, bias and discrimination. Engineers may build these flaws into AI systems unknowingly while rushing to be first to market, often without thorough consideration and rigorous testing before deployment.
Therefore, it is important to promptly address and mitigate the risks and harms associated with AI. I believe that engineers, policymakers and business leaders themselves need a sense of ethics so they can recognize issues of fairness, bias and discrimination at every stage, from AI development to deployment.
Q: In your review of AI and the human race, what were the main findings and insights?
Intahchomphoo: In one of my recent publications, I explained that the relationship between AI and the human race has four aspects:
- AI causes unequal opportunities for people from certain racial groups. This includes unfair decisions on mortgage loan applications submitted by people of colour, and outright racial discrimination by some AI applications against people of colour searching for jobs online.
- AI can also help to detect racial discrimination. As the famous expression goes, AI has two sides: good and bad. For example, AI has been developed to identify topics and individuals who experienced racism offline and then shared their stories on social media. In this case, the AI aims to understand how online communities responded to and supported the victims of racism.
Additionally, AI can detect hate speech posted on the internet and social media, filtering out individuals who try to join conversations with the intention of spreading hate speech and harmful content, before that content is widely disseminated and gets out of control (a simplified sketch of such a screening step follows this list).
- AI is applied to study the health conditions of specific racial population groups. For example, in the case of cardiovascular disease, AI can help determine whether doctors should make different medical treatment decisions based on the patient’s race.
- AI is used to study demographics and facial images of people from different racial backgrounds. AI facial recognition tools have been applied to criminal investigations, facial surgery and virtual reality.
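To make the screening idea mentioned above concrete, here is a deliberately simplified, hypothetical Python sketch of a pre-publication filter. The FLAGGED_TERMS list, risk_score heuristic and threshold are illustrative stand-ins, not the systems described in the interview; real moderation pipelines rely on trained classifiers and human review rather than keyword lists.

```python
# Hypothetical sketch of a "filter before dissemination" step.
# The flagged-term list and threshold are placeholders; production
# systems use trained hate-speech classifiers and human moderators.

from dataclasses import dataclass

FLAGGED_TERMS = {"slur1", "slur2", "threat"}  # placeholder vocabulary


@dataclass
class Post:
    author: str
    text: str


def risk_score(post: Post) -> float:
    """Return the fraction of tokens that match a flagged term."""
    tokens = post.text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in FLAGGED_TERMS)
    return hits / len(tokens)


def screen(posts: list[Post], threshold: float = 0.2) -> tuple[list[Post], list[Post]]:
    """Split incoming posts into (publishable, held_for_review)."""
    publishable, held = [], []
    for post in posts:
        (held if risk_score(post) >= threshold else publishable).append(post)
    return publishable, held


if __name__ == "__main__":
    incoming = [
        Post("a", "sharing my story and how the community supported me"),
        Post("b", "slur1 slur2 threat"),
    ]
    ok, review = screen(incoming)
    print(f"published: {len(ok)}, held for moderator review: {len(review)}")
```

The point of the sketch is only the pipeline shape: score each incoming post, hold high-risk content for review, and publish the rest before harmful material spreads.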
At the time of this research project, in 2020, we did not yet have powerful generative AI tools like ChatGPT, which are capable of reasoning and of processing text, images and audio. Researchers were primarily focusing their efforts on computer vision, which was an essential component of AI systems in autonomous vehicles at that time.
In the future, AI co-pilots and companions will start to live alongside us in various aspects of life, such as browsing webpages for shopping, studying or playing video games. AI co-pilots will be able to see, hear and interact with us in real time.
Q: What policy recommendations do you propose to address bias and discrimination in AI systems, and how can these recommendations be implemented on a global scale?
Intahchomphoo: During my recent keynote speech at the UN offices in Geneva, I made two policy recommendations.
My first recommendation emphasizes the importance of advocacy. The UN should advocate for local and national governments to engage in the development of international guidelines that establish best practices, ensuring AI remains fair, unbiased and non-discriminatory, and is deployed responsibly.
My second recommendation highlights the need for enhanced collaboration among all stakeholders. The UN should actively promote collaboration among tech companies, local and national governments, NGOs and representatives from underserved communities. This is crucial to embedding fairness, unbiased practices and non-discrimination in AI as a core societal and industrial framework.
Importantly, those two recommendations must be implemented on a global scale, not just in certain regions or continents.
Q: How can international collaboration and the development of guidelines help ensure that AI technologies are fair, unbiased and respectful of human rights and dignity?
Intahchomphoo: International collaboration can help us understand which AI development practices and guidelines work and which do not, across different regions, social norms and cultures. By bringing together a broad range of viewpoints on AI technologies, we can establish global benchmarks that all regions can adopt. This will ensure a consistent and inclusive approach to fairness, bias mitigation and respect for human rights in AI. Moreover, international collaboration can facilitate the sharing of expertise and resources.
Q: How can stakeholders, including governments, industry leaders and civil society organizations, work together to promote transparency, accountability and inclusivity in the design and deployment of AI systems?
Intahchomphoo: Governments could host conferences and forums where all stakeholders can discuss responsible AI design and deployment policies, share best practices and address concerns. They should also establish clear regulations and guidelines that mandate responsible AI design and deployment.
Additionally, governments should lead in implementing educational programs to raise awareness among citizens about the implications and effects of AI and how to use AI responsibly. Business leaders could provide training and resources to their AI engineers to help them understand the societal impact of the AI products they are developing.
Civil society organizations could create public campaigns to inform communities about the impacts of AI and how to protect their rights as consumers.
Each stakeholder has a role to play, but we are all working towards the same goal.
Professor Intahchomphoo presented his work on the theme “Does AI Reinforce Racism and Racial Discrimination?” at the United Nations’ 10th session of the Group of Independent Eminent Experts on the Implementation of the Durban Declaration and Programme of Action, in Geneva, Switzerland, in June.
Members of the media can directly contact Professor Intahchomphoo at:
Channarong Intahchomphoo (PhD)
Adjunct Professor, School of Engineering Design and Teaching Innovation
Faculty of Engineering, University of Ottawa
Affiliated researcher at uOttawa’s Canadian Robotics and AI Ethical Design Lab (CRAiEDL)
cintahch@uOttawa.ca
For additional information: media@uOttawa.ca