19.08.2021 . 17:00—19:00

What will it take to solve the bias problem in Artificial Intelligence?

Theme: General

A Community Event from MKAI

Location: Virtual
Format: Talks

𝗪𝗵𝗮𝘁 𝘄𝗶𝗹𝗹 𝗶𝘁 𝘁𝗮𝗸𝗲 𝘁𝗼 𝘀𝗼𝗹𝘃𝗲 𝘁𝗵𝗲 𝗯𝗶𝗮𝘀 𝗽𝗿𝗼𝗯𝗹𝗲𝗺 𝗶𝗻 𝗔𝗿𝘁𝗶𝗳𝗶𝗰𝗶𝗮𝗹 𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲 (𝗔𝗜)?

𝗙𝗼𝗿𝘂𝗺 𝗢𝗯𝗷𝗲𝗰𝘁𝗶𝘃𝗲:

In this Inclusive Artificial Intelligence (AI) Forum we will explore the concept of AI bias (also known as algorithmic bias), the risks it presents, and to what degree it can be eliminated.

𝗙𝗼𝗿𝘂𝗺 𝗔𝗯𝘀𝘁𝗿𝗮𝗰𝘁:
AI bias, also known as algorithmic bias, is a phenomenon that occurs when an algorithm produces systematically prejudiced results. It arises from erroneous assumptions in the machine learning process, or from the use of incomplete, faulty or prejudicial data sets to train and/or validate machine learning systems. AI bias often stems from problems introduced by the individuals who design and/or train those systems, and it frequently leads to algorithms that reflect unintended cognitive or social biases or prejudices. As AI systems make their way into the military, banking and biomedical sectors and increasingly assist humans, this Inclusive AI Forum will examine in what ways bias in an algorithm is a threat to humans and what can be done about it.

To frame this discussion, we can classify the sources of bias in AI systems in three ways: bias in the data, bias in the humans, and bias in the process.

Bias in the data

When the data sample does not represent all the dimensions of the real-world data, there is a high chance that the algorithm will produce biased output that reflects the skew of its training data.
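This effect can be illustrated with a small sketch. The scenario below is entirely hypothetical: a toy population with two groups whose outcome rates differ, a training sample that over-represents one group, and a trivial "model" that just predicts the majority outcome it saw in training. The skewed sample makes the model systematically wrong far more often for the under-represented group.

```python
import random

random.seed(0)

# Toy population: two groups, A and B, with different positive-outcome rates.
def draw(group):
    rate = 0.7 if group == "A" else 0.3  # A: 70% positive, B: 30% positive
    return (group, 1 if random.random() < rate else 0)

population = [draw(random.choice("AB")) for _ in range(10_000)]

# Biased training sample: 90% group A, only 10% group B.
biased_sample = [draw("A" if random.random() < 0.9 else "B")
                 for _ in range(1_000)]

# A trivial "model": predict whichever outcome was the majority in training.
def majority_outcome(data):
    positives = sum(label for _, label in data)
    return 1 if positives > len(data) / 2 else 0

prediction = majority_outcome(biased_sample)  # learned mostly from group A

# Per-group error rate when this one-size-fits-all prediction is applied.
err = {}
for g in "AB":
    labels = [label for grp, label in population if grp == g]
    err[g] = sum(1 for label in labels if label != prediction) / len(labels)

print(f"prediction={prediction}, error A={err['A']:.2f}, error B={err['B']:.2f}")
```

Because group A dominates the sample, the learned prediction matches group A's typical outcome, and the error rate for group B ends up far higher; the algorithm is "biased" purely because of who was in the data.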

Bias in the humans

The individuals who train the algorithms have their own biases, closely tied to their ethnic, cultural and linguistic backgrounds. Many of these biases enter AI training involuntarily, and as a result those individuals can create algorithms that reflect unintended cognitive or social biases or prejudices.

Bias in the process

When the AI training process does not meet certain requirements or criteria, there is a significant chance that the algorithms will produce biased output. For example, an algorithm predicting weather conditions in the United Kingdom cannot be trained on weather data collected in India. The AI training process should therefore be continuously monitored and follow well-defined protocols.
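A simple process check that catches this kind of mismatch is to compare the training data's distribution with the data the model will actually see. The sketch below uses made-up monthly temperature figures and a crude drift indicator (standardized difference of means); both the numbers and the threshold are illustrative assumptions, not real protocol values.

```python
from statistics import mean, stdev

# Hypothetical monthly mean temperatures (degrees C).
uk_temps = [4, 6, 9, 11, 14, 17, 19, 18, 15, 11, 7, 5]          # training data
india_temps = [21, 24, 28, 32, 34, 33, 30, 29, 29, 28, 24, 21]  # deployment data

def shift_score(train, deploy):
    """Standardized difference of means: a crude distribution-drift indicator."""
    pooled_sd = (stdev(train) + stdev(deploy)) / 2
    return abs(mean(deploy) - mean(train)) / pooled_sd

score = shift_score(uk_temps, india_temps)
if score > 2:  # illustrative threshold for "distributions clearly differ"
    print(f"Distribution shift detected (score={score:.1f}): "
          "retrain or recollect data before deploying.")
```

Gates like this, run automatically before training and deployment, are one concrete way to make the monitoring and protocols mentioned above routine rather than ad hoc.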

𝗙𝗼𝗿𝘂𝗺 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗢𝘂𝘁𝗰𝗼𝗺𝗲𝘀:

Complete elimination of bias in AI may never be possible, but that doesn't mean that huge strides cannot be made to make AI fairer, more representative of the people it is intended to serve, and more inclusive. This MKAI Inclusive Artificial Intelligence (AI) Forum explores what steps we can take to reduce bias in AI.

𝗙𝗼𝗿𝘂𝗺 𝗔𝘁𝘁𝗲𝗻𝗱𝗮𝗻𝗰𝗲:
MKAI events are inclusive. Our expert speakers are carefully chosen for their ability to make the subject approachable and comprehensible. MKAI aims to help all people improve their AI-fluency and understanding of this domain. Everyone is welcome!

This event was inspired by the work of Aditya Paturi