Algorithms of Bias

Rimsha Salam talks about gender bias in AI and machine learning.

Gender bias is prevalent in our society, and while numerous efforts are being made to counter it, we are still a long way from eliminating or even meaningfully reducing it. So it should come as no surprise that artificial intelligence models and frameworks are learning gender bias as well. There is growing awareness of the consequences of bias in machine learning. For example, a system used by judges to help set parole has been shown to be biased against black defendants. Similarly, facial recognition software tends to work better for users who are white or fair-skinned and male. This is largely because of the makeup of the training data fed to such systems.

Another example is natural language processing (NLP), a crucial ingredient of common AI systems like Microsoft’s Cortana, Amazon’s Alexa, and Apple’s Siri, which has been found to show significant gender biases. There have also been several high-profile instances of gender bias, including computer vision systems for gender recognition that recorded higher error rates when identifying women, especially those with darker skin tones. To produce fairer technology, there must be a joint effort from researchers and machine learning teams across the industry to correct this shortcoming. Fortunately, we are starting to see new work that looks at exactly how that can be achieved.

Little research has estimated the effects of gender bias in speech with respect to emotion, even though emotion AI is starting to play a more notable role in the future of work, marketing, and almost every industry you can imagine. In humans, bias occurs when a person misunderstands the emotions of one demographic category more often than another; for example, wrongly thinking that one gender is angry more often than another. The same bias is now being witnessed in machines and in how they misclassify emotion-related information. To understand why this is, and how we can fix it, it is important to first look at the causes of AI bias.

Causes and Elements that Influence Gender Bias in AI and Machine Learning

When discussing machine learning and artificial intelligence, bias can mean that there is a greater level of error for certain demographic groups. Because there is no single root cause of this type of gender bias, there are numerous variables and factors that researchers and analysts must take into account when developing and training machine learning models. The main factors include:

Incomplete or Skewed Data Set

When demographic categories are missing from the training data, the resulting data set will be skewed and incomplete. Models produced with this data can then fail to calibrate properly when applied to new data containing those absent categories. For example, if female speakers make up just 10% of your training data, a trained machine learning model is liable to produce a higher error rate when applied to women.
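
To make this concrete, here is a minimal sketch in Python (using pandas, with a hypothetical file and column name) that checks how skewed a data set is by gender and naively oversamples the under-represented group:

    import pandas as pd

    # Hypothetical training set with a 'gender' column describing each clip.
    train_df = pd.read_csv("speech_training_data.csv")

    # Inspect the demographic balance of the data.
    counts = train_df["gender"].value_counts()
    print(counts / len(train_df))  # e.g. male 0.90, female 0.10

    # Naive fix: oversample the under-represented group until the groups match.
    majority = counts.idxmax()
    balanced = pd.concat([
        group.sample(n=counts.max(), replace=(name != majority), random_state=0)
        for name, group in train_df.groupby("gender")
    ])

Oversampling is only a stopgap; collecting genuinely representative data, as discussed below, is the more robust fix.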

Biased Labels Used for Training

The vast bulk of commercial AI systems use supervised machine learning, which means the training data is labeled in order to teach the model how to behave. More often than not, humans come up with these labels, and given that people frequently exhibit bias (both conscious and unconscious), that bias can be inadvertently encoded into the labels themselves. Because machine learning models are trained to reproduce these labels, any misclassification and unfairness towards a particular gender will be encoded into the model, leading to bias.
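
One cheap sanity check, sketched below with pandas and hypothetical file and column names, is to compare how often annotators applied each label to each gender; a markedly higher "angry" rate for one gender can signal annotator bias rather than a real difference:

    import pandas as pd

    # Hypothetical annotation file with one row per labeled clip.
    labels = pd.read_csv("annotations.csv")  # assumed columns: 'gender', 'emotion'

    # Share of each emotion label within each gender group.
    print(pd.crosstab(labels["gender"], labels["emotion"], normalize="index"))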

Modeling Techniques & Features

The measurements and features used as input for machine learning models can also introduce gender and diversity bias. For example, over many decades, text-to-speech technology, also known as speech synthesis (e.g., Stephen Hawking’s voice), as well as automatic speech recognition, that is, speech-to-text technology (e.g., closed captioning), performed quite poorly for female speakers and those with high-pitched voices compared to males and those with low-pitched voices. This is connected to the fact that the way speech was analyzed and modeled was more accurate for taller speakers with longer vocal cords and lower-pitched voices. As a result, speech technology was most accurate for speakers with these characteristics, who are typically male, and far less accurate for those with higher-pitched voices, who are usually female.
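
One way to audit a speech corpus for this kind of skew is to estimate each speaker’s fundamental frequency (pitch). The sketch below uses the open-source librosa library with placeholder file names:

    import numpy as np
    import librosa

    for path in ["speaker_01.wav", "speaker_02.wav"]:  # placeholder file names
        audio, sr = librosa.load(path)
        # pYIN pitch tracking over a wide vocal range (C2 ~65 Hz to C7 ~2093 Hz).
        f0, voiced, _ = librosa.pyin(
            audio,
            fmin=librosa.note_to_hz("C2"),
            fmax=librosa.note_to_hz("C7"),
            sr=sr,
        )
        print(path, np.nanmean(f0[voiced]))  # mean pitch in Hz, voiced frames only

If the mean pitches cluster at the low end, the corpus under-represents higher-pitched voices.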

Practices Addressing Gender Bias

While gender bias in the field of AI and ML is persistent, there are ways to tackle it. Dealing with the divide is not an easy feat, nor are the solutions clear-cut. Here are a few points that executives leading AI and machine learning teams should keep in mind in order to counter gender bias and the unwanted divide.

Diversity in Training Samples

Ensure that the samples collected by your teams are diverse and varied. Collect roughly equal amounts of male and female audio samples to counter gender bias, and make sure to collect audio samples from diverse backgrounds. The humans labeling the samples should also come from diverse backgrounds.
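
As a toy illustration (all data and names below are synthetic), one way to keep every demographic subgroup proportionally represented when splitting a corpus is to stratify on a combined gender-plus-accent key. The later sketches reuse this setup:

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split

    # Synthetic stand-ins for real audio features and emotion labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 13))    # e.g. 13 acoustic features per clip
    y = rng.integers(0, 2, size=1000)  # e.g. a binary emotion label
    meta = pd.DataFrame({
        "gender": rng.choice(["female", "male"], size=1000),
        "accent": rng.choice(["us", "uk", "in"], size=1000),
    })

    # Stratify on a combined gender+accent key so every subgroup keeps the
    # same share in both the training and the test set.
    strata = meta["gender"] + "_" + meta["accent"]
    X_train, X_test, y_train, y_test, meta_train, meta_test = train_test_split(
        X, y, meta, test_size=0.2, stratify=strata, random_state=0
    )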

Treat Demographic Data Fairly

Enable and encourage ML teams to measure accuracy levels for different demographic groups separately, and to identify which groups the model treats unfairly.
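
Continuing the synthetic setup above, here is a minimal sketch of per-group evaluation: train any classifier, then report accuracy for each gender separately rather than as a single overall number:

    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    model = LogisticRegression().fit(X_train, y_train)
    pred = model.predict(X_test)

    # One accuracy figure per demographic group, not one global average.
    for group in meta_test["gender"].unique():
        mask = (meta_test["gender"] == group).to_numpy()
        print(group, accuracy_score(y_test[mask], pred[mask]))

A sizeable gap between the per-group numbers is exactly the kind of unfair treatment this practice is meant to surface.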

Collect Diverse Training Data 

Deal with bias by collecting a wide range of data covering sensitive and diverse groups, then apply modern machine learning de-biasing methods. These offer ways to penalize a model not just for errors on the primary prediction task, but also with additional penalties for producing unfairness and bias.
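
As one concrete example of such a method (sketched with the open-source Fairlearn library and the synthetic setup above; other toolkits exist), a reduction-based approach trades classification error off against a demographic-parity constraint:

    from fairlearn.reductions import ExponentiatedGradient, DemographicParity
    from sklearn.linear_model import LogisticRegression

    mitigator = ExponentiatedGradient(
        LogisticRegression(),              # any base classifier works here
        constraints=DemographicParity(),   # penalize unequal positive rates by gender
    )
    mitigator.fit(X_train, y_train, sensitive_features=meta_train["gender"])
    fair_pred = mitigator.predict(X_test)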

While identifying the causes of gender bias and planning solutions is an important first step, many open questions remain to be answered. The tech industry needs to develop more holistic approaches that address the three main causes of bias outlined above. We all have a duty to create technology that is free of bias, fair, and inclusive. The benefits of AI far outweigh the risks, and it is up to all the leaders, tech moguls, and practitioners in the field to collaborate, research, and develop solutions that reduce bias in AI for all.

Rimsha Salam is a tech-enthusiast, writer, blogger, ex-quality assurance engineer, and freelancer. She writes on the latest tech trends, gadgets, Information technology, and more. Always eager to learn and ready for new experiences, she is a self-proclaimed tech geek, bookaholic, introvert, and gamer.

Image by Gerd Altmann from Pixabay
