The danger of gender bias in AI and ML

The dangers of gender bias in AI extend beyond gender equality. AI outcomes will be as biased as the data that trains them.

Key takeaway

We spoke with data scientist and founder of Inspiring Women in Data Science Laura Da Silva, who walked us through the basic process of developing an AI algorithm. She outlines areas where bias might occur when women are not part of the team creating the algorithm, and why this is dangerous to AI innovation in the future.


Issues in gender pay gaps and gender imbalances across industries – particularly in the tech and finance sectors – have been in the spotlight around the world for the last few years. Citigroup recently revealed its own ‘ugly number’ voluntarily, according to Fortune, publicizing its 29% gap. Citigroup CEO Michael Corbat said in Davos that this reveals the firm has an “imbalance at the senior job and leadership level” and has now set a goal of having 40% women at the assistant VP to MD level in the U.S. by the end of 2021.

Meanwhile, Harvard Business Review released a study showing that companies that were required by law to reveal their gender pay gap saw the gap shrink over a period of a few years, with more women hired and more promoted to management positions, while the same metrics in a control group remained unchanged.

Beyond pay, there has been a significant gender gap in the STEM space as a whole, and particularly in the AI and advanced technologies space. As these engineers begin to build the systems and technologies we may come to rely on in the future, many fear this gap could have more significant long term effects.

The Gender Gap in AI and Data Science

 

Why do we need more women in AI and data science?

 

On a personal level, I found that I’m really good at solving problems – at finding ways to solve challenges and seeing how I can manipulate data. That led me to data science in the first place.

Women in general are good at seeing patterns and leveraging their intuition. You need this when you’re exploring data. They see all the possibilities when it comes to how you can bring data into a set or find similarities. They think about different relationships in data that men might not think about.

Because of this you need both women and men to come with their own ideas about how data can bring more value to a particular model. I started a group called Inspiring Women in Data Science because it wasn’t easy to find women in the sector. I created it so we can meet each other and show women who might be thinking about opportunities in the space that there are other women doing it.

Source: Hacker Noon

The World Economic Forum spoke about the significant gender gap that still exists when it comes to AI. According to WEF, women currently make up 22% of AI professionals – a gender gap three times larger than other industries. Do you feel the gender bias is an issue?

 

In general, when you have bias in business it’s because those creating it aren’t thinking about people who are different from themselves. One example where this became clearly evident was the racial bias discovered in popular automatic soap dispensers. Their sensors were effectively calibrated for white people’s hands and didn’t work for darker skin tones. So the product is not for everyone.

Author’s note:

This particular example was raised in a Facebook video by Chukwuemeka Afigbo. According to x.ai, it showed “a black hand under a soap dispenser and no soap coming out, followed immediately by a white hand under the dispenser. Voila. It works.” In this piece, David Dennis Jr. warns that the danger with this and other types of AI that may be developed to cater to a white, male audience is that “a program is only as progressive as its programmers….The future implications are terrifying for people of color, precisely because our present racial biases are being coded into the technology of the future.”


One interesting thing as it relates to gender is that women are the main consumers driving the economy. Marketing strategy is often targeted toward women. Women 20–40 years old consume a lot of apps, games, fashion, etc. For example, in the gaming industry – which has long been considered male-dominated – more women are playing than ever before. In fact, according to Game Analytics, women make up nearly half – 49% – of all mobile gamers. It would be a problem if decision makers didn’t take women into consideration when creating strategies.

 

What are the dangers of having a team that is training an AI which is not representative of both genders?

 

When we think about training data, there are different features. One of course is related to gender. Today we have to be very careful: there is more variety than just men and women. If you are creating a recommendation system, gender becomes really important. If there is bias, your recommendations will lack precision from the very start.
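As a rough illustration of this point, here is a minimal sketch (not from the interview) of checking how a recommender’s training interactions are split across gender groups; the group names and counts are invented for illustration.

```python
# Hypothetical interaction counts per gender group in a
# recommender's training data (invented for illustration).
interactions = {"F": 1200, "M": 8800, "nonbinary": 40}

total = sum(interactions.values())

# Share of training signal each group contributes
shares = {group: count / total for group, count in interactions.items()}
print(shares)
```

A model trained on data this skewed will mostly learn the preferences of the over-represented group, so its recommendations for everyone else start out imprecise.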

In the data science process, you always follow a life cycle called CRISP-DM (Cross-Industry Standard Process for Data Mining). In simple terms, it looks like this:

Phase 1 is business understanding

You have to understand how the business works and the specific purpose of the AI solution, as well as the problem around it. This requires framing the problem as a single question that can be answered precisely.

Phase 2 is data understanding

You need to review and explore the available data in order to answer this question.

  • Looking at the data available, you might think you have everything required to answer your question. But there will inevitably be factors you did not think about that impact the validity of your data, such as extreme values (outliers), biases, duplicates, incorrect data types, etc.
  • You also need to think about things that are not explicit in the data but might help create a better prediction, such as derived features, feature scaling, etc.
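The checks above can be sketched with pandas. This is a minimal illustration, not the interviewee’s code, and the toy data set (with a deliberate duplicate row, a string-typed salary column, and an implausible age) is invented:

```python
import pandas as pd

# Toy data set, invented for illustration
df = pd.DataFrame({
    "age":    [25, 34, 29, 34, 120],                          # 120 is a likely outlier
    "gender": ["F", "M", "F", "M", "M"],
    "salary": ["50000", "62000", "48000", "62000", "55000"],  # wrong dtype: strings
})

print(df.duplicated().sum())   # one duplicated row
print(df.dtypes)               # salary is 'object' (text), not numeric
print(df["age"].describe())    # the max of 120 flags an extreme value

# A heavily skewed gender distribution here is one place where
# bias can enter the model later on
print(df["gender"].value_counts(normalize=True))
```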

Phase 3 is data preparation

You often can’t consume data as it is – you need to clean and transform it. You may have empty or incorrect data points. To do this, data scientists keep in mind the ML technique they will use (supervised or unsupervised learning, NLP, recommendations, etc.) and transform the data accordingly.

  • You have to format the data to see if things make sense and explore the data set.
  • You also need to explore the distribution of the data sample using descriptive statistics, observing whether the values are more or less in the same range or whether there are outliers, skewness, etc. For example, if you see a reading that says it’s -10 degrees C in Spain in summer, that’s a fault. If only an insignificant amount of the data is affected by outliers, you might choose to remove them; if it’s more, you may consider correcting them where possible.
  • Once you understand and treat outliers or biases, and do appropriate transformations, you have to split your data set into two subsets (training and testing) to start modeling.
  • Depending on the models we will use, I also consider normalizing or standardizing the data before the split, because some models perform better on scaled features.
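A minimal sketch of this phase, assuming pandas and scikit-learn and using invented weather readings (including a faulty -10 °C summer reading like the one in the interview):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Invented readings; -10 C in a Spanish summer is a fault
df = pd.DataFrame({
    "temp_c":   [28.0, 31.5, np.nan, 30.2, -10.0, 29.8],
    "humidity": [40.0, 35.0, 38.0, np.nan, 33.0, 36.0],
    "rained":   [0, 0, 1, 0, 0, 1],
})

# Treat the faulty reading as missing, then fill gaps with the median
df.loc[df["temp_c"] < 0, "temp_c"] = np.nan
df = df.fillna(df.median(numeric_only=True))

X = df[["temp_c", "humidity"]]
y = df["rained"]

# Standardize the features, since some models perform better on scaled data
X_scaled = StandardScaler().fit_transform(X)

# Split into training and testing subsets to start modeling
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.33, random_state=42)
```

In practice many teams fit the scaler on the training subset only, to avoid leaking test-set statistics into training; the version above follows the simpler order described in the interview.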

Phase 4 is modeling

You often want to start with classical machine learning models – neural networks are the last thing I would suggest using, given the expensive computational requirements and the tedious hyperparameter tuning. Try different models and compare them.

Phase 5 is evaluation

You need to test how accurate your results are and then compare different metrics depending on the problem. For example, if you’re evaluating a classification problem, you may look at the confusion matrix, the AUC-ROC curve, as well as accuracy, precision, recall, etc. All these metrics help you decide whether your solution (the trained model) is good enough.

  • If you get poor results, far from what you expected, then you have to go back and review the different stages of the cycle again.
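To illustrate those metrics, here is a minimal sketch with scikit-learn; the labels, predictions, and probabilities are invented for illustration:

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score, roc_auc_score)

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                    # true labels
y_pred  = [0, 1, 1, 1, 0, 0, 1, 0]                    # hard predictions
y_score = [0.1, 0.6, 0.8, 0.9, 0.4, 0.2, 0.7, 0.3]    # predicted probabilities

print(confusion_matrix(y_true, y_pred))     # [[TN FP] [FN TP]]
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_score))   # uses the probabilities
```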

 

The final step, if you’re happy with the accuracy of your model, is to deploy it into the company’s systems. As you receive new data, you will have to prepare and transform it in the same way in order to re-train the model and evaluate whether it’s still good enough.
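A minimal sketch of that step, assuming scikit-learn: the fitted scaler and model are kept together so incoming data goes through exactly the same preparation before prediction or re-training. The training data here is invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Historical data used to train the deployed model (invented)
X_old = np.array([[1.0, 200.0], [2.0, 180.0], [3.0, 220.0], [4.0, 160.0]])
y_old = np.array([0, 0, 1, 1])

# Fit the preparation step and the model together
scaler = StandardScaler().fit(X_old)
model = LogisticRegression().fit(scaler.transform(X_old), y_old)

# New data arriving in production must be transformed the same way
# before prediction -- and later folded in when re-training
X_new = np.array([[3.8, 190.0]])
pred = model.predict(scaler.transform(X_new))
print(pred)
```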

Within this cycle, bias arises at the beginning, when we’re thinking about the business case and the features we’re going to use. Likewise, if the team creating the AI – the people evaluating the data for accuracy and searching for what might have been missed – is not representative, they may not think about the perspectives of other genders or races. The result is that they won’t end up creating a solution that works for everyone.
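One concrete way a team can catch this kind of problem is to audit a trained model’s accuracy separately for each group. This is a minimal sketch on synthetic data, where the simulated model is deliberately made worse for one group:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic test set: a gender label and a true outcome per person
gender = rng.choice(["F", "M"], size=200)
y_true = rng.integers(0, 2, size=200)

# Simulate a model that systematically errs more often for one group
flip = (gender == "F") & (rng.random(200) < 0.3)
y_pred = np.where(flip, 1 - y_true, y_true)

# Overall accuracy can look fine while one group is badly served
print("overall:", (y_true == y_pred).mean())
for g in ["F", "M"]:
    mask = gender == g
    print(g, ":", (y_true[mask] == y_pred[mask]).mean())
```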

 

What advice do you have for women interested in AI or data science?

 

A lot of women think they need a PhD to become a data scientist. There is a lot of mystery and rumors around what you need to be – almost like a unicorn.

I try to demystify this. It is true that you need to be a good problem solver. You have to work hard every day. You need to persist and keep looking for new solutions. It’s never finished. You have to explore, think, and keep investigating other ways. Sometimes it’s difficult to see the end point.

I tell them they can apply this thinking to anything, any business. If you have data, you can apply it. There are basics that need to be learned – maths, coding skills, and the ability to explain results clearly to non-technical people. They also have to learn to be consistent, to persist even when they don’t find a solution, and to collaborate.

I think everyone can be a data scientist. They just have to want to do it and want to solve problems.

Learn more about Laura’s work at Inspiring Women in Data Science here. Follow her on Twitter here.

