Regulating AI: can we eliminate bias?

As AI continues to develop at a rapid pace, governments are working to produce policies that aim to stamp out bias. But is that possible?


As the pace of AI development continues to ramp up, regulators and decision-makers are becoming more acutely aware of the biases that can exist in automated technologies.

We know that the insights AI provides are a product of the data that feeds it. If that data, or the team developing the AI, is biased, so too will be the recommendations and insights the AI reveals. As we learned from data scientist Laura Da Silva, AI developed by male-heavy teams, for example, can produce biased results if appropriate steps are not taken to ensure that the data used to train algorithms is comprehensive and addresses multiple perspectives.

Amazon famously made a major error on AI bias when it was revealed that its automated recruiting platform was unintentionally biased against women, a consequence of the previous employee data used to train its algorithms.
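This kind of bias is measurable. A minimal sketch below applies the "four-fifths rule" from US employment guidelines, which flags potential adverse impact when one group's selection rate falls below 80% of another's; the applicant numbers and groupings are purely illustrative, not Amazon's actual data or method.

```python
# Illustrative only: toy model outputs, where 1 = recommended, 0 = rejected.
# The four-fifths rule compares selection rates between groups and flags
# ratios below 0.8 as potential adverse impact.

def selection_rate(outcomes):
    """Fraction of applicants in a group who were selected."""
    return sum(outcomes) / len(outcomes)

men   = [1, 1, 1, 0, 1, 1, 0, 1]   # 6 of 8 selected -> 0.75
women = [1, 0, 0, 1, 0, 0, 0, 0]   # 2 of 8 selected -> 0.25

ratio = selection_rate(women) / selection_rate(men)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.33

if ratio < 0.8:
    print("Potential adverse impact: model warrants review")
```

A ratio this far below 0.8 would not prove discrimination on its own, but it is exactly the kind of signal the proposed assessments are meant to surface before a system is deployed.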

“Computers are increasingly involved in the most important decisions affecting Americans’ lives — whether or not someone can buy a home, get a job or even go to jail,” Democratic Sen. Ron Wyden said in a statement. “But instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color.”

In response, the US government is considering new policies that would force companies developing AI, and organizations using the technology to inform significant decisions, to audit their data and address potential bias.

Here are a few regulations up for debate that may affect how AI, machine learning algorithms and other automated decision-making systems are used in the US.


The Algorithmic Accountability Act of 2019

This new bill is sponsored by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR), and has a House equivalent sponsored by Rep. Yvette Clarke (D-NY). If passed, the bill would require companies with annual revenue over $50 million to assess automated decision-making systems for “accuracy, fairness, bias, discrimination, privacy and security.”

This comes after Facebook, Google, Amazon and other large tech firms have faced legal action in relation to their use of consumer data, AI bias and privacy concerns.

According to The Verge, “The bill seems designed to cover countless other controversial AI tools — as well as the training data that can produce biased outcomes in the first place. A facial recognition algorithm trained mostly on white subjects, for example, can misidentify people of other races.”


Commercial Facial Recognition Privacy Act

Introduced in March 2019, this bill is specifically focused on eliminating bias in facial recognition technology. The bipartisan bill is sponsored by Senators Brian Schatz (D-HI) and Roy Blunt (R-MO) and is the first of its kind. According to The Verge, “under the bill, users would need to be notified whenever their FR data is used or collected…. It also would require third-party testing before the tech could be introduced into the market to ensure it is unbiased and doesn’t harm consumers.”

Microsoft president Brad Smith said, “Facial recognition technology creates many new benefits for society and should continue to be developed. Its use, however, needs to be regulated to protect against acts of bias and discrimination, preserve consumer privacy, and uphold our basic democratic freedoms.”

New York City Task Force on AI

Recode reported that the New York City Council passed a law in 2017 creating a special task force to investigate city agencies’ use of algorithms and deliver a report with recommendations. Time is running out, however, on the task force’s ability to deliver that report.

Many worry that the task force still lacks information about how exactly these algorithms are used, and that there has been too little public input. A big part of the problem is that the algorithms involve proprietary information, which the organizations using them are hesitant to release.

Some fear these regulations may prove too difficult to implement.



Steps are also being taken to develop technology that can help keep AI development on the right path. Researchers at IBM Watson are currently working on identifying and mitigating bias in AI by devising an independent bias rating system that can determine the fairness of an AI system. These are just the first steps toward ensuring a fair and unbiased relationship with this powerful technology moving forward.
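To give a sense of what a bias rating might look like in practice, here is a hypothetical sketch that scores a classifier by statistical parity difference (SPD), the gap in positive-outcome rates between two groups. This is not IBM’s actual rating system; the metric choice, tolerance threshold and data are all illustrative assumptions.

```python
# Hypothetical bias "rating": score model predictions by statistical parity
# difference (SPD) between two groups. A perfectly fair system has SPD near 0.

def statistical_parity_difference(preds_a, preds_b):
    """P(positive outcome | group A) minus P(positive outcome | group B)."""
    rate_a = sum(preds_a) / len(preds_a)
    rate_b = sum(preds_b) / len(preds_b)
    return rate_a - rate_b

def fairness_rating(spd, tolerance=0.1):
    """Map the magnitude of SPD to a coarse rating (threshold is illustrative)."""
    return "fair" if abs(spd) <= tolerance else "biased"

group_a = [1, 0, 1, 1, 0]  # 60% positive outcomes
group_b = [1, 0, 0, 1, 0]  # 40% positive outcomes

spd = statistical_parity_difference(group_a, group_b)
print(f"SPD = {spd:.2f} -> {fairness_rating(spd)}")  # prints SPD = 0.20 -> biased
```

An independent rating along these lines could let a regulator or customer compare systems on fairness without needing access to the proprietary models themselves.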

