Can AI be a responsible citizen? - Outside Insight


AI is increasingly part of our everyday lives. Now a new crop of companies is working to ensure our models are trained responsibly

Key takeaway

Artificial Intelligence will increasingly be working alongside humans in the coming years. But it’s crucial that as machines learn and algorithms are trained with more and more data, our AI is ‘raised’ to be responsible, fair and transparent. From government regulation to tools created to keep autonomous vehicles in check, oversight is needed to ensure businesses can create an efficient, collaborative and powerful environment where humans and AI can effectively work together.

Deploying AI in business today involves more than just training algorithms to search or to predict. Those creating these tools and feeding them with data must be conscious of “raising” them to act in a way that is unbiased, fair and mindful of legal considerations.

‘Raising’ AI responsibly may at times present challenges that appear similar to those faced in human education and growth. Just like parents raising children, companies should be conscious of teaching their AI systems to learn, communicate and make unbiased decisions, reflect company values and also adhere to governmental and societal regulations. Doing so requires the right data inputs – many of which are found outside a company’s walls.

In a recent Fortune report by Beth Kowitt, following the magazine’s MPW International Summit, many voiced fears that the white-male bias felt in Silicon Valley may shape the AI being created in the future. As Mastercard Vice Chairman Ann Cairns put it, if we don’t have a diverse group of people working on AI, “we’re going to get computers that think like some bro culture on the West Coast.” She added, “That’s not so great.”

Businesses must raise their AI systems to act responsibly

The European Parliament is currently debating whether to give robots with AI capabilities an “electronic personality” – especially those with the ability to learn, adapt and act for themselves. This idea of ‘personhood’ applied to robots, similar to that granted to corporations, would make the robots liable for their actions, including any harm they may cause. The motion involves:

“Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.”

German authorities, meanwhile, have attempted to codify rules for how AI-powered self-driving cars should behave in an unavoidable accident: prioritising the value of human life over damage to property or animals.

These guidelines stress the fact that “property damage precedes personal injury: in dangerous situations, the protection of human life is always the highest priority. Also, in every driving situation, it must be clearly defined and identifiable who is responsible for the driving task: the human or the computer, and this data must be documented and stored (including clarification of possible liability issues).”
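The guideline above amounts to a strict priority ordering over harm categories. A minimal sketch of that ordering in Python (the category names, weights and function are illustrative, not taken from the German regulation itself):

```python
# Illustrative only: encodes "protection of human life is always the
# highest priority" as a strict ordering over harm categories.
# Lower number = must be avoided first.
HARM_PRIORITY = {"human": 0, "animal": 1, "property": 2}

def choose_outcome(unavoidable_harms):
    """Given the harm categories of all unavoidable outcomes, pick the
    outcome harming the least-protected category (property damage is
    preferred over harming animals, which is preferred over harming humans)."""
    return max(unavoidable_harms, key=lambda harm: HARM_PRIORITY[harm])
```

Under this sketch, a car forced to choose between hitting a barrier and hitting a pedestrian would always select the barrier; the second documented requirement – recording who was responsible for the driving task at each moment – would sit alongside such a rule as a logging obligation.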

Making socially acceptable decisions is at the crux of raising responsible AI. On one hand, this sheds light on the increasing responsibility we have placed on AI in our society; on the other, it reminds us that social responsibility is something even we humans are still adapting, regulating and making sense of. Ensuring AI takes responsibility would require it to understand right from wrong, what responsibility is and what it involves, how to avoid bias, and how to make decisions in the context of its surroundings.

A number of technological advancements are being created to address this very issue. We can look at the case of Germany’s Audi A8, and its introduction of the Traffic Jam Pilot AI, as a key example.

Who takes responsibility for accidents in an autonomous vehicle?

Audi’s new Traffic Jam Pilot AI allows drivers to go hands free in a traffic jam, provided the car is travelling at low speed (below 60 km/h) on a separated highway. In line with German regulations for autonomous cars, however, Audi has acknowledged that if the car makes a wrong decision within its specified AI operating conditions, the manufacturer will be held responsible for any damages that may arise.

Audi introduced the automated driving system in its new model, the A8 saloon. The system looks to combat the stress and boredom drivers often feel in traffic jams by taking over the monotonous stop-and-go driving, freeing the driver to complete other tasks. With the touch of a button, the driver can go hands free, and the car takes responsibility for braking and accelerating while in a jam.

The concept behind the Traffic Jam Pilot is to let drivers use their time more productively while stuck in traffic. Just as importantly, the system maintains a safe distance between cars – which, applied at large scale, should make traffic jams build up more slowly and break up more quickly.

Steffan Rietdorf, an engineer at Audi, explains the safety features of the new system: “Assume the driver can’t take over straight away at any given point. Clearly this was a huge challenge, as safety regulations require a car to be ‘fail operational’ – in other words, if the technology stops working and the driver isn’t paying attention, the car would still need to stop safely.”
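The operating conditions and the ‘fail operational’ requirement described above can be sketched as a simple gate-and-fallback structure. This is a hypothetical illustration of the logic, not Audi’s implementation; the class, function names and fields are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class DrivingContext:
    speed_kmh: float
    on_separated_highway: bool
    driver_responsive: bool  # can the driver retake control when prompted?

def traffic_jam_pilot_may_engage(ctx: DrivingContext) -> bool:
    # Hands-free mode is only permitted below 60 km/h on a separated highway.
    return ctx.speed_kmh < 60 and ctx.on_separated_highway

def fallback_action(ctx: DrivingContext) -> str:
    # 'Fail operational': if the system must disengage and the driver
    # does not respond, the car still has to stop safely on its own.
    if ctx.driver_responsive:
        return "hand control back to driver"
    return "bring vehicle to a safe stop"
```

The key design point the quote highlights is the last branch: the system cannot assume a handover will succeed, so a safe autonomous stop must always be available as the terminal fallback.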

Intel AI

Chip giant Intel is also weighing in on AI regulation. It has released a statement laying out its public policy principles for developing AI, emphasising the importance of creating new human employment opportunities and protecting human welfare, fostering innovation and open development, liberating data responsibly, and requiring accountability for ethical design and implementation.



A global movement to enforce responsible AI

Newer technological advances often come with justified fears of greater vulnerability and disruption to business practices. Business leaders will face tougher questions about AI as it plays a greater role in day-to-day decision-making.

The World Economic Forum’s Center for the Fourth Industrial Revolution, the IEEE, AI Now, the Partnership on AI, the Future of Life Institute, AI for Good and DeepMind, amongst others, have initiated a movement for responsible AI by releasing principles that aim to maximize the benefits AI brings to humanity and to curb the risks it poses. These principles include a focus on eliminating bias and ensuring AI is fed with the right external data sources to produce insights and results business leaders can trust.
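“Eliminating bias” is often made concrete through simple statistical audits of a model’s decisions. One common check is the demographic parity gap – how much positive-decision rates differ across groups. A minimal sketch (the function name and data are illustrative, not drawn from any of the bodies named above):

```python
def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest positive-decision rate
    across groups. 0.0 means every group receives positive decisions at
    exactly the same rate; larger values flag potential bias to investigate."""
    counts = {}  # group -> (positive decisions, total decisions)
    for decision, group in zip(decisions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + (1 if decision else 0), total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)
```

A large gap does not by itself prove unfairness, but it is the kind of trustworthy, auditable signal these principles ask business leaders to demand from their AI systems.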

Outside Insight Book

Are you leveraging insights from external data? Learn how to implement Outside Insight in practice.