IBM Watson CTO Rob High spoke out about his views on AI ethics at this year’s Mobile World Congress. Meltwater CEO and Outside Insight author Jorn Lyseggen discussed issues of AI ethics during the global Outside Insight launch tour. Experts across the board agree that transparency, and an understanding of the assumptions informing any algorithm, are key.
More data means more privacy and security implications
Not all data are created equal. According to IBM Watson CTO Rob High at this year’s Mobile World Congress, it’s important that individuals and businesses understand which of their data is being analyzed, and by whom. Conversely, for businesses that rely on AI for decision-making, it’s essential that they understand the underlying data and assumptions fueling AI outputs, so they can make a judgment call on what the algorithms are telling them rather than taking AI at face value.
“One of the things we have to realize about AI – it’s relatively new to all of us. There’s a lot about it that we don’t all fully understand. As with any new technology, it’s really important that we be thinking now about how we do that ethically and responsibly. For us, that comes down to three basic principles. Trust, respect, and privacy,” High said.
For High, this means questioning assumptions and approaching AI implementation with transparency and privacy rights at the core.
“Transparency comes down to can we identify what sources of information are being used? Have we established the right properties, the right principles in place when we train these systems to use data that is representative of who we are, and the information that we’re using?”
Similarly, Meltwater CEO Jorn Lyseggen discussed ethics, transparency and regulation in AI with global industry experts during the launch of Outside Insight.
“AI is so mystified,” Lyseggen said. “Only people who work with AI know what it means. My surprise was that artificial intelligence has zero intelligence. My biggest concern is that people believe too much in it. It’s very difficult to completely remove bias. AI is fundamentally biased in how it was created, trained, programmed.”
As such, he believes there will be some unintended consequences, creating need for policy and regulation. “I do think there is a role for regulation to come in, because I don’t think companies can be expected to regulate themselves.”
Lyseggen emphasized the importance of the human element when evaluating AI output, as well as the need for a deep understanding of the assumptions that inform the AI algorithms, in order to establish trust.
“You can’t blindly follow your AI; you have to challenge it. You can look at it as a GPS – it helps you understand where you are and where you want to go. But it will be the judgment of the executives that decide ‘Do I want to climb that mountain or do I want to walk around it?’ That is the role of the human in decision-making and the future role of executives.”
In this sense, Lyseggen sees a future where AI and business decision-makers work together.
“I don’t think AI will ever replace corporate executives or managers. It’s going to be a great partnership, where corporate executives will bring the skill sets of human judgment, gut feeling, creativity, entrepreneurial mindset and passion. Machines or AI will make some of the repetitive, transactional, pattern-matching work much faster and more cost-effective, and that’s going to result in increased productivity, which will create new opportunities and goals.”
One of the most important requirements for AI to succeed, he argued, is that the executives and decision-makers using the technology have enough data-science literacy to challenge the model and fully understand its underlying assumptions.
“We do think people are ready, but I think it is going to happen very quickly. The companies that are ready are the innovative companies, the fast companies, the small and medium-sized companies.”
You can’t blindly follow your AI; you have to challenge it
Making headway in AI regulation
Several advocacy groups and funds have been established in an effort to change the way AI is created and implemented, with policy at their core. Here are a few, backed by some of the biggest names in tech and research:
1. Partnership on AI: created in partnership with companies including Apple, Amazon, IBM, Facebook, and Microsoft. Their goal is to “advance public understanding of artificial intelligence technologies and formulate best practices on the challenges and opportunities within the field.” Executive Director Terah Lyons was formerly policy advisor to the U.S. chief technology officer at the White House Office of Science and Technology Policy (OSTP), where, according to MIT Technology Review, she was behind the Obama administration’s deep dive into AI’s potential to change the world.
2. AI Now Institute: a research center at NYU, launched by Kate Crawford, a principal researcher at Microsoft Research, and Meredith Whittaker, a founder of Google Open Research. Their goal is to ensure solutions are being developed for the issues that need solving, and that engineers developing AI are working closely with potential users.
3. Ethics and Governance of AI Fund: intended to support work around the world that advances the development of AI in the public interest, with an emphasis on research and education. Members of the fund include executives from the Omidyar Network, the Knight Foundation, Greylock, the MIT Media Lab, and the Berkman Klein Center for Internet & Society at Harvard University.
4. Association for the Advancement of Artificial Intelligence (AAAI): a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. AAAI aims to promote research in, and responsible use of, artificial intelligence. Its officers hail from a number of research centers at U.S. universities.