World Economic Forum Offers Top Brands AI Insights
Jan 24, 2017
On January 18th, 2017 at the World Economic Forum in Davos, Switzerland, four technology influencers joined a panel discussion on artificial intelligence (AI):
- Ginni Rometty, CEO of IBM
- Joi Ito, Director of the MIT Media Lab
- Ron Gutman, Founder and CEO of HealthTap
- Satya Nadella, CEO of Microsoft
From the leaders of some of the world’s most powerful artificial intelligence brands, we learned how each is positioning their work against the technology’s most pressing concerns.
Host Robert F. Smith asked the panelists for a “rousing discussion” on the future of AI and “the issues we face as technologists, humans, and a society.” He cited early disruptions in the global economy and other international systems. “The great minds and humanistic thinkers must…develop elegant solutions to the complex problems caused by applying these systems.”
Ginni Rometty started by framing the problem solved by AI: we’re so overwhelmed with data, it’s impossible for us to use it. This causes “cognitive overload.” She advised that data is a competitive advantage only when unlocked by AI.
But, the disruption caused by artificial intelligence puts its adoption at risk. To help mitigate the risk, IBM’s CEO has aligned her organization to three key ethical principles:
- Purpose—An AI system should augment, not replace, a person’s ability to think.
- Transparency—Openness is an essential building block of trust. A user must be clear on when and where an AI system is in use, how it’s trained, and the quality of its outputs. The user needs to remain in control.
- Skills—Training is an obligation of organizations that develop and apply AI platforms. They must build up the skills of the human workforce that uses the software. To enable such training, businesses must build AI in partnership with the communities they serve (e.g., medical, legal, financial).
These principles will be critical in the moments AI causes harm (e.g., an automated vehicle crash). Rometty advocated for training human operators in real-world use cases, leaving them better prepared to manage AI’s missteps. In the spirit of purpose and transparency, she urged collaboration with regulatory bodies. Together, businesses and governments should determine safe and ethical guidelines for AI.
Joi Ito echoed Rometty’s call to define AI’s boundaries: “The market alone can’t do it.” He offered an example from a recent MIT study. It found that most people expect self-driving cars to sacrifice the lives of their occupants before those of the people around them. But the majority of those respondents also said they would never want to own such a car. So, instead of leaving these difficult issues to the market, Ito called on experts from across disciplines to help define regulations around AI. He expanded on this, citing Larry Lessig’s book on the ethical and moral values of technology: “The formative forces of ethics around AI will be technical architecture, law, markets, and [social] norms.”
Satya Nadella underscored the urgency for such a conversation. Presently, AI is in a stage he described as “supervised.” Humans are “in the loop,” labeling data and reviewing outcomes. As comfort with AI grows, Nadella predicted that more will operate in “unsupervised” situations—or what he called the “black box.”
Rometty rebutted: AI should never operate in “black boxes.” She offered medicine as an example. Doctors understand that Watson, IBM’s artificial intelligence platform, is versed in millions of medical documents. Still, they want transparency around the agent’s output. This requires Watson to share its “degree of confidence” and the “homework” it did to come to its conclusion.
But when it comes to the acceptance of AI, IBM’s CEO is less concerned with “black boxes.” She worries most about the specter of economic inequality and job displacement. Rometty said that people view the ballooning wealth of technology leaders as “unfair, economically.” Furthermore, they’re worried about losing their jobs.
But job displacement is a myth she wants to bust. “There will be jobs,” Rometty predicted, “but to fulfill them, we need new skills.” To seize the opportunities in this “once-in-a-generation transformation” of labor, she outlined three objectives:
- Defining “new collar” job functions for all people, including those who aren’t college educated
- Encouraging companies to retrain their workers whose jobs AI will augment
- And producing AI solutions that help employees transfer existing skills
“What we need to remember is the word ‘augment,’” Rometty said in her closing remarks. She recalled her earlier point about helping humans think, not replacing them. “We need to invest in the benefits and protect against the downside.”
This is just one discussion in a broader conversation around AI. In sharing IBM’s ethical principles, Ginni Rometty has helped set the tone for organizations positioning their technology to the public. Given the stakes, those that aren’t should.
From other emerging technology brands, we’ve seen principles on display similar to IBM’s. Last year, Uber’s deployment of self-driving cars in Pittsburgh gave us their take on purpose and transparency. Read our analysis of their pilot project and how it can help you define principles for your own brand.