Last week, the California Senate advanced a bill that would require Amazon (AMZN) to reveal details about the productivity-tracking algorithm used in its warehouses. Meanwhile, Facebook (FB) came under fire this week after a Wall Street Journal report revealed the company is aware that its Instagram feed makes some teenage girls feel bad about themselves.
These developments reflect a growing pushback against big tech’s algorithms, which apply artificial intelligence (AI) to tailor content and performance targets for particular users or workers. According to AI specialist Kai-Fu Lee, who previously worked as an executive at Google (GOOG, GOOGL), Apple (AAPL), and Microsoft (MSFT), the top four perils of emerging AI technology are externalities, threats to personal data, the inability to explain consequential decisions, and warfare. “The single largest danger is autonomous weapons,” he says.
“That’s when AI can be trained to kill, and more specifically trained to assassinate,” adds Lee, the co-author of a new book entitled “AI 2041: Ten Visions for Our Future.” “Imagine a drone that can fly itself and seek specific people out either with facial recognition or cell signals or whatever.”
A prohibition on autonomous weapons has gained support from 30 countries, even though an in-depth assessment commissioned by Congress advised the United States to oppose a ban because it could prevent the country from using weapons it already has. In 2015, thousands of AI researchers, along with Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a ban on such weapons.
According to Lee, autonomous weapons will revolutionize warfare because their affordability and precision will make it easier to wreak havoc while making it nearly impossible to track down the perpetrator. Lee has been at the forefront of AI development for decades, dating back to his days as a doctoral student at Carnegie Mellon University, where he helped develop voice recognition and automated speech technologies.
He has been the CEO of Sinovation Ventures, a Chinese tech-focused venture capital firm with over $2.5 billion in assets under management, since 2009.
In an interview with Yahoo Finance, Lee pointed to a final set of AI risks involving sensitive personal data and the difficulty of explaining the technology’s judgments.
AI decisions are especially consequential in life-or-death scenarios, Lee says, such as the trolley problem, in which a decision-maker must choose whether to divert a runaway trolley from killing several people in its path at the cost of killing fewer people on another track.
“Can AI explain to us why it made decisions that it made?” he says. “In four key things like driving autonomous vehicles, the trolley problem, medical decision-making, surgeries.”
“It gets serious,” he adds.