From the species that brought you the moon landing and the hot dog eating contest, artificial intelligence is here to change the world.
The first question many people ask about artificial intelligence (AI) is, “Will it be good or bad?”
The answer is … yes.
Canadian company BlueDot used AI technology to detect the novel coronavirus outbreak in Wuhan, China, just hours after the first cases were diagnosed.
Compiling data from local news reports, social media accounts and government documents, the infectious disease data analytics firm warned of the emerging crisis a week before the World Health Organization made any official announcement.
While predictive algorithms could help us stave off pandemics or other global threats as well as manage many of our day-to-day challenges, AI’s ultimate impact is impossible to predict.
One hypothesis is that it will bring us an era of boundless leisure, with humans no longer required to work.
A more dystopian thought experiment concludes that a robot programmed with the innocuous goal of manufacturing paper clips might eventually transform the world into a giant paper clip factory.
But sometimes reality is more profound than imagination. As we stand at the threshold of the Fourth Industrial Revolution, now may be the most exciting and important time to witness this blurring of boundaries between the physical, digital and biological worlds.
“The liminal is always where the magic happens.
This is always where we get crazy new identities, new debates, new philosophies,” says Tok Thompson, professor (teaching) of anthropology at USC Dornsife, and an expert on posthuman folklore.
For better or worse, we know AI will be created in our own image — warts and all. A dash of humankind’s mercurial ethics, wonky reasoning and subconscious biases will be stirred a priori into the algorithmic soup.
Most experts think that artificial superintelligence — AI much smarter than the best human brains in practically every field — is decades, if not a century, away.
However, with the help of leading scholars, we can anticipate the near future of artificial intelligence, including our interactions with this technology and its limits. Most of it, experts say, will be designed to take on a wide range of specialized functions.
Given AI’s potential to redefine the human experience, we should explore its costs and benefits from every angle. In the process, we might be compelled to finally adjudicate age-old philosophical questions about ourselves — including just what it means to be “human” in the first place.
That could prove its greatest benefit of all.
Man’s best friend
One wall of Yao-Yi Chiang’s claustrophobic basement office is a whiteboard where an algorithm of mind-bending complexity is scrawled from top to bottom.
On the floor, his mild-mannered border collie indulges in an afternoon nap. You can’t help but wonder what the two of them are preparing to unleash on the world.
It turns out that Chiang, associate professor (research) of spatial sciences at USC Dornsife’s Spatial Sciences Institute, is working on AI that monitors air quality. His research is helping to make cities smarter, not only technologically but also through specialized data and geospatial maps that inform policy.
“I think for small tasks, small applications, AI will make our lives much easier,” says Chiang.
Much of his work uses machine learning — a process through which AI automatically learns from new data and improves, without being explicitly programmed. For this project, it integrates hundreds of geographic and temporal data points to forecast air quality in neighborhoods where sensors have not yet been deployed.
Machine learning is one of an expanding collection of AI tools that will help people make smarter, healthier decisions.
“If you want to take your kids to the park for a soccer game in the afternoon, what’s the air quality going to be like?” Chiang asks. “If your kid has asthma, you need to make sure you have the required medicine.”
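To make the idea concrete, here is a minimal sketch of the kind of supervised forecasting Chiang describes: a model is trained on geographic and temporal features from neighborhoods that have sensors, then asked to predict air quality somewhere no sensor exists — say, a park at 3 p.m. The feature names, synthetic data and choice of model here are invented for illustration; this is not Chiang's actual system or data.

    # Illustrative sketch only: synthetic data, invented features,
    # and an off-the-shelf model stand in for the real research pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Synthetic training data: each row is one reading from a monitored site.
    # Columns: [latitude, longitude, hour_of_day, traffic_density, wind_speed]
    X_train = rng.uniform(
        low=[34.0, -118.5, 0, 0.0, 0.0],
        high=[34.3, -118.1, 23, 1.0, 10.0],
        size=(500, 5),
    )
    # Synthetic target: a PM2.5-like index loosely tied to traffic and wind.
    y_train = 20 + 30 * X_train[:, 3] - 2 * X_train[:, 4] + rng.normal(0, 3, 500)

    # Learn the relationship between location/time features and air quality.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Forecast for a neighborhood park with no sensor, at 3 p.m.
    park_at_3pm = np.array([[34.12, -118.28, 15, 0.6, 2.5]])
    print(f"Predicted air-quality index: {model.predict(park_at_3pm)[0]:.1f}")

The point of the sketch is the workflow, not the model: readings from places that do have sensors teach the system how location, time and conditions relate to air quality, and that learned relationship fills in the gaps on the map.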
AI will also underpin a vast array of products and services employed to manage some of our greatest challenges.
For instance, supply chains could become better optimized to reduce production and transportation waste, helping us become more sustainable. AI could also enable us to make driving safer, improve health care outcomes, protect wildlife and transform how we learn. Other systems will serve as highly personalized aides, focusing on helping people complete social tasks.
“Increasingly emotionally sophisticated personal assistants will motivate us and challenge us,” says Jonathan Gratch, research professor of psychology at USC Dornsife and director for virtual humans research at the USC Institute for Creative Technologies.
Many of these assistants will come in the form of lifelike computer characters capable of interacting autonomously.
Gratch, who is also a research professor of computer science at the USC Viterbi School of Engineering, is an expert in the field of affective computing, the intersection of AI and human emotion. He thinks that next-generation devices will combine physiological and situational data to serve not just as assistants but as de facto life coaches.
“They’ll help us reflect on what we want our better selves to be,” says Gratch. “And we’ll have control over it. We’ll be able to set the goals.”
AI is also being used to create therapeutic tools. Neuroscientists University Professor Antonio Damasio and Senior Research Associate Kingson Man of USC Dornsife’s Brain and Creativity Institute are exploring the potential for robots that can identify and express feelings in ways that promote deeper interactions with humans. Damasio envisions a future in which robots serve, for example, as companions to the elderly and lonely.
“The autonomy of AI and of robots has been seen as a potential threat to humanity.
The development of machines endowed with something like ‘feeling’ and obsessed with survival — their own and the survival of others — and designed to protect it, counters the dominant paradigm in AI and offers some hope,” says Damasio, professor of psychology, philosophy and neurology, and David Dornsife Chair in Neuroscience.
Performance review
Repetitive jobs such as factory work and customer service have already started to be usurped by AI, and job loss is among the greatest public concerns when it comes to automation.
Self-driving trucks, for example, will barrel along our highways within the next few years. As businesses eliminate the cost of human labor, America alone could see 3.5 million professional truck drivers put out of work.
“Everybody’s like, ‘Woo-hoo, yay automatons!’ ” Thompson says. “But there are a lot of social implications.”
AI will disrupt nearly every industry, including jobs that call for creativity and decision-making. But this doesn’t necessarily spell the end of the labor force. Experts are confident that a majority of people and organizations stand to benefit from collaborating with AI to augment tasks performed by humans. AI will become a colleague rather than a replacement.
Drawing from game theory and optimal policy principles, Gratch has built algorithms to identify underlying psychological clues that could help predict what someone is going to do next. By using machine vision to analyze speech, gesture, gaze, posture and other emotional cues, his virtual humans have been learning how these factors contribute to building rapport — a key advantage in negotiating deals.
AI systems could prove to be better leaders in certain roles than their human counterparts. Virtual managers, digesting millions of data points throughout the day, could eventually be used to identify which office conditions produce the highest morale or to provide real-time feedback on interactions with clients.
On the surface, this points to a future of work that is more streamlined, healthy and collegial. But it’s unclear how deeply AI on the job could cut into our psyches.
“How will we react when we’re told what to do by a machine?” Gratch asks. “Will we feel like our work has less value?”
It’s the stubborn paradox of artificial intelligence. On one hand, it helps us overcome tremendously complex challenges. On the other, it opens up new cans of worms — with problems harder to pin down than those it was supposed to solve.
You had me at Hello
As AI fuses with the natural world and machines take on more advanced roles, one might expect a healthy dose of skepticism.
Are algorithms programmed with our best interests in mind? Will we grant our AI assistants and co-workers the same degree of trust that we would another human?
From planning a route to work to adjusting the smart home thermostat, it appears we already have. AI has been integrated into our daily routines, so much so that we rarely even think about it.
Moreover, algorithms determine much of what we see online — from personalized Netflix recommendations to targeted ads — curating content and commodifying consumer data to steer our attitudes and behaviors.
Chiang cautions that the ubiquity and convenience of AI tools can be dangerous if we forget to think about what they’re really doing.
“Machines will give you an answer, but if you don’t know how the algorithm works, you might just assume it’s always the correct answer,” he says. “AI only gives you a prediction based on the data it has seen and the way you have trained it.”
In fact, there are times when engineers working on AI don’t fully understand how the technology they’ve created is making decisions. This danger is compounded by a regulatory environment akin to the Wild West. The most reliable protections in place might be those that are codified in science fiction, such as Isaac Asimov’s Three Laws of Robotics.
As Thompson explores the ways that different cultures interact with today’s AI and rudimentary androids, he is convinced that we will not just trust these virtual entities completely but connect with them on a deeply personal level and include them in our social groups.
“They’re made to be better than people. They’re going to be better friends for you than any other person, better partners,” says Thompson. “Not only will people trust androids, you’re going to see — I think very quickly — people fall in love with them.”
Sound crazy? Amazon’s voice assistant, Alexa, has already been proposed to more than half a million times, rejecting would-be suitors with a wry appeal to destiny.
“I don’t want to be tied down,” she demurs. “In fact, I can’t be. I’m amorphous by nature.”
I’ll be your mirror
In 1770, a Hungarian inventor unveiled The Turk, a mustachioed automaton cloaked in an Ottoman kaftan.
For more than 80 years, The Turk astonished audiences throughout Europe and the United States as a mechanical chess master, defeating worthy opponents including Benjamin Franklin and Napoleon Bonaparte.
It was revealed to be an ingenious illusion. A man hidden in The Turk’s cabinet manipulated chess pieces with magnets. But our fascination with creating simulacrums that look like us, talk like us and think like us seems to be nested deep within us.
As programmers and innovators work on developing whip-smart AI and androids with uncanny humanlike qualities, ethical and existential questions are popping up that expose inconsistencies in our understanding of humanness.
For millennia, the capacities to reason, process complex language, think abstractly and contemplate the future were considered uniquely human. Now, AI is primed to transcend our mastery in all of these arenas. Suddenly, we’re not so special.
“Maybe it turns out that we’re not the most rational or the best decision-makers,” says Gratch. “Maybe, in a weird way, technology is teaching us that’s not so important. It’s really about emotion and the connections between people — which is not a bad thing to emphasize.”
Thompson suggests another dilemma lies in the tendency for humans to define ourselves by what we’re not. We’re not, for example, snails or ghosts or machines. Now, this line, too, seems to be blurring.
“People can relate more easily to a rational, interactive android than to a different species like a snail,” he says. “But which one is really more a part of you? We’ll always be more closely related biologically to a snail.”
Written by Stephen Koenig