Understanding Human Doubts
Machine-learning models are often built on the assumption that humans always know what they are doing. In reality, humans are sometimes unsure and sometimes make mistakes. That is perfectly normal for us, but many machines, especially those designed to learn from human feedback, are not equipped to handle it.
This is a real problem if we want to trust machines with high-stakes tasks, such as spotting signs of disease in a medical scan.
Research on Machines and Human Uncertainty
Researchers from the University of Cambridge, The Alan Turing Institute, Princeton, and Google DeepMind set out to tackle this problem.
Their goal was to help machines account for human uncertainty, which could make AI systems more helpful and safer, particularly when the stakes are high.
The team asked people to label images and to say how confident they were in each answer. For some images, participants were unsure: they might see a bird but be unable to tell whether it was red or orange.
The researchers then trained machine-learning models on these uncertain labels. They found that models can learn to account for human doubt, but that training only on uncertain human labels degrades performance compared with training on confident, clean labels.
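One common way to let a model learn from unsure answers is to use "soft labels": instead of a one-hot label, the annotator supplies a probability distribution over classes, and the model is trained with cross-entropy against that distribution. The sketch below illustrates this general technique only, not the team's exact pipeline; the class names and numbers are invented for illustration.

```python
import numpy as np

def cross_entropy(predicted, target):
    """Cross-entropy of a predicted distribution against a target distribution."""
    return -float(np.sum(target * np.log(predicted)))

# Invented example: the two classes are ["red", "orange"].
# A confident annotator gives a one-hot "hard" label.
hard_label = np.array([1.0, 0.0])
# An unsure annotator ("probably red, maybe orange") gives a "soft" label.
soft_label = np.array([0.7, 0.3])

# The model's predicted class probabilities for one image.
predicted = np.array([0.6, 0.4])

loss_hard = cross_entropy(predicted, hard_label)  # penalises anything but "red"
loss_soft = cross_entropy(predicted, soft_label)  # also rewards mass on "orange"

print(round(loss_hard, 4))  # 0.5108
print(round(loss_soft, 4))  # 0.6325
```

Training against the soft label nudges the model toward the annotator's full belief rather than a forced single answer, which is one way a system can represent human doubt.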
The team will present their work at a conference in Montréal.
Why This Matters
Humans guess all the time. If we see someone from behind who looks like a friend, we might wave; if it turns out to be a stranger, little harm is done.
But in high-stakes settings, such as a doctor using an AI system to help decide whether a patient is ill, uncertainty carries real risk.
Katherine Collins, who led the study, said that many machine-learning models assume humans are always certain of their answers. In reality, humans doubt, and models need to account for that, especially in high-stakes applications.
Matthew Barker, a co-author of the study, added that models should be designed from the start to handle the fact that humans are not always sure.
The team ran experiments on several benchmark tasks, including classifying digits and bird images. When humans expressed uncertainty, the models' performance dropped.
But Collins noted that uncertainty is not only a weakness: it is a form of honesty. We need to know when to trust a machine and when to trust a person, and models that understand human doubt can be more genuinely helpful.
Although open questions remain, the researchers believe that accounting for human uncertainty can make machine-learning systems safer and more trustworthy.
They hope other researchers will build on their work to develop even better systems in the future.
Source: University of Cambridge