In a world where seeing used to mean believing, distinguishing between real and computer-generated images is becoming tougher than ever.
A recent study by the University of Waterloo reveals that AI-generated photos of people can fool viewers nearly 40% of the time.
The research involved 260 participants who were shown 20 pictures without being told which were real and which were created by AI programs such as Stable Diffusion and DALL-E.
Out of these, 10 were actual photos of people found through Google searches, and the other 10 were crafted by AI.
Participants had to decide which images were real and which weren't. It turned out that only 61% correctly identified the AI-generated images, far below the 85% accuracy the researchers had expected.
Andreea Pocol, the study's lead researcher, pointed out that we're not as good at spotting these fake images as we might think. Even when people took their time to look for clues, such as how realistic the fingers, teeth, or eyes appeared, they often guessed wrong.
This task becomes even harder when people casually scroll through images online without paying much attention.
The study, titled "Seeing Is No Longer Believing," highlights how quickly AI technology is advancing, making it ever harder to tell real images from fake ones.
This rapid pace makes it difficult for academic research and legislation to keep up. Since the study began in late 2022, AI-generated images have only grown more convincing.
These convincingly real images pose a significant risk as they can be used for misleading purposes, especially in politics and culture.
Imagine someone creating a fake image of a politician in an embarrassing situation—such images could spread misinformation widely.
Pocol warns that disinformation is not a new problem, but the tools for spreading it are evolving rapidly. AI-generated images represent a new front in the battle against false information.
She suggests that as these fake images become increasingly realistic, identifying them will grow even more challenging for everyone, regardless of training or skill. This calls for developing new tools to detect and counteract fake images, akin to an arms race in AI technology.
In summary, the University of Waterloo study serves as a wake-up call about the power of AI to create images that many people cannot distinguish from reality.
As AI continues to evolve, understanding and mitigating its potential misuse becomes crucial for maintaining trust in the visuals we encounter daily.