Are AI chatbots just high-tech Magic 8 Balls? Expert weighs in


Have you ever played with a Magic 8 Ball, asking it a question and eagerly waiting for whatever random answer it reveals?

Now, imagine a Magic 8 Ball that can talk back to you with detailed replies.

That’s kind of like what large language models (LLMs) like ChatGPT are, according to Anton Dahbura, a cybersecurity and artificial intelligence expert at Johns Hopkins University.

Dahbura warns that these AI models can sometimes give made-up or biased answers, which he calls “hallucinations.”

He’s not talking about seeing unicorns or rainbows, but rather about the AI making things up.

He argues that this could be a problem, especially if people start relying too much on these AI models.

For instance, imagine your self-driving car deciding to take a turn because it “hallucinated” a road where there’s actually a wall!

Dahbura suggests a few ways to make AI safer: training models on better data, having companies take responsibility for their AI’s mistakes, and helping all of us learn more about AI’s strengths and weaknesses.

In a chat with The Hub, Dahbura breaks down why he compares ChatGPT to a Magic 8 Ball.

He explains that AI is used to solve complex problems where there are no straightforward rules. For example, telling the difference between a dog and a cat in a picture isn’t as simple as following a rule like “if it barks, it’s a dog.”
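
To see the difference, here is a minimal sketch, with invented animal features and an off-the-shelf scikit-learn classifier (not anything Dahbura describes): rather than a programmer writing the rule, the model works out its own rule from labeled examples.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy, invented feature vectors: [ear_pointiness, snout_length, weight_kg]
animals = [
    [0.9, 0.3, 4.0],   # cat
    [0.8, 0.4, 5.5],   # cat
    [0.2, 0.9, 30.0],  # dog
    [0.4, 0.7, 12.0],  # dog
]
labels = ["cat", "cat", "dog", "dog"]

# Nobody hand-writes an "if it barks, it's a dog" rule here; the model
# infers its own thresholds from the examples -- along with any quirks
# or biases the data happens to contain.
model = DecisionTreeClassifier(random_state=0).fit(animals, labels)

# A borderline animal: the answer depends entirely on what was learned.
print(model.predict([[0.5, 0.6, 8.0]]))
```
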

Because of this complexity, there’s a level of unpredictability to AI, which Dahbura dubs the “AI uncertainty principle.”

What this means is that you can’t prepare an AI for every possible situation, so its responses can sometimes surprise you, just like a Magic 8 Ball.
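
A toy sketch makes the analogy concrete (the replies and probabilities below are invented for illustration, not taken from any real model): a Magic 8 Ball picks uniformly from canned answers, while a language model samples each next word from a learned probability distribution, so the same question can come back differently each time.

```python
import random

# A Magic 8 Ball picks uniformly from canned replies...
EIGHT_BALL = ["It is certain", "Ask again later", "Don't count on it"]

# ...while a language model samples its next word from learned
# probabilities. These numbers are made up for illustration only.
NEXT_WORD = {"eight": 0.95, "nine": 0.04, "thirteen": 0.01}

def magic_8_ball():
    return random.choice(EIGHT_BALL)

def toy_model(prompt="There are ___ planets in the solar system."):
    words, weights = zip(*NEXT_WORD.items())
    return prompt.replace("___", random.choices(words, weights=weights)[0])

for _ in range(5):
    print(magic_8_ball(), "|", toy_model())
# Most samples say "eight," but the unlikely "thirteen" can still surface --
# the statistical seed of a "hallucination."
```
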

As for what he means by “hallucinations,” Dahbura offers a simple example: an AI telling a classroom there are 13 planets in the solar system.

A quick Google search can correct this mistake. But other hallucinations could be more harmful, like an AI wrongly identifying cancer in a medical scan.

Dahbura believes that companies and even the government need to make sure we all understand these “Magic 8 Ball” tendencies of AI.

This is especially important because AI technologies are becoming part of our daily lives, from AI chatbots to self-driving cars.

Despite the issues, Dahbura is hopeful. He believes things will improve as companies develop “guardrails” to control AI behavior.

But until then, it’s important for us to stay informed and a bit skeptical when dealing with AI, so we can separate the facts from the “hallucinations.”
