How AI technology could fight the dark side of the internet


The internet is full of information—some of it is helpful, while some of it is harmful or just untrue.

Misinformation, propaganda, and fake news are just some of the forms that “bad” information takes online, and they can sometimes lead to serious problems like cyberbullying or social conflict.

But guess what?

A team at the Information Sciences Institute (ISI), part of the University of Southern California, is working on ways to fight back against this problem.

They are creating artificial intelligence (AI) technology that can reason like a human when it encounters bad information, helping to identify and explain it.

The first project is about detecting logical fallacies. You might ask, what’s a logical fallacy?

Simply put, it’s a mistake in reasoning that can make an argument seem true, even when it’s not. For example, attacking the person instead of their argument (“don’t listen to her climate plan, she failed science class”) is a fallacy called ad hominem. The team believes that if they can teach AI to spot these logical errors, it can be a useful tool in the fight against misinformation, propaganda, and fake news.

Filip Ilievski, the lead researcher, said their project goes a step further than just identifying fallacies.

The AI will also explain the type of fallacy and why it’s wrong. To do this, they use techniques called case-based reasoning and prototyping methods.

These methods teach the AI to learn from past examples and apply that knowledge to new situations.
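To make the idea concrete, here is a minimal sketch of case-based reasoning in Python. This is not the ISI team’s system: the case library, example sentences, and labels below are invented for illustration, and a simple TF-IDF text similarity stands in for the much richer language models real research uses.

```python
# A minimal sketch of case-based reasoning for fallacy detection.
# The case library, sentences, and labels are invented for
# illustration; they are not the ISI team's data or models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Past "cases": arguments already labeled with the fallacy they contain.
case_library = [
    ("You can't trust his climate argument, he failed science class.", "ad hominem"),
    ("Everyone is buying this phone, so it must be the best one.", "bandwagon"),
    ("If we let students redo one test, soon no grades will matter.", "slippery slope"),
]

vectorizer = TfidfVectorizer()
case_vectors = vectorizer.fit_transform([text for text, _ in case_library])

def classify(argument: str) -> str:
    """Retrieve the most similar past case and reuse its fallacy label."""
    query = vectorizer.transform([argument])
    scores = cosine_similarity(query, case_vectors)[0]
    text, label = case_library[scores.argmax()]
    return f"Looks like '{label}' (similar to past case: {text!r})"

print(classify("Don't listen to her tax plan, she once went bankrupt."))
```

The key pattern is retrieve-and-reuse: instead of deciding from scratch, the system finds the most similar past case and borrows its label, which also gives it a natural way to explain itself (“this argument resembles a known example of X”).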

The goal? To create an AI assistant that helps human moderators, who monitor online platforms, spot and understand false arguments more quickly and easily.

The second project focuses on identifying harmful content in memes.

This could include anything from hateful comments to misogyny (prejudice against women). The challenge with memes is that they can contain complex cultural references which are not always easy to explain or understand.

To tackle this, the team again used case-based reasoning. They taught the AI to build a library of examples so it can identify problematic themes in new memes. For instance, if a meme was considered misogynistic, they would ask the AI: “Why is this meme misogynistic? Is it shaming, stereotyping, or objectifying women?”
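Here is a similarly hedged sketch of how such a case library might look in Python. Again, the captions, labels, and reasons are made up for illustration, and matching on caption text alone ignores the images and cultural references that make real memes so hard to judge.

```python
# A minimal sketch of a case library for meme moderation.
# All captions, labels, and reasons are invented for illustration.
from dataclasses import dataclass
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

@dataclass
class MemeCase:
    caption: str   # text extracted from the meme
    harmful: bool  # was this meme judged harmful?
    reason: str    # which theme made it harmful, e.g. "stereotyping"

library = [
    MemeCase("Women belong in the kitchen, not the boardroom.", True, "stereotyping"),
    MemeCase("Rating girls out of 10 like they're products.", True, "objectifying"),
    MemeCase("My cat judging my life choices again.", False, "none"),
]

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([case.caption for case in library])

def explain(caption: str) -> str:
    """Find the nearest past case and reuse its judgment and reason."""
    scores = cosine_similarity(vectorizer.transform([caption]), vectors)[0]
    nearest = library[scores.argmax()]
    if nearest.harmful:
        return f"Flagged: resembles a past case labeled '{nearest.reason}'."
    return "No similar harmful case found."

print(explain("Girls are only good for cooking."))
```

Because each stored case carries a reason, retrieving a similar case automatically suggests an answer to the “why is this meme harmful?” question the researchers pose.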

Just as in the first project, the team developed a way for the AI to visualize its reasoning. This helps humans understand the AI’s decisions and improve its performance over time.

These two projects show exciting possibilities for AI to help humans fight against harmful content online. Still, the researchers caution that we should not rely entirely on AI. They see AI as a tool to assist humans, not replace them.

The researchers enjoyed the complexity and creativity involved in working with memes and AI. They have made their findings and code available for other researchers to use, hoping to inspire more work in this area.

In the future, AI might just be our best partner in maintaining a safer and more truthful internet.
