AI and Conspiracy Theory: Can AI get us out of the trap of conspiracy theory?

Melbourne: New research published in Science shows that for some people who believe in conspiracy theories, fact-based conversations with artificial intelligence (AI) chatbots can “get them out of the rabbit hole”. Even better, it seems to keep them out for at least two months.
The research, by Thomas Costello of the Massachusetts Institute of Technology and colleagues, points to a potential solution to a challenging societal problem: belief in conspiracy theories.
Some conspiracy theories are relatively harmless, such as believing that Finland doesn’t exist (which is fine, until you meet a Finn). However, other theories undermine trust in public institutions and science.
This becomes a problem when conspiracy theories lead people to avoid getting vaccinated or to resist taking action against climate change. At the most extreme, belief in conspiracy theories has been linked to deaths.
Conspiracy theories are ‘sticky’
Despite the negative effects of conspiracy theories, they have proven to be very “sticky”. Once people believe in a conspiracy theory, it is difficult to change their minds.
The reasons behind this are complex. Conspiracy theorists’ beliefs tend to be rooted in communities, and conspiracy theorists have often conducted extensive research to reach their conclusions.
When someone doesn’t trust science or anyone outside of their community, it becomes harder for them to change their beliefs.
Enter AI
The eruption of generative AI into the public sphere has amplified concerns about people believing things that are not true. AI makes it much easier to create believable fake content.
Even when used in good faith, AI systems can get the facts wrong. (ChatGPT and other chatbots also warn users that they may be wrong on certain topics.)
AI systems can also reflect widespread biases, meaning they can foster negative beliefs about certain groups of people.
Given all this, it is quite surprising that chatting with a system known to produce fake news could persuade some people to abandon conspiracy theories, and that the change appears to be long-lasting.
However, this new research brings us to a good news/bad news problem.
It’s great that we’ve found something that has some effect on conspiracy beliefs! But if AI chatbots are good at talking people out of sticky, anti-scientific beliefs, what does that mean for true beliefs?
What can chatbots do?
Let’s take a closer look at the new research. The researchers wanted to know whether factual arguments could be used to persuade people out of conspiracy beliefs.
The research involved more than 2,000 participants across two studies, all of whom interacted with an AI chatbot after describing a conspiracy theory they believed in. All participants were told they were talking to an AI chatbot.
People in the “treatment” group (60 percent of all participants) interacted with a chatbot that was personalized to their particular conspiracy theory and their reasons for believing it.
The chatbot tried to convince these participants that their beliefs were wrong through factual arguments over three rounds of conversation (the participant and the chatbot each took a turn per round). The remaining 40 percent of participants had a general discussion with the chatbot.
The researchers found that about 20 percent of participants in the treatment group showed reduced belief in conspiracy theories after the discussion. When the researchers checked in two months later, most of these people still showed reduced conspiracy beliefs. The scientists also checked whether the AI chatbots’ claims were accurate, and they (mostly) were.
It seems that, at least for some people, three rounds of conversation with a chatbot can talk them out of a conspiracy theory.
So can we fix things with chatbots?
Chatbots offer some promise in addressing two of the challenges of countering false beliefs.
Because they are computers, they are not assumed to have any “agenda,” making what they say more credible (especially to a population that has lost faith in public institutions).
Chatbots can also provide reasoning, which is better than facts alone. Simple recitation of facts is much less effective against false beliefs.
Chatbots aren’t a cure-all, though. This study found they were most effective for people who didn’t have strong personal reasons for believing in a conspiracy theory, meaning they probably won’t help people for whom the conspiracy community itself is the appeal.
So should I use ChatGPT to check my facts?
This study demonstrates just how persuasive chatbots can be. That’s great when they’re primed to convince people of the facts, but what if they aren’t?
One major way chatbots can promote misinformation or conspiracy beliefs is when their underlying data is wrong or biased: the chatbot will reflect this.
Some chatbots are deliberately designed to reflect biases or to increase or limit transparency. You can even chat with versions of ChatGPT that are optimized to argue that the Earth is flat.
The second, more worrying possibility is that because chatbots respond to biased prompts (which users may not realise are biased), they may perpetuate misinformation, including conspiracy beliefs.
We already know that people are bad at fact-checking, and that when they use search engines to do so, the search engines respond to their (unwittingly biased) search terms, reinforcing belief in misinformation. Chatbots probably do the same.
In the end, chatbots are a tool. They may be helpful in debunking conspiracy theories, but, like any tool, the skill and intent of the toolmaker and the user matter. Conspiracy theories start with people, and it is people who will end them.



