Fixing algorithms won’t curb fake news on social media: Researchers


Merely tweaking algorithms and infusing them with Machine Learning (ML) will not protect users from misinformation and fake news on social media, researchers have warned. Technological fixes alone cannot stop countries from spreading disinformation on platforms like Facebook and Twitter, said Erik Nisbet and Olga Kamenchuk of The Ohio State University.

Policymakers and diplomats need to focus more on the psychology behind why citizens are so vulnerable to disinformation campaigns, they stressed.

“There is so much attention on how social media companies can adjust their algorithms and ban bots to stop the flood of false information. But the human dimension is being left out. Why do people believe these inaccurate stories?” said Nisbet, Associate Professor of Communication.

Governments the world over are fighting the menace of fake news, including political interference from nation-state actors.

In a paper published in The Hague Journal of Diplomacy, Nisbet and Kamenchuk, a Research Associate at Ohio State’s Mershon Center for International Security Studies, discussed how to use psychology to battle these disinformation campaigns.

The researchers discussed three types of disinformation campaigns: identity-grievance, information gaslighting and incidental exposure. Identity-grievance campaigns focus on exploiting real or perceived divisions within a country.

“The Russian Facebook advertisements during the 2016 election in the US are a perfect example,” Nisbet said, adding, “Many of these ads tried to inflame racial resentment in the country.”

Another disinformation strategy is information gaslighting, in which a country is flooded with false or misleading information through social media, blogs, fake news, online comments and advertising. A recent Ohio State study showed that social media has only a small influence on how much people believe fake news.

“But the goal of information gaslighting is not so much to persuade the audience as it is to distract and sow uncertainty,” Nisbet added.

A third kind of disinformation campaign simply aims to increase a foreign audience’s everyday, incidental exposure to “fake news.”

“The more people are exposed to some piece of false information, the more familiar it becomes, and the more willing they are to accept it. If citizens can’t tell fact from fiction, at some point they give up trying,” Kamenchuk said.

These three types of disinformation campaigns can be difficult to combat, Nisbet noted.

“It sometimes seems easier to point to the technology and criticize Facebook or Twitter or Instagram, rather than take on the larger issues, like our psychological vulnerabilities or societal polarization,” he said.

But there are ways to use psychology to battle disinformation campaigns. More generally, diplomats and policymakers must work to address the political and social conditions that allow disinformation to succeed, such as the loss of confidence in democratic institutions, the researchers noted.



Tags: AI algorithms, fake news, machine learning, Ohio State University, social media