KNIGHT PROTOTYPE FUND WINNERS OUTLINE TOOLS TO FIGHT DISINFORMATION
Article by Patrick Butler, published on IJNet.org
April 11, 2018
Four winners of the John S. and James L. Knight Foundation’s Prototype Fund Challenge presented tools and strategies to fight disinformation and increase trust in the media during a panel at the International Symposium on Online Journalism in Austin, Texas. The topics ranged from using machine learning to separate true stories from fake ones to a network of citizens in Chicago who report on otherwise ignored public meetings.
Introducing the session, Knight Foundation’s Vice President for Journalism Jennifer Preston cited a Knight Foundation-Gallup survey released earlier this year that showed a wide partisan divide in Americans’ trust of the media. Respondents on the whole believed that the media have a vital role to play in U.S. democracy, but fewer than half could identify a source they believe reports the news objectively. Republicans are far more likely to distrust the media than Democrats, the survey found.
Frederic Filloux, a John S. Knight Fellow at Stanford University, developed a tool called Deepnews.ai that uses machine learning to separate high-quality stories from trash. Filloux believes the tool will raise the economic value of great journalism and help the best media outlets become more profitable.
The best way to separate the good from the bad is through decisions made by real people, Filloux said. But with “100 million links per day injected on the internet,” having people do it is like trying to purify the Ganges a glass of water at a time, he said.
Filloux started by pulling 10 million stories from a range of sources — from the best content producers to the worst — and having people rate them on various measures of quality. His tool learned from those ratings and is now able to rate stories itself. He says its ratings agree with human raters’ judgments about 90 percent of the time.
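The article describes Deepnews.ai only at this level: a model trained on human quality ratings that then scores stories on its own. As a minimal, hypothetical sketch of that general technique, the example below trains an off-the-shelf text classifier on a toy set of human-labeled articles; the tiny dataset, TF-IDF features, and model choice are illustrative assumptions, not Filloux’s actual pipeline.

    # Minimal sketch of the general technique described above: train a
    # supervised model on human quality labels, then score unseen stories.
    # The toy dataset, TF-IDF features, and logistic-regression model are
    # illustrative assumptions, not Deepnews.ai's actual pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data: article texts with human quality labels
    # (1 = high-quality journalism, 0 = low-quality clickbait).
    articles = [
        "An in-depth investigation citing named sources and court records.",
        "You won't BELIEVE what this celebrity did next!!!",
        "Analysis of the bill's fiscal impact, with figures from the CBO.",
        "Doctors HATE this one weird trick for losing weight fast.",
    ]
    labels = [1, 0, 1, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(articles, labels)

    # Score a new story: probability of the "high quality" class.
    new_story = ["A sourced report on the city council's budget vote."]
    print(model.predict_proba(new_story)[0][1])

A production system would, as Filloux describes, train on millions of human-rated stories rather than a handful, but the learn-from-raters-then-score-automatically loop is the same.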
Publishers can plug their stories into Deepnews.ai’s API and receive a rating. Filloux hopes that rating will enable them to “match the price of advertising to the quality of content” and to market themselves better to audiences who want quality news.
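The article does not document the API itself, so the endpoint, request fields, and response schema in the sketch below are invented placeholders; it only illustrates the workflow described: submit a story, get back a quality rating.

    # Hypothetical client for a story-scoring API like the one described.
    # The URL, authentication scheme, and response field are assumptions
    # made for illustration; they are not Deepnews.ai's documented interface.
    import requests

    API_URL = "https://api.example.com/v1/score"  # placeholder endpoint

    def score_story(text: str, api_key: str) -> float:
        """Submit a story's text and return its quality rating."""
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            json={"content": text},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["score"]  # assumed response field

    rating = score_story("Full text of the story...", api_key="YOUR_KEY")
    print(f"Quality rating: {rating}")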
Lisa Fazio of Vanderbilt University presented CrossCheck, a platform developed with First Draft to monitor misinformation during the 2017 French elections. CrossCheck debunked false stories, then Fazio and her team studied how people’s perceptions of those stories changed after they read the “debunks.”
Before reading the debunks, people tended to give the stories middling ratings on a scale of true to false. After reading the debunks, they were much more likely to rate the stories as false — but Americans were more likely to change their minds than French people, who knew the issues better and may have had ingrained beliefs they weren’t willing to change. A week later, participants were asked to rate the stories again. On average, they still believed the stories were false, but not quite as strongly as they did immediately after reading the debunk.
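As a rough illustration of the comparison just described (truth ratings before the debunk, immediately after, and a week later), the sketch below computes mean ratings at each stage; all the numbers are invented for the example and are not the study’s data.

    # Illustrative sketch of the before/after/week-later comparison.
    # Ratings run from 1 (definitely false) to 7 (definitely true);
    # every value here is invented, not CrossCheck study data.
    from statistics import mean

    ratings = {
        "before_debunk":  [4, 5, 3, 4, 4],  # middling: unsure either way
        "after_debunk":   [2, 1, 2, 1, 2],  # strongly rated false
        "one_week_later": [3, 2, 3, 2, 2],  # still false, but less firmly
    }

    for stage, scores in ratings.items():
        print(f"{stage}: mean truth rating = {mean(scores):.1f}")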
Perhaps most importantly, Fazio said, the study did not find evidence of the “backfire effect” — a controversial theory that giving people factual information actually causes them to stick even more firmly to false beliefs.
Read the full article by Patrick Butler, published on April 30, 2018, on IJNet.