The “fake news” phenomenon was on vivid display in social media during the bitter 2016 presidential campaign. A typical example of the nonsense was a story, shared a million times on Facebook, stating falsely that Pope Francis had endorsed the Republican candidate, Donald Trump. BuzzFeed reported that during the last three months of the campaign, more Facebook members shared, “liked” or commented on the top 20 fake news stories than engaged with the 20 most important news stories on real news websites.
Bogus information has always been a threat to a democratic society, dependent as it is on an informed citizenry. As falsehoods spread at the speed of light in the digital age, news consumers more than ever must equip themselves to distinguish fact from fiction. Lacking news literacy skills, they might be tempted to disbelieve even news that is reliably reported.
Facebook today acknowledged the problem and released a plan to address fake news on its platform. But how and why are these stories developed in the first place? For some purveyors of phony news, it’s all about the cash: They collect payments from advertisers whose messages are embedded in their stories on Facebook and Twitter — each of which, according to a Pew Research Center survey in 2015, is a source of news for 63 percent of Americans.
Fabricators typically employ a shrewd understanding of their audience. Those insights give them a good sense of what kinds of lies will be eagerly shared, driving up their profits.
Terrence McCoy of The Washington Post tracked down two of them — a team of twentysomethings who post made-up stories from laptops in their home in Long Beach, California.
As McCoy described it, one of the men, Paris Wade, awakens from a nap and decides to do a story on President Obama’s advocacy for the Trans-Pacific Partnership. He composes a headline — “CAN’T TRUST OBAMA” — and in “ten minutes and nearly 200 words … he is done with a story that is all opinion, innuendo and rumor.” Then he publishes the story to his website (LibertyWritersNews.com), puts it on Facebook, and watches on a monitor as the post goes viral.
The fabricators’ success underscores evidence that many consumers who retweet or share fake news do so not necessarily because they think the stories are true, but because the stories confirm their own biases. (For more on this, see “Confirmation and Other Biases,” a lesson developed by the News Literacy Project and Facing History and Ourselves.)
As an example, Sapna Maheshwari of The New York Times investigated a tweet about paid demonstrators being bused to Austin, Texas, to join a protest against Trump’s election. Maheshwari traced the tweet to Eric Tucker, a 35-year-old co-founder of an Austin marketing company. He wrote that Tucker took pictures of buses he happened to see downtown, then “saw reports of the protests against Mr. Trump in the city and decided the two were connected.”
Tucker’s tweet was shared at least 16,000 times on Twitter and more than 350,000 times on Facebook. The president-elect himself tweeted a complaint about “professional protesters, incited by the media.” But the buses had no connection to the protest; they had been rented to bring people to a software conference. Maheshwari said that when Tucker learned he was wrong, he deleted the original tweet and later posted an image of it stamped “false.” By way of explanation, Tucker told the Times reporter he was “a very busy businessman and I don’t have time to fact-check everything that I put out there.”
Another source of bogus stories is the use of automated bots to generate tweets, which often include links to fake news. “Twitter bots are computer programs based on artificial intelligence that are able to mimic tweets by human beings,” Jeff Nesbit, a former journalist and onetime director of legislative and public affairs at the National Science Foundation, wrote in an article for USNews.com. “They are able to retweet and post based on phrases or key words. They are sophisticated enough that it can be difficult to detect whether something has been generated by a human being or an automated robot.”
And why is this important? A study by two University of Southern California researchers, Alessandro Bessi and Emilio Ferrara, showed that almost 20 percent of the 20.7 million political tweets in the month before the presidential election came from just 400,000 bots. Their conclusions: “First, influence can be redistributed across suspicious accounts that may be operated with malicious purposes; second, the political conversation can become further polarized; third, the spreading of misinformation and unverified information can be enhanced.”
Illustrating how the reading public is susceptible to fake news, a survey of 7,800 teenagers by Stanford University, conducted between January 2015 and June 2016, showed that over 80 percent had difficulty judging the credibility of news sources. Professor Sam Wineburg, the study’s lead author, described one phase of the study in an interview on NPR: Students were shown a picture of daisies that appeared to be deformed, accompanied by a claim on the web that they were the result of the nuclear disaster at Fukushima in Japan. The students were asked, “Does this photograph provide proof that this kind of nuclear disaster caused these aberrations in nature?”
Although the photograph had no attribution or other documentation, Wineburg said, nearly all the students “had an extremely difficult time making that determination. They didn’t ask where it came from. They didn’t verify it. They simply accepted the picture as fact.” Wineburg said educational programs are needed to help people determine whether information on the internet is believable or not. Ascertaining credibility is, he said, “a new basic skill in our society.”
So how do you determine what’s fake and what’s real?
The News Literacy Project has been working to address misinformation – including viral rumors, hoaxes, propaganda and other forms of fake information – in classrooms for years. NLP’s checkology™ virtual classroom includes a lesson focused on helping students “immunize” themselves from viral rumors, and a “Check Tool” to help students ingrain news-literate habits of mind and build the critical thinking skills needed to assess the credibility of any news and information they encounter online.
In response to the recent focus on fake news, NLP offers several resources, including “Ten Questions for Fake News Detection” and “Virology Report: Online Rumor Breakdown.” NLP also is working on a guide for the general public. “Fake News: A Guide – Tools, Tips and Resources to Combat Misinformation Online” will be released on NLP’s social media channels in coming days.
Other experts, such as Lori Robertson and Eugene Kiely of FactCheck.org, have posted guidelines for “how to spot fake news,” too. Joyce Valenza of School Library Journal has published a comprehensive analysis of fake news that includes “a literacy toolkit for a ‘post-truth’ world.” Richard Hornik, who teaches news literacy at Stony Brook University, offers “7 ways to spot and debunk fake news.”
These sample tips show how a news consumer should approach suspicious “news” on the internet.
- Gauge your emotional reaction: “Is it strong? Are you angry? Are you intensely hoping that the information turns out to be true? False?” (NLP)
- Consider the headline or main message: “Does it use excessive punctuation(!!) or ALL CAPS for emphasis? Does it make a claim about containing a secret or telling you something that ‘the media’ doesn’t want you to know?” (NLP)
- Check the author: An ABC.com.co story, headlined “Obama Signs Executive Order Banning The Pledge of Allegiance In Schools Nationwide,” bears the byline Jimmy Rustling. “Who is he? Well, his author page claims he is a ‘doctor’ who won ‘fourteen Peabody awards and a handful of Pulitzer Prizes.’ Pretty impressive, if true. But it’s not.” (Robertson and Kiely)
- What’s the support?: “The banning-the-pledge story cites the number of an actual executive order – you can look it up. It doesn’t have anything to do with the Pledge of Allegiance.” (Robertson and Kiely)
- Triangulate: “Try to verify the information in multiple sources, including traditional media and library databases. You can begin to rule out the hoaxes by checking out sites like the nonprofit, nonpartisan FactCheck.org, or popular sites like Snopes or Hoax-Slayer.” (Valenza)
- Be suspicious of pictures: “Not all photographs tell truth or unfiltered truth. … [S]ometimes they are digitally manipulated. Some are born digital. A Google reverse image search can help discover the source of an image and its possible variations. Remember Time Magazine’s darkening of the OJ mugshot?” (Valenza)
- Check the source of the story itself: “Beware of stories that come from people you trust – even from your friends and relatives. Don’t confuse the sender with the source of the information.” (Hornik)
After retiring from The Philadelphia Inquirer in 1998, Foreman taught for nine years at Pennsylvania State University’s College of Communications, where he was the inaugural Larry and Ellen Foster Professor. He is the author of “The Ethical Journalist,” published in 2009 and revised in 2015.