Monday, Jan. 22, 2018
Facebook asks users to judge credibility

In the second significant change to its News Feed algorithm in the last two weeks, Facebook said Friday that it will let its users determine which sources of information are most credible and that it will use that user-generated trust as a factor in deciding what appears in an individual’s News Feed.

CEO Mark Zuckerberg explained in a personal Facebook post that determining which sources of information are most credible is not something the company is comfortable with, and that asking experts would not be as objective as simply asking users. Some critics of the move see it as a surrender to the mindset that ushered in the “post-truth” era to begin with: one in which everyone believes or dismisses information in accordance with existing beliefs and ideas. But others ask: If not actual readers or viewers, whom can we expect to determine credibility?

Friday’s announcement also indicated that Facebook intends to make it “easier for people to see local news and information in a dedicated section.”

  • Discuss: Who is best positioned to evaluate information’s credibility? What are the pros and cons of Facebook’s approach?
  • Idea: Teach students how to adjust their News Feed preferences to see posts from a particular person or organization first.
    • Discuss: How could people use this preference adjustment to increase their exposure to credible news and opinions that differ from their own?
  • Idea 2: Let students form “opinion camps” about this decision, then present arguments and counter-arguments. The team with the most converts wins.
  • Act: Test Facebook’s new approach by creating a poll that approximates it. Select 10 or more sources of news and information of varying degrees of neutrality and credibility; then have students ask friends and family to rate each source in four areas:
    • Is it familiar?
    • Is it trustworthy?
    • Is it informative?
    • Is it relevant?
Compile the results and share an analysis of the findings on social media. Will your students find the approach effective, or will the results support skeptics’ concerns?
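
If students collect the poll responses in a spreadsheet, a short script can tally them. Below is a minimal sketch in Python; it assumes a hypothetical responses.csv with one row per answer and columns named source, familiar, trustworthy, informative and relevant holding “yes” or “no” values. The file name and column names are illustrative, not part of the activity itself.

    # Tally yes/no poll responses about news sources, one row per respondent per source.
    # Assumes a hypothetical responses.csv with columns:
    #   source, familiar, trustworthy, informative, relevant
    # where each question column holds "yes" or "no".
    import csv
    from collections import defaultdict

    QUESTIONS = ["familiar", "trustworthy", "informative", "relevant"]

    yes_counts = defaultdict(lambda: defaultdict(int))  # source -> question -> "yes" count
    totals = defaultdict(int)                           # source -> number of responses

    with open("responses.csv", newline="") as f:
        for row in csv.DictReader(f):
            source = row["source"].strip()
            totals[source] += 1
            for question in QUESTIONS:
                if row[question].strip().lower() == "yes":
                    yes_counts[source][question] += 1

    # Print the share of "yes" answers for each source and question.
    for source in sorted(totals):
        shares = ", ".join(
            f"{question}: {yes_counts[source][question] / totals[source]:.0%}"
            for question in QUESTIONS
        )
        print(f"{source} ({totals[source]} responses) - {shares}")

Comparing each source’s “trustworthy” share with its “familiar” share is one quick way to see whether familiarity, rather than credibility, is driving the ratings, which is one of the concerns skeptics have raised.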
Google's bias, or a bug?

Google has suspended the “reviewed claims” feature of its Knowledge Panel after being accused of bias against conservative media in the way the feature was applied.

The Knowledge Panel was created to provide additional information about publishers in search results. The “reviewed claims” feature was introduced last November, along with summaries about the topics a publisher typically covers and awards it has won. “Reviewed claims” gave an overview of disputed articles from that publisher and the results of fact checks conducted by independent fact-checking groups.

Google now says there were bugs in the algorithmic functions that try to discern the origin of a claim contested or debunked by fact-checkers. Some critics complained that the rollout’s inconsistencies extended beyond mere bugs and revealed a liberal bias that they say is widespread in the tech sector. Google says it plans to relaunch the feature after improvements are in place.

  • Discuss: Are Google’s “reviewed claims” a good idea? Can algorithms be biased? Do all algorithms reflect their creators’ biases? Why or why not? What are some ways to measure algorithmic bias? What could Google do to “fix” this feature?
  • Idea: Have students explore other accusations of algorithmic bias in teams, then analyze the class’s combined findings for patterns.

NLP's winter online PD series, Teaching News Literacy, starts tomorrow.
See details and registration here.

Viral rumor rundown

  • YES: Eating Tide Pods really is a trendy internet challenge meme. NO: The Tide brand did not tweet that the product would be removed from shelves Feb. 1. YES: An artisanal confectioner in Tallahassee, Fla., created a Tide Pods-themed candy. YES: A doughnut shop in Wichita, Kan., created a Tide Pods-themed doughnut. YES: Some grocery stores lock Tide Pods behind anti-theft glass. NO: This is not because of the challenge; it’s because the product is a high-theft item. YES: YouTube and Facebook have removed videos of people eating Tide Pods. NO: U.S. Marine Corps leaders have not warned troops about eating them. YES: A number of satirical five-star reviews praising Tide Pods’ flavor have been posted to Amazon.
    • Discuss: Are satirical product reviews and Tide Pod-themed products such as doughnuts innocent fun, or could they worsen the problem? Were YouTube and Facebook right to remove content showing people consuming the pods?
  • NO: Demi Moore did not say that she doesn’t want supporters of President Donald Trump as fans, nor did she call the U.S. an “awful country with disgusting people.”
    • Misinfo pattern: Celebrities saying shocking things is a longstanding premise of viral rumors. Celebrities voicing strong support for or condemnation of Trump is just the latest incarnation.
  • NO: A recirculating viral image still does not show President Trump swinging a golf club. YES: The original is a photo of PGA Tour professional John Daly.
    • Misinfo pattern: Viral rumors often recirculate when a new context for belief arises. Remarks by Dr. Ronny Jackson, the White House physician, at a Jan. 16 news conference about Trump’s recent physical provided new context for an old fake image.
  • NO: There is no evidence that ISIS was behind Stephen Paddock’s shooting rampage that killed 58 people in Las Vegas in October. YES: Rep. Scott Perry, a Republican from Pennsylvania, repeated this conspiracy theory on Fox News’ Tucker Carlson Tonight last Thursday.
  • NO: A nonexistent British T-shirt company did not get shamed on social media for selling a shirt with a sexist message, nor did anyone defend the non-shirt, nor did the non-company apologize. This scenario was all a piece of brilliant — if not well-labeled — satire by BuzzFeed’s Tom Phillips.
    • Discuss: Could this piece of satire have been mistaken for an actual event? Should it have been labeled differently to make sure it didn’t start viral rumors about this fictional incident?
  • YES: U.S. Border Patrol officers boarded a Greyhound bus and checked passengers' IDs in Fort Lauderdale, Fla., on Saturday. One woman was taken into custody. NO: This is not a new practice.
A Twitterbot army

Twitter has identified more than 50,000 Russia-linked automated accounts, including accounts connected to the Internet Research Agency, the government-sponsored disinformation organization that sought to disrupt the 2016 presidential election, the social media company said in a blog post on Friday. It also shared a number of posts from those accounts, and it said it has notified 677,775 U.S. Twitter users who followed one of these accounts or liked or shared one of the bots’ posts. (One of those users was Sen. John Cornyn of Texas, the No. 2 Republican in the Senate.)

A BuzzFeed News investigation, published on Saturday, found that despite Twitter’s removal of the original disinformation campaign posts, many are still circulating on social media (including Twitter) in the form of reposts from other platforms, such as Instagram.

  • Discuss: Why does Twitter have a bigger problem with these types of accounts than do other social media platforms, such as Facebook and Snapchat? (Hint: What’s required to create an account?) Can Twitter combat these types of accounts while preserving its openness? Is anonymity a valuable or a dangerous feature on Twitter?
  • Idea: Explore Hamilton68, a dashboard created by the bipartisan Alliance for Securing Democracy to monitor Russian propaganda activity on Twitter.
  • Related: “Russia-linked Twitter accounts are working overtime to help Devin Nunes and WikiLeaks” (Natasha Bertrand, Business Insider).
More global responses to fake news

Last week’s issue of The Sift described how some governments are enacting legislation and implementing policy changes as a response to falsehoods masked as “news” and other kinds of misinformation. Sift reader Adrienne Michetti, an educator in Singapore, encouraged us to share other recent initiatives to counter misinformation around the world. For example:

  • A cybersecurity agency in Indonesia has been charged with, among other things, fighting “fake news.”
  • As Brazil prepares for a presidential election in October, the country’s Federal Police has formed a group to identify and punish purveyors of deliberate misinformation. (Such direct action is not without precedent; last August, police in Kenya arrested the head of a WhatsApp group for allegedly spreading falsehoods.)
  • Singapore’s Parliament unanimously approved creation of a select committee to explore the causes and effects of deliberate misinformation; the panel is currently collecting ideas from the public and will hold an open hearing in March.
(Special thanks to Alexios Mantzarlis at the Poynter Institute and Mike Caulfield at Washington State University Vancouver for responding to my tweet.)
Please share this newsletter with others who may find this information useful (subscribe here). For more examples and ideas like these, you can follow me on Twitter (@PeterD_Adams). Also follow @TheNewsLP and @MrSilva.

If you have suggestions for future issues of The Sift, please share them here.

If you're looking for engaging and effective news literacy resources, check out NLP's checkology® virtual classroom. We’re giving away student licenses for 1:1 functionality for the rest of the 2017-18 school year. Yes, it’s free.
Copyright © 2018 The News Literacy Project. All rights reserved.

5335 Wisconsin Ave. NW
Suite 440
Washington, D.C. 20015
