Monday, Mar. 5, 2018
Problematic search suggestions

Questionable search term predictions by Google’s algorithm are getting renewed attention following the Feb. 20 publication of Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble, an assistant professor at the University of Southern California’s Annenberg School for Communication and Journalism.

That same day, Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism, published “#NotOKGoogle Search Suggestions: 2018 Edition” on Medium, highlighting a series of problematic Google search suggestions he had turned up. Albright later tweeted screenshots of such suggestions using the hashtag #NotOKGoogle. Others followed suit the next day and were joined by Google’s public search liaison, Danny Sullivan, who explained that such suggestions aren’t always what they seem.

  • Note: This is not a new controversy. Noble has been studying the problem since 2011, and Carole Cadwalladr wrote about it in The Guardian in December 2016.
  • Discuss: What should the goal of Google’s search algorithm be? Should Google offer predictive search term suggestions as you type? Do such suggestions have the power to steer people toward ideas they weren’t searching for? Should these predictions be based solely on patterns recognized by Google’s search algorithm, or should Google take other factors into consideration? If you were in charge of Google, how would you respond to this issue?
  • Idea: Introduce the predictive search controversy, then have students try to document their own examples of suggestions they feel are inappropriate or harmful in some way. 
  • Act: Use a class Twitter account to share screenshots of the most egregious examples to the #NotOKGoogle hashtag, then use Google’s “Report inappropriate predictions” link — which was added in April 2017 — to help the company improve the algorithm.


A screenshot of a search prediction I captured on March 5, 2018, with the “Report inappropriate predictions” link highlighted.

  • Also note: The suggestions students see will vary depending on whether they are signed into Google as they are searching. Searching in incognito mode will also change predictions.
How 'free' should expression be?

Last week YouTube took disciplinary action against more than a dozen far-right and conspiracy theory accounts for violating its community guidelines. The most prominent target is InfoWars.com, a website run by Alex Jones that has a network of affiliated social media channels, including several on YouTube.

In recent weeks, YouTube has issued InfoWars’ main account — the Alex Jones Channel, which has nearly 2.3 million subscribers — two “strikes” for pushing conspiracy theories about survivors of the Parkland, Fla., school shooting. Channels that accumulate three strikes in a three-month period are terminated.

Jones and other InfoWars staff members have responded by contending that YouTube is engaging in ideological censorship. On Sunday, Jones claimed on Twitter that his channel was “frozen” and would be taken down the following day. YouTube denied that it had any such plans.


On March 1, YouTube did terminate an InfoWars-affiliated channel run by Jerome Corsi, InfoWars’ Washington bureau chief. The move came just two days after USA Today published an op-ed by Corsi arguing that teachers should be armed to prevent mass shootings in schools. USA Today’s decision to give Corsi a voice drew fierce backlash.

  • Note: Corsi’s op-ed was run as the “Opposing View” to the “Our View” opinion of USA Today’s editorial board published the same day. In a Jan. 26 column, USA Today’s editorial page editor, Bill Sternberg, explained the rationale for the Opposing View column, saying that “readers appreciate getting more than one point of view on an issue” and that this “debate format” reinforces the paper’s reputation for fairness. He also wrote that publishing an opposition view forces the editorial board to be “more intellectually honest” and consider the strongest counterarguments, instead of ignoring them. In addition, Sternberg addressed the accusations that Opposing View “legitimizes the illegitimate” by pointing out the importance and value of trying to understand positions we may find unacceptable.
  • Also note: Atomwaffen Division, a neo-Nazi group that has been implicated in five murders, was banned by YouTube last week, two days after the Daily Beast exposed its presence on the video platform. Even though numerous users had flagged the group’s videos as violating the platform’s terms of service, YouTube had initially argued that the warning it placed before the videos (noting that the content might be considered “inappropriate or offensive”) “strikes a balance between allowing free expression and limiting affected videos’ ability to be widely promoted on YouTube.”
  • Discuss: Do social media platforms have a right to ban users for specific kinds of expression? How should those decisions be made? Is it important for people to try to understand ideas they strongly disagree with? Is it important for people to try to understand ideas they find reprehensible? What is the right balance between encouraging free expression and an open exchange of ideas on the one hand and cultivating a respectful, productive community of voices on the other?
  • Idea: Use this controversy to introduce your students to the concept of the Overton Window.
  • Related: “Advertisers flee InfoWars founder Alex Jones’ YouTube channel” (Paul P. Murphy and Gianluca Mezzofiore, CNN)

Viral rumor rundown

NO: The percentage of mass shootings that do or do not take place in “gun-free zones” is not clear. YES: The nonprofit Crime Prevention Research Center, which is run by John Lott, the author of the book More Guns, Less Crime, calculated that 98.4 percent of “mass public shootings” between 1950 and July 10, 2016, occurred in “gun-free zones.” YES: President Donald Trump cited this statistic in a Feb. 28 meeting with members of Congress. YES: This study, along with others by Lott, has been contested by academics and other experts, many of whom accuse him of manipulating data to reach his conclusions.

  • Note: As this PolitiFact entry explains, such claims frequently use different criteria for what counts as a “mass shooting” and a “gun-free zone.” For example, Lott excludes mass shootings that occur on private property and includes as “gun-free zones” those locations that have armed security but prohibit citizens from carrying concealed weapons.
  • Discuss: How should a “mass shooting” be defined? Should mass shootings in public places be counted separately from those in private spaces, such as residences? What are the possible implications of these decisions on public policy debates?
NO: Anti-gun activists are not using red lightbulbs on their porches to signal that their homes are gun-free.
  • Note: This claim originated in May 2016 on a site called Immediate Safety (its “about us” page reads in part, “We have the ability to make the entire world safer, and we have the power to do it without delay”) and has gone viral again. While the piece itself is not clearly labeled satire, it does (as Snopes.com points out) contain several indicators that it’s fiction, including that the article was written by Dr. I.M. Swindler and attributes the “red lighting” movement to the Department of Protecting Everyone (DOPE).
  • Misinformation patterns:
    • Old viral rumors often recirculate when news events create a new context for them; in this instance, the reinvigorated gun debate following the Feb. 14 school shooting in Parkland, Fla.
    • Elements from satirical pieces are frequently repurposed in new forms elsewhere online. Immediate Safety’s original “satirical” claim was transformed into a viral meme by The Liberty Project, a hyperpartisan content aggregator. The meme circulates online with no disclaimers or attributions to the original context.
  • Discuss: Why are memes such an effective form for the distribution of outrageous claims?

NO: An emergency medical technician who was called to the scene at Marjory Stoneman Douglas High School in Parkland, Fla., was not told to “stand down.” YES: An EMT who was on the scene told a reporter for a local Fox affiliate that emergency medical responders were asking to go into the school but were not allowed in until police gave the OK.

NO: U.S. Navy SEALs were not part of a series of attempts to scale border wall prototypes as a test in January. YES: Specially trained units from U.S. Customs and Border Protection and some U.S. military special forces personnel tried and generally failed to breach the wall prototypes “using jackhammers, saws, torches and other tools and climbing devices.”

NO: CNN did not buy “an industrial-sized washing machine to help its journalists … spin the news,” as claimed in a piece in The Babylon Bee, which calls itself a “Christian news satire” site. YES: Snopes.com debunked this satirical claim, citing instances in which consumers mistook The Babylon Bee’s content as news. YES: Because Snopes is one of several fact-checking organizations that work with Facebook to flag disputed content (which then gets downgraded by the platform’s algorithm), the entry triggered a warning notification to The Babylon Bee’s Facebook page. It also initially caused some users to receive a notification before sharing the link. Facebook later said that the flag was a mistake.

  • Discuss: When does satire become something that needs to be debunked? Does satire that most people find obvious still need to be clearly labeled? Should labels for all pieces of satire be included in social media previews (the teaser visuals and text that appear when a link is shared on a platform)? Do labels “ruin” the effect of satire? Can satire be used to express thoughtful opinions that make a positive contribution to the national conversation?
  • Idea: Look for evidence — on social media and elsewhere online — that people sometimes miss the fact that The Babylon Bee publishes satire.
  • Tip: Start by finding a piece from the site’s archives that is more likely than others to be mistaken as legitimate, then use the central claim(s) of that piece as search terms to see where and how it was shared, or if the false claims have been repeated elsewhere online.
  • Idea 2: Ask students to spend 10 minutes collecting satirical content, then spend the rest of the class period evaluating those examples. Which are most likely to be mistaken as news? Why? Which satire labels are most effective? How many satire labels are included in the preview of the content when the link is shared on social media?
Have Russian bots been overplayed?

As revelations about the coordinated Russian disinformation campaign continue to emerge, some observers are concerned that state-sponsored trolls and bots have received too much blame for the proliferation of misinformation and political polarization in the United States.

Indeed, the Russian tactic of optimizing content for partisan outrage to gain attention and engagement on social media is nothing new, as two recent reports suggest.

Some publishers have been reaping the algorithmic rewards of using “divisive, emotionally charged content” for quite a while, writes HuffPost reporter Paul Blumenthal. In fact, the strategy is, for some, an important part of remaining relevant and solvent in the new information economy.

Ironically, this same human weakness for outrage and simplicity may be responsible for the current tendency to blame Russian bots for everything, according to BuzzFeed News’ Miriam Elder and Charlie Warzel. They question the outsized influence that Hamilton 68, a dashboard tracking the social media activity of 600 Twitter accounts that the site claims are “linked to Russian influence operations,” has had on coverage of Russian disinformation, as well as its reliability as an indicator of the themes, sources and hashtags being pushed by Russian-sponsored accounts.

  • Note: Russian disinformation strategies and networks have been regularly featured in The Sift, as has the Hamilton 68 dashboard. While the Russian propaganda campaign remains important and continues to offer lessons on the pitfalls and loopholes of today’s information landscape, these two cautionary reports are a good reminder of the extent to which other factors have contributed to the erosion of fact-based common ground among partisans in the U.S.
  • Discuss: Is too much attention being paid to the Russian disinformation campaign, or not enough? Why? Can the effects of this type of disinformation campaign ever be meaningfully measured? In what ways did Americans’ information habits make them vulnerable to this kind of campaign?
Please share this newsletter with others who may find this information useful (subscribe here). For more examples and ideas like these, you can follow me on Twitter (@PeterD_Adams). Also follow @NewsLitProject and @MrSilva.

If you have suggestions for future issues of The Sift, please share them here.

If you're looking for engaging and effective news literacy resources, check out NLP's checkology® virtual classroom. We’re giving away student licenses for 1:1 functionality for the rest of the 2017-18 school year. Yes, it’s free.

Copyright © 2018 The News Literacy Project. All rights reserved.
You are receiving this email because you signed up for resources and updates from the News Literacy Project.

5335 Wisconsin Ave. NW
Suite 440
Washington, DC 20015
