How average Facebook users in Germany inspired a wave of violence against refugees
August 22, 2018
Recently I ran into a well-known tech CEO and asked him how he was feeling about social networks. (I am extremely fun at parties.) The CEO’s unequivocal response surprised me: “shut them down,” he said. His reasoning was simple: the networks undermine democracies in ways that cannot be fixed with software updates. The only logical response, in his mind, was to end them.
Whether social networks can be fixed is the question looming over Amanda Taub and Max Fisher’s deeply unsettling new report in The New York Times. The report, based on academic research and bolstered by extensive on-the-ground reporting, finds a powerful link between Facebook usage and attacks on refugees in Germany:
Karsten Müller and Carlo Schwarz, researchers at the University of Warwick, scrutinized every anti-refugee attack in Germany, 3,335 in all, over a two-year span. In each, they analyzed the local community by any variable that seemed relevant. Wealth. Demographics. Support for far-right politics. Newspaper sales. Number of refugees. History of hate crime. Number of protests.
One thing stuck out. Towns where Facebook use was higher than average, like Altena, reliably experienced more attacks on refugees. That held true in virtually any sort of community — big city or small town; affluent or struggling; liberal haven or far-right stronghold — suggesting that the link applies universally.
The most striking data point in the piece: “wherever per-person Facebook use rose to one standard deviation above the national average,” the authors write, “attacks on refugees increased by about 50 percent.”
From there, the authors explore why this happens. They examine how Facebook promotes more emotional posts over mundane ones, distorting users’ sense of reality. Towns that had been relatively welcoming to immigrants eventually came to encounter an overwhelming tide of anti-refugee sentiment when they opened the Facebook app.
Much of this activity is driven by so-called “superposters,” who flood the service with negative sentiment. This asymmetry of passion makes it appear as if refugees have less support than they actually do, which in turn inspires more people to gang up against them.
One of the most notable features of the study, which you can read in its entirety here, is how it determines that Facebook is uniquely responsible for the surge of anti-immigrant violence in Germany. Here are Taub and Fisher again:
German internet infrastructure tends to be localized, making outages isolated but common. Sure enough, whenever internet access went down in an area with high Facebook use, attacks on refugees dropped significantly.
And they dropped by the same rate at which heavy Facebook use is thought to boost violence. The drop did not occur in areas with high internet usage but average Facebook usage, suggesting it is specific to social media.
Also notable: these attacks happened despite strict laws against hate speech in Germany, which require Facebook to take any offending posts down within 24 hours of being reported. As the authors note, the posts driving the violence largely do not qualify as hate speech. The overall effect of standard political speech has been to convince large swathes of the population that Germany is beset by a foreign menace — which triggered a political crisis in the country earlier this year.
In New York, Brian Feldman says Facebook has two choices:
It can do more to limit user speech on posts that are not explicitly hateful but couched in the rhetoric of civil discussion — the types of posts that seem to fuel anti-refugee violence. Or it can tweak its distribution mechanisms to minimize overall user engagement with Facebook, which would also reduce the amount of ad money it collects.
Surprisingly, Facebook declined to comment on the study or its implications. But even as it was still reverberating around the internet, the company was getting ready to answer for another set of concerns: four new influence campaigns linked to Russia and Iran. From my story:
Facebook removed more pages today as a result of four ongoing influence campaigns on the platform, taking down 652 fake accounts and pages that published political content. The campaigns, whose existence was first uncovered by the cybersecurity firm FireEye, have links to Russia and Iran, Facebook said in a blog post. The existence of the fake accounts was first reported by The New York Times.
“These were networks of accounts that were misleading people about who they were and what they were doing,” CEO Mark Zuckerberg said in a call with reporters. “We ban this kind of behavior because authenticity matters. People need to be able to trust the connections they make on Facebook.”
People indeed ought to be able to trust the connections they make on Facebook. But between the study of Facebook’s effects on Germany and news of multiple ongoing state-sponsored attacks on the service, it was hard to say where that trust could come from.
“When you operate a service at the scale of the ones that we do, you’re going to see a lot of the good things, and you’re going to see people abuse the service in every way possible as well,” Zuckerberg told reporters. And yet the thing that troubles me most today wasn’t the people abusing the service. It was the Germans using Facebook just as it was intended to be used.
Facebook relies on user reports to determine whether a post is false or misleading. But users themselves can seek to mislead Facebook by falsely reporting credible information. And so Facebook has begun giving users a score to help it weight their reports. It’s a bit less dramatic than it sounded when the story first hit — this is not an equivalent to, say, an Uber rating or Reddit karma — but it does seem like a good and useful thing. Elizabeth Dwoskin reports:
A user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.
Facebook didn’t seem to like the Post story:
“The idea that we have a centralized ‘reputation’ score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading. What we’re actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system,” a Facebook spokesperson wrote via email. “The reason we do this is to make sure that our fight against misinformation is as effective as possible.”
After a series of reports by ProPublica and others about how Facebook’s ad platform can enable discrimination, the company said it would remove thousands of targeting capabilities, Alex Kantrowitz reports:
Facebook’s removal of the targeting options comes amid an investigation from the US Department of Housing and Urban Development, which filed a complaint last week alleging Facebook had enabled discriminatory housing practices with its ad targeting options. The complaint began a process that could eventually lead to a federal lawsuit.
Soutik Biswas examines how India is working to educate young people about viral misinformation on WhatsApp, in the hopes that it will reduce the number of murders inspired by hoaxes on the platform:
To combat this, district officials have now begun 40-minute-long fake news classes in 150 of the district’s 600 government schools.
Using an imaginative combination of words, images, videos, simple classroom lectures and skits on the dangers of remaining silent and forwarding things mindlessly, this initiative is the first of its kind in India. This is a war on disinformation from the trenches, and children are the foot soldiers.
Russia is now targeting conservative think tanks that favor stronger sanctions against the country, according to new research from Microsoft, David E. Sanger and Sheera Frenkel report:
The goal of the Russian hacking attempt was unclear, and Microsoft was able to catch the spoofed websites as they were set up.
But Mr. Smith said that “these attempts are the newest security threats to groups connected with both American political parties” ahead of the 2018 midterm elections.
Your Jack Dorsey interview of the day is with BuzzFeed’s Charlie Warzel. He offers lots more big-picture talk about “incentives” and “conversation,” and little in the way of concrete plans. But I’m glad Warzel suggested to Dorsey that he is getting played by conservatives crying wolf about shadow bans:
Dorsey: I want to acknowledge my bias and I also want to acknowledge there’s a separation between me and our company and how we act. We need to show that in our, we need to be a lot more transparent, we need to show that in our product, we need to show that in our policy and we need to show that in our enforcement and I think in all three we have, but it bears repeating again and again and again. The reason we’re talking with more conservatives is just in the past we haven’t really done much. At least I haven’t.
Eric Goldman updates us on a case in which white supremacists sued Twitter in an effort to prevent the company from banning them. An appeals court ruled that Twitter is protected from the suit by section 230 of the Communications Decency Act.
What if GDPR … is good? Catalin Cimpanu offers a data point:
The number of tracking cookies on EU news sites has gone down by 22% according to a report by the Reuters Institute at the University of Oxford, which looked at cookie usage across EU news sites in two phases, April 2018 and July 2018, before and after the introduction of the new EU General Data Protection Regulation (GDPR). […]
“We may be observing a kind of ‘housecleaning’ effect. Modern websites are highly complex and evolve over time in a path-dependent way, sometimes accumulating out-of-date features and code,” researchers said. “The introduction of GDPR may have provided news organizations with a chance to evaluate the utility of various features, including third-party services, and to remove code which is no longer of significant use or which compromises user privacy.”
Kirsten Han profiles Cofacts, a collaborative fact-checking service that uses bots to check information that’s spreading virally on popular Asian messaging app Line. The bot has received more than 46,000 messages, of which the chatbot answered 35,180 automatically:
Any interested volunteers can log into the database of submitted messages and start evaluating the messages, using the Cofacts form. Cofacts offers step-by-step instructions for those who can’t figure out how to use the platform, as well as a set of clear editorial guidelines that help volunteers weed out uncheckable messages or ones that are “personal opinion,” and that identify what types of reliable sources they can use to back up their fact-checking work.
Based on data collected by the Cofacts team on the messages they’ve received so far, the misinformation debunked on the platform can range from fake promotions and medical misinformation to false claims about government policies.
Speaking of Line, Daniel Funke looks at how public accounts on the service grow big by promising users free stickers and then pivoting to disinformation once they get a large audience. Many of the influence campaigns appear to advertise health care products of dubious value:
Many of the top misinforming accounts on the app publish accurate tips about things like lowering blood pressure alongside spammy ads for things like detoxifying foot pads — and Anutarasoat said channels regularly profit from it.
“The products that some of these networks want to sell, (they’re) not harmful products, but not useful like they advertise — like a fake website that’s selling medicine that can reduce blood pressure, and they’re targeting it for older people who have high blood pressure problem,” he said. “They create a convincing website that has a picture of a doctor and a picture of a witness. In some websites, they actually fake that it is a website from public health ministries.”
Drawing on some new information from researcher Jane Manchun Wong, Josh Constine reminds us that Facebook’s home speaker is still in development.
Tom Simonite examines the state of social media monitoring in schools and finds several companies vying for district dollars with a promise of protecting schools from attack. But their value is unclear, and they could have significant downsides:
There’s little doubt that students share information on social media school administrators might find useful. There is some debate over whether — or how — it can be accurately or ethically extracted by software.
Amanda Lenhart, a New America Foundation researcher who has studied how teens use the internet, says it’s understandable schools like the idea of monitoring social media. “Administrators are concerned with order and safety in the school building and things can move freely from social media—which they don’t manage—into that space,” she says. But Lenhart cautions that research on kids, teens, and social media has shown that it’s difficult for adults peering into those online communities from the outside to easily interpret the meaning of content there.
Sometimes I wonder whether the Time Well Spent movement will ever affect the famously noisy, all-consuming office chat app Slack. The answer so far — no, not at all!
My colleague Ashley Carman reports on the launch of Tinder U, a version of the dating app just for college students. I imagine this will be quite popular, although it may turn out that Tinder itself is good enough.
Tinder’s marketing frames the service as ideal for finding a study buddy or someone to hang out with on the quad. Also, if Tinder can build a dedicated new user base of 18-year-olds, it can start converting them to paid users sooner. Facebook employed a similar strategy when it first launched. The platform required a .edu email address to build out a loyal college following before opening widely a few years later. The opposite is happening with Tinder: everyone can use it, but college kids now might want a safe haven from creepy older people.
My colleague Russell Brandom finds evidence of a new podcast app from Google:
Nothing in the trademark filing specifies the kind of audio being accessed, but a Google representative said the focus of the app was on spoken word content. There is little public information about the app, although Google has played with smart captioning, translation, and other AI-assisted features in previous podcast products.
Ramsi Woodcock makes a sweeping case against advertising, arguing that the internet has made obsolete its core function of informing consumers, and that it could even violate antitrust laws. This is a big take but a well-considered one:
The courts have long held that Section 2 of the Sherman Act prohibits conduct that harms both competition and consumers, which is just what persuasive advertising does when it cajoles a consumer into buying the advertised product, rather than the substitute the consumer would have purchased without advertising.
That substitute is presumably preferred by the consumer, precisely because the consumer would have purchased it without corporate persuasion. It follows that competition is harmed, because the company that made the product that the consumer actually prefers cannot make the sale. And the consumer is harmed by buying a product that the consumer does not really prefer.
Will Oremus listens to the Radiolab episode I wrote about yesterday and examines it in the context of charges of bias against platforms:
Donald Trump, Ted Cruz, and other Republicans probably won’t buy Dorsey’s claim that he tries to keep his biases out of the company’s decision-making, particularly the next time an Alex Jones gets the boot. Nor will most liberals believe that he isn’t bending over backward to appease the hard right, especially the next time an Alex Jones isn’t ejected from the platform. When a company that shapes the flow of online political speech is making high-stakes decisions about who can talk and who can’t, it’s hard to accept that those decisions are the product of a jury-rigged rulebook or algorithm rather than political calculations or a secret agenda.
But it’s worth remembering, with these controversies, that social media companies do have an agenda, and it isn’t secret. Their agenda is to keep making money, and when it comes to high-stakes decisions about who can say what online, the most lucrative option is often to play dumb.
And finally …
The president’s eldest son is just like us — which is to say, he reads the comments. Especially on Instagram, reports Eve Peyser:
He’ll respond to anyone — he frequently ignores comments from verified accounts, instead replying to messages from random accounts, which suggests that he reads all the comments. Which has got to hurt. But when replying to these so-called “whiny libs,” Don Jr. doesn’t hold back, chiding them for their low follower counts, and/or accusing them of being robots.
Talk to me
Send me tips, questions, comments, academic studies: email@example.com.