One easy thing Facebook should do in Myanmar
November 10, 2018
In March, human-rights investigators from the United Nations found that Facebook had played a role in spreading hate speech in Myanmar, fueling ethnic violence that has spurred more than 650,000 Rohingya Muslims to flee Myanmar’s Rakhine state into neighboring Bangladesh. The report, which came amid growing concerns about the way that social networks can incite violence, contained some of the most grave charges leveled against Facebook to date.
Chastened by the UN’s findings, Facebook quietly commissioned a study of its own — which it then released on the evening before the US midterm elections, when very few people would be paying attention. The report, which was conducted by the nonprofit Business for Social Responsibility (BSR), is a 62-page document that sets out to understand the dimensions of Facebook’s challenge in Myanmar and offer solutions to mitigate it.
After the dust from the midterms more or less cleared, I read the report. And while I spend more time reading hot takes than nonprofit takes-by-committee, I was struck by the degree to which a report that calls itself a “human rights impact assessment” does so little to assess the impact of Facebook on human rights in Myanmar.
The authors say they spoke with about 60 people in Myanmar, but they fail to explore any specific instances of hate speech on the platform or the resulting harms. Their analysis is limited to high-level, who-can-really-say equivocating. Their approach to understanding the situation on the ground in Myanmar appears to be primarily anecdotal, and their conclusions are the same as those of anyone who read a wire story about the issue this spring.
“Though the actual relationship between content posted on Facebook and offline harm is not fully understood, Facebook has become a means for those seeking to spread hate and cause harm, and posts have been linked to offline violence,” the authors write, in one of many cases in which the passive voice serves to paper over their refusal to investigate.
I began reading the report in the hopes that it would clarify the connection between hate speech posted on social media and real-world violence. We are starving for knowledge about how unique platform mechanics such as share buttons and encryption contribute to lynch mobs. But instead, the authors choose to explore the current political dynamics in Myanmar at great length, and ultimately offer Facebook a to-do list of tasks that will let the company continue operating with minimal disruption to its business.
Most reports generated by consultants are destined to ride out eternity inside a neglected drawer, and BSR’s contribution to the Myanmar situation deserves a similar fate. (The nonprofit did not respond to a request for comment Friday afternoon.)
Fortunately, though, this week we got a second report on Facebook and Myanmar — and this one, I thought, was much more useful. It comes from the United Nations’ Office of the High Commissioner for Human Rights. Unlike BSR, the UN report asks why Facebook would enter Myanmar — or any other country rife with conflict — without first understanding how it would moderate content on the platform. The authors write:
Before entering any new market, particularly those with volatile ethnic, religious or other social tensions, Facebook and other social media platforms, including messenger systems, should conduct in-depth human rights impact assessments for their products, policies and operations, based on the national context and take mitigating measures to reduce risks as much as possible.
Instead, Facebook launched a country-specific version of its service in Myanmar in 2015, and added the country to its since-discontinued Free Basics program a year later. Soon, the company had 20 million users there — despite the fact that, due to peculiarities of the local language’s text encoding and Unicode, its non-Burmese-speaking moderators had very little insight into what was happening on the platform.
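The encoding problem alluded to above is concrete: much Burmese text online is typed in the Zawgyi font encoding, which reuses the Unicode Myanmar code block with different semantics, so standard text-processing tools silently misread it. Here is a minimal sketch of one well-known tell — this is a toy heuristic of my own, not production code (real detectors, such as Google’s open-source myanmar-tools, use trained models):

```python
import re

# In conformant Unicode Burmese, the vowel sign E (U+1031) is always stored
# AFTER its consonant. Zawgyi stores text in visual order, so U+1031 appears
# BEFORE the consonant — meaning a U+1031 at the start of a string (or right
# after a space) is a strong hint that the text is Zawgyi-encoded.
ZAWGYI_HINT = re.compile(r"(^|\s)\u1031")

def looks_like_zawgyi(text: str) -> bool:
    """Crude check: True if text shows a common Zawgyi-only ordering."""
    return bool(ZAWGYI_HINT.search(text))

# The same two code points in the two orderings:
print(looks_like_zawgyi("\u1031\u1019"))  # Zawgyi-style visual order
print(looks_like_zawgyi("\u1019\u1031"))  # standard Unicode logical order
```

The upshot: two byte sequences that render identically on a Zawgyi device and a Unicode device mean different things to software, so keyword filters, search, and automated moderation built against standard Unicode simply miss much of what Myanmar’s users actually post.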
The UN takes a broad view of the situation in Myanmar. The specific effect of social media is limited to a few pages toward the end of an extremely comprehensive report. And yet Facebook serves as the context for much of what the authors write: in a 444-page report, Facebook is mentioned 289 times.
Like BSR, the UN acknowledges that the free speech made possible on Facebook can contribute positively to Myanmar. But it also suggests Facebook make available examples of the hate speech it has removed from the platform, at least to a subset of researchers, so that its role can be better understood. This has privacy implications, which shouldn’t be taken lightly. But surely a middle ground can be found.
In the meantime, BSR and the UN agree on one thing, and it’s an easy one: Facebook ought to provide country-specific data on hate speech and other violations of the company’s community standards in Myanmar. We may not be able to say with certainty to what degree social networks contribute to ethnic violence — but we ought to be able to monitor flare-ups in hate speech on our largest social networks. Dehumanizing speech is so often the precursor to violence — and Facebook, if it took its role seriously, could help serve as an early-warning system.
Mark Bergen gives Google’s Dragonfly dilemma the feature treatment:
Interviews with more than 18 current and former employees suggest the company’s predicament resulted in part from failing to learn from mistakes that played out a decade earlier, when it first confronted the realities of China’s economic and political might. This history is known to many at Google’s headquarters in Mountain View, Calif., but mostly unknown outside of it. In an interview in September, Downey, 42, elaborates. “There’s this Utopian idea: Technology will come in, and people will take these tools, change their government, and get their freedom,” he says. “We tried that experiment, and it didn’t work.”
Here’s an uncharacteristically clumsy interview from Google’s CEO in which he likens the laws that enable an authoritarian, autocratic regime to Europe’s right-to-be-forgotten laws:
One of the things that’s not well understood, I think, is that we operate in many countries where there is censorship. When we follow “right to be forgotten” laws, we are censoring search results because we’re complying with the law. I’m committed to serving users in China. Whatever form it takes, I actually don’t know the answer. It’s not even clear to me that search in China is the product we need to do today.
Here’s the response from the Google walkout organizers to the company’s concessions so far. In short: good start, but more action is needed:
Organizer Stephanie Parker said of the response, “We demand a truly equitable culture, and Google leadership can achieve this by putting employee representation on the board and giving full rights and protections to contract workers, our most vulnerable workers, many of whom are Black and Brown women.”
Marie Hicks looks at the (inspiring!) history of collective action in the tech industry:
This may seem like a new development, but the Google walkout is steeped in a long history — both of women being minimized and discriminated against in tech, and of women asserting their power to force change. As I wrote my book Programmed Inequality, I found plenty of evidence of discrimination throughout tech history, but I also saw that when women in tech fight back — particularly by organizing or taking their labor elsewhere — the effects are tremendous. For instance, women’s forced exodus from the UK’s burgeoning early computing industry resulted in British computing’s premature decline. When they put their talents to work, however, they created multibillion-dollar software companies. Their experiences show us that not only can undervalued employees often wield an unexpected amount of power, but they also offer a blueprint for what’s to come in the US if the movement started by Google employees continues.
Here’s more positive fallout from the walkout, from Doug MacMillan:
Facebook is ending its policy of requiring employee sexual-harassment claims to be settled in private arbitration, a day after Google rolled back a similar policy under rising pressure from employees.
The rule change, which will let Facebook employees pursue those claims in court, was announced in an internal post to staff on Friday, a spokesman for the company said. The social networking giant has also updated its interoffice dating policy to require any executive at a director level or above to disclose if they are dating someone in the company.
Amazon employees are still unhappy about Rekognition, the controversial facial recognition technology that it is selling to law enforcement authorities. But executives aren’t bending, Davey Alba reports:
Andy Jassy, the CEO of the company’s cloud-computing arm, Amazon Web Services, deflected employee criticisms over how Amazon has aggressively marketed its Amazon Rekognition product to law enforcement agencies across the country and the US Immigration and Customs Enforcement (ICE). “I think we’re going to have people who have opinions that are very wide-ranging, which is great, but we feel really great and really strongly about the value that Amazon Rekognition is providing our customers of all sizes and all types of industries in law enforcement, and out of law enforcement,” Jassy said. He added that he thought it was the government’s responsibility to help specify regulations around the technology.
Bijan Stephen finds two more experts who say that yes, the Acosta video was doctored. And here’s another one from the AP.
Pennsylvania’s attorney general sent a subpoena to Epic, Gab’s new DNS provider. That’s raising some (legitimate, to my mind) First Amendment concerns.
In other de-platforming news, PayPal shut down accounts on both the far left and the far right today.
Kevin Systrom still will not tell anyone why he really left Instagram, and it is literally driving me mad. However, he did say that his next company will not be a social one. Which I hope is a lie! If Systrom comes back with a SaaS solution for integrating Kubernetes with your CRM system or whatever, I will flip every table in my office.
CNN and the New York Times overtook Fox News to take the top two spots in Facebook’s publisher rankings last month, NewsWhip reports.
Kaitlyn Tiffany talks to Columbia Law professor Tim Wu about his new book about breaking up the big tech companies. He wants to start with Facebook:
WU: Well, I think because they face no serious competition, there’s a couple problems. They’ve been able to get away with unchecked privacy abuses, they’ve been undisciplined in how they’ve dealt with advertisers. They’ve warped politics, they’ve been manipulative, they’ve breached privacy too often. A lot of this has to do with the fact that they haven’t faced effective competition. They’re … it’s not too big to fail, it’s too big to be tolerated.
Today in Twitter not living up to its promises: a colleague of mine gets doxed, and the company does not respond to the initial reports of harassment.
Kerry Flynn reports that Snap courted more than 50 augmented-reality developers recently with Lens Fest, a three-day workshop for creative types.
Julia Alexander tours us through TikTok, an app that she calls “Vine’s spiritual successor.”
It is easy to roll your eyes at TikTok. It’s what we tend to do with any new app that pulls in a mostly young audience — people originally rolled their eyes at Vine, too. It can be difficult to embrace an entire ecosystem of young creators whose community is built around content designed exclusively for their own entertainment. TikTok succeeds on outrageous stunts, new music, and niche interests — but that’s also what makes it such an inviting place to hang out.
I’d love to hear from subscribers interested in content moderation about this one. A guy posts clips of himself playing a popular new Western game in which he assaults and eventually kills a non-playable suffragette. YouTube first banned him, then unbanned him. It never said why, Patricia Hernandez reports:
It is unclear how, exactly, YouTube decided to terminate the channel, and what the appeal process for this entire debacle was. For example, did YouTube only take another look because people with clout made a ruckus over it? If so, do smaller channels have a pathway to dispute unfair platform decisions that’s just as prompt and effective if they don’t have a big enough microphone to broadcast it? The YouTube spokesperson did not offer these answers to The Verge, but Wyatt notes on social media that he often takes a look at things people tell him about on Twitter, and encouraged people to talk to the official Team YouTube account “with as much supportive documentation you have as possible!”
The Archive of our Own, or AO3, is a prominent home for fan fiction online. It’s having a raging debate about sex and content moderation, Elizabeth Minkel reports:
Across the web, platforms and their users are grappling with what digital speech should be protected and the potential links between rhetoric and action. On Facebook, Twitter, Reddit, and YouTube, tech companies are struggling with where to draw lines around free speech and how to moderate and enforce those boundaries. How to limit speech in fiction, however, is a bit more nuanced: do TV shows about serial killers encourage people to commit murder? Does depicting fictional rape create real-life rapists?
When it comes to fan fiction, arguments are usually about the sex, not the violence. When fan-fic readers and writers make moral arguments about disallowing depictions of sex acts, they’re talking about obscenity and all its legal precedents. And because the fight for the legal legitimacy of fanworks — which, if they are noncommercial, are protected under fair use — has been such a challenge, it’s even more difficult to moderate content within fan-fic when you’re still having to defend the cultural belief that fan-fic holds value in and of itself.
Facebook’s murder clone of TikTok is now live in the App Store.
The eventual return of Vine has seen enough stops and starts to make it the Chinese Democracy of social networks. The latest twist is that what was once to be known as V2 will now be called Byte, and founder Dom Hofmann says we can look forward to seeing it next year.
Max Read, echoing other commentators, says Facebook’s real problem these days is that domestic trolls are now running the Russian playbook to spread polarizing information, and it’s unclear what the company can do about it:
Pushers of fake news and Russian trolls represent, essentially, an engineering problem — they’re bad actors whose badness is predefined in specific and identifiable ways — and Facebook is very good at solving engineering problems. But Americans, exercising our American prerogative to distribute material accusing the Democratic presidential candidate of masterminding coded satanic sex rings, are an everything problem. Facebook can’t staunch the free flow of our bullshit without dramatically changing its operating philosophy (by making truth judgments about the content its users post), its business practices (by hiring a vast army of new employees to make those judgments), and, arguably, its entire design (by leaving freely available attention on the table). You can’t put 80 percent of the country on a communications platform, reward them for posting outrageous content, and expect everyone to rigorously fact-check their status updates.
Josh Constine’s mom loves her new Facebook Portal so much and she personally does not care about all that privacy claptrap that tech reporters are always running their mouths about:
“Who am I going to be worried about? Oh Facebook seeing? No, I’m not worried about Facebook seeing. They’re going to look at my great art collection and say they want to come steal it? No, I never really thought about it.” That’s my 72-year-old mother Sally Constine’s response to whether she’s worried about her privacy now that she has a Facebook Portal video chat device.
On the other hand, Joanna Stern did not feel comfortable bringing Portal inside her own house. (She tested it at work instead.)
I’ve had one of Facebook’s new video-calling gadgets, the Portal+, in my home for the last week. And by “in my home,” I mean, in the basement, in a closet, in a box, in a bag that’s in another bag that’s covered with old coats.
I just couldn’t bring myself to set up Facebook’s camera-embedded screen in the privacy of my family’s home. Can you blame me when you look at the last 16 months?
And finally …
This story is basically just an aggregation of a great tweet by Taylor Lorenz. (Not Taylor Swift, as I accidentally referred to her earlier this week.) But it appears that as part of his embrace of Facebook Groups, Zuckerberg has decided to join Harvard’s undergraduate meme community — Harvard Memes for Elitist 1% Tweens. When a student asked where fellow dropout Bill Gates was, Zuckerberg replied “Hold on let me get him” — and then tagged him into the room.
Gates has yet to reply.
Talk to me
Send me tips, comments, questions, and Myanmar travel tips: email@example.com.