In the wake of a series of bombings in Sri Lanka that left more than 300 people dead, the country’s government shut down a number of social media platforms, including Facebook and WhatsApp, out of concern they could be used to spread misinformation or incite more violence. But across the region, the problem of fake news is widespread.
Facebook — facing a patchwork of different laws, sectarian tensions, and a constantly growing pool of users to police — is seemingly endlessly behind the curve when it comes to monitoring the content on its platform, including and perhaps especially in South and Southeast Asia. In India, for example, the Menlo Park, California-based company has struggled to control misinformation and hate speech heading into the country’s elections, and Facebook has acknowledged its efforts fell far short in curbing the way its platform was used to incite violence in Myanmar.
Even so, it’s still not clear that Facebook has an adequate plan to deal with the issue, or whether it has committed the necessary resources to try.
In Sri Lanka, government officials are signaling that they’ve essentially given up hope that Facebook has the capacity to control its platform after a crisis hits.
Facebook, WhatsApp, Instagram, and Facebook Messenger, along with YouTube, Viber, and Snapchat, were all blocked in Sri Lanka after a wave of attacks on Easter Sunday, according to the internet monitoring group NetBlocks. On Monday, the group also detected that the Sri Lankan government appeared to be blocking the website of a VPN service that would help users get around the ban.
“What the Sri Lankan government did was authoritarian, but it is also probably what needed to be done to prevent social media from really throwing fuel onto this fire afterward,” said Ankit Panda, senior editor at the Diplomat.
This isn’t the first time Sri Lanka has taken such a measure: It temporarily restricted access to Facebook, WhatsApp, and Instagram in 2018 in order to calm anti-Muslim riots.
The maneuver has ignited a global discussion about the role of social media in the face of crisis. On the one hand, it can be an important tool for first responders, humanitarian groups, and journalists to gather information and for potential victims to let their loved ones know that they’re safe. On the other, social media platforms can be weaponized to spread false information and potentially cause more violence, and companies do not have a great track record of being able to get things under control.
“It demonstrates this kind of degradation of trust in the platforms,” Ivan Sigal, the executive director of Global Voices, an international blogging and digital rights organization, told me. “And they own that, they have to own some of that.”
Facebook has a rough history in the region
The deadliest and most tragic example of the potential for Facebook’s weaponization has come from Myanmar, where government forces carried out an ethnic cleansing campaign and genocide against the country’s Rohingya Muslim minority group. According to a report from the Human Rights Council, more than 725,000 Rohingya fled to Bangladesh to escape persecution.
In October 2018, Paul Mozur at the New York Times laid out how Myanmar’s military utilized Facebook over multiple years to spread anti-Rohingya propaganda in the country. He described how military personnel set up distribution channels for “lurid photos, false news, and inflammatory posts.” It wasn’t a secret that the platform was being used to incite violence in the country: Observers had for months been flagging that doctored images, unfounded rumors, and other anti-Rohingya propaganda were being spread online.
In November 2018, Facebook released the results of an independent assessment on what was happening in Myanmar and admitted that it hadn’t previously done enough “to help prevent our platform from being used to foment division and incite offline violence.” The company said it agreed that it “can and should do more” and that it had invested heavily to “examine and address the abuse of Facebook in Myanmar.”
But acknowledging the problem didn’t end it. In December 2018, Facebook removed hundreds more accounts sharing anti-Rohingya messages. And as Kurt Wagner explained at Recode at the time, Facebook’s reach in Myanmar makes the situation especially perilous:
The social network is used by an estimated 20 million people in Myanmar, or roughly 40 percent of the population. That’s the same number of people who have the internet there, according to a human rights impact report Facebook commissioned and published in November. The Facebook app comes preinstalled on many smartphones sold in the country.
Officials in India have also sounded the alarm about Facebook’s potential to spread fake news and incite violence, especially in light of its elections, which take place this month and next. In March, an Indian parliamentary panel asked Facebook global policy head Joel Kaplan to tighten controls on WhatsApp and Instagram, and in April, Facebook laid out its approach in a blog post. Facebook India managing director and vice president Ajit Mohan said the company had spent 18 months planning for its handling of the national elections there.
India is contemplating new regulations that would require companies to screen user posts and messages to make sure they’re not sharing anything illegal. It’s a controversial issue because it would likely require companies such as WhatsApp, which use end-to-end encryption, to fundamentally change their platforms. There are also legitimate concerns about privacy and potential government surveillance.
Earlier this year, false content spread on Facebook, Instagram, and WhatsApp heightened tensions between India and Pakistan.
And as mentioned, Sri Lanka has had issues with misinformation and the incitement of violence via Facebook in the past as well. Last year amid communal violence between Sri Lanka’s majority Sinhala Buddhist community and its minority Muslim community, officials declared a state of emergency and blocked access to multiple social media platforms, including Facebook, WhatsApp, and Viber (which Facebook does not own).
Panda said the problems in India are similar to those in Myanmar and Sri Lanka, but because India is more politically stable, the scenario there is perhaps less perilous: Myanmar has been rocked by ethnic violence and was ruled by a military dictatorship until 2011; Sri Lanka only emerged from a 25-year-plus civil war in 2009.
“When the baseline level of political cohesion and the ethnic fault lines are much higher … social media becomes a much more dangerous instrument,” Panda said.
“We’ve partnered with 47 third-party fact-checking organizations around the world, hired 30,000 people to work on safety and security, and strengthened our policies to help keep people safe from harm,” a Facebook spokesperson said in a statement. “We’re expanding our efforts every single day to keep people on Facebook safe from those who try to exploit and abuse our service.”
In Sri Lanka specifically, Facebook has taken a number of measures to make improvements since the last ban was lifted in 2018, including significantly increasing the number of Sinhalese language experts it employs and expanding its automatic machine translation capabilities. It has held a roundtable and conducted research in Sri Lanka as well.
Facebook has also put in place a dedicated product team focused on countries where there is a potential link between online content and real-life activity, though the argument could be made that the potential for such a link is everywhere.
There’s no single explanation for the problem here
The spread of misinformation and incitement of violence via social media, including on Facebook and the platforms it owns, is an issue in many parts of the world. There’s no way to know if Southeast Asia is the most problematic region, because there’s no global data available on the matter, explained Claire Wardle, the founder of First Draft News, a nonprofit that combats misinformation. “But what we do know is that in many places in Southeast Asia, there is a very, very high usage of social media, and there is a very high usage of closed messaging apps,” she said.
In the Philippines, for example, an estimated 97 percent of the population with access to the internet is on Facebook, according to Maria Ressa, a cofounder of the news site Rappler. Ressa is a frequent critic of President Rodrigo Duterte and has been arrested in the country twice. In places such as the Philippines and Myanmar, Facebook basically is the internet for a lot of people. Many people don’t trust traditional sources of information, such as local news sites, and social media functions as the primary source.
In places where Facebook, WhatsApp, and Instagram are zero-rated, meaning users don’t have to use up their data to access them, false information can spread through memes, pictures, and screenshots of headlines. Because people don’t want to use their data, they don’t click through.
Facebook’s staffing is problematic as well. In smaller markets, it often doesn’t have the necessary numbers of people to completely oversee what’s happening. “They just haven’t got the staff in place to be able to understand what’s happening and to be able to moderate what’s happening in the local languages,” Wardle said.
At the start of the year, WhatsApp limited group message forwarding to try to slow the spread of misinformation. But because no one can see what’s going on behind the wall of encryption, there’s little to be done beyond brute-force measures to control virality and investing in literacy campaigns.
Facebook should be motivated to better address the issues it’s facing in South and Southeast Asia, because it’s a region where it has a lot of room for growth, especially compared to the United States and Europe.
Is Sri Lanka a one-off, or a sign that we’re giving up on giving Facebook the benefit of the doubt?
Sri Lanka’s decision to shutter social media entirely in the wake of Sunday’s attacks has not been without scrutiny. Some argue that the move was necessary to stop the situation from potentially worsening, while others say it undermines the benefits social media is supposed to provide in such scenarios.
Michael Pachter, an analyst at Wedbush Securities, said he believes Sri Lanka made the right move. “I read it as, ‘We the government of Sri Lanka are under attack, and our citizens are under attack.’ The indication is that it’s temporary,” he said.
He pointed to the mass shooting in Christchurch, New Zealand, in March, which left 50 people dead. The gunman streamed the act online, and in just a 24-hour span, Facebook and Instagram users tried to upload video of the incident 1.5 million times. “Facebook can’t stop it; they can’t. There’s just no way they can stop it,” Pachter said.
He added that in the grand scheme of things, a temporary shutdown isn’t the end of the world for Facebook, business-wise. “It will almost always be less than 1 percent of the world’s population (75 million people, likely a small fraction of that) and then only for a few days (1 percent of the year),” he said. “That means every time there is an incident like this, Facebook, Google, Twitter, Snap revenues are hit by maybe 0.01 percent. Even if we assume this is a monthly occurrence, it rounds to 0.1 percent, so not significant.”
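Pachter’s estimate is a simple proportionality argument: revenue lost scales with the share of users affected times the share of the year they’re offline. A rough sketch of that arithmetic, using the hypothetical figures from his quote (75 million people out of a world population of roughly 7.5 billion, and a shutdown of a few days):

```python
# Back-of-envelope sketch of the revenue-impact estimate quoted above.
# All figures are illustrative assumptions, not reported data.
world_pop = 7.5e9        # approximate world population
affected = 75e6          # upper bound on an affected country's population
days_blocked = 3.65      # "a few days" -- about 1 percent of a year

pop_share = affected / world_pop        # ~0.01, i.e. 1% of the world
time_share = days_blocked / 365         # ~0.01, i.e. 1% of the year

# Assuming revenue is roughly proportional to user-days served:
per_incident = pop_share * time_share   # ~0.0001, i.e. ~0.01% of annual revenue
monthly_rate = per_incident * 12        # ~0.001, i.e. ~0.1% if it happened monthly

print(f"per incident: {per_incident:.4%}")
print(f"if monthly:   {monthly_rate:.2%}")
```

The key simplification is treating revenue as uniform per user, which understates the skew toward richer ad markets; that skew would make the impact of a shutdown in a smaller market even smaller than this sketch suggests.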
Others, however, warned that Sri Lanka shuttering social media could set a bad precedent for other countries in the future to follow suit. Yes, platforms can be used to spread fake information and incite violence. But they can also help journalists tell the story of what’s going on, allow first responders and humanitarian groups to identify where they should send aid, and let people get in touch with loved ones to let them know that they are safe.
Alp Toker, the executive director of NetBlocks, which detected the outages in Sri Lanka, said the blockage there made sense last year to prevent misinformation from spreading to incite more violence. But in the current scenario — in the wake of a widespread terrorist attack — he doesn’t believe it’s the right approach. “It’s a particularly bad time to be restricting the platforms, and it’s not traditionally the kind of period where we see social media being used for great harm,” he said.
Facebook has, as it so often does, promised to do better. It has updated its policies on credible violence within its community standards so that it is quicker to remove information when it’s reported as contributing to violence or physical harm, and it has specifically sought out local organizations to partner with in the endeavor.
It is also not clear how effective social media blackouts are in impeding the spread of false rumors and violence. A working paper from Jan Rydzak at Stanford University looking at India suggests shutdowns can actually increase the intensity of violent mobilization. And bad actors often just find workarounds and go somewhere else to spread fake information.
Setting aside the philosophical debate about whether Sri Lanka’s decision is merited or an infringement on free speech, the move sends a clear signal of declining trust in Facebook and other social media platforms.
“These governments ultimately have to rely on the companies to take these problems seriously, and there really haven’t been signs until about the last year that Facebook was taking this seriously at all,” Panda said.
It may, at least in some cases, be too late.