WASHINGTON — While Facebook’s growing network has helped “friends” in hard-to-reach places connect with each other, the tech giant has been slow to see the dangers the platform perpetuates in some parts of the world, one of its leaders said Tuesday.
Facebook is a frequent partner to NGOs working in global health, international development, and humanitarian response. This year's F8 conference carried higher stakes than the annual event has in years past, as the social media giant and its partner organizations navigate widespread criticism over privacy and security on the platform.
As Facebook founder Mark Zuckerberg works to convince 2 billion users, as well as United States lawmakers, that the company can address issues related to privacy, disinformation, and hate speech, company leaders also acknowledge they have struggled to understand how people in “high-risk areas” might use the far-reaching online platform “to create real risks of harm and violence,” said Joel Kaplan, Facebook’s vice president for U.S. public policy, at the PeaceTech Summit in Washington, D.C., on Tuesday.
Kaplan said that building more “context-specific” knowledge represents an opportunity for Facebook to partner with organizations that have experience in places such as Myanmar or South Sudan, where these kinds of abuses have occurred.
“Even as our platform has grown, we haven’t had people on the ground in some of the most high-risk areas,” Kaplan said.
“I think those are really valuable opportunities for collaboration for you all to help us understand not just what’s happening right now in high-risk countries, but what might happen, and how people might use our platform in a context-specific way to create real risks of harm and violence. Those are the opportunities that we have to really work together,” he told an audience of people working at the intersection of technology and peace building.
“What we’ve learned over the last couple of years … is that we haven’t spent enough time or invested enough in thinking about the ways in which our platform could be abused and the harms that could result from that.”
— Joel Kaplan, Facebook’s vice president for U.S. public policy
In Myanmar, where Facebook partnered with local telecommunications companies to provide free access to its services, as the number of users grew rapidly, so did the amount of online hate speech, largely directed against the country’s Muslim minority. Reports emerged earlier this month that Facebook quietly pulled its “Free Basics” internet access program from the country last year. In India and Malaysia, Facebook has taken out full-page ads warning about misinformation, and in Kenya, reports have surfaced that Cambridge Analytica abused Facebook data to influence last year’s presidential election.
Kaplan said that part of the problem is that Facebook’s innovators have focused more on creating new tools than on imagining how those tools could be misused.
“The people who come to work at these companies, that’s not how they’re hardwired. That’s not what they want to think about. They want to think about building all these cool new tools,” he said, adding that as Facebook’s leadership has focused on mitigating risks, it has created a shift inside the company.
“What we’ve learned over the last couple of years … is that we haven’t spent enough time or invested enough in thinking about the ways in which our platform could be abused and the harms that could result from that. There’s an element of that that’s due to Silicon Valley idealism and optimism. Some of it is due to the fact that we just grew really fast, from Mark’s dorm room in 2004 to a service that connected 2 billion people,” Kaplan said.