Facebook Official Wants Collective Industry Decisions on Content Moderation
Online platforms should strive for some degree of uniformity when deciding how to filter malicious content, said Facebook Global Politics and Government Outreach Director Katie Harbath Thursday. “You don’t necessarily want Facebook making one decision, Google making another decision, Twitter making another decision, too,” Harbath said at a Cato Institute event. “These are conversations we have to be having collectively, to be thinking about what are the right ways to be handling this.” Platforms should draw lines in deciding where regulation is the “right answer,” and where companies should self-regulate, she said.
Internet companies are taking varying approaches to policing online political advertising. Sens. Amy Klobuchar, D-Minn., and Mark Warner, D-Va., applauded Facebook and Twitter for endorsing and complying with their legislative proposal, the Honest Ads Act (S-1989). Klobuchar and Warner pushed for passage of the legislation and uniform compliance, warning against online platforms adopting a patchwork of self-regulation.
Harbath repeated Facebook CEO Mark Zuckerberg’s opinion that social media ultimately can benefit democracy. But platforms need to mitigate risk, she said, adding the company continues to address a host of issues: combating foreign interference, fake accounts and offline violence, and promoting civic engagement, ad transparency and account security. Facebook’s early work in combating fake news, via red flags for disputed content, showed some people believe and defend flagged content even more vigorously, she said.
George Hawley, author of "Making Sense of the Alt-Right," questioned social media's ability to sway large numbers of voters, even with rampant malicious content. Policymakers need to consider how much harmful posts matter from a real-world perspective before government gets involved, he said. "I'm open to treating certain aspects of the internet like public utilities. I'm ambivalent about the issue, and there are a number of reasons that might be a bad idea," Hawley said. One problem is that it would subject the internet to First Amendment protections, making it harder for platforms to silence the "most irresponsible voices," he said. He argued Twitter's definitions of hate speech are ambiguous: some accounts that don't appear to be in violation are banned, while accounts that have no place in civil society remain untouched.
American Majority CEO Ned Ryun said he’s hesitant to allow companies to define free speech. Companies are obviously struggling with how to decide where certain freedoms start and stop, he said. He suggested public opinion polling could be useful in deciding how platforms regard certain content.
Harbath said self-policing by users has been useful: politicians, for example, have successfully policed their own pages, filtering out harmful content and harassment. The New Yorker Contributing Editor Andrew Marantz, who moderated the discussion, noted that Reddit, which recently passed Facebook to become the third most popular site in the U.S. behind Google and YouTube, has established an effective moderation system. Harbath agreed it's better for platforms to handle these issues themselves than for the government to force them to act.
On antitrust issues, Ryun argued online platforms don’t face free-market competition like telecom giants do. Government needs to redefine social media platforms and the rules by which they play, he said. Harbath warned against using “old definitions” of antitrust for online platforms. “Regulation is going to have to look different than necessarily just breaking us up,” she said, citing data portability as a tool for increasing competition.