House Homeland Security Chair Suggests Tech Content Moderation Standards
Congress should consider offering the tech industry a set of standards to ensure proper moderation practices for malicious content, House Homeland Security Chairman Bennie Thompson, D-Miss., told reporters after a hearing Wednesday. Democrats from the panel hammered witnesses from Facebook, Google and Twitter, saying industry isn't doing enough to remove content from bad actors like the Christchurch, New Zealand, mass shooter (see 1905150047). Republicans mostly focused criticism on First Amendment issues and claims of anti-conservative bias.
The committee will seek continued briefings from industry and stakeholders, both large and small, Thompson told reporters. He noted the three companies present had different approaches to handling a recently doctored video that made House Speaker Nancy Pelosi, D-Calif., appear drunk. “I don’t know if there’s a regulatory response to it, but nonetheless, they’re being used by people,” Thompson said, asking if there might be a standard code of conduct all companies could follow.
A major focus of the hearing was the tech industry’s Global Internet Forum to Counter Terrorism. GIFCT was launched by Facebook, Microsoft, Twitter and YouTube at the 2016 EU Internet Forum to curb the spread of terrorist content online. That the association has no full-time employees and no physical location shows the sector isn’t truly taking the issue seriously, said Rep. Max Rose, D-N.Y. Witnesses from the three companies confirmed those details in response to "yes-or-no" questions from Rose.
The Adhesive and Sealant Council, a trade association in Bethesda, Maryland, has five full-time staffers and a brick-and-mortar office, Rose noted. Tech can’t “get its act together,” he said, criticizing a “technocratic,” elitist approach to highly preventable situations in which people are dying. GIFCT is an insulting joke of an association, he said.
Throughout the hearing, the companies defended their efforts to curb online hate. Since 2018, Facebook has “taken action on more than 25 million pieces of terrorist content, and we found over 99 percent of that content before any user reported it,” testified Global Policy Management Head Monika Bickert. Twitter suspended “more than 1.5 million accounts for violations” involving promotion of terrorism between August 2015 and December 2018, testified Global Senior Strategist-Public Policy Nick Pickles. In Q1, YouTube reported “over 75 percent of the more than 8 million videos removed were first flagged by a machine, the majority of which were removed before a single view was received,” said Global Director-Information Policy Derek Slater.
Republicans repeatedly hammered Slater over a Project Veritas video purporting to show Google executives strategizing about how to avoid having a future president like Donald Trump. Rep. Mike Rogers, R-Ala., led that questioning, with his Republican colleagues railing at Google on the same point. Violent and terror-related content online has spiked over the past decade, but efforts to moderate speech have a chilling effect on First Amendment rights, he said.
Google is establishing a reputation for silencing conservative voices, said Rep. Clay Higgins, R-La. There’s nothing more harmful than the restriction of free speech, he suggested. Many conservatives think the industry is leading a conspiracy, said Rep. Debbie Lesko, R-Ariz. Rep. Dan Crenshaw, R-Texas, claimed the platform is operating on the premise that someone like conservative pundit Ben Shapiro is a Nazi and asked Slater if the company has a definition for hate speech.
“Hate speech, updated in our guidelines now, extends to superiority over protected groups to justify discrimination, violence and so forth, based on a number of defining characteristics, whether that’s race, sexual orientation, veteran status,” Slater said. Google’s community guidelines dictate that speech inciting violence and harassment is banned.
There’s no place for terrorists on Facebook, Bickert said. She wouldn’t guarantee that another incident like Christchurch, in which the mass shooter livestreamed the attack to at least 200 real-time viewers, won’t happen again. The platform has restricted access to Facebook Live, but the technology isn’t perfect, she said. The company is trying to get faster, she said.
Twitter didn’t remove the doctored video of Pelosi, a decision Thompson questioned. Pickles defended keeping the video up, saying it doesn’t currently violate any of the platform’s rules. The company is exploring whether those policies are the right ones, he said.
Congress should be wary of efforts to restrict online terror content because doing so might not advance counterterrorism efforts, and could even undermine them, testified New York Law School professor Nadine Strossen. “Keeping terrorist content online facilitates intelligence gathering and counterterrorism efforts.”