Schiff Says Altering Section 230 Worth ‘Serious Consideration’; Nunes Mostly Agrees
Amending Section 230 of the Communications Decency Act to hold tech companies more accountable for false and harmful content is worth “serious consideration,” House Intelligence Committee Chairman Adam Schiff, D-Calif., told reporters Thursday. “If social media companies can’t exercise a proper standard of care when it comes to a whole variety of fraudulent or illicit content, then we have to think about whether that immunity still makes sense.”
Ranking member Devin Nunes, R-Calif., “mostly” agrees tech companies should be required to follow “reasonable content moderation practices,” though he's skeptical such a mandate could be implemented. Nunes declined to comment further outside the hearing. The committee's hearing focused on deep fakes, false and misleading material like the recent altered videos of House Speaker Nancy Pelosi, D-Calif., and Facebook CEO Mark Zuckerberg.
Deep fakes have the potential to cause riots and sink political candidacies, and the content needs to be “suppressed,” Schiff said, arguing it’s not too late to act before the 2020 election. Asked whether the committee is contemplating legislation to amend Section 230, he said other committees have interest in the societal impacts, separate from Intelligence’s focus on foreign manipulation. “This is an oversight issue for several committees and a legislative issue for several committees, and we’ll be working with our colleagues to determine what’s the right response.”
It’s time to amend Section 230, testified University of Maryland law professor Danielle Citron. Congress should condition immunity on “reasonable content moderation practices,” ensuring platforms are making the right choices, she said. Platforms should have default procedures for removing fabricated material like the Pelosi video, she said.
Platforms shouldn’t be forced to remove every piece of doctored content, testified Foreign Policy Research Institute distinguished research fellow Clint Watts: That could lead to removal of satirical material that has value, and the U.S. shouldn’t force industry to police such a thing. The Zuckerberg video, which Facebook didn’t remove, is a “perfect example” of satirical material that can spur a productive conversation, Citron said.
Platforms that engage in reasonable content moderation practices shouldn’t be treated as publishers of third-party content, Citron said. They can be defined somewhere between publishers and immune online hosts, she said. But it all depends on the subjective definition of “reasonable,” Nunes said.
Schiff's opening comments cited various types of deep fakes, from face-swap technology to artificial voices and faces. The technology allows people to turn world leaders into “ventriloquist dummies,” he said, attributing it to the rapid pace of artificial intelligence development. Citron highlighted another related problem: fake sex videos in which people’s faces are morphed onto pornography. The problem is likely to get worse before it gets better, testified University at Buffalo professor David Doermann. Industry needs automated detection at scale to curtail the problem at the front end of the distribution pipeline, he added.
Maybe false material should be labeled instead of removed, suggested Rep. Brad Wenstrup, R-Ohio. It’s “kind of pathetic,” but maybe that’s the answer, he said. Labeling can be useful, but in some instances it might not be good enough, Citron said: “There’s really no counter-speech for certain falsehoods.” One certainty is that industry needs to be pressured to work together to establish content policing standards, Watts said. Any lag in addressing false content allows conspiracies to grow, he added.
Misinformation was also a problem in the “analog era,” Watts said: Newsstands have always carried sensational material. Sometimes people don’t even realize articles from The Onion are fake, said Rep. Joaquin Castro, D-Texas, suggesting it’s not strictly an online deep fakes problem. AI is an accelerant of existing problems, Watts said.