Google Touts Moderation Efforts Amid Criticism Over Hamas Attacks
Google representatives defended their content moderation efforts Thursday amid congressional criticism of YouTube, Meta and X, formerly known as Twitter, for their handling of content about Hamas’ attack on Israel.
House Commerce Committee ranking member Frank Pallone, D-N.J., urged the three platforms to “vigorously enforce their terms of service” to curb the spread of content broadcasting “acts of terror, violence, or extremism” following the attacks. “Disturbing content and deliberate disinformation are spreading across social media like wildfire,” he said Thursday. “Social media companies must not allow their platforms to become agents of terrorist propaganda and violence.” The situation highlights the need for “robust and fully supported content moderation staff” at the platforms, he said.
David Graff, Google vice president-global policy and standards, speaking during a Vanderbilt University event, discussed content moderation efforts in general. Given Google’s scale, automated tools are necessary to manage the millions of pieces of content uploaded to YouTube every day, he said. Graff didn’t address the Hamas attacks directly but said people have suggested in the past that YouTube manually review every video before it’s published. He noted 500 hours of video are uploaded to YouTube every minute: “Sometimes people forget the scale at which we operate. ... We have to have automated systems for looking at these things.” The companies didn’t comment on Pallone’s statement.
One could argue the platforms’ increased reliance on algorithms and automation has turned them into publishers or content creators, said Middlebury College professor Allison Stanger at the Vanderbilt event. That reliance calls into question the immunity they enjoy under Communications Decency Act Section 230, which was intended to shield platforms from liability for merely hosting third-party content. Section 230 spurred innovation as intended, but it might be time to step back and assess whether platforms should continue to enjoy this immunity, she said.
Generative AI is forcing companies to “reconceive” free speech concepts and a platform’s role as “public square,” said OpenAI Product and Policy Analyst Kim Malfacini. Companies are becoming increasingly responsible for certain models that generate content, said Malfacini: “There’s an understanding that we have greater responsibility by virtue of the kind of control or opportunity for control.”
Graff said he’s cautiously optimistic about AI technology’s potential but not naive about its “significant” risks. He called it a “fascinating exercise” trying to teach generative AI tools not to create harmful material. Google can draw from years of experience, but generative AI creates a “new world” for trust and safety issues, he said. There will be judgment calls when translating abstract concepts into scalable, repeatable systems that humans and machines can apply, he said: That means inevitable “losses” for engineers reviewing content.
Google is constantly juggling free speech principles from around the world, said Alexandria Walden, Google policy lead-global human rights and free expression. That includes the First Amendment, the U.N. Guiding Principles on Business and Human Rights, Article 19 of the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights, she said. “We want people to have access to information,” Walden said, but the company needs to ensure platforms aren’t “feeding people” hate speech and misinformation.
Platforms have a constitutional right to allow objectionable content, even racist speech and hate speech, said UCLA law professor Eugene Volokh: The First Amendment bars the government from intervening. Walden said Google is ultimately a business that relies on user trust, so it must balance that reality with free speech rights.