Facebook, Twitter Reps Elaborate on Responses to Trump Looting Comments
When a public figure makes a statement violating Twitter rules, the platform doesn’t remove the post, in order to allow public discussion and scrutiny, Twitter Director-Global Public Policy Strategy and Development Nick Pickles told House Intelligence Committee Democrats Thursday. That was the case with President Donald Trump’s recent comments about looting (see 2006160059), he said.
Republican members declined to participate in the hearing, said Chairman Adam Schiff, D-Calif. Rep. Raja Krishnamoorthi, D-Ill., asked why Facebook allowed the president’s comments, which Twitter labeled as glorifying violence, to remain posted without any labeling. Facebook Head-Security Policy Nathaniel Gleicher told the committee he personally found the post to be “abhorrent,” but the platform’s approach is anchored in freedom of expression.
Rep. Jackie Speier, D-Calif., questioned Facebook’s handling of a manipulated video that appeared to show House Speaker Nancy Pelosi, D-Calif., drunk (see 1906260051). YouTube took the video down; Facebook allowed it to circulate, labeling it manipulated material. The Pelosi video would have continued to surface elsewhere on the internet regardless of Facebook’s action, Gleicher noted. Facebook labeled the video so users knew what they were looking at, he said.
It remains to be seen whether social media companies have the tools to detect and remove such videos at speed, Schiff said. Millions of users on Instagram, YouTube or Twitter can see false content in a matter of hours, he said. The scale of moderation is daunting, and foreign actors constantly test the barriers, he added. Schiff said his concern is that platforms are designed to optimize for sensational content and misinformation.
Schiff cited anecdotal evidence that Google is the least transparent in terms of content moderation. For example, Twitter has a database for actions involving inauthentic behavior on the platform, he said, asking if YouTube will make data available to the public about efforts to address inauthentic content.
That’s a misperception, said Google Director-Law Enforcement and Information Security Richard Salgado. He cited YouTube’s transparency report about objectionable videos and comments and noted the company launched a quarterly bulletin about influence operations. Google and YouTube are under the Alphabet corporate umbrella.
Facebook and other platforms seem to profit from divisive content, said Rep. Jim Himes, D-Conn. Gleicher dismissed any Facebook incentive to promote divisive content, saying users don’t want to see clickbait and divisive material. If the platform amplified that kind of content, it would lose users, he said. The platform has refocused feeds on content from friends and family and on discussions and public conversations, he added.
Rep. Terri Sewell, D-Ala., asked how the platforms can detect and remove misinformation from foreign campaigns like those originating in West Africa. Moving fast is about investment and partnerships with researchers, Pickles said: Twitter needs stronger partnerships and more information sharing with researchers and government.
Schiff focused on election security. Facebook has 35,000 people working on safety and security, with some 40 teams focused on elections, Gleicher responded. The company disabled 1.7 billion accounts between January and March, he noted.
Online infrastructure might not be prepared to handle political advertising, which is banned on Twitter, Pickles said, citing machine learning optimization of messaging and microtargeting. News organizations controlled by state authorities can’t advertise on Twitter either, he said, a restriction that followed Russian activity.