Better Automation Tools Can't Replace Human Monitoring of Social Media Platforms
Social media companies are boosting their artificial intelligence systems to identify harmful online content, but that alone won't solve the problem, they and others said. Twitter has suspended thousands of accounts under its violent extremist groups policy, most of which were flagged by its proprietary tools, it told us. Facebook is "scrutinizing how to employ AI more effectively," Public Policy Director Neil Potts told the House Judiciary Committee April 9. Google has "invested heavily" in automated flagging technology, said Global Human Rights and Free Expression Counsel Alexandria Walden at the hearing. But AI can't replace "nuanced human review," said DigitalEurope Director-General Cecilia Bonefeld-Dahl, a member of the European Commission High-Level Expert Group on AI.
Last year, Twitter introduced more than 50 policy, product and operational changes to make the platform safer, it said. The site also significantly changed its reporting tools and continues to improve them, while working to communicate better with users about reports and how policies are drafted. Twitter expanded its transparency report to include relevant and meaningful data. Fewer than 1 percent of accounts make up the majority of those reported for abuse, and the company is focused on using technology to undermine their presence on the service.
Twitter proactively takes down terrorist accounts, it said. Its latest transparency report, published in December, showed it suspended more than 205,000 accounts during the first half of 2018, 91 percent of which were proactively flagged by purpose-built technology. The report also showed Twitter received 77 percent fewer government reports of terrorist content than in the previous reporting period, the company said: Such reports now account for less than 0.1 percent of all suspensions, due to the scale of the site's technological approach.
Many asked why AI didn't detect the video of the New Zealand terrorist attack, Facebook Vice President-Product Management Guy Rosen blogged. AI "has made massive progress" over the years, which has allowed the company to proactively detect the vast majority of the content it removes, "but it's not perfect." AI systems rely on "training data": many thousands of examples of a particular kind of content are needed to teach a system to detect that sort of text, imagery or video, he said. The approach has worked well for such things as nudity, terrorist propaganda and graphic violence, where many examples are available. But the Christchurch, New Zealand, video didn't trigger the social media site's automatic detection systems because of the scarcity of training data, he said.
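The supervised-learning approach Rosen describes can be illustrated with a minimal sketch: a classifier only learns to flag categories for which it has seen many labeled examples, so content unlike anything in the training set can slip through. The library (scikit-learn), the toy examples and the labels below are illustrative assumptions, not Facebook's actual pipeline.

```python
# Minimal sketch of training-data-driven content detection.
# Library choice and toy data are assumptions for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples; real systems need many thousands per category.
texts = [
    "join our cause and attack the unbelievers",   # propaganda-like
    "graphic footage of the explosion aftermath",  # violence-like
    "cute photos of my dog at the park",           # benign
    "recipe for homemade bread with olive oil",    # benign
]
labels = ["violating", "violating", "benign", "benign"]

# Convert text to features and fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Content that resembles the training data is likely to be flagged;
# genuinely novel content (the scarce-training-data problem) may not be.
print(model.predict(["footage of the attack"]))
```

The point of the sketch is the dependency, not the algorithm: whatever model is used, its coverage is bounded by the examples it was trained on.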
"People will continue to be part of the equation," said Rosen. Last year, Facebook doubled the number of employees working on safety and security to more than 30,000, including some 15,000 content reviewers, and now encourages users to report content they find disturbing, he said. Since the Christchurch events, Facebook has "been working to understand how our platform was used so we can prevent such use in the future," Potts told lawmakers. "But this is not an easy problem to solve, and we do not expect easy or immediate solutions." Google invested heavily in automated flagging technology to quickly send potential hate speech for human review, removing around 58,000 videos for hate and harassment in Q4 2018 compared with 49,000 for violent extremism, said Walden.
"There has indeed been some progress in using algorithms for identifying potentially harmful or illicit content online," emailed Bonefeld-Dahl. However, use of algorithms for content recognition purposes "has clear limitations, especially to take into account the proper context or in cases were different interpretations are possible." She advised against relying exclusively or mainly on automated tools, or turning them into a legal requirement: "Powerful as AI may be, it cannot replace an experienced judge or nuanced human review."
"One key question is how exactly the companies are using AI systems in their content moderation," emailed Emma Llanso, who directs the Center for Democracy & Technology's Free Expression project. "It's easy to throw around the term 'AI'" but most forms of automation in content moderation don't rise to that level. The leading social media platforms all participate in a joint hash database that allows them to share digital fingerprints of terrorist propaganda images and videos uploaded to their services, she said. That lets them automatically detect when someone tries to upload the same content again, but "it's far from a sophisticated AI that can detect new images or videos."
Automated systems "won't be able to magically answer questions that we, as humans, have still not resolved amongst ourselves," such as how to cover the Christchurch shootings and whether to include video clips or excerpts of the shooter's manifesto, said Llanso. Machine learning tools work best when they're trained to accomplish a specific, defined task: "It's difficult for humans to agree to definitions of amorphous concepts like 'harmful content,' much less for machines to intuit this from a large data set."