IA CEO: Users Vital to Flagging Threatening Content
Online platforms are willing to monitor and flag violence-threatening content for law enforcement, but user reporting is “essential,” Internet Association CEO Michael Beckerman told the Senate Judiciary Committee Wednesday during a hearing on the Parkland, Florida, school shooting (see 1803090030).
Alleged attacker Nikolas Cruz, who was active on Instagram and YouTube, had expressed a desire to become a “professional school shooter.” Chairman Chuck Grassley, R-Iowa, said the social media posts prompted members of the public to contact the FBI, but the agency never contacted IA members Facebook and Google. Grassley asked Beckerman what internet companies are doing to monitor content better rather than rely so heavily on users. Company policies vary, Beckerman said, but there's uniform agreement on prohibiting credible threats of violence, terrorist propaganda and child exploitation images. Artificial intelligence is improving, Beckerman said, but user input is critical: “Internet users understand and welcome this responsibility, as our member companies receive millions of reports of potentially violating content each week.” AI is good at flagging content, he said, but it can’t always determine whether images and other content are actual threats or, for instance, material included in a news story. Companies also have teams that review flagged content; the third component is an active public.