France, UK Eye Liability for Online Companies That Enable Extremism
The U.K. and France are exploring new ways to hold tech companies liable for failing to remove extremist content, they said Tuesday. British Prime Minister Theresa May and French President Emmanuel Macron, who met this week to talk counterterrorism, said their online anti-radicalism plan could include fines for companies that don't act. The governments want corporations to "do more and abide by their social responsibility to step up their efforts to remove harmful content from their networks." YouTube and Microsoft said they're working on the problem. One digital rights group, however, said neither the European Commission nor most EU countries show much real interest in tackling illegal online materials.
May signaled her intent to go after internet companies after the June 4 attack in London, saying the UK "cannot allow this ideology the safe space it needs to breed." Yet "that is precisely what the internet -- and the big companies that provide internet-based services -- provide." May called for work with "allied, democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremism and terrorist planning."
In response, the Internet Services Providers' Association UK said it takes the issue "very seriously," but the U.K. government and security services already have substantial powers in that area. Before more regulation, policymakers "need to be fully aware of the effectiveness of existing powers, resources to deal with the threat and the impact any new measures may have, including unintended consequences that could undermine our defences -- for instance the weakening of cyber security," it said.
May's and Macron's statements sparked invitations from Rep. Ro Khanna (D-Calif.) to visit Silicon Valley to meet with tech leaders. In June 7 and June 14 letters, the lawmaker said tech companies are responding to the situation, pointing to steps they're taking to remove terrorist content and reduce the number of extremist groups on the internet. He noted Facebook's decision to hire 3,000 new employees to review reports of violent and inappropriate content, and Twitter's suspension of more than 636,000 terrorism-related accounts. New technologies are also available to address the threats, he said, citing tools that recognize inappropriate online content and an online database created by Facebook, Twitter, YouTube and Microsoft to share the digital fingerprints of terrorist internet material.
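The shared database works by letting one platform register a fingerprint of removed material so the others can detect re-uploads of the same file. As a rough illustration only (the consortium's actual fingerprinting scheme is not public, and all function names here are hypothetical; real systems use perceptual hashes that survive re-encoding, while a plain SHA-256 matches only byte-identical files), a minimal sketch in Python:

```python
import hashlib

# Hypothetical shared store of fingerprints of known terrorist material,
# standing in for the database Facebook, Twitter, YouTube and Microsoft
# agreed to share. A real deployment would be a shared service, not a set.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Return a hex digest identifying this exact sequence of bytes."""
    return hashlib.sha256(content).hexdigest()

def register_known_material(content: bytes) -> None:
    """A participating platform adds a removed file's fingerprint."""
    shared_hash_db.add(fingerprint(content))

def is_known_material(content: bytes) -> bool:
    """Another platform checks a new upload against the shared database."""
    return fingerprint(content) in shared_hash_db

# One platform registers a file; another then detects a re-upload of it.
register_known_material(b"example extremist video bytes")
print(is_known_material(b"example extremist video bytes"))  # True
print(is_known_material(b"some unrelated upload"))          # False
```

The key design point this sketch captures is that platforms exchange only fingerprints, not the underlying material itself.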
YouTube employs thousands of people and invests hundreds of millions of pounds to fight abuse on its platform, in partnership with governments, law enforcement and nongovernmental organizations, a spokesperson said. "We are working urgently to further improve how we deal with content that violates our policies and the law," he said.
Microsoft outlined its approach to online terrorist content in a May 20 corporate blog. It said it's taking a two-pronged approach: (1) banning the posting of terrorist content on its hosted consumer services, defining such content and taking it down, while promoting free speech on Bing; and (2) partnering with others to address the challenges. One such initiative is a pilot with the Institute for Strategic Dialogue in which Microsoft will provide in-kind funding to place ads on Bing in response to certain extremism-related searches. Facebook didn't comment on the U.K./France proposal.
The EC didn't comment on whether it might consider similar regulatory efforts, but European Digital Rights Executive Director Joe McNamee said that would be unlikely. In March, a European Parliament member noted a March 10 EC statement saying 90 percent of the terrorist content referred by Europol to internet companies involved in the EU Internet Forum had been removed. The lawmaker asked the EC what proportion of the content referred by the police agency to the platforms was illegal, and what proportion was then investigated or prosecuted by national law enforcement or judicial bodies. The EC said June 12 it had no statistics about the number of reports of such content or about what happened next with them. "So, this information is so dangerous that action is needed and so trivial that the most basic data is not being collected about whether the content is illegal or whether any investigation ensues," McNamee emailed. The response "shows how seriously the EU and the member states treat illegal content online."
The European Parliament, meanwhile, on Thursday urged the EC to define and further clarify notice-and-takedown procedures for illegal content, and called on platforms to "fight illegal goods and content with regulatory or self-regulatory measures."