Biden Signs AI EO With Commerce, FCC, FTC Directives
President Joe Biden on Monday signed an executive order directing the Commerce Department, the FCC, the FTC and other federal agencies to establish new “rigorous” standards for how and when companies can deploy AI systems (see 2310040063).
The bulk of the EO’s provisions are directed at Commerce and the Department of Homeland Security, but the document encourages the FCC to explore how the technology affects communications networks and how it could be used to improve spectrum management. The FCC can coordinate with NTIA on how to share spectrum between public and private operators, the White House said. The EO suggests the FCC examine ways to improve network security and interoperability using AI, 6G technology and open radio access networks. The FCC should also consider new rules for combating AI-driven robocalls and robotexts, the White House said.
The FCC is “hard at work doing its part to better explore and adjust to the opportunities and challenges afforded by artificial intelligence,” the agency said in a statement Monday.
There will be more technological changes in the next five to 10 years than in the previous 50 years because of AI, Biden said during a signing ceremony at the White House on Monday. He called the EO the “most significant action any government anywhere in the world has taken on AI safety, security and trust.” He highlighted issues related to youth social media addiction, AI-driven cyberthreats and the need for Congress to pass privacy legislation banning targeted advertising for young online users. The president is scheduled to meet Tuesday at the White House with Senate Majority Leader Chuck Schumer, D-N.Y., and his bipartisan AI working group (see 2309140050).
Vice President Kamala Harris on Tuesday will travel to the U.K. for the Global Summit on AI Safety. The goal is to meet with other leaders and discuss promoting international norms that ensure global order and stability in AI deployment, she said Monday.
The EO requires companies developing AI technology to share test results with the U.S. government. Citing the Defense Production Act, the administration will require companies “developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety” to notify the government when “training the model,” and these companies “must share the results of all red-team safety tests,” the White House said. These requirements will ensure AI systems are “safe, secure, and trustworthy before companies make them public.”
The Commerce Department is committed to “developing meaningful evaluation guidelines, testing environments, and information resources to help organizations develop, deploy, and use AI technologies that are safe and secure, and that enhance AI trustworthiness,” Secretary Gina Raimondo said in a statement Monday. She noted the National Institute of Standards and Technology, the Bureau of Industry and Security, NTIA and the Patent and Trademark Office will be responsible for “carrying out a significant portion of the EO’s objectives.”
The EO suggests the FTC consider using its rulemaking authority to “ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI,” the White House said.
The EO directs the Copyright Office and PTO to issue recommendations on protecting copyright in AI-generated content and on when copyrighted works are used to train AI systems. The Computer & Communications Industry Association argued existing U.S. copyright law is “capable of ensuring that AI development and creative activity are both promoted.” CCIA urged the administration to take a “thoughtful, risk-based approach to regulating AI technology.”
The EO directs NIST to set “rigorous standards for extensive red-team testing to ensure safety before public release.” The Department of Homeland Security is tasked with applying those standards to critical infrastructure sectors and establishing an AI Safety and Security Board. The board will consist of industry experts, academics and government officials, Homeland Security Secretary Alejandro Mayorkas said. Mayorkas will chair the board.
USTelecom looks forward to working with stakeholders on an approach that prioritizes “partnership over regulation,” CEO Jonathan Spalter said in a statement Monday: The broadband sector is “committed to working” with the administration on a “risk-based approach to AI governance ... and ensuring effective international coordination and harmonization.”
Congress must act on legislation to set a “permanent framework for the development and deployment of AI,” Senate Commerce Committee Chair Maria Cantwell, D-Wash., said in a statement Monday. The White House briefed Cantwell on the EO during a meeting Sunday. She welcomed the directive for the Commerce Department to issue standards for third-party testing, the support for federal workers and the methods for fighting algorithmic discrimination.
The EO is a good start, but Congress needs to pass legislation, said Senate Intelligence Chairman Mark Warner, D-Va., in a statement Monday. Warner said the U.S. needs to “prioritize security, combat bias and harmful misuse, and responsibly roll out technologies.”
“All executive orders are limited in what they can do, so it is now on Congress to augment, expand, and cement this massive start with legislation,” said Schumer.