Export Compliance Daily is a service of Warren Communications News.

AI Firm Calls on Trump to Tighten Chip Controls, Conditions for Overseas Data Centers

The U.S. should tighten export controls on advanced artificial intelligence chips and bolster security requirements for frontier AI labs, which would slow U.S. adversaries' development of their own AI technologies and keep the U.S. in the lead, AI research and development firm Anthropic told the White House this month.


The company, submitting comments to the Office of Science and Technology Policy in response to the Trump administration's AI “action plan” (see 2502070065), said the U.S. needs to “strengthen export controls on computational resources,” specifically mentioning Nvidia’s H20 chip, which is being sold to China and “can be used to train and run powerful models.” Although not as powerful as H100s “for initial training, they excel at text generation (‘sampling’) -- a fundamental component of advanced reinforcement learning methodologies critical to current frontier model capability advancements,” Anthropic said.

The company said current export controls don’t apply to the H20, but the Trump administration “has an opportunity to close this loophole.”

It also said the U.S. should require countries to sign “government-to-government agreements” -- with measures aimed at preventing the smuggling of sensitive chips -- as a prerequisite for hosting certain data centers with more than 50,000 chips from U.S. companies. Countries deemed “high-risk for chip smuggling” should be required to “align their export control systems with the U.S.,” take “security measures” to address chip smuggling to China and ban their companies from working with the Chinese military, Anthropic said.

The company noted that the Bureau of Industry and Security's AI diffusion rule, released in January, “already contains the possibility for such agreements” -- including a framework for preapproved data facilities under its validated end-user program -- to allow certain facilities to more quickly obtain advanced semiconductors (see 2501130026). This rule lays “a foundation for further policy development,” Anthropic said.

The Trump administration also should revise the AI diffusion rule to tighten a "no-license required threshold" for Tier 2 countries buying chips with collective computation power of up to roughly 1,700 advanced graphics processing units (GPUs), the company said. It also noted that these orders don’t count against the country caps outlined in the rule. “While these thresholds address legitimate commercial purposes, we believe that they also pose smuggling risks,” Anthropic said.

The U.S. should instead consider reducing the number of H100s that Tier 2 countries can buy under the no-license required threshold, which would help “further mitigate smuggling risks,” the company said. The administration should determine a lower threshold after a “comprehensive analysis balancing smuggling prevention against commercial facilitation,” Anthropic said, and that decision should be made by the End-User Review Committee, the interagency group that makes decisions related to the Entity List and the BIS validated end-user program, among other issues.

The administration also needs to increase funding for BIS, the company said. “Export controls are only effective with proper enforcement,” it said. “A thorough assessment of BIS’s current enforcement capabilities and the potential benefits of additional resources would significantly enhance the overall effectiveness of these controls.”

Another portion of Anthropic’s comments recommends the U.S. raise security guardrails at U.S. frontier AI labs, noting that the “theft of even a single frontier model could significantly harm the entire export control regime.” The company called on the government to set up “classified and unclassified communication channels” between frontier AI labs and the intelligence community to share information about threats; expedite security clearances for industry officials to “aid collaboration”; study the possibility of putting in place “advanced security requirements that may become appropriate to ensure sufficient control over and security of highly agentic models;” and more.

“By implementing this comprehensive security framework,” Anthropic said, “the federal government will substantially strengthen the defensive posture of American AI companies, and significantly impede the ability of bad actors to misappropriate cutting-edge American technology and weaponize it against U.S. interests.”