Tech Industry Proposes Tweaks to EU AI Act
Europe's AI policy should be clearer on what uses are barred, said the Computer & Communications Industry Association Tuesday. The European Commission's proposed Artificial Intelligence Act (see 2108070001) is a "good starting point," but several provisions need tweaking, a CCIA
position paper said. Among its concerns is that the definition of an AI system is too broad and could encompass almost all modern software: "This broad definition, in conjunction with the vague categorization of 'high risk' AI, will overburden companies with compliance measures."

Other criticisms included that AI use-case prohibitions must be "very targeted" to ensure they don't inadvertently sweep in other uses, and that the ban on remote biometric identification systems must clarify it doesn't cover identity verification technology, such as facial recognition used to verify customers when processing mobile payments.

CCIA also criticized the way the draft law predetermines "high-risk" AI uses, saying that could hamper innovation and create a burdensome preapproval process for systems or processes that are already heavily regulated. Whether an AI use qualifies as high risk should be "based on its foreseeable impact" on people and its capacity to make final decisions that materially risk their fundamental rights or health and safety, the paper said. It urged the EC to avoid imposing too many mandatory requirements for trustworthy AI on companies seeking to place systems on the European market.