AI Bias Concerns Expected in Upcoming FTC Discussions on Consumer Protection
Artificial intelligence algorithm bias is one issue the FTC will potentially address in upcoming public hearings on emerging consumer protection and competition issues, said Consumer Protection Bureau Senior Attorney Tiffany George Thursday. Chairman Joseph Simons recently testified the agency plans a series of hearings throughout the country on consumer protection (see 1805170073). A spokesman said Thursday the agency is working through details on timing and scope.
Speaking at an FCBA event, George said algorithms are discriminatory by definition, and consumers should know enough about them to understand the decisions they trigger and the potential for discrimination. “There are levels of transparency,” George said. “I don’t think consumers need or want to know all the different [details] that went into an algorithm … but they do want to know a very simple explanation as to how this decision was made and how the behavior impacted that decision.”
The American Civil Liberties Union’s biggest concerns about AI are flawed decisions that affect people’s lives, in terms of social inequities, said Senior Policy Analyst-Speech, Privacy and Technology Project Jay Stanley. He opposed high levels of secrecy guarding AI algorithms. One example of AI bias he posed was a credit reporting agency lowering a person’s credit limit based on the stores that individual frequents and the types of consumers associated with those stores. It could be that there are “enormously subtle behaviors” that consumers engage in that accurately predict aspects about them, but companies can use those details to sanction consumers, he said: “We don’t want to turn into little quivering, neurotic beings, who are constantly paranoid that every little thing we’re doing is A, being monitored and B, being judged and C, being judged negatively against us because it breaks down the social contract between behavior and punishment.”
AI runs the risk of gender inequity, said Future of Privacy Forum Vice President-Policy John Verdi. He opposed “black box” algorithms, specifically for use in court cases. “We need regulators to be vigilant in monitoring outputs of AI,” Verdi said. George said the FTC encourages people to be thoughtful with new technologies and not rush into use without understanding them. Data collected for one purpose shouldn't necessarily be used in another context, because it may not be reliable for all situations, she said. “Just because something is legal doesn’t mean it’s fair or ethical, and just because something is new doesn’t mean there aren’t legal restrictions on it.”
George noted the agency has called for broad-based privacy and data legislation. Verdi said baseline privacy legislation isn't plausible this Congress, but the landscape could change with elections. Stanley said consumer privacy and corporate pursuits in tech innovation are heading “full-speed” toward a collision. Congress doesn't need to wait for the disaster to occur, he said, backing changes incorporating the EU’s General Data Protection Regulation (see 1805240014).