USTelecom, CTIA Say Members Are Already Using AI Against Robocalls, but Scammers Also Use AI
USTelecom and CTIA told the FCC their members already use AI to combat unwanted robocalls. Both groups urged regulators to adopt a flexible approach, while noting that scammers also use AI. Comments in response to a November notice of inquiry (see 2311160028) were due Monday and posted Tuesday and Wednesday in docket 23-362.
The commission should clarify that the Telephone Consumer Protection Act applies to AI-generated voice calls, USTelecom said. But it would be “premature and potentially counterproductive” for the FCC to adopt “AI-specific mandates or measures at this time,” it added.
“Scammers will use every technology available to them in their fraudulent schemes,” USTelecom said. The telecom industry “also is at the cutting edge of employing available technologies -- and investing in new ones -- to protect customers,” it said: “Providers and their analytics partners have long relied on machine learning and automation to identify and mitigate illegal calling patterns. Machine learning and other forms of AI are extremely useful for this task, sifting through scores of call detail records in real time to detect and stop the ever-evolving tactics of bad actors.”
The wireless industry is using AI in a “responsible manner to benefit and protect consumers,” CTIA said. “As the Commission seeks to assess the impact of AI on robocalls and robotexts, the Commission should encourage entities to leverage existing approaches to managing nascent and evolving bad actor threats, including technology-neutral policies and previous work on AI,” the group said. CTIA urged the FCC to encourage stakeholders to follow the National Institute of Standards and Technology’s AI Risk Management Framework, “which provides a practical approach to incorporating trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems, and encourages stakeholders to use existing frameworks.”
Existing methods used to identify fraudulent calls and texts “have typically relied on analyzing non-content metadata, such as calling telephone numbers, alphanumeric identifiers, and call frequency, while the harm of scam calls is based on the content of those fraudulent communications,” Microsoft said. But focusing on non-content metadata “is not consistently effective and frequently harms legitimate calls by misidentifying them as scams,” the company said. Recent advances in generative AI offer the opportunity to identify scams through content analysis, Microsoft said: “In this proceeding, the Commission can ease the path to deploying effective solutions to identify and notify consumers of fraudulent calls by confirming that these solutions are consistent with existing law.”
AI technologies “may generate new risks for consumers,” but there’s also “promise in the ability of AI technologies to improve the customer experience and develop new methods for robocall and robotext prevention,” Twilio said. “As the U.S. works towards establishing AI risk management practices across the government and critical infrastructure sectors, Twilio encourages the Commission to explore existing frameworks and work with government and industry partners to highlight the potential for AI and how it can help mitigate threats posed by bad actors.”
Security company Numeracle urged the FCC to prohibit use of artificial or prerecorded voices claiming to be a live human caller. “Call recipients should know exactly what entity is calling them and who from that entity -- whether a human or a computer -- is communicating with them,” Numeracle said. The company warned that “lawbreakers will break laws” and “banning bad actors from doing certain things is routinely unsuccessful and increasing fines or punishments without enforcement is futile.”
Digimarc said digital watermarking technology it has made available for decades can help. The use of generative AI technology “to produce human-like dialog and simulate voices of those that consumers know and trust is advancing rapidly, posing an ever-increasing risk of fraud to consumers,” Digimarc said: “The Commission should mandate that audio generated by AI be watermarked and in so doing facilitate further authentication of all calls by making it easy for non-synthetic audio to be watermarked.”