Blumenthal, Hawley Seek Solutions for AI Use of News Content
Generative AI is expanding Big Tech’s data monopoly and worsening news outlets' financial crisis, Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., agreed Wednesday while hearing testimony about The New York Times Co. (NYT) lawsuit against Microsoft and OpenAI.
NYT filed the lawsuit in December, claiming the companies violate intellectual property laws by using news content to train their AI models without compensation. OpenAI said in a statement Monday that it disagrees with the lawsuit, arguing that AI training is fair use and saying it's working to eliminate AI-driven regurgitation.
Member revenue for the News Media Alliance has been cut in half during the past 10 years, while content demand has more than doubled in the same time span, NMA CEO Danielle Coffey told the Senate Privacy Subcommittee during a hearing Wednesday. If internet users were to exclusively read AI-generated news summaries instead of the original articles, there would be no revenue to pay journalists who deliver original content, she said. Generative AI is exacerbating the concerns that inspired legislators in Australia and Canada to force tech companies to negotiate compensation for use of news content, she said. Sen. Amy Klobuchar, D-Minn., agreed, noting U.S. newspaper ad revenue decreased from $37 billion to $9 billion between 2008 and 2020.
Local news broadcasters are also under threat from generative AI, said NAB CEO Curtis LeGeyt. He noted how a “well-known AI platform” was recently asked to provide the latest news from Parkersburg, West Virginia, and it generated content nearly word for word from the WVVA Bluefield, West Virginia, website. That TV station didn’t grant permission for the use and wasn't even aware of it, he said.
News content doesn’t appear “magically out of thin air,” and the lack of transparency about how AI models are using content makes it harder for media outlets to protect their work, said Blumenthal, the subcommittee chair. The NYT lawsuit claims AI models plagiarize articles and allow readers to circumvent paywalls, he said: These are just allegations, but they are “certainly more than plausible.”
Generative AI, which is built on “stolen goods,” is rapidly expanding Big Tech’s monopolization of user data, said Hawley, ranking member of the subcommittee. Two or three companies should not control news and information, he said, expressing interest in a licensing regime.
Congress should tread carefully when contemplating limits on fair use, which journalists benefit from daily, said City University of New York journalism professor Jeff Jarvis. A similar battle played out between print and radio journalists, and ultimately democracy is better served when journalists are able to crowdsource information, he said. Limitations on fair use are limitations on democratic freedoms, he said.
Blumenthal urged Congress to move faster on regulating generative AI than it has on overseeing social media. He discussed potential points of agreement with Hawley, including a licensing regime guaranteeing that news content providers are credited financially and publicly, transparency about how AI models use training content, and clarification that Communications Decency Act Section 230 doesn't apply.