OpenAI has issued a strong statement criticizing The New York Times for allegedly attempting to invade the privacy of millions of users amidst the newspaper’s ongoing lawsuit against the tech giant.
According to the company’s Chief Information Security Officer Dane Stuckey, trust, security, and privacy guide all of OpenAI’s products and decisions. In a recent post on the company’s website, Stuckey stated that approximately 800 million people use ChatGPT weekly to think critically, learn, create, and manage various aspects of their lives. He emphasized that OpenAI regards this information as highly sensitive and is dedicated to implementing robust privacy and security protections.
Stuckey expressed concerns that the responsibilities OpenAI has toward user privacy are currently under scrutiny. He pointed out that The New York Times is insisting on access to 20 million private ChatGPT conversations. The newspaper argues that it needs these records to uncover potential cases of users attempting to bypass their paywall. This demand, OpenAI claims, blatantly ignores established privacy safeguards, contradicts standard security measures, and could require the disclosure of millions of deeply personal conversations unrelated to the lawsuit.
OpenAI intends to contest The Times’s request in court and aims to exhaust every possible avenue to safeguard its users’ privacy, Stuckey affirmed. He noted that the company had proposed several privacy-focused alternatives to the newspaper, including targeted searches that would yield samples containing text from The Times’s articles, but these offers were declined. Initially, The Times sought access to 1.4 billion ChatGPT conversations, but OpenAI successfully negotiated this figure down to 20 million random conversations spanning from December 2022 to November 2024.
To protect user data further, OpenAI assured users that any personal information present in the conversations would be removed through a procedure designed to de-identify the data.
A spokesperson for The New York Times countered OpenAI’s claims by emphasizing that the lawsuit focuses on holding tech companies accountable for allegedly using millions of copyrighted works without permission to develop products that compete directly with The Times. The spokesperson also accused OpenAI of trying to mislead users in its blog post in order to obscure its allegedly illegal actions. According to this statement, user privacy is not at risk, and the court has mandated that OpenAI provide an anonymized sample of chats under a legal protective order.
The spokesperson further asserted that OpenAI’s terms of service permit the company to utilize user chats for training its models and disclose such chats in legal situations.
The lawsuit, filed in late 2023, asserts that OpenAI and Microsoft engaged in unauthorized use of millions of articles created by The New York Times to train the large language model that powers ChatGPT.
The legal complaint describes how the defendants’ generative artificial intelligence tools are based on large language models built with extensive copying of The Times’s copyrighted articles, including news pieces, investigative articles, opinion columns, reviews, and instructional guides.
Moreover, the lawsuit claims that while the defendants copied material extensively from numerous sources, OpenAI particularly emphasized The Times’s content when developing its language models, which, The Times argues, shows the defendants recognized the value of those works.
The lawsuit highlights the financial rewards the defendants reaped by incorporating others’ intellectual property without compensation. Microsoft’s market capitalization reportedly surged by a trillion dollars over the past year, allegedly aided by its deployment of Times-trained models throughout its product offerings. OpenAI’s release of ChatGPT has likewise propelled its valuation to approximately $90 billion.
While the lawsuit unfolds, it illustrates the ongoing tensions between traditional media outlets like The New York Times and tech companies like OpenAI and Microsoft, particularly in the realm of intellectual property rights and user privacy.
The ramifications of this legal battle extend beyond the courtroom, affecting how AI technologies operate and interact with user data. As regulatory scrutiny increases, the potential outcomes may set significant precedents for how tech companies, media organizations, and their respective user bases navigate privacy and rights.
As this case progresses, the broader implications for data privacy and ownership in the age of AI will come into sharper focus, sparking heightened discussions about ethics, consumer rights, and the future of content creation in the digital era.
Fox News’ Bradford Betz contributed to this report.