
OpenAI Counterattacks New York Times in Copyright Infringement Lawsuit: Alleges Evidence Tampering and "Hacking"

The Paper (澎湃新闻) Thu, Feb 29 2024 12:51 AM EST

[Image] Unexpected Development in New York Times Lawsuit Against OpenAI and Microsoft

On February 26th, local time, OpenAI made a surprising move by requesting the U.S. Federal Court to dismiss four out of seven copyright infringement charges brought by The New York Times, alleging that the newspaper used "deceptive prompts" to obtain evidence and even employed individuals to "hack" into OpenAI's systems.

In documents submitted to the Manhattan Federal Court on the same day, OpenAI stated that The New York Times had used "blatantly deceptive prompts" with ChatGPT to induce it to reproduce the newspaper's content, and then relied on this misleading evidence to support the lawsuit. OpenAI emphasized that The New York Times had exploited vulnerabilities in the ChatGPT system, adding that "ordinary individuals would not use OpenAI's products in this manner."

OpenAI then escalated its counterattack, stating that the newspaper's allegations fail to meet its own famously rigorous journalistic standards, and that the truth, which will emerge during the trial, is that The New York Times used illicit means to probe OpenAI's products.

However, OpenAI did not specify who The New York Times' hired "hackers" were, nor did it indicate which anti-hacking law The New York Times might have violated. Analysts point out that the term "hacking" as mentioned by OpenAI in the documents refers to what is commonly known as prompt engineering or "red team testing," a common practice among AI security teams, ethical experts, scholars, and tech companies to test AI systems for potential vulnerabilities.

A day later, on February 27th, The New York Times' lawyer, Ian Crosby, responded to the allegations, stating, "What OpenAI bizarrely mischaracterizes as 'hacking' is simply using OpenAI's products to look for evidence that they stole and reproduced The Times's copyrighted works." Crosby also pointed out that OpenAI did not deny that it "copied The New York Times' works without permission within the statute of limitations period."

On December 27, 2023, The New York Times sued OpenAI and Microsoft for copyright infringement, alleging that the defendants "illegally copied and used The New York Times' unique and valuable works," seeking "billions of dollars in statutory and actual damages," and demanding that both companies destroy any chatbot models and training data incorporating copyrighted materials from The New York Times.

At the time, as evidence, The New York Times cited several instances in the lawsuit where chatbots, including ChatGPT, provided content to users almost identical to The New York Times' articles, which are available on The New York Times website through paid subscriptions.

In response, OpenAI countered in the latest filings that The New York Times "went through thousands of attempts to generate highly anomalous results... Under ordinary circumstances, one cannot simply make ChatGPT provide content from The New York Times."

OpenAI also expressed confidence that based on fair use principles, it and other AI companies would ultimately prevail in such infringement lawsuits. OpenAI wrote, "The New York Times cannot prevent AI models from acquiring knowledge about facts, just as other news organizations cannot prevent The New York Times from reporting stories it did not investigate."

Previously, at the 2024 World Economic Forum held in January, OpenAI CEO Sam Altman stated in an interview that AI does not need to obtain large amounts of training data from publishers like The New York Times: "Some people might think 'you need all my training data', 'my training data is very valuable', but that's often not the case. For example, we don't want to use The New York Times' data for AI training."

In recent months, OpenAI has begun negotiations with publishers to obtain licenses to use their content as training data. For example, in July of last year, The Associated Press reached a licensing agreement with OpenAI, and in December of last year, the German media giant Axel Springer also reached a similar agreement with OpenAI, the specific terms of which have not been disclosed. Following the lawsuit filed by The New York Times, OpenAI's Chief Intellectual Property and Content Officer, Tom Rubin, expressed "surprise" in an interview because both parties were in "very positive and productive negotiations" at the time.