Canadian media outlets are seeking damages and an injunction against OpenAI over the unauthorized use of their content, the latest in a growing wave of AI copyright disputes.
A group of Canadian media organizations has filed a lawsuit against OpenAI, alleging that the company’s ChatGPT product infringed on their copyrights by using their journalism without permission.
The lawsuit, filed on November 29 in the Ontario Superior Court of Justice, includes major outlets such as CBC/Radio-Canada, The Toronto Star, and The Globe and Mail. The plaintiffs are seeking damages and an injunction to prevent OpenAI from continuing to use their content.
The media group claims that OpenAI extracted and profited from Canadian news content without authorization, asserting that:
“OpenAI is capitalizing on the use of our content, disregarding copyright laws and online terms of use.”
In response, OpenAI defended its practices, stating that its AI models are trained on publicly available information and operate under the principles of fair use and relevant copyright laws.
The company also pointed out its collaborations with news organizations and the opt-out options available to publishers.
An OpenAI spokesperson remarked:
“ChatGPT is used by millions worldwide to enhance creativity and solve complex problems. We collaborate with publishers to ensure attribution and offer tools for them to control how their content is engaged with on our platform.”
Despite this, the plaintiffs argue that OpenAI’s actions devalue journalism by repurposing it for commercial gain.
They challenge OpenAI’s reliance on fair use, emphasizing that their journalism serves the public interest and should not be exploited for profit.
This lawsuit is part of a larger wave of legal actions against OpenAI and other AI companies regarding the use of copyrighted materials in model training.
Earlier this year, OpenAI told a UK parliamentary committee that training its AI systems without using copyrighted content would be infeasible.
The company has also drawn criticism over recent missteps, including its acknowledgment that engineers accidentally deleted evidence related to AI training data in ongoing litigation.
The outcome of this case could have significant implications for copyright law in the age of AI, as courts attempt to balance innovation with intellectual property rights.