A group of early testers leaked access to OpenAI’s unreleased Sora tool, alleging they were exploited during its research and development phase.
A group of aggrieved artists and early testers of OpenAI’s unreleased text-to-video tool Sora leaked access to the new model in protest, claiming they had been used for “unpaid research and development.”
On November 26, the group uploaded what appears to be a front-end version of Sora to the AI developer platform Hugging Face, enabling anyone to generate videos with the tool. However, OpenAI has reportedly since intervened to shut down access.
The leak was carried out by a collective of artists and beta testers posting under the username “PR-Puppets.”
“We were granted access to Sora with the assurance that we would serve as early testers, creative partners, and red teamers. However, we believe we are instead being lured into ‘art washing’ to convey to the public that Sora is a valuable tool for artists,” the group wrote.
The beta testers claimed OpenAI exploited them
In an open letter published alongside the leak, the group claimed that “hundreds of artists” provided unpaid labor through bug testing, feedback, and experimental work, only to be excluded from any prospect of compensation or recognition by the AI firm.
The artists contend that OpenAI, currently privately valued at $157 billion, unjustly withholds compensation from its testers and contributors for that testing, feedback, and development work.
The leaked version of the tool was online for several hours before it was taken down, and users quickly posted examples of the videos it generated on X.
In a post to X on November 26, film director Huang Lu shared a clip from the leaked tool and commented, “It’s impressive how well it handles arms and legs.”
According to code discovered by X users, the leaked version of Sora appears to be a faster “turbo” variant. Other lines of code suggest that controls for customization and video styles are planned for the future.
OpenAI first unveiled Sora on February 16, impressing X users with its ability to produce hyper-realistic video content from simple prompts.
OpenAI had been training Sora on “hundreds of millions of hours” of video clips to enhance the quality of its AI-generated footage and cover various styles, according to a February 17 report from The Information.
OpenAI did not respond to a request for comment.