A federal judge has ruled that artwork created by artificial intelligence (AI) can’t be copyrighted.
As The Verge reported Friday (Aug. 18), the ruling from U.S. District Court Judge Beryl A. Howell came during a case in which the U.S. Copyright Office was sued by Stephen Thaler after it refused to copyright one of his AI-generated images.
Thaler had tried multiple times to copyright the image and sued last year after the final rejection, arguing in court that the office’s decision was “arbitrary, capricious … and not in accordance with the law.”
However, Howell wrote in her decision that copyright has never been granted to a work created “absent any guiding human hand,” adding that “human authorship is a bedrock requirement of copyright.”
Still, the ruling also pointed out that society is “approaching new frontiers in copyright,” in which artists will use AI as a tool to create new work. Howell wrote that this would raise “challenging questions regarding how much human input is necessary” to copyright AI-created art, noting that AI models are often trained on existing works.
The ruling comes amid a number of other court cases involving AI. For example, this summer has seen at least two lawsuits from groups of writers against ChatGPT creator OpenAI, accusing the company of training the AI with copyrighted works without their permission and of using illegal copies of their books pulled from the internet.
And in June, a group of news and magazine publishers began collaborating on how to safeguard their businesses from AI companies.
Among these publishers’ worries is how content such as text and images has been used to train AI tools and whether they should be compensated. The publishers are also worried that AI provides readers with information without requiring them to click links to reach their websites.
Meanwhile, PYMNTS reported last week that a group of tech sector nonprofits had begun circulating an AI policy proposal called “Zero Trust AI Governance” to lawmakers and industry groups, urging the government to use existing laws, including anti-discrimination and consumer protection regulations, to oversee the industry.
Their chief argument, beyond the ongoing lack of dedicated AI regulation, is that tech companies cannot be counted on to self-regulate their AI efforts.
“Industry leaders have taken a range of voluntary steps to demonstrate a commitment to key ethical AI principles. But they’ve also slashed AI ethics teams, ignored internal alarms, abandoned transparency as the arms race has escalated, and sought to pass accountability off to downstream users and civil society,” the Zero Trust AI Governance policy says.