Can Creators Set the Terms?
Images, text, videos and the like produced by generative AI have reached a level of sophistication that can compete with human-authored works, and the commercial implications are obvious. Displaced human creators stand to lose licensing and commission income, as well as ad revenue from lost website traffic. Furthermore, errors or distortions in AI-generated content have the potential to infringe authors’ moral rights and devalue their original content.
This article provides some background on legal avenues that rights holders of human-authored content have taken to restrict the use of their works by AI platforms, or to obtain compensation. It then canvasses what options, if any, are available to creators in Australia.
What is Generative AI and Why is it Controversial?
AI platforms utilise machine learning to make predictions based on data patterns. Predictive AI in the original sense may be trained on a selected dataset for a limited purpose. A well-known example is the use of AI in medical diagnostics to analyse imaging datasets in order to support disease detection in the clinic.
‘Generative’ AI takes this predictive facility to a new level, using similar procedures to produce material such as text, images, video and music. To create high-quality works, however, it needs high-quality ‘data’ to train on, and vast amounts of it.
Disputes are now playing out in the northern hemisphere regarding the use of human-authored works in the course of training generative AI platforms and in the ultimate outputs of those platforms. Copyright infringement and the rights of copyright owners and licensees to restrict such use is a key aspect of these cases.
Leading copyright disputes relating to the use of human-authored works by AI
In Australia and elsewhere human-authored works which meet a minimum level of originality are protected by copyright. Copyright works may not be reproduced, whether in their entirety or a substantial part, without a licence.
Two recent decisions in California, both in class actions brought by writers for copyright infringement, have favoured the defendant AI firms. Notably however, these cases were limited to training of AI, and left scope for different outcomes. Proceedings brought by larger organisations in other jurisdictions that are still underway may also result in different outcomes.
Case 1: Bartz v Anthropic (US)
In proceedings filed in California in 2024, a group of writers alleged that Anthropic infringed copyright in their books by including them in a dataset used to train its AI platform Claude.
In a decision on 23 June 2025 the court dismissed the case, finding that this activity fell within the US defence of ‘fair use’. In the US this defence may apply to unauthorised uses of copyright works that create new material that is ‘transformative’ and does not compete with the source work. The judge considered that the purpose of this training was not to supplant authors but to create something new and therefore was transformative. The judge also found that the extent of market harm caused by such activities was not sufficient to prevent Anthropic making out the defence of fair use.
Although this decision did not relate to AI outputs, the judge did indicate that the production of “infringing knockoffs” by Claude could have led to a different outcome. The judge also found that Anthropic’s downloading and storing of more than 7 million pirated books in its library was not fair use, and a separate proceeding will be held to assess Anthropic’s liability for damages.
Case 2: Kadrey v Meta Platforms (US)
Within days of the Bartz decision, another case brought in California by a group of writers was also dismissed. This decision was based largely on the finding that the authors had not provided sufficient evidence to establish that Meta’s Llama would flood the market with competing works. As such, Meta’s defence of fair use was made out in this instance. However, the judge made it clear that the use of copyright works to train AI tools would be unlawful in many instances. This suggests that subsequent proceedings with evidence more specifically directed to market harm could yield a different result.
Case 3: The New York Times v OpenAI (US)
Many of the writers who unsuccessfully brought the proceedings against Meta in California have been joined to the proceedings commenced by the New York Times in 2023 against OpenAI. These proceedings, being heard in New York, relate to the use of Times articles to train ChatGPT and to the outputs of ChatGPT.
OpenAI has also raised the defence of fair use; however, the Times has filed evidence that content generated by ChatGPT contained verbatim excerpts from its articles. This case is also expected to address more squarely issues of market impact and false attribution of the outputs (something akin to Australian moral rights relating to attribution). The Times has also brought claims in relation to trade marks, but its unfair competition claims have been dismissed. No trial date has been set.
Case 4: Universal Music Publishing Group v Anthropic (US)
Universal Music Publishing Group and other music publishers brought proceedings in the US in 2023 in relation to unauthorised use of song lyrics by Anthropic’s AI platform Claude. Anthropic has also raised the defences of fair use and lack of harm in response to the infringement claims. However, Anthropic has committed to certain ‘guardrails’ intended to prevent the reproduction of copyright works, although the specific details are not published. This case, too, is ongoing.
Case 5: Getty Images v Stability AI (UK)
Getty Images commenced these proceedings in the UK in 2023 in relation to images downloaded from Getty websites without its consent and used to train the AI platform Stable Diffusion. Databases such as gettyimages.com and istock.com are particularly appealing for AI training due to the high quality of the images and their associated annotations and metadata.
Getty asserted that infringing reproductions of the images were made both in the course of training the AI model and in the course of generating ‘new’ images. The trial in these proceedings has now concluded and we await the High Court’s decision.
However, prior to close of submissions Getty dropped the majority of its copyright case. While Stability AI acknowledged that temporary copying occurred during the training process, it relied in its defence on evidence that this training occurred outside the UK. Getty cited the difficulty of providing sufficient evidence of a connection between the infringing activity and the UK jurisdiction as the reason for dropping this aspect of the case. However, concurrent proceedings are on foot in the US, which is presumably where a substantial portion of training activities were carried out.
Because Getty also dropped the copyright case in relation to AI outputs, we will need to wait longer for a decision on the two defences Stability raised, which differ somewhat from those in the cases above. First, it sought to demonstrate that the ‘diffusion’ model of generating images, by removing added noise on a pixel-by-pixel basis, cannot result in the reproduction of a work from its training dataset. Second, it raised the defence of fair dealing, which applies in the UK to the creation of a ‘pastiche’ (that is, a work that imitates the style of another work).
Getty’s remaining arguments of trade mark infringement (by means of reproduction of Getty watermarks on its images), passing off and secondary infringement (on the basis that the AI model itself is an infringing article) are still on foot.
Australia’s Position: Waiting for a Test Case
As far as we are aware, no similar disputes have reached the courts in Australia. However, if (or when) they do, outcomes of foreign proceedings may not be a reliable guide. For one, the fair dealing exceptions in the Australian Copyright Act 1968 (Cth) differ from the US and UK doctrines and apply in only limited circumstances. Furthermore, if AI platforms are trained outside Australia, it may be difficult to sustain claims for copyright infringement in relation to training activities alone.
Several book publishers have sought to negotiate licences for the use of works by authors in their stable to train AI. Earlier this year the media reported that Black Inc Books had sought permission from its Australian authors to make their works available to an unnamed AI company to train its AI platform. Harper Collins is also reported to have obtained permission from some of its authors to allow their works to be used to train AI models.
In an earlier Wrays article Andrew Mullane referred to claims by Australian artists that the Lensa app, a self-portrait generator, utilised their visual works without authorisation. These claims, which have not made their way to the courts to date, related both to the use of the artists’ works in training and to reproductions during content generation. As to the latter, the artists alleged that the app reproduced not just their portraiture style but also exact brushstrokes and colours. The owner of the app countered that it learns just as a human would, without direct reference to the original source work.
The Australian government has established the Copyright and AI Reference Group (CAIRG) as a mechanism to engage with stakeholders on copyright and AI. To date no legislative reform has been proposed, but CAIRG members have generally expressed support for greater transparency on the use of copyright material in the development of AI models, and on the use of AI models in the creation of content. For the most part, members expressed a preference for broad-based regulation of AI rather than specific amendments to the Copyright Act.
What are the options for creators?
- Work Through Industry Bodies and Publishers
The scale, speed and lack of transparency around uses of works by AI mean that creators will have little power to enforce their copyright acting alone. Many of the northern hemisphere proceedings were brought by large publishing organisations, in some cases several of them jointly. Creators are best served by engaging with industry organisations, agencies and publishers that serve to protect their copyright, whether through advocacy for legislative reform, large-scale licensing schemes or, potentially, court proceedings.
Although the legal requirements for fair use differ across jurisdictions, judgments in the Californian proceedings indicate that solid evidence around market impact could be significant. Even if not used in legal proceedings, it may be important in government reviews and licensing negotiations. It is more likely that larger organisations would have the resources to develop this evidence.
- Use Copyright Notices and Terms of Use
In general, works created by AI are not protected by copyright, as a human author is required for copyright to subsist. Therefore, including a copyright notice on works may help differentiate them from AI-produced content.
It is prudent to include on a website terms of use that explicitly prohibit copying, reproduction or use of the content without permission. These have the potential to create contractual rights that exist separately from copyright, although their deterrent effect may be limited. Notably, there is a real question about the enforceability of such terms if a user does not actively accept them (for example by clicking on a link).
- Negotiate AI Clauses in Contracts
If contracting with publishers, creators should consider incorporating terms that expressly prohibit use of their work to train AI. Because such clauses can be technology-specific, creators should also ensure the agreement provides that any rights not expressly licensed are retained by the creator. Thus, if publishers wish to license these rights for such uses, they will need to seek permission from their creators, as was presumably the case for the Australian publishing houses mentioned above.
Creators should also seek to avoid blanket consents to uses of their work that would otherwise infringe their moral rights, not least because of the potential for AI content to distort their works. Hallucinations and misattribution were key planks in the harm the NYT sought to identify in the US proceedings arising from AI-generated summaries of its articles.
- Employ Technological and Commercial Strategies
There are also a few practical options. Various technological measures such as masking exist, although none appears to be completely effective. Where a hosting platform offers opt-outs, take advantage of these, even if they are not iron-clad. Commercial strategies such as excellent customer service, knowing your market, speed to market, branding and reputation can also help distinguish your work from AI-generated content. Many organisations now specifically promote their content as ‘human authored’.
Some Final Thoughts
None of these options is a silver bullet. The law usually lags behind technological shifts, and AI is no exception. Creators should continue to be alert to ways to protect their works and engage with wider moves within government and industry to regulate their use by AI.
(With thanks to my own generative A(dolescent)I assistant for help with this article).
Kate Legge, Special Counsel
For Further Information
Our team of commercial law experts can provide a range of services related to copyright law to assist individuals and businesses to protect, enforce and commercialise their creative works. These include copyright protection and registration, drafting and reviewing licensing agreements, and copyright infringement litigation. Should you require assistance in this area, please do not hesitate to contact the author, Kate Legge.
