Fair Use
The Ghibli Effect is, apparently, a thing. It involves using generative AI to render images in a cartoon style derivative of the Japanese animation studio, Studio Ghibli. Why, exactly, you would wish to do that is unclear to me, but that is hardly relevant.
It is important as one of the many fronts on which the conflict between ‘content creators’ and the developers of AI models is playing out. In this case, it is argued, the Studio Ghibli aesthetic and values are violated by their use to produce ‘cute’ pictures and memes. Furthermore, it is an abuse of intellectual property not provided for this purpose, and an appropriation and use of a creative product without proper reward to its creators.
I cannot stake out my position on these issues without very briefly rehearsing my view of generative AI, which I regard as an astonishing technical achievement with the potential, even at this early stage, to yield significant societal benefits and to support a broad range of creative endeavours. I am not blind to its limitations, or to the risks associated with inappropriate usage, but I believe them to be very substantially outweighed by the opportunities it presents. I am excited, increasingly so, and optimistic. This is, despite its appearance, a sober appraisal.
The debate on generative AI and intellectual property has been characterised, in my view falsely, as large technology platforms ‘ripping off’ small creative content producers. In fact, the picture is significantly more complex, and could equally be cast as exploitative media and content aggregators seeking rent from technology developers for a creative tool they neither developed nor, in fact, envisaged. I suspect the truth lies between these poles. The point is that there are no obvious ‘goodies and baddies’, and I do not want to take sides.
In engaging with the debate, it is quite easy to disappear down both legal and technical rabbit holes. There is a large raft of legal cases, each advancing different arguments and resting on different evidence and particulars. OpenAI, Microsoft, Meta, StabilityAI, Anthropic, Google / Alphabet, and several AI startups are all involved. Tempting as it might be to attempt an analysis, intellectual property law is complex, and I know enough to be certain only of my ignorance. I want, if I can, to hold to the high ground: the issues are as much those of public policy and ethics as they are legal.
First and foremost, it is simply wrong to use stolen content to train models. Thus, for instance, the use of LibGen (a database of pirated material) by Meta cannot be justified. In this particular case, I am personally involved. There are perhaps 150 items of my content, principally conference and journal papers, in LibGen (you can check your own exposure). Much of this is content whose copyright I assigned to a publisher in the belief that they would use any income to promote scientific exchange (probably I would now opt for an ‘open access’ publishing format, but this was not an option at the time). If a student made use of LibGen for their work I would not view such behaviour as acceptable, any more than if they had casually stolen the course textbook from Waterstones. Meta should buy a copy of the relevant journals and proceedings (they might find them interesting)! If they wish to avoid paying the price of legitimate access they may, of course, confine themselves to works in the public domain (which actually includes pre-publication copies of many of these papers).
Had the content been legitimately obtained, however, I would not object to my papers contributing to Meta’s model. This is, I judge, a ‘fair’ use (note that the term ‘fair use’ has a precise legal interpretation; I intend here the everyday meaning of ‘fair’), rather like a short extract used in a published review. Indeed, it is fairer than that, because the statistical properties of the model abstract away from the precise content and its larger context.
Content creators who object when a model, as they see it, ‘replicates’ their work are generally overvaluing their own originality and underestimating the proximity of their work to that of others. They draw on associations, and the models are associative engines. Summaries have always been fair use.
Now comes the difficult part. Whatever I may now think of Meta’s business ethics, I do not want content owners to be able to restrict the use of this content for learning. I do not wish to confer on the copyright holder the right to constrain what and how I might learn, what inferences I might draw, and how exactly I might draw creative inspiration, whether directly or by way of an intermediating tool. Thus, to illustrate, “you may read this text, but you may not count the number of words” does not appear to me a reasonable limitation. Nor should they be able to restrict to whom I subsequently convey the results of my learning. This is a quite fundamental freedom that goes beyond the immediate technical context.
Copyright is intended to promote creativity in the broader service of society, and a balance must be struck between the interests of creators, consumers, and society as a whole.
The argument that creators suffer a substantial disbenefit from the use of the models, one that merits the right to secure an enhanced reward, seems, to me at least, tenuous. Even in the edge case of the Ghibli Effect the position is weak: Studio Ghibli cannot, it seems to me, seek and secure artistic impact through the medium of animation and then grow indignant when that impact inspires ‘homages’ that run counter to their corporate aesthetic.
It could, of course, be argued that by effectively displacing ‘bread and butter’ creative work (social media copy, training videos, supermarket music, and so on) generative AI is harming the creative ecosystem. Brutally, however, I am disinclined to pay for a legacy business model, and we do not secure the future of the creative industries through essentially artificial protections.
In most circumstances I might be tempted to allow market mechanisms to sort the conflicting interests out, but I am doubtful this will work here. Because I hold that AI has vast, and largely still nascent, potential, I believe the interest of society is best served by encouraging its development and use. That potential is, as the technology stands now, dependent on very large and representative corpora. This may mean that we need to change intellectual property law, as is currently envisaged in the UK. Whilst I am grateful that this issue (alongside others) is being addressed, I am not satisfied that any of the options are wholly acceptable. On balance I incline to the view that we should restrain the instinct to ‘buy off’ the content lobby.
In the meantime, enjoy either the 'emotional depth, and richly imaginative storytelling' of Studio Ghibli or … this …