September 18, 2023

Copyright will have to change

Drive your cart and your plow over the bones of the dead.

The US Copyright Office is currently taking public comments on artificial intelligence. That’s good!—but in the meantime they still have to make decisions, and the decisions they’ve made haven’t been so good. Consider the case of Théâtre D’opéra Spatial, a digital painting made in part using Midjourney.

Comparison from the Copyright Review Board’s decision letter.

The Copyright Review Board decided this wasn’t copyrightable:

In the Board’s view, Mr. Allen’s actions as described do not make him the author of the Midjourney Image because his sole contribution to the Midjourney Image was inputting the text prompt that produced it.

In other words, the original Midjourney picture isn’t copyrightable, but the changes made by the artist are. Practically speaking, I don’t see what good it would do to copyright edits to an image if you can’t copyright the image. But whatever. The Board cites several court cases in support of their decision. Most of them aren’t interesting to me, but I want to highlight this one involving a copyright claim on a wildflower garden:

The real impediment to copyright here is not that Wildflower Works fails the test for originality (understood as “not copied” and “possessing some creativity”) but that a living garden lacks the kind of authorship and stable fixation normally required to support copyright. Unlike originality, authorship and fixation are explicit constitutional requirements; the Copyright Clause empowers Congress to secure for “authors” exclusive rights in their “writings.”


Authors of copyrightable works must be human; works owing their form to the forces of nature cannot be copyrighted.

In support of this decision, we have Copyright Office Practices §503.03(a):

Works produced by mechanical processes or random selection without any contribution by a human author are not registrable. Thus, a linoleum floor covering featuring a multicolored pebble design which was produced by a mechanical process in unrepeatable, random patterns, is not registrable. Similarly, a work owing its form to the forces of nature and lacking human authorship is not registrable; thus, for example, a piece of driftwood even if polished and mounted is not registrable.

The relevance to AI is obvious. If Midjourney makes an image through a mechanical process without any contribution by a human author, then that image isn’t copyrightable. The issue, of course, is what counts as human contribution. The garden case also deals with the issue of fixation, which isn’t relevant to AI but I’m including it anyway because it’s interesting:

Moreover, a garden is simply too changeable to satisfy the primary purpose of fixation; its appearance is too inherently variable to supply a baseline for determining questions of copyright creation and infringement. If a garden can qualify as a “work of authorship” sufficiently “embodied in a copy,” at what point has fixation occurred? When the garden is newly planted? When its first blossoms appear? When it is in full bloom? How—and at what point in time—is a court to determine whether infringing copying has occurred?

I would be inclined to say there was human contribution here, but the fixation issue seems pretty solid.

Returning to Théâtre D’opéra Spatial, there’s this telling comment from the administrative record:

While Mr. Allen did not disclose in his application that the Work was created using an AI system, the Office was aware of the Work because it had garnered national attention for being the first AI-generated image to win the 2022 Colorado State Fair’s annual fine art competition. Because it was known to the Office that AI-generated material contributed to the Work, the examiner assigned to the application requested additional information about Mr. Allen’s use of Midjourney, a text-to-picture artificial intelligence service, in the creation of the Work.

So no matter what the Copyright Office ultimately decides, can’t you just not say your work involved the use of AI? Or if the Office asks you, can’t you just lie? How would they know?

Scott Aaronson has talked about possibly adding a statistical watermark (PPT) to ChatGPT, and maybe the same principle could work with diffusion models. But then people will use models that don’t add watermarks. I just don’t see any possible future where the AI artists don’t win here, one way or another.
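To make the watermarking idea concrete, here's a toy sketch of the kind of scheme Aaronson has described: a keyed pseudorandom function scores each candidate token, sampling is biased toward high-scoring tokens via the Gumbel-max trick (which still draws from the model's distribution), and a detector holding the secret key checks whether the scores of the chosen tokens are suspiciously high. All the function names and parameters here are my own invention for illustration; this is not how any production model actually implements it.

```python
import hmac
import hashlib
import math
import random

def prf(key: bytes, context: tuple, token: int) -> float:
    """Keyed pseudorandom score in [0, 1) for a (context, token) pair."""
    msg = repr((context, token)).encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def sample_watermarked(probs: dict, key: bytes, context: tuple) -> int:
    """Pick the token maximizing r ** (1 / p) -- the Gumbel-max trick.
    When the r values are i.i.d. uniform, a single draw is distributed
    exactly according to `probs`, but the choice is deterministic
    given (key, context), which is what the detector exploits."""
    return max(probs, key=lambda t: prf(key, context, t) ** (1.0 / probs[t]))

def detect_score(tokens: list, key: bytes, n: int = 3) -> float:
    """Sum of -log(1 - r_i) over the text. Unwatermarked text averages
    about 1.0 per token; watermarked text scores much higher, because
    each token was chosen partly for having a high r."""
    score = 0.0
    for i in range(n, len(tokens)):
        r = prf(key, tuple(tokens[i - n:i]), tokens[i])
        score += -math.log(1.0 - r)
    return score
```

The point of the blog paragraph above survives the sketch: detection requires the secret key, and nothing stops anyone from generating text with a model that never ran `sample_watermarked` in the first place.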

The AI trainers are another issue entirely, but I think they’re also going to win. Against OpenAI and Stability AI, I’ve seen people cite the recent 7–2 Supreme Court decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith, but I don’t think it’s relevant. Here’s the Oyez summary:

Andy Warhol’s “Orange Prince,” one of the Prince Series that was derived from the photograph by Lynn Goldsmith, appeared on the cover of a Vanity Fair magazine commemorating the late musician for a fee of $10,000, all of which went to AWF and none of which went to Goldsmith. In contrast, Goldsmith’s photographs were licensed and used on several other magazine covers commemorating Prince.

AWF’s use of Orange Prince on the cover of Vanity Fair served essentially the same commercial purpose as Goldsmith’s original. Thus, the first fair-use factor—the purpose and character of use, including whether the use is for commercial or nonprofit purpose—weighs against the conclusion that AWF’s use of Goldsmith’s photograph for the specific purpose of a magazine cover commemorating Prince was fair.

Like it says in the summary, this case centered on the first fair-use factor. From the majority opinion:

This factor considers the reasons for, and nature of, the copier’s use of an original work. The “central” question it asks is “whether the new work merely ‘supersede[s] the objects’ of the original creation … (‘supplanting’ the original), or instead adds something new, with a further purpose or different character.” In that way, the first factor relates to the problem of substitution—copyright’s bête noire. The use of an original work to achieve a purpose that is the same as, or highly similar to, that of the original work is more likely to substitute for, or “‘supplan[t],’” the work.

The thing is, in the Warhol case the artwork in question was really similar to the photo. The two are essentially substitute goods. But a model like GPT or Stable Diffusion is obviously not a substitute for any of its training data. I mean, you certainly could in some cases get ChatGPT to repeat copyrighted text to you—but that’s not what it’s for. That’s not what people use it for. Nobody is pirating books through language models. That would be stupid, and piracy was already super easy anyway. Likewise, nobody is using Stable Diffusion to replace any particular artwork.

In contrast to the Warhol case, I’ve seen some people cite factor four (“the effect of the use upon the potential market for or value of the copyrighted work”) as a problem for AI trainers. I disagree. I think the people making this argument imagine that this is a zero-sum situation, so if people adopt AI then that necessarily means some other artist doesn’t get paid. But in reality, that “other artist” might never have gotten paid anyway!

YouTuber Austin McConnell made this same point after viewers reacted negatively to his use of AI in a video. And as long as I’m plugging YouTube channels, I can’t ignore Corridor Crew, who used AI to turn live-action video into animation. That, too, was a project a team that small couldn’t ordinarily have attempted, at least not without a much greater investment of time and money.

In closing, here are a few predictions: