The Business of Being an Author

Business-of-being-an-author articles focus on the practical side of publishing and building a sustainable writing career—covering topics like branding, audience growth, email lists, book marketing, Amazon/KDP strategy, ads, pricing, launch planning, and long-term income streams—so authors can make smart decisions, reach more readers, and turn their books into a business.

Copyright, AI, and the New Rules of Authorship

What the U.S. Copyright Office says is required for human authorship and how authors can use AI responsibly.

Estimated read time: 18 minutes

What you will learn in this article:

  • Why human authorship is the legal foundation of copyright
  • How the Copyright Office views prompting and “control” over expressive elements
  • How to build a workflow that keeps your voice and your rights clear

At some point in the last two years, a familiar scene became newly complicated: an author alone with a laptop, the cursor blinking like a metronome, the late-night hush broken only by keystrokes—and, now, by a second voice on the page.

Maybe it’s an AI tool helping brainstorm chapter titles. Maybe it’s a chatbot proposing a tighter paragraph. Maybe it’s a clean rewrite of a clunky sentence you’ve rewritten five times already. The experience can feel less like outsourcing your imagination than like turning on a lamp: suddenly the room is brighter, and you can see what you were trying to build.

And then you hit the moment that feels oddly like a sobriety test.

You go to publish. You encounter a checkbox. Or a contract clause. Or the blunt question from a platform, an agent, a collaborator:

Was any of this generated by AI?

That question is no longer theoretical. Amazon KDP, for example, requires authors to disclose AI-generated text, images, or translations when publishing or republishing through KDP, while stating you don’t have to disclose AI-assisted content—two categories it distinguishes explicitly.

Meanwhile, the U.S. Copyright Office has been sharpening its stance on what counts as authorship in the age of generative tools, reiterating a principle that sounds simple until you test it against modern workflows: copyright protects human creativity.

Writers feel the squeeze from both sides. On one side: the temptation to use these tools because they’re fast, helpful, and—sometimes—genuinely inspiring. On the other: anxiety that using them the “wrong” way could jeopardize copyright protection, invite platform trouble, or raise uncomfortable ethical questions about originality.

So let’s talk about what’s actually true, what’s still unsettled, and what a smart author can do right now.

This is not legal advice. It’s a reported, plain-language map of a shifting landscape, so you can write with more confidence and less anxiety. Or at least a little less anxiety.

The most misunderstood distinction: ownership isn’t copyright

Start here, because it’s where most confusion begins.

Many AI companies’ terms say you “own” your output. OpenAI’s Terms of Use, for example, state that (as between you and OpenAI, and where the law permits) you retain rights in your input and you own the output, with OpenAI assigning any interest it might have to you.

That language matters for business. It tells you the company isn’t claiming your book the way a publisher might claim rights to a manuscript.

But “owning output” in a contract sense and having copyright protection under U.S. law are not the same thing.

Copyright is a government-backed legal protection for original expression created by an author. And in the United States, that concept has a stubborn cornerstone: authorship is human.

So you can have a contractual right to use AI output commercially and still end up with limited—or no—copyright protection for the parts that were generated without sufficient human authorship.

This is where writers need to stop thinking like consumers and start thinking like creators building a record: What did I actually make? What did the tool generate? And where is my creative control visible in the final work?

The U.S. Copyright Office’s definitive message: human creativity is the key

In March 2023, the U.S. Copyright Office published guidance on how it treats works that contain AI-generated material, focusing on the human authorship requirement and what applicants must disclose when registering a work that includes AI-generated content.

Then, in early 2025, the Office released Part 2: Copyrightability of its broader “Copyright and Artificial Intelligence” report, digging deeper into what level of human contribution can make a work copyrightable in the United States.

The through-line is consistent:

  • Purely AI-generated material, by itself, generally doesn’t qualify.
  • Works that include AI outputs can qualify—if a human author’s original expression is present in a meaningful way.
  • The strongest cases tend to look like this: AI is used as a tool, while the human author determines the expressive result (through writing, editing, selection/arrangement, transformation, and creative control).

There’s a reason the Copyright Office keeps emphasizing “human authorship.” Courts have echoed it.

The cases that turned a policy memo into a warning label

When an AI is the “author,” courts say no.

In Thaler v. Perlmutter, Stephen Thaler, a frequent test-case figure in AI IP debates, tried to register a work he said was created autonomously by his AI system. The D.C. Circuit affirmed the Copyright Office’s denial, holding that the Copyright Act requires a human author.

It’s tempting to dismiss this as a niche scenario—most authors aren’t claiming their AI is literally the author. But the case matters because it clarifies the boundary: the law isn’t drifting toward “machine authorship” in the U.S. If anything, institutions are reinforcing the human baseline.

The “Zarya of the Dawn” lesson: you may own the book, but not every part of it.

The most widely cited publishing-adjacent example is the graphic novel Zarya of the Dawn, which included AI-generated images (made using Midjourney) alongside human-authored text and arrangement. In a 2023 letter, the Copyright Office concluded that the creator was not the author, for copyright purposes, of the individual images generated by Midjourney—though the text and the selection/arrangement could still be protected.

This is the scenario many authors now face, even outside comics. Think:

  • AI-generated illustrations in a children’s book
  • AI-generated interior art in a fantasy novel
  • AI-generated “bonus” content—poems, prayers, affirmations—sprinkled through nonfiction

The takeaway isn’t “don’t do it.” The takeaway is: be clear about what you can protect and how you describe it.

KDP’s ominous checkbox: “AI-generated” vs “AI-assisted”

Amazon KDP’s Content Guidelines state that KDP requires you to inform them of AI-generated content (text, images, or translations) when you publish a new book or make edits and republish an existing book. The same policy says you are not required to disclose AI-assisted content and notes that KDP distinguishes between the two.

For authors, that distinction is important:

  • AI-generated implies the tool produced the expressive content itself (the words, the image, the translated prose).
  • AI-assisted implies the tool helped you in the process—like grammar suggestions, brainstorming, or other support that doesn’t replace human authorship.

A practical way to think about it:

If you could reasonably say, “the tool wrote this,” you’re in AI-generated territory. If you could reasonably say, “I wrote this, and the tool helped me polish or plan,” you’re in AI-assisted territory. And because platform policies can be updated and enforced unevenly, treating this as a casual shrug (“everyone’s doing it”) is a riskier bet than authors realize.

The unsettled battlefield: training data, lawsuits, and a pivotal year

There’s another layer to writer anxiety, and it isn’t about your workflow. It’s about their workflow—the AI companies and what they trained on.

Courts are still grappling with whether training generative AI systems on copyrighted works is fair use, and major lawsuits and licensing deals have pushed the question into a new phase. Reuters described 2026 as a pivotal year as U.S. courts weigh the contours of fair use and market harm in AI training disputes.

This matters for authors in two ways:

  • Ethics and reputation. Some writers care deeply about whether tools were trained on unlicensed material, even if the tools are legal to use.
  • Risk tolerance. Even if you’re legally safe using a tool, you may not want to be early in a culture war if your brand relies on trust, authenticity, or professional credibility.

You don’t have to take a grand stance. But you should decide—consciously—where you stand, and how transparent you want to be with readers or clients.

So what is copyrightable when you use AI?

Think less like a philosopher and more like an editor. The question isn’t, “Did I use AI?” The question is: Where is my original expression?

Here are the strongest “yes, this looks like human authorship” patterns, consistent with the Copyright Office’s framing:

1) You wrote the text, and AI helped you revise

If you drafted the material and used AI for clarity, grammar, compression, or alternatives—then you made the expressive choices. This looks like using software tools, not outsourcing authorship. The Copyright Office’s reports emphasize that human creativity and control are the key factors.

2) You used AI outputs, but you transformed them substantially

If the AI produced raw material and you rewrote it with meaningful creative changes—voice, structure, argument, original examples, distinctive phrasing—you’re building human authorship into the final expression.

A useful gut-check: if you removed the AI tool from your process notes, would a reader still recognize your voice and thinking?

3) Your protectable contribution is the selection, arrangement, and framing

This is the Zarya lesson. Even if some components aren’t protectable on their own, your original selection and arrangement may be.

For nonfiction authors, this can be surprisingly powerful. The choice of structure, narrative arc, examples, chapter progression, and original commentary can be protectable—even if you used AI to help generate some raw summaries along the way.

4) You incorporated your own human-authored material into the output

If you fed the tool your original writing and then iterated with it—carefully, deliberately—your original text remains yours. The key is that your final output should reflect your authorship, not the tool’s generic voice.

What isn’t enough: the “prompt as authorship” myth

A lot of writers want one clean rule: If I wrote an extremely detailed prompt, doesn’t that make me the author?

The Copyright Office has repeatedly cautioned against treating prompting as a magic key. In its broader analysis, the Office has emphasized that when a user provides a prompt and the system determines the expressive elements, that typically doesn’t show the level of human control required for copyrightable authorship.

This doesn’t mean prompts are meaningless. Prompts can be part of a creative process. But as evidence of authorship, they’re usually weaker than:

  • Drafts
  • Tracked revisions
  • Human-written passages
  • Documented creative decisions

In other words: your process matters, but your process has to leave fingerprints on the page.

Final thoughts: what do readers deserve?

Legal compliance is the minimum. The more interesting question is moral: What does your audience expect from you?

Some readers don’t care if you used AI to tighten prose. Others care intensely—especially in memoir, thought leadership, and any genre where the implied contract is authenticity.

There’s no universal rule here. But there is a useful one:

Don’t build your brand on intimacy and trust, then hide the parts of your process that would change how readers interpret the work.

You don’t need a neon disclaimer on every book. But you do need to be honest with yourself about what you’re selling:

  • If you’re selling you—your story, your voice, your mind—make sure that’s what’s on the page.
  • If you’re selling a product where speed and utility matter more than personal voice, AI assistance may be less fraught.

The anxiety writers feel right now isn’t just legal. It’s existential: a fear that the line between craft and content production is dissolving.

And maybe it is—at least for some corners of publishing.

But the most enduring advantage a writer has isn’t a tool. It’s taste. It’s judgment. It’s the ability to decide what matters, what’s true, what’s beautiful, what’s worth the reader’s time.

Generative AI can produce text. It can’t—yet—produce meaning the way a human can: meaning rooted in lived experience, moral choice, and the willingness to stand behind a sentence as your own.

In the end, copyright law and platform rules are trying to formalize something writers have always known: authorship is not just output. It’s agency.

And the surest way to keep that agency is to keep your fingerprints on the work—visible, deliberate, unmistakably human.

Need help with your manuscript?

If you want to use AI without risking your rights, voice, or platform compliance, my book coaching helps you build a clear, human-authored workflow—from outlining and drafting to revision and disclosure.

I offer personalized coaching designed to meet you where you are in your writing journey. Together, we’ll take your ideas, shape them into a clear plan, and bring your book to life. Get started with personalized one-on-one book coaching today. Mention this blog and receive a 15% discount.