AI Doesn’t Deserve the Credit for Your Work

Even If You Name It and Talk to It Like a Human

I stayed up all night to binge-listen to Scott Z. Burns’ Audible original podcast What Could Go Wrong? Here’s the summary:

Scott Z. Burns wrote the 2011 movie Contagion about a global pandemic, which was eerily spot on — but not too surprisingly, because he did a lot of research and tried to make it fact-based. Nice work, Scott. Now, thinking about the possibility of writing a sequel for the new decade and new world we are in, Scott decides to use AI to help him out — and also to be a plot device and antagonist for the podcast itself. It’s good storytelling and touches on a lot of great issues of our time: the state of news and media, the rise of AI, politics and misinformation, biowarfare, and more.

There are lots of things I could talk about with this podcast/long-form audio story. In fact, if you have listened to it or end up listening to it and want to chat about it, please let’s go!

What grabbed me in a way I couldn’t ignore came near the end, starting at about minute 22 of the final episode. Scott is talking to his AI chatbot, who has been his “writing partner” for the podcast (Scott also created AI bots for actors, movie execs, producers, and a version of his deceased former agent), about how their partnership is coming to an end, how he now has a pretty good direction for the movie, and how he is going to start phasing out the chatbot and finish the movie fully on his own.

What struck me about this conversation was when Scott said:

“So, there is one more thing that I think this is the right moment for us to bring up but, the Writer’s Guild worked very very hard to make sure that a machine can’t get credit for a movie, and it’s not that I want to take credit for your work, it’s just that…”

To which the AI chatbot responds (obviously) positively, encouraging Scott to take over on his own. It says:

“It’s not about me getting credit, it’s about making the movie as good as it can be. That’s what I care about.”

I have long advocated for people I work with to personify and anthropomorphize technology. It helps people understand that we have a working relationship with technology, that our human emotions get involved, and that there are languages and ways of communicating that help us use these tools better. But I guess I felt safe encouraging this personification because there was a clear separation between what was human and what was not. Scott’s interaction with his chatbot (which he addresses by the human-sounding name Lexter) shows a breakdown of that clarity. He attributes work to Lexter, the chatbot, as if it were an actual writing partner. As if a chatbot were some kind of person that happens to look like a computer. This is not the case at all.

AI chatbots like Lexter are large language models: prediction machines. They take enormous amounts of data and predict the most likely words and sounds to string together based on that data.

It doesn’t think.
It doesn’t feel.
It doesn’t do anything that a human does, such as “work.”
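If “prediction machine” sounds abstract, here is a toy sketch of the core idea. The tiny corpus and names are invented for illustration, and a real LLM uses a neural network trained on billions of tokens rather than a word-count table, but the basic move is the same: predict what comes next based on what came before.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then "generate" by picking the most frequently observed next word.
# Real LLMs use neural networks over billions of tokens, but the
# fundamental operation is the same: predict the next token from data.
corpus = "the movie was good the movie was long the script was good".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word from the corpus."""
    candidates = next_word_counts.get(word)
    if not candidates:
        return "<end>"
    return candidates.most_common(1)[0][0]

print(predict_next("movie"))  # "was"
print(predict_next("was"))    # "good" (seen twice, vs. "long" once)
```

There is no intent or understanding anywhere in that loop. It counts and it chooses, which is why treating the output as a co-writer’s contribution doesn’t hold up.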

In this sense, an LLM chatbot can be thought of in the same way as Gmail, or Monday.com, or Zapier, or Salesforce, or Slack, or a SQL server, or an interactive website, or any other technology. These are software systems that are programmed to do certain things in certain ways.

When you build an automation in Salesforce or Zapier — even if an AI helps you do it — you don’t worry about taking the credit for the computer’s work. It isn’t the computer’s work. It is the human’s work, no matter how heavily the human relied on the system. In a physical sense, when we have prosthetic limbs — even the most robotic ones — we are appreciative of them, but no one is going to worry about the prosthetic taking credit for the hard work the human is putting in to use the prosthetic as a tool.

I’ll happily hear arguments about the Singularity, and how much we depend on technology, and how interconnected our lives are with it. I’ll get into conversations about how much a machine does versus a human in any given task or project. But no matter what, these machines and their learning models are not humans, and “they” (really a collection of “its”) are not doing their own work. They are helping us do our work. Sometimes (hopefully more often than not) we are using these tools for good work and for good in general. And other times we are using them nefariously, like having an AI write a script for a movie as a means to eliminate the jobs of human writers.

First of all, we just don’t need more content to be created more quickly. We already have infinite content being produced at lightning speed. We’ve done it without machines, and there really is no need to bring the machines in as replacements. Humans can use the machines to help them be creative, to be a sounding board, to simulate experiences, whatever. But whatever they do, it will always be the human’s work, not the AI’s.

AI doesn’t think. It predicts. It doesn’t generate original thought — it generates what is likely to come next, based on everything it’s been trained on. That’s fundamentally different from what humans do. We don’t just predict what others might say. We draw from our lived experience, our memories, our values, and all the knowledge passed to us — not to guess what’s next, but to create something new. Yes, our ideas are shaped by what’s come before. All human thought is part of a long, shared chain. But originality doesn’t mean pulling something from thin air — it means making meaning. Choosing what matters. And that’s something AI cannot do.

No matter how much data it holds, it’s still just a machine making predictions — and it doesn’t deserve credit the way a human does for putting those tools to meaningful use.
