

It’s 2023.
A 5th grader sends an essay to his teacher for a school assignment.
An employee at a non-profit submits a grant proposal to the NIH.
A travel blogger posts an article about a surfing spot in Maui.
They all have one thing in common: they were co-written with ChatGPT.
ChatGPT is built on a large language model (LLM), a relatively new form of AI capable of producing text so original and fluent that human readers can’t tell whether it was written by a man or a machine.
LLMs represent an incredible development in the field of AI, and conversational proficiency is a major milestone for artificially intelligent systems and their engineers. It’s a momentous scientific achievement, and one we should regard with equal measures of awe and trepidation.
But what’s striking about ChatGPT in particular is that it’s so gosh darn good at a very special craft: writing.
For the first time, everyday folks are able to access these AI systems and use them as if they’re writing alongside a real person, and that means that ChatGPT and its successors (coming soon, and no doubt more powerful) are poised to fundamentally restructure our relationship with the written word.
Writing used to be reserved for the elite — the scholars and scribes, the monks and priests, the kings and their accountants.
Then, everything changed with Gutenberg and movable type.
After the invention of the printing press and the subsequent movement towards mass literacy, writing went from something that only the well-off in society could do to something that only the worst-off in society couldn’t do. In 1820, only 12% of the world’s population could read and 88% couldn’t; by 2015, those figures had nearly reversed. Now, reading and writing are fundamental building blocks of society, and none of that would have been possible without the invention of the press.
Are LLMs like the printing press and the push towards mass literacy, precipitating a sea-change in the way that humans relate to written communication?
It’s a possibility, but there’s another historical example that might offer more clues about where AI chatbots will take us: typewriters.
The era of the affordable typewriter (and later the personal computer) began in the mid-20th century, and we’re still feeling its effects. The invention and popularization of typing amplified our existing skills by letting us write and edit more efficiently, and with the widespread adoption of the QWERTY keyboard, it became increasingly easy to write neatly, quickly, and on any device.
Even though they’ve only been around for a few decades, typewriters and PCs have become indispensable to the craft of writing. It would be unthinkable for a modern information-economy worker to have a job without a laptop (imagine a new employee at the New York Times who receives their first company computer and says “No thanks! I’ll stick to pen and paper”). Typewriters and PCs have enabled us to write so much more readily, and these pioneering tools ushered in a new age of machine-assisted writing.
We’re at a juncture in history where we don’t yet know how the story of AI will progress. Whether AI systems herald a new paradigm of massively accessible, generative writing (a printing press in our pockets) or simply an amplification and improvement of our existing writing tools is an open question, and even if we’re inclined to speculate that it’s the latter, we may be placing our bets too early.

Although the jury is still out on what our relationship with AI writing assistants will look like in the future, there’s a practical consideration we need to face today: credit.
Even if we all hated writing bibliographies and citing our sources in high school, the warnings against plagiarism were legitimate and deserved. Plagiarism, or intellectual theft, is a serious offense, and it shouldn’t be taken lightly (if you don’t believe it is, you’ve clearly never felt the pain of seeing someone else claim your work as their own).
Humans are immensely creative, able to rearrange letters and structure sentences in endlessly unique combinations, so these LLMs are doing something that most of us have, up until this point, expected the writer to do: supply originality. ChatGPT produces language that masquerades as human-written, and a writer who passes that language off as their own is doing something akin to plagiarism, claiming someone (or something) else’s work as their own. Using these tools isn’t wrong, provided the writer gives credit where credit is due and cites their source.
But how can we cite ChatGPT?
In a world where AI-assisted writing is ubiquitous and expected, its use may become commonplace enough that it wouldn’t even warrant a mention. But we don’t live in that world yet, so it’s necessary to disclose when we’re using these powerful tools and when we aren’t. If someone types a document, it’s clear that they used a computer, and we don’t think ill of them for typing rather than handwriting it. The day may come when the same is true of AI-assisted writing. Until then, written content co-produced with ChatGPT or any other AI system ought to come with a disclaimer that clearly informs the reader that the work is not the invention of the human writer alone.
We can experiment with what that citation might look like. It could be a “Made with ChatGPT” note at the beginning or end of an article, a dramatic dagger (†) next to the sentences written by AI, or something entirely different. OpenAI’s DALL·E already does a good job of this with the iconic pixelated rainbow in the bottom right corner of the images it generates. Just as we label our food with a list of ingredients and take extra care to inform consumers about the presence of GMOs, we should ensure that our readers know what they’re ingesting, especially if it’s artificially flavored.
However that credit takes shape, we should strive to give our readers a complete understanding of what our writing truly is, no matter who or what its authors are.
Despite its title, this article was not written with the help of ChatGPT or any other AI.