ChatGPT, the Writer's Strike, and the Future of Content Writing
This Week In Writing, we explore a middle-of-the-road approach to ChatGPT and the future of writing
The Writers Guild of America is on strike. I’m not fully versed in the provisions the Guild is striking over, but I know one of them seeks protections against the absolute tidal wave of AI. AI can’t write with feeling or emotion (yet), but the WGA is wise to address the inevitable point when it will.
AI developments are coming fast and furious and are honestly hard to keep up with. This isn’t a scientific poll, but I’m using ChatGPT more often in everyday situations and know that my colleagues are, too. Today, I want to address the evolving state of AI writing tools, how to potentially use them responsibly, and what it all means for the future of writers.
First, this is one of those situations where views change as the technology evolves. I’ve always looked at generative AI as a functional tool in a writer’s arsenal, not something that should be used solely to create “content” (boy, do I dislike that word). I’m sticking with this stance, but the lines are starting to blur.
Currently, The Writing Cooperative’s rules state you must disclose the use of generative AI. Not one submission in the last four weeks has done so. Does that mean no one used ChatGPT to build their submissions? Maybe. Though I find it highly unlikely. Someone on one of the channels recently questioned the policy, asking what happens when all writing tools and apps have generative AI built in. It’s a really good question.
Let’s look at Grammarly for a minute. Technically, Grammarly has always been an AI company. Their fancy algorithm determines the most likely order of words and considers that arrangement grammatically correct. This description is an oversimplification, but it works. Now, Grammarly is going deeper into generative AI with their GrammarlyGO product. Is it different from what they’ve been offering simply because it creates longer passages? I don’t know. I don’t, however, think writers need to disclose when they use Grammarly. So what does that say?
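To make that oversimplification concrete, here’s a toy sketch of the “most likely order of words” idea. To be clear, the bigram approach and the numbers are my invention for illustration; Grammarly’s actual system is far more sophisticated.

```python
# A toy illustration: score two orderings of the same words with made-up
# bigram probabilities. A "likely" word order multiplies out to a higher score.
bigram_prob = {
    ("the", "dog"): 0.20, ("dog", "barks"): 0.10,
    ("dog", "the"): 0.001, ("barks", "dog"): 0.001,
}

def score(words):
    """Multiply bigram probabilities; higher means a more likely word order."""
    p = 1.0
    for pair in zip(words, words[1:]):
        p *= bigram_prob.get(pair, 1e-6)  # unseen pairs get a tiny probability
    return p

print(score(["the", "dog", "barks"]))   # likely order scores higher
print(score(["barks", "the", "dog"]))   # scrambled order scores lower
```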
Lately, I’ve used ChatGPT for multiple projects in ways I think are responsible. Here are a few examples (with a quick code sketch after the list):
Revising existing passages by using the prompt “revise this:” and entering the paragraph;
Asking for subheadings when my mind draws a blank by using the prompt “what is a one-word subheading for the following paragraph:” and entering the text;
Taking my bullet point notes from client calls and asking ChatGPT to put them into complete sentences using the prompt “take the following notes and turn them into coherent sentences:” and then entering my bullet points.
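If you’d rather script a prompt like the first one instead of pasting it into the chat window, here’s a minimal sketch using OpenAI’s openai Python package (the pre-1.0 interface). The model choice, function name, and prompt wording are my assumptions, not anything official.

```python
# A minimal sketch: the "revise this:" prompt sent through the ChatGPT API
# instead of the chat window, using the openai package's pre-1.0 interface.
import openai

openai.api_key = "sk-..."  # placeholder; set your own key

def revise(paragraph: str) -> str:
    """Mirror the 'revise this:' chat prompt as an API call."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Revise this: {paragraph}"}],
    )
    return response.choices[0].message.content

print(revise("AI developments are coming fast and furious."))
```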
Additionally, I’ve been working with the ChatGPT API to essentially build a fancy MadLib for my nonprofit clients. They’ll eventually input a few pieces of information, which I’ll combine behind the scenes into a text prompt that runs through the API. Ultimately, this will help clients better express their ideas and provide me with better information when working with them.
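To make the MadLib idea concrete, here’s a rough sketch of what I mean. The field names and template wording are hypothetical stand-ins, not my clients’ actual form.

```python
# A rough sketch of the "fancy MadLib": client-supplied fields are slotted into
# a prompt template behind the scenes, then run through the API. The field
# names and template wording here are hypothetical.
import openai

openai.api_key = "sk-..."  # placeholder; set your own key

TEMPLATE = (
    "Write a short, plain-language mission summary for {organization}, "
    "a nonprofit serving {audience}, highlighting its {program} program."
)

def build_summary(fields: dict) -> str:
    prompt = TEMPLATE.format(**fields)  # the MadLib step: fill in the blanks
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(build_summary({
    "organization": "Example Food Pantry",
    "audience": "families facing food insecurity",
    "program": "mobile pantry",
}))
```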
I’d like to think these are all responsible ways to use ChatGPT in my regular writing process. However, I’m torn by the dichotomy here. On one hand, as a writer myself, I want to advocate for others and their livelihoods. Writers should be paid for their work, and the WGA is right to ask for AI protections. On the other hand, I see how ChatGPT saves me time and enhances my existing workflow. Like Natalie Imbruglia, I’m torn.
I still don’t think generative AI should be used solely to create entertainment. I don’t want to read a personal essay penned by an AI, nor do I think the next blockbuster film should be written by an AI that knows what will likely make the most money. Will I notice these things when they happen? Maybe at first, but over time, probably not.
What do you think? Are you torn like I am, or are your views of ChatGPT and generative AI rock solid?
PS: Besides Grammarly and asking how to spell Imbruglia, everything in this article came out of my head.
The Fictionary Webinar is this Thursday!
I’m co-hosting a special FREE webinar with our partners at Fictionary!
Join Fictionary Certified StoryCoach Editor Shane Millar for this fantastic webinar on How to Structure Powerful Scenes. The webinar is Thursday at 11:00 am EST.
In this webinar, you will discover:
Why every scene needs a strong entry hook
Why every point of view character needs a compelling goal
Why getting your scene middle right is so important
How to satisfy readers with a great scene climax
How to pull readers through to the next scene with an unforgettable exit hook
After attending this webinar, you’ll know the secrets behind structuring a powerful scene!
Let’s check in on Bluesky…
After talking about Bluesky and other “Twitter alternatives” last week, one of you kind people gave me an invite to the platform. My initial impression? It’s chaotic.
I think Bluesky is intentionally inviting journalists, Twitter clout chasers, and meme lords in the first wave to try to garner some initial hype. It’s why you keep seeing articles calling Bluesky the next great thing despite it having only roughly 60,000 users.
To me, Bluesky is the latest version of Clubhouse, the overhyped social platform that quickly rose to prominence and just as quickly died. It’s invite-only and going after the “cool kids” from Twitter. Sure, it creates an initial buzz, but it didn’t work out for Clubhouse. Maybe it will for Bluesky? I don’t know.
Scrolling through Bluesky feels like the tech equivalent of the White House Correspondents’ Dinner mixed with dick jokes. It’s a bunch of political and journalism nerds alongside shitposters. That’s not inherently a bad thing, but is that what we want from social media? Or is that exactly what we want from social media?
The tricky thing about the WGA’s AI demand (I am a WGA-eligible screenwriter) is that AI is already being used by screenwriters. For example, writers are having ChatGPT listen to Zoom development meetings. As executives give notes for changes, the writer then asks ChatGPT (without the others knowing) for suggestions based on the notes it just heard. The writer then shares those ideas as if they were their own. The writer writes the changes themselves; the AI is solely generative. I assume development execs are doing the same. Will this put all of us out of work? I don’t think so. Like you, I think there is a responsible approach. Do I trust the studios to use it responsibly? LOL. The WGA’s ask that every credited writer be a human is clearly reasonable and should have been agreed to without hesitation. But the ask that AI not be used to generate source material is naive. It’s already happening.
For me, the opportunity to publish anything is an exercise in developing agency. Paying attention to what I’m actually feeling and thinking inside, choosing words and concepts that convey it, and then generating the confidence to share it are all pleasurable professional practices that are important to me. Those same skills help me communicate face-to-face and person-to-person, which is essential as a professional speaker. So far I’m not touching AI and have no desire to, but no doubt it will become harder to avoid. In the meantime, I’m keeping the dopamine rewards of forming thoughts and sentences and reaching other human beings to myself. I’m counting on the fact that the limitations of my knowledge compared to AI will force me to keep speaking from my experience and telling stories, and that readers are going to develop a growing hunger for this as AI takes over the neighborhood.