Everybody’s talking about ChatGPT. “I’m currently experimenting with AI composition,” a retired speechwriter emailed me on Christmas Eve. “Frighteningly good. Something that will need to be taken into account by speechwriters?”
I naively thought I had the final word on this last summer, when I compared AI to a kind of human writer who has always worked among us—a hack: “a literate cynic who would write anything she or he was told, as quickly as possible and without too precious a regard for precisely chosen language, or any sacred belief in the connection between words and truth.”
But more communicators whom I trust are dangerously dabbling with ChatGPT. Like an executive communication consultant pal who asked AI over the holidays: “Write a heartfelt letter from the CEO to employees, telling them that 10% of the workforce will need to be laid off in the next five days.”
Here’s what AI spit back, in 10 seconds:
Dear valued employees,
I am writing to inform you that, unfortunately, we will be facing the difficult decision to lay off a portion of our workforce in the coming days. It is with a heavy heart that I have to tell you that approximately 10% of employees will be affected by this decision.
I understand that this news will come as a shock and will likely be met with sadness, frustration, and uncertainty. Please know that this decision was not made lightly and was only arrived at after careful consideration and a thorough analysis of our financial situation.
I want to assure you that we have done everything in our power to avoid this outcome. We have explored every possible avenue to mitigate the impact of the current economic challenges on our business and our employees. However, despite our best efforts, we have been unable to avoid the difficult decision to reduce our workforce.
I understand that this news will be difficult to hear and will have a significant impact on those affected and their families. Please know that we are committed to supporting you through this transition and will do everything in our power to make it as smooth as possible.
We will be providing severance packages and outplacement support to those affected by the layoffs. Additionally, we will be holding town hall meetings in the coming days to answer any questions and provide further information.
I am truly sorry for the impact this will have on you and your families. Please know that this decision was not taken lightly and that we are deeply grateful for the hard work and dedication you have shown to our company.
“Formulaic, self-centered and bland,” my pal acknowledged, before also pointing out that it’s better than many layoff letters she sees.
And written in a fraction of the time! you say. Yes, a fraction of the already relatively infinitesimal time that’s spent drafting a layoff announcement. But the lawyers and other corporate pencil-fuckers are still going to squander a hundred person-hours debating this draft, and making it even worse: “with a heavy heart” will become “with sincere regret” … and “outplacement support” will be throttled down to “separation consultation.” The day AI replaces corporate hand-wringers is the day it replaces corporate management itself.
Oh, but layoff announcements are sort of formulaic by nature, you say. Well, so is writing, “I’m so sorry,” when a friend’s mother dies of a heart attack. But how would you like for your friend to find out you pushed the “Death Condolence” button and called it a day? No, you want your friend to know you debated “so sorry,” versus “so so sorry,” versus, “may her memory be a blessing,” versus “your mother was an unforgettable woman who will live on forever through the son she raised.” And do you not believe that the bereaved or the laid-off will sense and suffer from the soullessness, if not of each “heartfelt AI message,” then in the aggregate?
Whether written by man, woman or machine, most layoff announcements will also wind up being “formulaic, self-centered, and bland,” you say. That’s how they wind up, but it’s not supposed to be how they start!
Using AI to write separates the writing from the thinking, separates the words from the human mind that chose them, transforms communication into math. Communication isn’t math.
“Communication is the litmus test of management’s decision-making,” PR legend Jack Felton explained 30 years ago. If a decision is easy to explain to various stakeholders, it’s likely a good decision; if it’s torture to get across, it’s likely a bad one—or at least, there’s something important wrong with it. At a crucial moment during the Cuban Missile Crisis, JFK’s speechwriter Ted Sorensen drafted a speech for Kennedy to give the nation. He wanted to include a particular paragraph that would have publicly offered an ill-advised trade deal to the USSR. But he wound up scrapping the paragraph, and thus the idea. Why? “It just wouldn’t write,” Sorensen said later.
I realize that layoffs don’t get called off because the communication staffer struggled with the fourth paragraph of the announcement. But tortured paragraphs—or exasperation expressed by their writers—do occasionally cause changes in the timing, scope, nature or tone of the campaign.
I’m permanently instructed by a tale told to me two decades ago by Terry Edmonds, who was the first Black chief speechwriter in a White House—Clinton’s. Needing to deliver a speech announcing the signing of the Welfare Reform Act, Clinton tapped Edmonds, who had firsthand experience with welfare as a child growing up in Baltimore. Clinton thought Edmonds could handle the subject sensitively.
But Edmonds disagreed with the policy—along with a number of other prominent African Americans in the administration, one of whom resigned in protest of the bill.
After much agonizing, and serious consideration of recusing himself from the job, Edmonds decided to write the speech, because he figured he’d rather have a say in the way Clinton explained the bill—making sure the speech discussed methods “for helping people get off welfare in a way that did not destroy their lives” and stressed that Clinton was “not abandoning his commitment” to help people up from poverty.
Now: Was Edmonds’ speech better for the moral agony that went into it and can be heard throughout it as clearly as children’s crying? Heard in the words, heard in the structure, heard in the teeth-gritted tone. Or should Edmonds have saved himself and all his colleagues a lot of unnecessary grief and just asked a machine:
“Write a heartfelt speech from a Democratic president announcing the end of welfare as we know it.”
Back to that AI-sanguine retired speechwriter I was emailing with. When I confronted him with arguments like the above, he claimed that as a writer, he’s “not so crass” as to pass off artificial intelligence as his own.
“However,” he added, “if I was still in the corporate world, under a tight deadline, would I have the strength to resist?”
Well, that, my friend, is a different question altogether.
Postscript: Another perspective, from star speechwriter Justine Adelizzi, responding to my piece:
David, you know how much I love your passionate defense of the art and poetry of what we do. But here’s another way to think about it: ChatGPT essentially gives every professional writer an intern–one who might occasionally surprise with an insightful turn of phrase, but who will generally be supplying a rough attempt that still requires considerable finesse from the person in charge.
Will there be hacks who use this to cut corners and copy-paste AI verbiage verbatim? Of course—but those hacks already exist, and AI probably won’t make them much better or worse. But it’s a potential godsend for communications people who have real tradeoffs to juggle—a solution when you can’t get the resources you need, or when you need to give more attention to the Big Thing at the same time a dozen little things are due.
Your own survey results tell you that our colleagues are overwhelmed, burnt out, and being asked to do too much with too little. Can you really begrudge them an intern? …
The professional writing community can’t just walk away from helping to figure it out because we think we’re better than it. The tool is here. There’s no putting the genie back in the bottle. If anything, we should be diving in and helping shape what ethical usage looks like—now, in its infancy. This is absolutely something PSA should think about leading on, so that our expertise can help shape what comes next.
Post-postscript: I told Justine I wasn’t sure about “leading” on the issue … but I did agree that the PSA and the Executive Communication Council will be obliged to deal with it. First, by finding out what our speechwriting and exec comms colleagues think about AI writing, and whether they’re using it.
We’ve had our say. We’d love for you to use the comments here to have yours!