Jim Reische, special advisor for executive communications to the president of Williams College and co-founder of the Higher Education Leadership Communication Council, writes:
The Professional Speechwriters Association recently published a new white paper, “AI Will Lead to a Golden Age of Speechwriting. Will You Get There With Us?” by Brent Kerrigan.
Will it? Will we?
Brent’s topic is about as timely as it gets, his qualifications are unimpeachable, and his writing is lively and entertaining. He has applied long and varied experience to exploring an important professional topic in detail. However, I have some concerns about the paper’s starting point.
Since the dawn of the Industrial Revolution, and maybe since the construction of the pyramids, bosses have been chasing the dream of technologically enabled increases in efficiency and productivity. These they sell to workers (admittedly, the management philosophy was a bit different at the pyramids).
In practice, these technology-driven productivity increases eventually run up against the hard limits of human endurance. You can only wring so much water out of a rag. What the cannier of the futurists and CEOs figured out was that the big gains would henceforth no longer be in efficiency but in quality. We don’t need to squeeze more water out of the rag. We just need to make sure we get the best water … the most strategic water. Hence all the rosy talk about how AI will help employees achieve more strategic value per unit of work.
At this year’s PSA World Conference, one of our keynote speakers, who writes extensively about the potential value of AI in our profession, described using AI to craft the message she delivered to several employees whom she was terminating. She justified her choice by claiming that it freed her up to do “high-level” and “strategic” thinking that was more worthy of her abilities. All of us in the room, she said, could use AI to free our minds for better things, too.
Cui bono? as the jurists say.
And if the carrot of strategic value isn’t motivating enough on its own, there’s the stick of obsolescence. In his conclusion, Brent writes: “The craft thrives. The speechwriters who articulate their amplified value will too. The ones who don’t? There’s always someone with a ChatGPT account and a budget to cut.”
Each new phase in the long, often mind-numbing history of business innovation has been accompanied by similar warnings (or threats, depending on whom they’re coming from). Consider IBM’s 1971 ad that introduced the world to “word processing machines” (another grotesque turn of phrase), which asked readers, “Are you unconsciously telling your boss you can’t handle a bigger job?”
Heaven forbid.
Many people have no choice in the matter of whether to employ AI at work. For them, Brent’s white paper will be a good guide to using it effectively and, on his terms, ethically. But I wish I saw more members of our profession challenging those terms, and especially the insidious logic of efficiency, productivity, and, now, strategy. It’s the same logic that so many speechwriters, in other settings, complain is eliminating room for craft and reducing us all to mere typists. Brent and our PSA keynote speaker, among so many others, are telling us that AI can help us recapture some of that room for artistry, when in fact what it will give us is not free time, but the “opportunity” to handle ever bigger jobs.
That’s the magic of the progress mindset: it fashions a solution out of your problem and then markets it right back to you.
Last fall, President Maud S. Mandel, whom I write for at Williams College, and I put together a convocation speech called “The Forms that Thinking Takes.” Midway through her remarks, Maud asked the audience of students and faculty:
Now that AI and related technologies can produce plausible answers to many questions, faster than we can, who needs learning, anymore? If we can just ask Claude, then isn’t all that talking and studying and writing just inefficiency?
Her answer—the answer she and I arrived at together—was:
That messy, inefficient, stay-up-all-night stuff is where most of the value is. In the hard thinking. The debates. The doubts you learn to overcome. The mistakes you have to correct and get beyond. The false starts and eventual breakthroughs. And, most of all, the connections with each other.
Good writing depends on good thinking. And good thinking comes from doing work that the AI advocates claim is unstrategic: unworthy of our talents and of our limited time, which is precious to them and to us in different measure. Work that, in the doing of it, also produces the valuable and sometimes enjoyable byproduct of natural human relations. Doing this supposed scut-work often leads to a conversation with a co-worker who has information we need; a librarian or archivist who can help us with our research; a barista or bus driver on the way to the office. None of that transpires when we outsource the tasks to AI. Once that occurs, it’s just a matter of sitting in our lonely offices crafting prompts, then helping the machines to connect with other machines and present us with their conclusions.
This use of AI is not only lazy and dehumanizing, it’s extractive. It is, at its root, the process of getting someone (or, in this case, something) else to do our work for us. The illusion of costlessness, which is essential to the sales pitch, actually masks a set of messy and increasingly plain moral problems: about the enormous differences in wealth and privilege between those who create and sell these technologies and the rest of us who will have to accommodate them in our lives; about the fact that our interactions with agentic technologies are exploitative by design; and about AI’s ravenous consumption of energy, rare minerals and clean water, to name a few.
Nor is an extractive approach healthy for the people doing the extraction. Feudal lieges, agrarian slaveowners, industrial barons, tech bros … they eventually grow lazy, dependent and smug, as history shows us. AI seems to promise that we can all be extractors. But it’s a suckers’ game. Just watch what happens once the electricity and clean water run out. None of us are getting a seat on Elon’s SpaceX flight to Mars, or a spot in Peter Thiel’s bunker.
If you believe in a business culture of more, faster, better—or if you have no way to resist such an ideology—then adopting AI is an apt way to aim for short-term success (and survival). And if you’re going to use AI, then Brent Kerrigan’s white paper is a well-crafted and conscientious guide that you should turn to. It’s thoughtful, applicable and well-structured, as far as it goes. The problem, in my opinion, is that it doesn’t go nearly far enough.
Humanity has had better ways of writing, thinking and living available to us for thousands of years. Why should we favor artificial forms of intelligence over the real thing?
***
PSA Executive Director David Murray responds:
Jim Reische is one of the PSA’s closest advisors—and mine. So I thought I’d better respond, as the executive director of the PSA, the commissioner of Brent Kerrigan’s white paper and the convener of the popular workshop, AI for Speechwriting and Executive Communication.
I’m a tech foot-dragger going way back—back to the first years of my career, when I had just learned how to write kickers, heads, subheads, leads and nut graphs to drag readers kicking and screaming into the articles I was writing. And suddenly all these propeller heads were telling me I should put lots of “hyperlinks” into my articles, to give readers portals through which to leave the article, to explore other areas of the endlessly growing World Wide Web.
Ummm, fuck that!
I resisted the Internet, I resisted blogs (see “Blog Wonks Need Chill Pill,” Murray, circa 2004), I resisted social media.
And the only thing I hate more than AI itself is having to talk about it. It’s been almost three years now since I wrote in Fortune, “Once the ‘Intellectual Blood Banks’ of the Rich and Powerful, Can Speechwriters Be Replaced with ChatGPT?” Reread now, that piece sounds half like Jim and half like Brent.
“For a real communicator, having a machine write a first draft is not a shortcut, it’s a short circuit,” I wrote, like Jim. “And a CEO who believes that an important part of the job is to communicate genuine ideas and feelings to other human beings will intuitively understand the essential involvement of a human soul in that process. Rather than just adding a ‘personal touch’ to an AI composition, human writers are an essential part of the DNA of the message.”
But then, in the same piece, I started sounding more like Brent, a workaday speechwriter churning out stuff to be said at conferences: “But really: How much of the volume of corporate communication is a sincere attempt to communicate strategies, build culture and create a human connection between an organization’s leaders and its stakeholders? And how much of it is just filling the vacuum with corporate noise …? I’ve wondered that for a long time. And it looks like we’re all about to find out.”
The other night I had dinner with an exec comms chief at a big important organization whose CEO uses AI to write a lot of his own shit, and then leans on my guy to make it better. My guy is encouraging his direct reports to experiment with AI so they get conversant in how it works—because the CEO sees this as the future, as so many CEOs do, whether we like it or not.
I objected to that, too, remembering how a PR trade newsletter company I worked for, as the Internet came in, screamed in brochures for communicators’ conferences, “Don’t Be Left Behind!” Lotta people came to those conferences. Lotta people didn’t. I don’t remember anybody getting “left behind” who didn’t want to be left behind. (Sometimes, I confess, I want to be left behind.)
The Internet got easier and more intuitive to use without knowing what TCP/IP meant, and everybody settled in and eventually figured out how to use it—or it figured out how to use us.
I think the same thing will happen with AI. I also don’t think there’s a hell of a lot you and I can do to speed that process, or slow it. I certainly don’t expect to see this headline anytime soon: “‘I Prefer Not To’: Global AI Boom Stymied by the Reticence of Several Hundred Professional Speechwriters.”
Look, the Professional Speechwriters Association waited almost three years after ChatGPT caused all writers to shit our jeans before it started offering in-depth AI webinars for our customers. We waited until our customers started telling us they needed help, mostly because their bosses were demanding, in various ways, that they learn how to use this stuff. And we waited until we found the most sensible, gentle, ethical, good-humored, Canadian, longtime UN speechwriter—a real speechwriter’s speechwriter—to teach it.
And boy, it’s hard to teach, by the way. And we’re still working on it, still thinking about it, still trying to make sense of it. And by “we,” I mean Brent Kerrigan.
Thank you, Brent, for your hard work in figuring out how to teach a still-new technology to an ambivalent audience. And thank you, Jim, for questioning the entire premise. To the extent that all of us, Brent included, discover that AI is, as an Onion headline once described another technology, “an inelegant solution to a problem that doesn’t exist,” we’ll all be happy. Oral rhetoric and speechwriting have been around for thousands of years, and I don’t believe they’re under fundamental threat now.
And if it is, I doubt the PSA is going to save it—with white papers, rebuttals, or answers to rebuttals.
Do you?
***
And a last word (for now) from Brent Kerrigan:
Jim, thanks for the thoughtful reply. If I was slow to respond, it’s because I was buried in speech assignments—which, in a way, entirely proves your point.
I’ve had my say and stand by it, but let me clarify.
I agree with you: technology is rarely built for workers, and AI certainly wasn’t created with the sweating speechwriting masses in mind. I saw this clearly during my tour of duty at UN Climate Change, where talk of a “just transition” often masked a simpler question—for whose benefit? That answer was supplied just a few years later as tech and finance leaders kneed their way to Washington, shedding climate pledges and morality along the way.
But after two decades in executive communications, I’ve also watched as leadership teams shredded comms units and budgets like suited mind-flayers—telling staff to “do more with less,” combining four writers into one, and revealing how little they understand the value of the work—until something goes wrong.
I don’t see AI as a savior; I see it as a leveling force. Used deliberately and ethically, it allows me to do more with less on my terms. And while I agree that there is beauty in the process of obtaining information, that beauty is damn hard to spot when it’s 1 a.m., you’ve got a dozen speeches to finish by Friday, the dog ate one of the kids, and the executive assistant kindly texts to let you know that they forgot a speech due for the next morning (damn you, Sweet Corn Sal!).
That’s the reality speechwriters I know face on a regular basis. It’s also the part that nonsense like The West Wing never got right about speechwriting: it’s not about staring into the clouds waiting for inspiration to strike … it’s a war with inspiration itself, supported by blind generals who (hold on … ChatGPT is down … can’t finish the metaphor). Point being: I welcome anything that reduces the time I spend pecking through a 90-page document in the middle of the night and allows me to focus on writing the actual speech. I’ll think about the beauty of the process in the morning.
I’m also wary of how our increasingly fraught political atmosphere shapes the AI debate—where one must be either fully for or fully against. As Turing might have asked: why so binary? I approach AI the same way I approach politics—suspicious of all, trustful of none, and always looking for where the advantage lies for the working speechwriter.
In the end, it’s a personal choice.
Some like to use an abacus to do their taxes … others prefer a calculator.
***
Reader, this is an open conversation that requires, for its immediate usefulness and ongoing relevance, you—your opinion, your daily experience, your imagination. Share it—now, and always—with PSA Executive Director David Murray: David.Murray@prorhetoric.com.

