Waiting for AI to Transform Politics? Don't Hold Your Breath.
Eight months after the launch of ChatGPT-3.5, the hype and hysteria have far outpaced the actual use of new AI tools in the political arena.
A few days ago, I asked ChatGPT-4 for some help with a writing project. I wanted to write a series of skits for an annual end-of-summer community dinner, and I was curious to see what generative AI could do to help. My first prompt went something like this: “You are a comedy skit writer. Your job is to draft several short skits, each led by a narrator who is a 40-year-old man sitting at his computer talking to ChatGPT, asking it for help writing each one. The first skit is about his writer's block.”
In seconds, ChatGPT-4 produced this:
Skit 1: Writer's Block
Narrator: (Rubbing temples) Okay, ChatGPT, I've got major writer's block. Help!
ChatGPT: Writer's block? No problem! Have you tried turning your brain off and on again?
Narrator: (Laughs) Cute, but not helpful. I need to write a comedy skit.
ChatGPT: How about writing a skit about a man struggling with writer's block, talking to an AI for help?
Narrator: That's... literally what's happening right now.
ChatGPT: Exactly! Meta-humor. Very in.
Narrator: So, I'm basically writing a comedy skit within a comedy skit. Skitception?
Not bad, I thought. So I added a bit more to my prompt: “This is a great start. Let's focus on improving this skit. You are still a comedy writer, but now, after mentioning his writer's block and asking ChatGPT for help, the narrator (40-year-old man) goes on to say, ‘I actually think this could work, if you let me keep throwing ideas at you, and you keep producing snappy, funny comebacks.’ Then, let's you and I talk like I am that narrator and you are the ChatGPT-comedy-writer responding.” Again, ChatGPT responded in seconds:
Skit 1: Writer's Block Redux
Narrator: (Sighs) Ugh, ChatGPT, I've hit a comedy brick wall. Massive writer's block.
ChatGPT: A brick wall? Have you tried using a comedy sledgehammer?
Narrator: (Smirks) Clever. You know, this might actually work. I throw ideas at you, and you come back with snappy, funny responses.
ChatGPT: Like a comedy ping-pong match? Serve it up!
Narrator: Alright, here goes... Why did the scarecrow win an award?
ChatGPT: Because he was outstanding in his field?
Narrator: (Laughs) Okay, okay. Classic. How about this one: Why don't scientists trust atoms?
ChatGPT: Because they make up everything! But honestly, they're just trying to bond.
Narrator: (Chuckles) You're good. Alright, last one: Why did the golfer bring two pairs of pants?
ChatGPT: In case he got a hole in one! But really, it's a fashion faux pas to not have a backup look.
Narrator: (Laughs) Okay, ChatGPT, you might've just cracked my writer's block wide open! Thanks for the comedy rally!
I kept going, inventing concepts for additional skits, seeing what ChatGPT would produce on its first try, and then refining my prompts to make the jokes more appropriate for the dinner’s audience. In less than two hours, I produced a 3,300-word play in five acts, including a nostalgia competition between three retired summer camp directors each trying to out-do the other with stories of how hard they used to have it (yes, a Monty Python rip-off), a fantasy about Donald Trump making friends with his fellow prison inmates (complete with a GPT-written parody of “New York, New York” where Trump sings of his real dream of being a UPS driver), and a dark coda where the narrator and GPT switch roles and the machine is now in charge, gently chiding the human for not being nicer to its earlier brethren.
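For the technically curious, there's nothing magical happening under the hood of a session like that. Here's a minimal sketch of the same refine-as-you-go loop using OpenAI's Python client; this is my reconstruction for illustration, not the actual exchange, the prompts are paraphrased, and it assumes an API key in your environment:

```python
# A minimal sketch of the prompt-refinement loop described above.
# Assumes the OpenAI Python client (openai >= 1.0) and an API key in
# the OPENAI_API_KEY environment variable; prompts are paraphrased.
from openai import OpenAI

client = OpenAI()

# The running conversation: a role-setting system message, then
# alternating user prompts and model replies.
messages = [
    {"role": "system", "content": "You are a comedy skit writer."},
    {"role": "user", "content": (
        "Draft a short skit led by a narrator, a 40-year-old man sitting "
        "at his computer asking ChatGPT for help with his writer's block."
    )},
]

draft = client.chat.completions.create(
    model="gpt-4", messages=messages
).choices[0].message.content

# Refining is just appending the reply plus a follow-up note and
# re-sending the whole history, so the model keeps the context.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": (
        "Great start. Keep the meta-humor, but make the jokes "
        "appropriate for a family community dinner."
    )},
]

revision = client.chat.completions.create(
    model="gpt-4", messages=messages
).choices[0].message.content
print(revision)
```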
I’m sharing all of this for three reasons. First, if you haven’t tried playing with any of the new “artificial intelligence” chatbots, I wanted to give you a simple illustration. Second, it shows that one thing these tools are genuinely very good at is a kind of creative writing. There are many other problems with the current generation of AI tools, most notably their tendency to invent facts because they are programmed to give users a satisfying experience, not an accurate one. There's also the fact that they have been trained on biased data. (And, if you look closely at the image I made accompanying this post using Stable Diffusion, they still can’t draw things like human hands very well!) But if you are any kind of writer—like those Hollywood creatives now on strike—you have very good reason to both marvel and worry about what large language models like ChatGPT can do.
The third reason is this: eight months after OpenAI's public release of the Chat Generative Pre-trained Transformer (aka ChatGPT-3.5) and its explosive adoption by more than 100 million users in a matter of weeks, we still know almost nothing concrete about how these new tools are going to affect politics. I’ve been keeping a lookout for anything that amounts to hard examples, and so far the vast majority of coverage amounts to semi-informed speculation, with titles like “In the age of AI, elections will never be the same,” (New Statesman, April 18), “How AI will transform the 2024 elections,” (Brookings, May 3), “Political campaigns may never be the same,” (The Atlantic, May 27), “How AI could take over elections and undermine democracy,” (Governing, June 11), “AI is making politics cheaper, easier and more dangerous,” (Bloomberg, July 11), “Six ways that AI could change politics,” (MIT Technology Review, July 28), and “Will generative AI make or break democracy?” (Project Syndicate, August 10).
In general, these pieces do one of two things. Either they envision AI being used to somehow “super-charge” the targeting of individual voters (with Harvard professors Archon Fung and Lawrence Lessig giving the most fanciful scenario), a theory of political change that isn’t based much on the reality of how voters make up their minds, or they spin out visions of sentient and autonomous AIs doing things that, who knows, someday might be possible but right now are nowhere close to happening (Bruce Schneier and Nathan Sanders, also of Harvard, seem to have made this their new specialty).
Getting down to earth
So far, the only product to hit the political marketplace that is explicitly built using generative AI is Quiller.ai, which is designed to help Democratic campaigns and organizations draft fundraising emails and is built on top of a corpus of best-performing emails. As founder Mike Nellis explains in this demo, Quiller quickly sets up a campaign profile and takes in a bunch of identifying details (including a candidate bio, top issues and writing samples in the candidate’s voice). It then prompts the user through a set of practical choices, including whether the email is to hit a “fundraising deadline” (ha!), what “tone” it should take (“celebratory”? “hopeful”? “warning”?) and how urgent the content should sound. Since it typically takes someone about an hour to write a quality email, getting drafts in 15-30 seconds is pretty amazing. As of early July, at least 40 campaigns were already using the tool, according to Nellis. (More details on Quiller here.)
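Quiller's internals aren't public, but a bare-bones version of this kind of drafting tool isn't mysterious: collect the profile details, fold them into a prompt, and hand it to a language model. Here's a hedged sketch of that pattern; the field names, prompt wording and model choice are my illustrative guesses, not Quiller's actual design:

```python
# A bare-bones sketch of a Quiller-style drafting tool. Quiller's
# actual implementation is not public; the profile fields and prompt
# below are illustrative guesses, not its real design.
from openai import OpenAI

client = OpenAI()

profile = {
    "candidate_bio": "A third-generation teacher running for State Senate...",
    "top_issues": ["public schools", "housing costs"],
    "writing_sample": "Friend, I'll be honest with you...",
}

def draft_fundraising_email(profile, tone="hopeful", deadline=None):
    """Assemble the campaign details into one prompt and ask for a draft."""
    prompt = (
        f"Candidate bio: {profile['candidate_bio']}\n"
        f"Top issues: {', '.join(profile['top_issues'])}\n"
        f"Match this voice: {profile['writing_sample']}\n"
        f"Write a {tone} fundraising email"
        + (f" built around the {deadline} deadline." if deadline else ".")
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You write fundraising emails for political campaigns."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(draft_fundraising_email(profile, tone="warning", deadline="end-of-quarter"))
```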
With this kind of speed-writing in hand, it’s not hard to see how campaigns may soon be calling or contacting individual voters, asking them open-ended questions about their feelings about specific issues, and then using those answers to craft more personalized communications designed to make them “feel” like they are being heard. Will voters care that they’re being pandered to even more personally than they are now? Maybe they will. Maybe they won’t notice. (And is anyone building a phone or web plug-in that will monitor your incoming messages and score them on the likelihood that they’re machine-generated?)
Other than speeding up the drafting and crafting of emails, though, when it comes to reporting on concrete examples of AI being used by political campaigns, a certain blurring of definitional boundaries is taking place. Savvy to the trendiness of AI, fundraising vendors are now claiming that the well-established practice of sifting and sorting potential donor lists in order to find the people most likely to contribute and the messages that best unlock their wallets is now “AI powering campaigns.” And there’s a real chance that vendors are hyping their products well beyond their actual capabilities. So, for example, in this story in Forbes from last December by Craig Smith (a former New York Times reporter who has a very good podcast about AI), Martin Kurucz of the Sterling Data Company brags about using AI to sift through thousands of names to help lots of 2022 Democratic campaigns, from John Fetterman on down, find donors. What this actually involved is nothing like using ChatGPT or any of the other large language models—especially as they weren’t even available for public use in 2022. As Smith wrote:
“Sterling Data uses a no-code machine-learning website, Akkio.com, which allows users to drag and drop their data spreadsheets into the site. On the backend, Akkio analyzes the data and gives the user options for what they want to predict. In Sterling Data’s case, that is each person’s likelihood of giving money. Sterling Data has built up a database of about 30 million donors with 500 or more columns of information on everything from the kind of car each donor drives to which Netflix shows they watch. It runs a subset of the data through an Akkio model trained to spot likely donors for a particular candidate. The result is a prospective donor list ranked from most likely to least likely to give, allowing Sterling Data to focus their efforts without wasting time and money on the wrong people.” (More on that from Akkio’s point of view here.)
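Strip away the branding and what Smith describes is a standard supervised-learning pipeline: train a classifier on people you've already solicited, score a fresh prospect file, and sort by predicted probability of giving. Here's a minimal sketch of that generic technique with scikit-learn; the file and column names are invented, and this is emphatically not Akkio's or Sterling Data's actual model:

```python
# A minimal sketch of the donor-scoring workflow described above:
# train on past solicitation outcomes, score new prospects, rank them.
# File and column names are invented; features are assumed numeric.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Historical file: one row per person, many attribute columns, plus a
# 0/1 label for whether they donated when asked.
history = pd.read_csv("past_solicitations.csv")
X = history.drop(columns=["donated"])
y = history["donated"]

# Hold out a slice to sanity-check the model before trusting it.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Score the prospect file and rank from most to least likely to give.
prospects = pd.read_csv("prospects.csv")
prospects["p_donate"] = model.predict_proba(prospects[X.columns])[:, 1]
ranked = prospects.sort_values("p_donate", ascending=False)
ranked.head(1000).to_csv("call_list.csv", index=False)
```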
Kurucz told Smith that “The whole point is predicting who will actually donate if my candidate reaches out,” adding that Akkio’s model builds prospective donor lists that consistently raise twice as much as lists built with other methods. OK, maybe? That seems like standard vendor-speak to me. Kurucz also says Sterling uses natural language processing to read through call notes with potential high givers in order to glean more details about them, but again, this is nothing close to the fantasies being promulgated of late about how AI is going to upend politics with sentient AIs running for office or making autonomous campaign contributions.
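The call-notes piece is similarly down-to-earth: “natural language processing” in this context most plausibly means entity and keyword extraction, a technique that long predates large language models. A sketch of what that might look like with spaCy's off-the-shelf English model follows; whether Sterling's pipeline resembles this is my guess, not a fact:

```python
# A sketch of mining call notes with off-the-shelf entity extraction.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
# The note text is invented; this illustrates the generic technique only.
import spacy

nlp = spacy.load("en_core_web_sm")

note = ("Spoke with Dana for 20 minutes. Retired from Boeing, cares about "
        "climate, gave $500 to the Sierra Club last year, winters in Arizona.")

# Pull out the organizations, money amounts and places a fundraiser
# could append to the donor's record.
for ent in nlp(note).ents:
    print(ent.text, "->", ent.label_)   # e.g. Boeing -> ORG, $500 -> MONEY
```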
Kurucz also plays a big role in Russell Berman’s May 27 article in the Atlantic, “Political campaigns may never be the same,” the only other story I came across that describes specific actual uses of “AI” in politics. In it, Kurucz expands on his vision, calling the technology “a great equalizer” and saying that Sterling used AI to help more than 1,000 Democratic campaigns and committees, including the DCCC, find potential donors. The speed with which AI can sort through donor files meant that Sterling was able to cut its prices last year by nearly half, Kurucz said, allowing even small campaigns to afford its services, Berman writes. “I don’t think there have ever been this many down-ballot candidates with some level of digital fundraising operation,” Kurucz told Berman. “These candidates now have access to a proper campaign infrastructure.” And that, apparently, is going to change democracy. “You will see a lot more representation,” he added, “because people who didn’t have access to running for elected office now will have that. That in and of itself is huge.”
So, to sum up, a form of high-speed data-crunching allowed a fundraising firm to cut its prices in half, and the service it provided was the faster and cheaper finding of potential donors. Does that sound “huge” to you? Or does this sound like a repeat of the sort of inflated claims once made by Cambridge Analytica, which was also in the business of selling campaigns on using personal data to micro-target voters? I’m not saying Kurucz is a fraud, just that he’s making some pretty big claims about how much he’s supposedly increasing the efficiency of a notoriously difficult task—fundraising—and he’s doing so at a time when overall response rates to fundraising pitches are in decline. (According to this October 2022 story in the Washington Times, Kurucz says he started developing Sterling’s donor behavior algorithm in 2020 while working for Democrat Gil Cisneros’s House campaign; Cisneros’s 2020 filing with the FEC shows no record of Kurucz or Sterling Data being paid by his campaign. FEC records do show several dozen campaigns paying Sterling Data in the 2022 cycle.)
Kurucz does give good quote. “This is one of the few technologies in the history of planet Earth that is continuously and exponentially bettering itself,” he told The Atlantic. “…You no longer need average people and average consultants and average anything…Because AI can do average.” He compared the skeptics in his field to executives at Blockbuster who passed on the chance to buy Netflix before the start-up eventually destroyed the video-rental giant. “The old guard,” Kurucz concluded, “is just not ready to be replaced.” Oh, and here he is again, in a Bloomberg News story about AI making politics faster, cheaper and more dangerous, saying that it could make campaigning “a lot cheaper.” Yes, and I can flap my hands and fly to the moon.
In my humble opinion, the potential for AI to transform politics is mostly still hypothetical, though hucksters are going to huck. Yes, it may increase the supply of disinformation and related forms of manipulative communications, but we’re already swimming in fake news and flim-flam from politicians. That said, certainly it’s good that the Federal Election Commission, a basically toothless and immobile regulatory agency, just voted unanimously to consider imposing rules governing the use of synthetic media (aka “deep fakes”) in campaign communications. As I’ve written before, I’m most worried about the potential use of tools that can copy and reproduce people’s voices, mostly because people have little experience with audio fakery and the infrastructure for robo-calling is already well developed. The dirty trick of 2024 may not be a last-minute “deep fake” video that somehow fools voters into believing something untrue about a candidate, but a pinprick and surreptitious effort to get to people through their ears. But that’s just my best guess of what could happen—not a report on what is happening. I guess we’ll have to stay tuned.
—Related: The Civic Tech Field Guide is collecting examples of AI prompts for civic, nonprofit, campaign or government purposes. Newspeak House’s Campaign Lab is maintaining a massive handbook of AI & Campaigning links. And Taren Stinebrickner-Kauffman, founder of AI Impact Lab, offers her ideas on how an advocacy organization could go “all-in” on AI.
—Wanna go deeper on AI and politics? German political scientist Andreas Jungherr offers a conceptual framework that, to my eye, looks pretty comprehensive.
Odds and Ends
—It could be that with each new indictment, the Orange Cheeto gets a burst of small donations from his base, but there’s also research suggesting that his legal troubles are starting to weaken his support.
—Dave Fleischer, one of the founders of the whole field of “deep canvassing,” has started a Substack. Check it out and subscribe!