Time to Calm Down About AI and Politics
It turns out generative AI isn't going to be the Meetup or Facebook or Twitter of this election cycle.
Hello to all my new subscribers! If you’re here because you read the tome I put out last week about how race, class, and identity were playing out in my home congressional district, as incumbent Rep. Jamaal Bowman and challenger George Latimer clash in the wake of the Israel-Palestine rift among Democrats, be forewarned: The Connector is my forum for reporting and analyzing a wide range of topics that all relate to politics, movements, organizing and tech. If there’s a single through-line, it’s the fight to preserve and strengthen our democracy. But since I am who I am (a white Jewish progressive man who’s been around journalism, politics and organizing for a couple of decades) and live where I live (the burbs just north of Yonkers and New York City), all those personal and local angles will keep coming up. Anyway, welcome aboard and I hope you stick around for each week’s edition!
A week ago, The New York Times front-paged a story (gift link) about how artificial intelligence hasn’t yet “upended” politics. One of my favorite subscribers, Dmitri Mehlhorn, a top advisor to billionaire Reid Hoffman, who has put millions over the years into tech projects meant to help Democrats, told the Times, “This is the dog that didn’t bark.” He added, “We haven’t found a cool thing that uses generative A.I. to invest in to actually win elections this year.”
Let’s start with the first oddity in this story’s framing. The notion that a new technology can drastically alter the fundamentals of electoral politics in just one election cycle should have been put to bed years ago, after former Obama communications director Dan Pfeiffer wrote his unfortunate 2015 article for Wired announcing, “If 2004 was about Meetup, 2008 was about Facebook, and 2012 was about Twitter, 2016 is going to be about Meerkat (or something just like it).” Do you remember Meerkat? It was a live-streaming tool that momentarily caught some buzz at South by Southwest that year. Live-streaming definitely did not alter the fundamentals of the 2016 election: a crass celebrity demagogue with some help from disinformation farms overseas did.
It’s somewhat understandable why people might have expected AI to radically change the political process. Remember how a year ago, Tristan Harris and Aza Raskin, the very mediagenic co-founders of the Center for Humane Technology, were running around the talk circuit warning that “2024 will be the last human election” because generative AI appeared to have the ability to create persuasive narratives? I suppose one of the great things about America is that people forget more than they remember. But just for entertainment’s sake, let’s remember that a year ago, much of the tech-influencer class seriously thought that our number one problem was AI causing human extinction, because a small percentage of a skewed sample of AI researchers told a survey that this was their worry. The hysteria was so intense it earned Harris an audience with President Biden. (If you want the full trip down memory lane, here’s the post I wrote about Harris and Raskin’s spiel last spring.)
Now it’s turning out that the first and best use cases for AI in politics are a lot more mundane. As Higher Ground Labs details in the latest edition of their indispensable annual Political Tech Landscape Report, which came out last week and which is focused entirely on “the emerging use cases, needs, gaps, and opportunities presented by the introduction of generative AI in politics,” we’re still in the early days. But so far, the main way these tools are being used in politics is to help teams “run smoother, faster and more efficiently.” Is this actually, as Higher Ground Labs puts it in their introduction, “a generational opportunity for Democrats to get ahead”? I don’t think so, but it’s still worth a closer look at how this new technology is starting to be used.
Per the HGL report and some of the other sources cited in its appendix, here are some of the things that generative AI is good for: drafting fundraising emails for different audiences, drafting thank-you notes, creating drafts of social media posts for different audiences, message testing, helping with media monitoring, image generation, data visualization, video generation, creating voice-overs, editing scripts to fit time, cross-referencing data, meeting transcription and summaries, analyzing raw notes (including voice memos) to create summaries, summarizing legislation, drafting op-eds, generating ideas for slogans, and helping people improve their letters to legislators or public comments.
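Just to make concrete how mundane the first item on that list is, here’s a minimal sketch of drafting fundraising emails tailored to different audiences. This is not anything HGL or a real campaign describes; it assumes the OpenAI Python client and an API key in the environment, and the model name, audience labels, and prompt wording are all illustrative placeholders.

```python
# Minimal sketch: audience-tailored fundraising email drafts.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment;
# the model name, audiences, and prompt text below are illustrative, not real campaign material.
from openai import OpenAI

client = OpenAI()

AUDIENCES = [
    "first-time small-dollar donors",
    "lapsed 2020 donors",
    "monthly sustainers",
]

def draft_email(audience: str, ask: str) -> str:
    """Ask the model for one short fundraising draft aimed at one audience."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[
            {"role": "system", "content": "You draft concise fundraising emails for a congressional campaign."},
            {"role": "user", "content": f"Write a 120-word email to {audience} asking them to {ask}."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for audience in AUDIENCES:
        print(f"--- {audience} ---")
        print(draft_email(audience, "chip in $10 before the end-of-quarter deadline"))
```

A human still has to read, fact-check and rewrite every one of those drafts, which is the whole point: the tool speeds up the boilerplate; it doesn’t supply the judgment.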
And here are some of the things that it ~might~ be good for: drafting campaign strategy plans, opposition research, fact-checking, discerning patterns and drawing insights from raw political information, vetting staff, engaging voters with chatbots, and enabling personalized and ongoing interactions with individual voters. Anyone who thinks they can rely on generative AI to do all the work these tasks require should join the lawyers who got caught using AI to find case precedents for their legal briefs.
So far, the most worrisome or creepy use cases have yet to gain traction. I’m thinking, for example, of using AI to simulate the voices of candidates or influencers to either engage or manipulate people. A campaign run by TheShotline.org, which asked activists to send members of Congress AI-generated voice memos from gun violence victims, using their actual voices (with their families’ consent), didn’t exactly move the needle. Deceptive fakes of politicians’ voices have surfaced, and in one case the perpetrator is being prosecuted and has been fined $6 million by the FCC, but we’re not yet seeing an onslaught of such dirty tricks. (As it is, campaigns and independent expenditure groups are quite capable of pumping out their own standard forms of disinformation, if the flood of crap currently filling my mailbox from the Bowman and Latimer campaigns and their allies is any indication.) And politicians who are already inclined to lie and blame “fake news” for anything they don’t like are definitely benefiting from a “liar’s dividend,” now that they can tap the public’s innate distrust of the media by claiming that something true was actually faked. So instead of a fundamental shift in how politics works, or the creation of a whole new playing field, generative AI will mostly just speed up all the existing trends we’re already used to: more efficient fundraising, more targeted advertising, more fragmentation of the audience and more confusion about what is true.
If anything, the biggest impact generative AI is having on politics right now is more quantitative than qualitative. As it makes it easier and cheaper for campaigns to do a bunch of mundane things more efficiently, we are likely to get more of those things. More annoying emails and texts, more personalized thank-you notes, more meeting memos, more more more. Which will put an even bigger premium on the sorts of things that can genuinely affect the outcomes of campaigns: really persuasive messages, really creative tactics, really good candidates, and really effective movements shaping the context that campaigns operate in. And those aren’t things that AI magic can make.
For a reminder of what a really persuasive message can do for a candidate campaign, watch this two-minute video. You’ve probably seen it but it’s worth watching again. It’s six years old but holds up well.
And for an example of what really creative tactics can do to upend and win an issue campaign, watch this three-minute video. It’s twelve years old and impressively good. (h/t Anat Shenker-Osorio.)
—Related: I’m intrigued by Sourcebase.ai, an AI platform designed for journalists, researchers, and professionals who need to make sense of massive amounts of source material. It’s currently got a big collection of material related to the January 6 insurrection, and another one with all the documents related to Trump’s hush money trial. (h/t Ron Suskind.)
Things That Caught My Eye
—Joan Donovan in the Harvard Crimson on how corporate money (specifically from Meta) led to the suppression of her research on Facebook while she was at Harvard’s Shorenstein Center on Media, Politics and Public Policy.
—Yael Eisenstat, Justin Hendrix and Daniel Kreiss on “What online platforms can do to ensure they do not contribute to election-related violence.”
—Jeff Hirsh, “Rally calls for Israeli-Palestinian unity,” in Evanston Now, reporting on a branch of Standing Together for Peace in Illinois that is another example of people avoiding the polarizing binary of the current Israel-Palestine conflict. (h/t Anna Galland.)
—DSA’s Red Star Caucus is beyond parody: “We do not condemn Hamas and neither should you,” they declare!
—Marc Caputo in the Bulwark, “‘October 7 was a turning point’: Trump’s pro-Israel fundraising accelerates.” One donor who runs a hedge fund says that his “Never Again” pledge trumps his “Never Trump” pledge. This made me throw up in my mouth a little.
—Gil Duran in the New Republic profiles Balaji Srinivasan, “The tech baron seeking to purge San Francisco of ‘Blues’.” The thought leader most loved by the likes of VC Marc Andreessen says, “What I’m really calling for is something like tech Zionism,” but it sounds more like digital fascism the more you listen. But hey, if you want to go with “tech Zionism,” see how popular that might be!
End Times
The one poll you need to see as we wait for the Trump jury to reach a verdict: If he’s acquitted, it’ll have no effect on his level of support, but if he’s found guilty, his support among Republicans drops 5%. Among independents, it drops 16%.
>Related: I’m intrigued by Sourcebase.ai, an AI platform designed for journalists, researchers, and professionals who need to make sense of massive amounts of source material.
Oh, you doogie-woogies. Aren't you supposed to be the ones making sense of things? If you're just repeating the sense an AI made of it, with a little bit of personal noodling at the start and the end, what do we need you for?
This has already sort of happened, though - it's pretty easy to tell when an article is just things the author saw on twitter or reddit with nothing of substance added. So no, AI isn't going to make a huge difference - lazy journalists have been able to rehash the internet instead of thinking for twenty years already...and boy have they.
You know who'll come out on top of all this? The few people who stay original, or at least rehash things other people haven't seen yet. It's them that the AI & thereafter everyone else will be rehashing.
It's not just "ideas" we're talking about. They who, like Shakespeare, invent words, lead -- it is by their clever fingers that our stage is painted bright and gaily furnish'd.