Clowns to the Left, Jokers to the Right
While self-styled tech ethicists Tristan Harris and Aza Raskin warn that AI is about to end human agency, a Biden AI advisory panel stacked with industry reps avoids the hard calls.
The big political news of this month is not President Joe Biden’s formal announcement that he is seeking re-election, setting up a very likely rematch with his predecessor. Nor is it Tucker Carlson’s welcome fall from his evening slot on Fox. No, silly reader. The truly big news is that “2024 will be the last human election” because we are in a “double exponential” moment for the development of artificial intelligence, and “whoever has the greater compute power will win” elections going forward.
Why? “Because everything human beings do runs on top of language—our laws, the idea of a nation-state, the fact that we can have nation states, is based on our ability to speak language. Religions, friendships and relationships are based off of language, so what happens when you have, for the very first time, non-humans able to create persuasive narrative—that ends up being like a zero-day vulnerability for the human race.”
But how does that make 2024 the “last human election”? “The difference now [compared to earlier elections] is that not just you're testing some different messages, but the AI is fundamentally writing messages creating synthetic media, a/b testing it across the entire population, creating bots that aren't just like bots posting on Twitter but instead are building long-term relationships over the next six years to solely persuade you in some direction--loneliness becomes the largest national security threat.”
Or, maybe we’re already on the precipice of a collapse in our capacity to use language to make sense of the world, let alone make political decisions: “If I'm the Chinese Communist Party and I want to screw up the US right now, what I do is I just ship a Biden and Trump filter to every single person in your country [who uses TikTok] that gives you a Biden voice or a Trump voice. So now I've turned all of your citizens--like [in the movie] Being John Malkovich--into the sort of most angry Biden, Trump, information angry army, that just talks all day in a cacophony. That would just break your society into incoherence. it has nothing to do with where the data is stored, it has nothing to do with where the algorithm [is based], which videos are being ranked in what way, it has to do with how we are enabling sort of a math confrontation with them.”
So say Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, who have been on a tear in recent weeks giving public talks and private briefings about what they deem “humanity’s second contact moment with AI.” The “first contact,” they argue, was the arrival of social media, which used algorithms (excuse me, the new word is AI) to determine which photo or video to show your nervous system to keep you scrolling. That first contact, they claim, “[broke] humanity with information overload, addiction, doom scrolling, sexualization of kids, shortened attention spans, polarization, fake news and breakdown of democracy.”
OK. Enough quoting Harris and Raskin with a straight face. This is off the wall. Yes, we as a society have challenges, but the kinds of problems Harris (whose bio notes that The Atlantic has called him “the closest thing Silicon Valley has to a conscience” and that Rolling Stone named him one of “25 People Shaping the World”) and Raskin are pointing to existed before social media, and social media is not the primary engine causing those problems to get worse in some places. As technologists, Harris and Raskin continuously center the supposed power of tools and platforms rather than the power of the state or corporations, or of corporate oligarchs capturing the power of the state. A cynic might say that centering and elevating the power of tech algorithms has the benefit of also putting people like themselves at the center of finding a solution (something Harris did explicitly in his early work around the founding of the Center for Humane Technology, when he told a small room of a few hundred Bay Area techies that they personally had the power to solve a problem he compared to climate change). But just because two glib youngish white men stand on a darkened stage in front of stark slides making portentous statements doesn’t make those statements true.
Nor is the most dire-sounding factoid of their entire spiel (that in a 2022 survey of AI researchers, nearly half said there was a one-in-ten risk that future AI systems could lead to human extinction) remotely accurate. “If you're about to get on a plane and 50% of the engineers who make the plane say, Well … there's a 10% chance that everybody goes down, would you get on that plane?” Harris and Raskin have been asking their audiences. In fact, only about 20% of the 738 AI researchers who responded to that survey answered the extinction question at all. It’s bogus to claim that “half” of that group of more than 700 top experts think we have a 10% or greater chance of extinction. But admitting that only a tiny sample of AI researchers are actually that worried isn’t going to get you on the New York Times op-ed page.
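To make the back-of-envelope arithmetic explicit, here is a minimal sketch in Python using only the figures cited above (the 738 respondents, the roughly 20% who answered the extinction question, and Harris and Raskin’s “half”); the numbers are illustrative, not new data.

```python
# Back-of-envelope check of the "half of AI researchers" framing,
# using only the figures cited in the paragraph above.
total_respondents = 738               # researchers who responded to the 2022 survey
answered_share = 0.20                 # roughly 20% answered the extinction question
answered = round(total_respondents * answered_share)   # ~148 people

# Even granting that half of those who answered put the risk at 10% or more:
worried = answered // 2               # ~74 people

share_of_all = worried / total_respondents
print(f"{worried} of {total_respondents} respondents, about {share_of_all:.0%} -- not 'half'.")
```

In other words, even on the most charitable reading, the “10% or greater chance of extinction” answer comes from roughly a tenth of the survey’s respondents, not half.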
This is not to say that some of their warnings are worth pondering. Now that it’s become really easy to sample actual human voices and then turn that data into synthetic speakers, scammers will have a powerful new tool to fool people and verification systems that are based on voice. We need to educate the public as well as security managers to be more cautious. But that doesn’t mean, as Raskin says at one point, that “this is the year that all content-based verification breaks.” A little hyperbole can go a long way, but it also can go too far.
What’s worse, given how much mind-share Harris and Raskin have managed to capture since the launch of the movie The Social Dilemma on Netflix, where it has now been seen by more than 100 million people, is how naïve they are about the politics of actually getting AI under control. The worst part of the “AI Dilemma” talk that they’re giving these days is when they point to the 1983 anti-nuclear movie The Day After. They gush about how that made-for-TV movie, which drew a huge domestic audience of 100 million Americans and was later also aired inside the Soviet Union, supposedly led to a “shared understanding” of the danger of nuclear war and a cooling of global tensions. I suppose that for people who have really guzzled their own Kool-Aid, imagining that a movie could be a huge force for change is understandable.
But it gets worse. What Harris and Raskin really hold up as a model isn’t just The Day After itself, but the live TV panel discussion held afterwards, hosted by ABC anchor Ted Koppel. Without a shred of self-awareness, they laud the “distinguished panel of guests” who were called on to calm the public and reassure people that the kind of general nuclear war shown in the film could never happen. Who were those esteemed gentlemen? Harris and Raskin mention former Secretary of State Henry Kissinger, right-wing publisher William F. Buckley, writer Elie Wiesel, and astronomer Carl Sagan. Two nuclear warmongers, one of whom, Henry the K, explored the concept of “small nuclear war” before he went into government and then, once in government, embraced and condoned genocidal policies in numerous countries; the other a homophobic racist who opined publicly in 1968 in his magazine National Review that “The time to introduce the use of tactical nuclear arms was a long time ago, in a perfectly routine way.” They leave out that the ABC panel also included three additional representatives of the national-security establishment that had built American foreign policy around the doctrine of nuclear superiority: former Defense Secretary Robert McNamara, former national security adviser Brent Scowcroft, and then-Secretary of State George Shultz. (To their credit, many of these officials became nuclear abolitionists, but only much later in life.)
It’s long past time to stop shrugging and saying that these two self-appointed tech ethics leaders just mean well; they ought to know better. Americans didn’t demand a halt to the nuclear arms race because they saw a movie in 1983; they made that demand tangible in the preceding years, when a mass movement demanding a “nuclear freeze” passed resolutions and ballot initiatives in hundreds of locations, culminating in a million people rallying in Central Park in June 1982. Koppel’s panel after the showing of The Day After wasn’t, as Harris and Raskin inanely claim, “an example of a democratic dialogue about what future we want.” It was a whitewash. And frankly, the military-industrial complex is full of the last people we want to turn to for reassurance or guidance about the future of AI and society. From what I understand, the real danger of the current race to build more powerful uses of machine learning and large language models is not their use to spread disinformation or undermine trust, but their actual use in the next generation of military weaponry. And on that topic, Raskin and Harris have, so far, had nothing to say.
Throwing More Bureaucracy at the Problem
I spent this morning online, watching a public meeting of the National AI Advisory Committee (NAIAC), a panel run under the aegis of the Commerce Department, as its members deliberated and then voted to endorse its first-year report to President Biden on how to respond to the AI boom. You can be excused for not knowing about NAIAC, which is tucked into a little corner of the AI.gov homepage, or its report, notice of which is buried at the very bottom of a long, dull description page largely devoted to listing the bios of its 26 serving members.
Nor, I suppose, should you be surprised that what NAIAC produced is tilted heavily towards the preferences of private industry. Of those 26 experts, six come directly from top positions at Google, Microsoft, IBM, NVIDIA, Amazon and Salesforce; six more come from similar backgrounds but now work at smaller start-ups. One is a top lobbyist for the Business Software Alliance and another is an extension of former Google chairman Eric Schmidt’s network of operations surrounding government. Just three NAIAC members represent public interest organizations, plus one from the AFL-CIO’s Technology Institute. The remaining seven are academics.
The report says many of the right things about the potentials and risks of AI and the need for what it repeatedly calls “trustworthy” AI. But, as a few dissenters on the committee bravely pointed out, it notably avoids calling for any kind of specific mandates and mechanisms to achieve that, instead merely waving its hand at the need to do so. It mostly recommends setting up and filling a Viennese dessert table’s worth of committees, chief AI officer positions and risk management protocols. While the White House’s October 2022 Blueprint for an AI Bill of Rights gets mentioned, the NAIAC report steers clear of recommending any steps to actually turn it into actionable law.
Buried deep near the end of the report, there’s a four-member dissent from Janet Haven of the Data & Society Institute, Liz O’Sullivan of the algorithmic risk platform Vera, Amanda Ballantyne of the AFL-CIO, and Frank Pasquale of Brooklyn Law School pointing out many gaps, including a lack of attention to the “pressing issue” of AI’s use in the criminal legal system (this 2019 report from the Partnership on AI details why) and the need for more protections for workers, especially those in low-wage and precarious work who are “increasingly hired, fired, surveilled, and managed by algorithmic systems.”
They also write, “For the NAIAC to meet this critical moment, the Committee should clearly articulate a commitment to a people-first, rights-respecting American AI strategy. The U.S. should lead from a position that prioritizes civil and human rights over corporate concerns. Given the immense concentration of money, data, compute, and talent amassed by AI companies and the overwhelming evidence of societal impacts and harms, this requires more than the positive and important steps undertaken by agencies and through executive orders that we’ve seen over the past year. Congress needs to enact legislation, starting with the most basic comprehensive data privacy protections, to protect citizens and non-citizens alike from the AI harms already identified through a growing field of research. Beyond that, we need to design comprehensive systems of accountable governance that allow values-based, rights-respecting AI to thrive.”
O’Sullivan went further in their remarks to the rest of the committee during today’s hearing, which are worth quoting in full here:
“From the first moment we convened together, I, and several others on this committee, have advocated for a transparent process that invites the public to contribute, just as is called for in the Blueprint for an AI Bill of Rights. That transparency has not materialized, and in its place we find a once-strong report, watered down with changes. I struggle to understand why some of these previously strong recommendations have either been eliminated or reduced in scope, in a moment where it’s never been clearer that immediate action is needed. The report mentions efforts to better include the public in conversations moving forward and I welcome this change, along with improvements on clarity of processes. Several of us on this committee have proposed a working group with a focus on human rights, and we look forward to tackling these big picture ideas with greater public engagement over the next two years.
On the surface, I think we all agree that we must find a way forward to promote AI adoption and regulate its risks. And while I believe we are all coming to this issue in good faith, given the lack of public engagement, and given that so much is missing from the report, I cannot vote to advance this report in its current state. I joined the equitable AI movement more than 4 years ago, and since then I and many others have sought to underscore the sheer urgency of the need for regulation on this space. The last 6 months and the rapid rise of generative AI have brought new attention to this urgency, and yet very little of substance has come to pass.
We cannot afford to sit back and simply throw a little more bureaucracy at the problem. This strategy has a near zero chance of producing meaningful change in the ways these systems are built, incentivized, and foisted upon the public. We need structural changes that include federal privacy law, like the American Data Privacy and Protection Act. We may even need a technology regulator, perhaps one modeled after the FDA or the CFPB. If nothing else, giving regulators enhanced powers for agencies to enforce existing laws could have a real effect. Earlier drafts of this report included more of these actions, but those now lie on the cutting room floor.
And of everything we’ve disagreed about, it’s been most surprising, in fact, to find that the suggestions that best align American AI with our democratic values, such as those laid out in the Blueprint for an AI Bill of Rights, are the most controversial and have faced significant opposition. Even more concerning is that it’s been difficult to triangulate the source of this opposition, which is needed to enable a legitimate, and public exchange of views.
The Blueprint for an AI Bill of Rights is by no means perfect, but it does acknowledge loudly and clearly that the interests of the public should outweigh the interests of a few enormous companies. If America is to lead the world in AI, we must do so from a place closer to democracy than to oligarchy, and I look forward to working with all of you in the next year to make sure that future reports we put forward are built with this in mind.”
Commerce Secretary Gina Raimondo made a brief appearance before the committee near the end of the hearing, declaring that while many people she talks to “are scared to death about these innovations, net net I’m still on the excited side.” She cited her own experience in the health care field, where she is certainly right that some kinds of AI can improve diagnostic tools, health education and upskilling. She did not mention her husband Andy Moffitt’s job as a strategic advisor to PathAI. But she also admitted that “it’s pretty scary what could happen if this thing runs away from itself,” while pre-emptively declaring that “we can’t regulate” in the short term while companies hustle to win the AI race, and that the government could only rely on “voluntary commitments” from them. “You are the AI advisory board for the administration,” she told the group that she had appointed, asking them for timely solutions, having foreclosed the obvious ones. And so it goes.
On a More Practical Note
—Yesterday, Taren Stinebrickner-Kauffman’s AI Impact Lab held its first “AI for Good” webinar, featuring a conversation with Eli Pariser, co-director of New_ Public and author of The Filter Bubble, and Niffer Nan, a startup investor and advisor and formerly head of company strategy and the second product manager at Asana. They focused on how AI tools can improve staff productivity and how they may be helpful in providing services to constituents, taking notes and summarizing meetings, crafting draft emails, or offering advice to volunteers performing specific tasks. Well worth viewing.
—Here's a hot, and maybe dubious, idea: use large language models to simulate human samples. A group of researchers at Brigham Young University tested whether GPT-3 could fool human scorers into thinking they were reading political comments written by actual people, and they argue that the degree of “algorithmic fidelity” it displayed may allow researchers to dispense with the costly process of interviewing actual humans to find out what they think about stuff. Could this replace polling?
—Here's a crowdsourced list of federal legislative proposals pertaining to generative AI, curated by Anna Lenhart of the Institute for Data Democracy and Policy at GWU.
—Jumana Abu-Ghazaleh of Pivot for Humanity points out that with all this talk about AI, there isn’t even an agreed-upon definition.
—Here comes AI imagery in political ads, courtesy of the RNC. I’m pretty sure this is not what Tristan Harris and Aza Raskin were warning about.
Odds and Ends
—Daniel Stid has started a series of posts on the relationship between philanthropy and democracy, beginning with this observation: “Philanthropists consistently overestimate their ability to improve democracy in America in the short term, within the political confines of an electoral cycle, congress, or administration. Conversely, funders underestimate their ability to do so over longer time horizons in the fertile expanse of our civic culture. And they overlook the extent to which politicized philanthropy serves–inadvertently but nonetheless inexorably–to accelerate the hyper-partisan tribalism that is the source of so many of our problems.”
—Speaking of philanthropy and democracy, check out all these refreshing leadership and board changes at the Democracy Fund.
—I remember when Matt Stoller was blogging, do you? Now he’s reshaping anti-trust policy, and continuing to make unlikely alliances and poke people, as this in-depth profile by Nancy Scola in Politico details.
End Times
Data has always been used in elections, and it always will be. There's been polling for a long time. Direct mail was extremely influential at one time. And Cambridge Analytica targeted highly personalized ads. 2028 won't be the first big-data election, nor the last. It's all on a spectrum, and all of the programs are being written and managed by humans. The programs could be extremely powerful, but we don't know exactly how they will work, and neither do the people building them.
To take the plane example: if a bunch of mechanics looked at a plane and even 10% of them said it was going to crash, you wouldn't want to get on that plane until it was fixed. But mechanics or engineers assessing existing technology they have worked with many times is one thing. AI as it currently exists has only been around a short time, and we have had little time to study how it affects society. Now they're talking about what *could* happen with new and presumably much different AI in the future, things that don't even exist now. And social media looks like it had much different (and probably more negative) effects on society than we originally thought.
There does seem to be a kind of arrogance in the AI tech community when they think that their thing is so powerful that it will destroy free will.
Thanks for the shout-out on the webinar!