Is the AI Wolf Really at the Door?
Trying to think clearly about the unthinkable. If a super-intelligent AI is on our doorstep, many well-intentioned attempts to regulate it or control it will fail.
There once was a shepherd boy who was bored as he sat on the hillside watching the village sheep. To amuse himself he took a great breath and sang out, "Wolf! Wolf! The Wolf is chasing the sheep!" The villagers came running up the hill to help the boy drive the wolf away. But when they arrived at the top of the hill, they found no wolf. The boy laughed at the sight of their angry faces. "Don't cry 'wolf', shepherd boy," said the villagers, "when there's no wolf!" They went grumbling back down the hill.
That’s how the old Aesop’s Fable starts. And you know how it ends. After crying wolf multiple times, fooling the villagers each time, when a real wolf appears, the boy cries out for help and no one comes.
The lesson this story is supposed to teach children is not to make up stories to get attention. But there’s another, darker meaning to the fable: if you ring the alarm bell too many times, adults stop listening.
Are we in such a moment with artificial intelligence? After so many false alarms and overheated claims about this or that technology being on the verge of “changing everything,” it’s possible we have lost our ability to distinguish real danger from hype.
Who to listen to? Last week, I poked hard at Tristan Harris and Aza Raskin of the Center for Humane Technology, whose breathless presentation on the “A.I. Dilemma” struck me as more of a policy panic attack than a trustworthy assessment of the moment. And I still feel that way about their effort to position themselves as guides to the perplexed.
But what to make of this essay in Time Magazine from Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute and one of the longest-standing voices warning about the dangers of advanced AI? He says he didn’t sign onto the recent call from the Future of Life Institute for a six-month moratorium on training systems more powerful than GPT-4 because it doesn’t go far enough. He writes:
“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how. Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
He likens humanity trying to stop such a superhuman intelligence to “the 11th century trying to fight the 21st century.”
Or take this New York Times story by Cade Metz about Geoffrey Hinton, an AI pioneer who just quit his job at Google so he can speak out more freely. He believes that existing AI technologies will soon flood the Internet with counterfeit content, destroying the average person’s ability “to know what is true anymore.” And he’s worried that autonomous killing machines may not be far off. What seems to be driving his concern is the breakneck competition between tech giants like Google and Microsoft. “I don’t think they should scale this up more until they have understood whether they can control it,” he says.
These are professionals with deep knowledge of their subject. And they want the regulators to step in, hard.
That’s also the view of Harvard political theorist Danielle Allen, who signed the Future of Life letter and who, in this Washington Post column, in effect asks: “What’s the hurry? We are simply ill-prepared for the impact of yet another massive social transformation. We should avoid rushing into all of this with only a few engineers at a small number of labs setting the direction for all of humanity. We need a breather for some collective learning about what humanity has created, how to govern.”
Alondra Nelson, who helped write the Biden Administration’s Blueprint for an AI Bill of Rights, seems to be taking a more temperate approach, suggesting that existing laws and agencies in the US and Europe already have some real power to check AI’s use. She’s not wrong, but invoking the Americans with Disabilities Act of 1990 to counter the use of automated systems in employment or housing screening will do nothing to slow the pace of GPT-4 development. Nor does her call for Members of Congress, state legislators and enforcement agencies to get busy studying the risks of AI and demanding real accountability from tech companies fill me with much confidence.
But Nelson is presumably well-versed in the realities of getting lawmakers and regulators to act with any degree of concern, so it’s probably safe to assume she thinks it makes more sense to make incremental progress than none at all. As it is, there doesn’t seem to be anyone high up in the Biden Administration who might want to pull the emergency brake on AI development; as I noted last week, Secretary of Commerce Gina Raimondo has already ruled out anything but voluntary self-regulation by tech companies in the short term.
No, the prospect of a super-intelligence breakout isn’t quite like the fictional news in the 2021 hit movie Don’t Look Up, about a giant comet heading for a direct collision with Earth. We don’t have anything like that level of certainty about the risk of an out-of-control AI suddenly taking over. But if someone actually went to the White House with proof that GPT-4 or Bard or one of the other LLMs now in development was on the verge of breaking loose, would they be listened to?
As a kid, I grew up devouring every science fiction novel I could get my hands on. I’m not crazy about the feeling that now I’m living in one.
--Related: A bipartisan group led by Senator Edward Markey (D-MA) has introduced legislation called the “Block Nuclear Launch by Autonomous AI Act,” which states that no federal funds can be used for any launch of any nuclear weapon by an automated system without meaningful human control. I really don’t know whether to laugh or cry about this bill. Here’s Rep. Ted Lieu (D-CA) explaining his support for it: “It is our job as Members of Congress to have responsible foresight when it comes to protecting future generations from potentially devastating consequences. That’s why I’m pleased to introduce the bipartisan, bicameral Block Nuclear Launch by Autonomous AI Act, which will ensure that no matter what happens in the future, a human being has control over the employment of a nuclear weapon – not a robot. AI can never be a substitute for human judgment when it comes to launching nuclear weapons.”
Anyone who has studied the actual functioning of America’s nuclear arsenal knows that this notion of human “control” over their use is built on a very shaky set of assumptions, which whistleblower Daniel Ellsberg exposed in painful detail in his book The Doomsday Machine. For example, he found that while Pentagon regulations require two soldiers to jointly authenticate any order to use nukes, in practice on many bases only one was often on duty, and it was unofficially understood that he could act on his own. So much for a “failsafe” way of preventing a rogue duty officer from firing his missiles.
Keeping humans in command of a system that is already built on top of an enormous host of sensing machines, like the NORAD radar installations that are supposed to alert the President to an incoming missile strike, assumes that no autonomous AI could ever figure out how to spoof our early-warning system into convincing the humans in charge that bombs are on the way. So while the Markey bill would prevent the Pentagon from putting nukes into an autonomous weapons system, it does nothing about a system smarter than the proverbial “human in the loop” fooling that human into pressing the button.
To be honest, it might not be a bad idea to put an AI in charge of our nuclear weapons. At least it would understand, probably better than humans, that the only sensible use of nukes is no use.
—Semi-related: Rep. Yvette Clarke has introduced a bill that would require that political ads disclose if they are built using AI-generated imagery. Call me a cynic, but I bet this passes a lot faster than long-sought legislation aiming to limit law enforcement’s use of facial recognition AI.
Odds and Ends
—Thousands of volunteers knocked on more than 555,000 doors, made more than 1.26 million phone calls and sent nearly two million texts in the successful grassroots field campaign that propelled Brandon Johnson from the Cook County Board of Commissioners to the mayorship of Chicago, Mina Bloom and Joe Ward report for Block Club Chicago.
—The government of Saudi Arabia recently disclosed that it is a “passive investor” in the private equity firm that owns the Democratic National Committee’s voter list, along with all the other assets of EveryAction and NGP VAN, Akela Lacy reports for The Intercept. As she notes, it’s not clear why the investment was disclosed, but one imagines that it would be a shame if anything were to happen to your voter file, right?
—According to M+R’s annual Benchmarks survey of 215 nonprofits, average online revenue declined by 4% in 2022. This is after a pretty healthy year of growth in 2021, so it’s possible we’re just seeing a tapering off rather than a serious dip. But the report also notes a 13% drop in Giving Tuesday revenue and in December 31 donation revenue. The pinch is real.
—American Edge Project, a relatively new lobbying group that opposes antitrust legislation aimed at the tech industry, is almost entirely a creation of Facebook, Brian Schwartz and Lauren Feiner report for CNBC. On top of a $4 million donation in 2020 from Facebook that got the group going, their latest tax filing through 2021 shows a $34 million anonymous gift. According to a person who works with the group, that money also came from Facebook. What this story shows is that Facebook is a true believer in augmented reality, the kind you get from creating advocacy groups out of nothing more than giant piles of money.
—Republicans are objecting to the free market! What else can you say about this story from Shane Goldmacher in the New York Times on the “growing tensions” between WinRed, the right’s answer to ActBlue, and party officials who are objecting to the for-profit company’s plan to raise its transaction fees by 30 cents, on top of the 3.94% it charges per donation. ActBlue, by contrast, is a nonprofit and charges a flat rate of 3.95% per donation.
End Times
Before AI-generated video gets so good we can’t tell the difference, savor this beer commercial.