While self-styled tech ethicists Tristan Harris and Aza Raskin warn that AI is about to end human agency, a Biden AI advisory panel stacked with industry reps avoids the hard calls.
Data has always been used in elections, and always will be. Polling has been around for a long time. Direct mail was extremely influential in its day. And Cambridge Analytica targeted highly personalized ads. 2028 won't be the first big data election, nor the last. It's all on a spectrum, and all of these programs are written and managed by humans. The programs could be extremely powerful, but we don't know exactly how they will work, and neither do their creators.
To return to the airplane example: if a group of mechanics inspected a plane and even 10% of them said it was going to crash, you wouldn't want to board until it was fixed. But mechanics or engineers assessing existing technology they have worked with many times is one thing. AI as it currently exists has only been around a short time, and we have had almost no time to study how it affects society. Now they're talking about what *could* happen with new and presumably very different AI in the future, things that don't even exist yet. And social media, it turns out, had much different (and probably more negative) effects on society than we originally thought.
There does seem to be a kind of arrogance in the AI tech community in the belief that their technology is so powerful it will destroy free will.
Thanks for the shout-out on the webinar!
I drafted a brief petition after reading Robert Wright's article (https://nonzero.substack.com/p/the-worlds-very-big-ai-test) about the pressing need for global governance re: AI. Based on my entirely anecdotal observations within a community of educated, successful people, the public is profoundly underinformed and underengaged on this topic, and the conversation is not progressing in a way that is even remotely proportional to the need, urgency, and complexity.
I'm not sure that petitions accomplish much, but done properly they can elevate a conversation. After reading this, I feel certain that an element of US regulation, public education, and engagement must be threaded in. I'll paste the first draft here for thoughts and comments if you or any of your readers are willing. Bull by the horns, if for no other reason than I haven't seen anyone else do it, and we need ordinary people to fully wrap their heads around this beyond the novelty of ChatGPT. I know everyone is a bit strained by the sheer volume of complexity we're grappling with on a daily basis.
I also, FWIW, think it is a brilliant observation that "loneliness becomes the largest national security [I would add health] threat." I haven't heard it put that way before, and though I know you were somewhat critical of Harris (whose work I have admired for some time - I subscribe to the CHT newsletter, etc.), I think it's a really important and pithy way to name the situation.