PauseAI: Or What?
The end is near, and the techies are not taking it well. Eliezer Yudkowsky, after decades of tackling the AI alignment problem head-on, threw up his hands and announced MIRI’s “Death With Dignity” strategy in 2022. In my brief survey of random passersby in Hayes Valley, the prior probability of human extinction at the hands of a rogue artificial superintelligence (often shortened to \(P(\text{doom})\)) is hovering around 40%. And these fears are not unfounded; OpenAI’s recent successes with ChatGPT and DALL-E have all but proven that Giant Inscrutable Matrices can and will make an insane amount of money before anyone can prove the absence of murderous intent.
Not all hope is lost, however. I recently stumbled across PauseAI, a “community of volunteers and local communities who work together to mitigate the risks of AI (including the risk of human extinction).” Their chief demand is of course the AI “pause,” a global moratorium on further training of large AI systems until there is strong scientific consensus that we can guarantee their safety. PauseAI is growing fast, with a vibrant Discord server and regular meetups all over the world.
This sounds promising. What are they doing to enact their AI pause and potentially save humanity from extinction?
Well, at first glance, it seems they’re really into making and holding up signs. The “take action” page on their website has a few petitions, some reading material, and some tips for writing to your representative about killer robots. Oh, and if you’re an AI researcher, they kindly ask that you avoid building a superintelligence if you can help it.
I was unsatisfied with this, but I figured there must be more exciting things in the works, so I joined the Discord server. Sure enough, after a few days of lurking I stumbled across a small planning meeting in one of the voice channels and started listening in. It was the founders Joep Meindertsma and Holly Elmore, and another organizer that I’ll call “Benedict Arnold.” After a warm welcome from Joep and the others, I gave a brief introduction and settled into the background. They were in the middle of a brainstorming session, throwing out potential outreach campaigns like television ads or billboards.
The topic quickly lost steam once they started considering the costs involved, and the conversation shifted to how to get AI researchers to quit their jobs. It was then that Benedict Arnold, presumably burdened with guilt, admitted that he himself had recently started work at OpenAI as a contractor. (I was disgusted. What cowardice! What lack of principles! I thought we were talking about potential human extinction here!) “Maybe this is good,” said Joep, remaining optimistic. “If we could get an OpenAI employee to publicly quit, that would be a very powerful message.” (Disclaimer: I’m paraphrasing a conversation that happened a few months ago from memory.) But Benedict refused. “I’m not like a super great researcher, and this is a big opportunity for me. Besides, I would still be working on AI safety.” There was some more prodding, but he had clearly made up his mind. And to add insult to injury, he expressed concern about how his public association with PauseAI could affect his career opportunities in the future.
At this point I had some concerns, but the meeting came to an awkward close before I could get a word in, so I took to text chat:
Meta executives practically have a fiduciary responsibility to their shareholders to ensure that they develop AI capabilities faster than their competitors. They are not going to be swayed, so why is there so much focus on AI labs? (This was in the wake of recent protests at Meta headquarters.)
There are few people who can make consequential decisions on these things, and targeting them specifically would be much higher ROI than mass broadcasts… When I clicked “Join PauseAI” I was hoping to learn who those people are and what I should tell them, but it seems like none of that is clear at the moment.
Of all people, Benedict Arnold replied:
Thanks for your thoughts.
The idea is not so much to sway big tech execs directly as to help set the broader narrative around AI. In particular trying to counter the popular belief that AI x-risk is made up by big tech for regulatory capture. Also we hope to influence employees of the labs.
I agree that persuading a few very powerful people can be more impactful than public outreach. Unfortunately it is extremely difficult to influence those people. Shaping public opinion and getting media attention does indirectly change the minds of politicians.
There are several other groups that are trying to play the “inside game” and talk to powerful people by gaining influence in powerful circles. Part of the reason for PauseAI is that we saw a gap in the x-risk ecosystem, where lots of groups were trying to talk directly to powerful people, but no one was trying to influence the public.
I don’t buy this.
I’m not saying public advocacy is completely useless, but there is no plan for what to actually do with the public. At its core, an effective movement needs an answer to the “or what?” question. You can make all the demands you want, but it’s in the nature of those in charge to ask “what will you do if I refuse?” Decades of “climate debate” have proven that The Science does not matter, and that you cannot sway these people by telling them “we’re all going to die.” The only thing that matters is making their behavior personally costly. Voting politicians out of office. Boycotting corporations. There needs to be some impact to the bottom line, and the public is the means of obtaining it.
The argument for protesting outside AI labs is even weaker. Essentially: “stop working on AI capabilities or we will try to lower your social status.” I’m not sure how this could possibly work; the organizers themselves admit that the main challenge in attracting people to protests has been the perceived low status of the act. It’s hard to believe that enough of it could sour the prospect of over a quarter million dollars a year in total compensation at the most prestigious and lucrative organizations in tech.
All things considered, not a great showing for humanity. But it’s not like we haven’t seen this before.
Doom is a disease, and AI is just the latest strain to get coughed out of the Bay Area’s memetic wet market.
I predict we’ll see it jockeying for power with other capital-I Issues like climate change once it starts hitting the normies.
In the meantime, one can’t help but feel sympathy for the organizers of PauseAI, who seem genuinely afflicted by it. It takes an incredible amount of bravery to take such a public stance on anything, and if you scroll through the #emotional-support channel in their Discord it’s clear that it’s taking a toll.
Joep, Holly, and the other organizers are smart and well-meaning people. Even “Benedict Arnold” is just playing the hand he’s been dealt. But it’s disappointing to see energy wasted on activities that feel good but don’t move the needle. Then again, it’s easy to have ideas about how PauseAI should be run; I am not running PauseAI, nor would I be capable of doing so.
•
Last weekend I managed to catch a glimpse of the other side, sneaking into one of those OpenAI parties with a bouncer (“Yeah, you know Roon? We were Twitter mutuals, so you can let me in.”) and a classy venue. Not knowing anyone, I decided to steel my nerves with several “GPT” and “DALL-E” themed cocktails from the open bar before I could harangue the exclusive crowd. “Aren’t you terrified of the future?” I would ask, holding five different hors d’oeuvres. “I’m worried that I’ll be left behind.” Invariably, the response was confusion. “Why would you be worried? You’re here!” Clearly they felt they had everything under control, and that they would be handsomely rewarded for their hand in the coming AI rapture.
Now I don’t know nothin’ about deep learnin’ and the like, but I do know this: before we spawn superintelligence, AI will create winners and losers. The winners will be just as aware of the risks as the losers, but they won’t care, and that’s why they’ll win. The takeaway for me personally is that if I hang out at the right parties and schmooze the right people, I’ll be well on my way to securing my own Dyson sphere post-singularity. Stay tuned to see if your intrepid narrator manages to pull this off.