What are the alternatives to calling for a pause on giant AI experiments?

The next generation of large language models beyond GPT-4 may present unique and unprecedented risks to society — but is temporarily halting their development the right way to go?

Andrew Maynard
2 min read · Apr 20, 2023
Image: Midjourney

(Cross-posted from Substack — read the full article here)

A couple of weeks ago, a number of leading artificial intelligence experts — and many others — created something of a stir when they published an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

I haven’t signed the letter — not because I don’t think there’s a risk of potentially existential proportions emerging here (I do), but because having spent so long in the messy nexus of risks and benefits around emerging technologies, I’m not convinced that the proposed pause will have the intended effect.

But despite increasingly pointed rhetoric from some AI experts deriding fears over the potential dangers of advanced AI — including from Meta’s Chief AI Scientist Yann LeCun — Large Language Models (LLMs) hint at a deeply complex risk landscape that we are, at this point, unprepared to navigate.

Diving into the nature of this landscape is a topic for another time (although some aspects are captured in this video). But what I do want to touch on briefly here is the nature of potential pathways forward.

The thoughts below are a lightly edited version of my response to a tweet from Gary Marcus (links included). We got into a short back-and-forth about governance approaches to LLMs and AI/AGI, and this is what I rattled off — it’s a little rough, but given the speed at which things are developing here, it’s worth posting:

Having worked on navigating the risks and benefits of advanced technologies for over 20 years now, the only thing I’m certain about is that there are no silver bullets for ensuring the benefits of AI/AGI while avoiding possible risks — but we do have some sense of how to frame the questions that are necessary to move forward…

Continue reading on Substack



Andrew Maynard

Scientist, author, & Professor of Advanced Technology Transitions at Arizona State University