Wozniak, Musk and leading researchers urge pause on ‘out of control’ AI

The letter doesn't refer to the "rise of the machines" a la "Terminator," but it could have.
Photo: Warner Bros. Pictures

A new open letter signed by tech leaders urges a six-month pause on development of advanced artificial intelligence applications that may pose “profound risks to society and humanity.”

Apple co-founder Steve Wozniak, Tesla CEO Elon Musk, other tech execs and many AI academics signed the letter. It urges caution concerning “emergent” AI more powerful than GPT-4. We’re not talking Siri here (at least not yet).

Open letter urges pause for more careful planning to manage advanced AI

The letter appeared on the Future of Life Institute website and had attracted more than 1,123 signatures as of Wednesday. It doesn’t call for a pause on all AI development, just “ever-larger unpredictable black-box models with emergent capabilities.”

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” it said. It added the pause “should be public and verifiable.” And if it does not happen quickly, “governments should step in and institute a moratorium.”

‘Out-of-control race’

The letter described AI development as out of control, risky, unpredictable and beyond the understanding of even its own makers, thanks to brisk competition among labs. It cited the Asilomar AI Principles, developed at the 2017 Beneficial AI Conference:

As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Should we even do this?

The signatories posed a number of questions they say people should ask before charging ahead with AI development. The letter also pointed to AI’s power to eliminate many types of jobs at all levels, not just simple, repetitive tasks.

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

Even AI bot ChatGPT’s developer suggested it

The letter pointed out that ChatGPT developer OpenAI previously noted a pause might become necessary.

OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Positive effects and manageable risks

Beyond ensuring positive effects and manageable risks, the letter sums up the goal of a pause like this:

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

The letter went on to describe the safeguards it deems necessary: governance systems developed in cooperation with policymakers.

AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Do you agree that a pause to consider and plan for advanced AI is necessary? Feel free to comment below.
