Musk Signs Letter Requesting 6-Month Moratorium on Advanced AI

  • Source: UncoverDC
  • 09/19/2023

Tesla, SpaceX, and Twitter CEO Elon Musk, historian and frequent WEF speaker Yuval Noah Harari, and Apple co-founder Steve Wozniak are among the more than 1,100 signatories of a letter calling for a 6-month pause on AI "more powerful than GPT-4." It should be noted that the signatories, in many cases, are the ones behind the development of the very transformative AI technologies they now fear. Musk told an audience at Tesla's 2023 Investor Day that the industry may need regulation, adding that he "fear[s] he may have done some things to accelerate it."

The letter advances the notion of an "'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt." It also expresses a desire to avoid unintended, "potentially catastrophic effects on society." However, the letter also offers a not-so-comforting fallback: government intervention if "such a pause is not enacted quickly." So now governmental intervention becomes the only way those involved can stop themselves? It seems very odd unless they know they are too far gone to control their own impulses or, even more terrifyingly, the technology itself.

The signatories of the letter contend that Advanced AI carries significant risk and the potential for profound change in life as we know it. They cite a competitive environment with "AI labs locked in an out-of-control race" to advance and "deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control."

The letter calls for a 6-month pause on training powerful AI systems and references an earlier petition, with over 5,700 signatures, dedicated to the 23 Asilomar AI Principles.

The 23 Asilomar AI Principles were conceived at the 2017 Asilomar conference organized by the Future of Life Institute (FLI), whose organizational focus is the "governance of transformative technologies." The FLI conference hosted a series of panels in Asilomar, CA, to assess the risks and benefits of AI and machine learning, including its potential capacity for developing a "Superintelligence." Among the attendees were leaders in the field of AI technology. Elon Musk, Stuart Russell, Ray Kurzweil, and others discussed the possibility of a future superintelligence. They seemed to concur that once AI reaches human-level intelligence, it can develop beyond human capabilities at an accelerated rate.

As such, one of the primary purposes of the conference was to develop a set of principles that would limit harm from Advanced AI experimentation. The resulting 23 principles "received support from at least 90% of the conference participants." It is these 23 principles that are now referenced in the letter requesting the 6-month moratorium on the development of Advanced AI.

The state of California adopted legislation in support of the 23 principles. According to FLI, they were the most "widely adopted effort of their kind [and] endorsed by AI research leaders at Google DeepMind, GoogleBrain, Facebook, Apple, and OpenAI. Signatories include Demis Hassabis, Yoshua Bengio, Elon Musk, Ray Kurzweil, the late Stephen Hawking, Tasha McCauley, Joseph Gordon-Levitt, Jeff Dean, Tom Gruber, Anthony Romero, Stuart Russell, and more than 3,800 other AI researchers and experts."

What Are the 23 Asilomar Principles?

The 23 Principles are segmented into three broad categories addressing challenges related to "Research, Ethics and Values, and Longer-term issues." Under the Research heading is the worry over the competitive nature of the AI race. One of the more critical questions goes unanswered: which "set of values" will be used in the development of AI? Equally important is whether human beings will ultimately be automated out of their jobs, or out of their life's purpose.

[Image: 23 Asilomar AI Principles/Research]

Questions about values are essential because algorithmic and societal biases ultimately inform machine learning and its output.

The Ethics and Values heading deals with questions of safety, transparency, responsibility, risk, and privacy. These questions ponder the benefit to humanity of developing transformative AI technologies:

• Is AI safe?
• If it causes harm, why?
• What happens when it is misused?
• What human values are autonomous advanced AI systems aligned with?
• What happens if human liberties are violated?
• Will humans retain ultimate control over how and whether decisions are delegated to AI systems?

[Image: 23 Asilomar AI Principles/Ethics and Values]

The Longer-term issues discussed are probably the most taxing to project or imagine fully.

• What is AI ultimately capable of?
• How will Advanced AI serve humanity as opposed to the state or chosen institutions?
• What kinds of profound change will we see?
• Will it do more harm than good?

[Image: 23 Asilomar AI Principles/Longer-term Issues]

What is GPT-4?

GPT-4 is OpenAI's scaled-up, multimodal "deep learning" model that accepts input from both images and text. The technology is capable of producing "human-level performance on various professional and academic benchmarks," such as passing a "simulated bar exam" with a score in the top 10 percent of test takers. Deep learning means the model is trained on vast amounts of data to perform a given task.
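For readers curious what "multimodal" input looks like in practice, the sketch below sends text and an image to a GPT-4-class model through OpenAI's Python client. It is an illustration only: the model name and image URL are assumptions chosen for the example, not details drawn from the letter or from OpenAI's materials.

```python
# Minimal sketch: one multimodal request (text + image) via OpenAI's
# Python client (openai >= 1.0). The model name and image URL below are
# placeholder assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed vision-capable GPT-4 model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what this image shows."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

# The model's reply arrives as ordinary text, whatever the input mix was.
print(response.choices[0].message.content)
```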

The OpenAI website addresses the issue of safety, referencing the intention to "ensure that these machines are aligned with human intentions and values." The OpenAI Charter, published in 2018, lays out a set of principles broadly committed to "acting in the best interests of humanity," whatever that means. The Charter recognizes the potential for "societal impact" when developing AI technology. Well-intended or not, the language is vague enough to raise concern that the development of powerful AI could easily result in unintended negative consequences for society as a whole.

Big Questions and No Clear Answers

The letter to pause AI experiments poses many existential questions that should be considered before further experimentation.

[Image: Questions posed in the pause letter]

The biggest question is whether the train has already left the station. It is evident from the FLI-sponsored conference that those developing Advanced AI were, and have long been, well aware of the potential risks to humankind, even as they advanced their own technologies. The letter references the very "unelected tech leaders" who are also signatories to the petition. As the engines of their various profitable enterprises continue to whir, they seem to be speaking out of both sides of their mouths. It raises the question: how serious are they about stopping the experimentation, and will their business decisions reflect the gravity of the questions they ask?
