According to the letter's signatories, AI is developing too fast, before important questions about its impact on humanity have been answered. For example, the letter argues that questions such as these must be answered first: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
Currently, the letter observed, the answers to those questions rest with the technology leaders conducting large AI experiments and building new systems. Pausing current AI projects would give humanity time to answer them. The signatories asked AI labs to implement a pause voluntarily and, if they will not, called on governments to step in and institute a moratorium.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter stated.
The pause would also give scientists time to create a shared set of safety protocols for advanced AI design and development that outside experts could “rigorously” audit and check. The letter also called for regulatory authorities dedicated to AI, oversight and tracking of AI systems, liability for AI-caused harm, public funding for AI safety research, and resources to cope with the economic and political upheaval AI may cause.