OpenAI is called out for opposing a proposed AI safety bill
Ex-OpenAI employees William Saunders and Daniel Kokotajlo have written a letter to California Gov. Gavin Newsom arguing that the company’s opposition to a state bill that would impose strict safety guidelines and protocols on future AI development is disappointing but not surprising.

“We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing,” Saunders and Kokotajlo wrote. “But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems.”

The two argue that further development without sufficient guardrails “poses foreseeable risks of catastrophic harm to the public,” whether that’s “unprecedented cyberattacks or assisting in the creation of biological weapons.”

The duo was also quick to point out OpenAI CEO Sam Altman’s hypocrisy on the matter of regulation. They point to his recent congressional testimony calling for regulation of the AI industry but note “when actual regulation is on the table, he opposes it.”

Per a 2023 survey by the MITRE Corporation and the Harris Poll, only 39% of respondents believed that today’s AI tech is “safe and secure.”

The bill in question, SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would, “among other things, require that a developer, before beginning to initially train a covered model … comply with various requirements, including implementing the capability to promptly enact a full shutdown … and implement a written and separate safety and security protocol.” OpenAI has suffered multiple data leaks and system intrusions in recent years.

OpenAI, for its part, strongly disagrees with the researchers’ “mischaracterization of our position on SB 1047,” a spokesperson told Business Insider. The company instead argues that “a federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the US to lead the development of global standards,” as OpenAI Chief Strategy Officer Jason Kwon put it in a letter to California state Sen. Scott Wiener in February.

Saunders and Kokotajlo counter that OpenAI’s push for federal regulations is not in good faith. “We cannot wait for Congress to act — they’ve explicitly said that they aren’t willing to pass meaningful AI regulation,” the pair wrote. “If they ever do, it can preempt CA legislation.”

The bill has found support from a surprising source as well: xAI CEO Elon Musk. “This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he wrote on X on Monday. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk.” Musk, who recently announced the construction of “the most powerful AI training cluster in the world” in Memphis, Tennessee, had previously threatened to move the headquarters of his X (formerly Twitter) and SpaceX companies to Texas to escape industry regulation in California.

Update: This post has been updated to include the comments from Elon Musk.
