Apple is one of more than 200 companies and other organizations to join the US AI Safety Institute Consortium (AISIC), at the request of the White House.
Amazon, Google, Meta, Microsoft, and Nvidia are among the other companies to join the consortium in response to an executive order by President Biden to ensure that artificial intelligence is “safe, secure, and trustworthy” …
Executive order on AI safety
AISIC was founded in response to a White House executive order issued in October of last year. The order set out a raft of requirements intended to protect US citizens from the potential risks of AI systems. These include:
- Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
- Protect against the risks of using AI to engineer dangerous biological materials.
- Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
- Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
AI Safety Institute Consortium (AISIC)
The consortium was created to help businesses, academics, and government agencies work together to achieve these goals, reports Reuters.
The Biden administration on Thursday said leading artificial intelligence companies are among more than 200 entities joining a new U.S. consortium to support the safe development and deployment of generative AI. Commerce Secretary Gina Raimondo announced the U.S. AI Safety Institute Consortium (AISIC) […]
“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” Raimondo said in a statement […]
The consortium represents the largest collection of test and evaluation teams and will focus on creating foundations for a “new measurement science in AI safety,” Commerce said.
One agreement has already been reached, aimed at stemming the spread of fake images created by AI. Major companies behind generative image apps agreed to include digital watermarks in AI-generated imagery so that it can be easily identified as such.
Apple’s AI work on Siri has proceeded at a rather leisurely pace, likely in large part due to the company’s concerns about generative AI’s tendency to ‘hallucinate’ – that is, make completely false statements.
Photo by Igor Omilaev on Unsplash