Global tech rivals pledge to tackle potential harm of artificial intelligence


But Mr Husic said “working as one-off countries” was too difficult when facing the profound risks and opportunities of fast-evolving AI – prompting a bigger effort to meet in the middle.

“This is a moment where people usually approach these things thinking that it’ll be hard to get different countries working together on something,” he said. “You’re seeing here a movement to actually find ways to co-operate on incredibly complex technology. It’s a pretty significant move, and I think it gives us a greater basis to see stuff happen.”

In a “Bletchley Declaration”, issued on day one and backed by all 29 governments present, the countries pledged to collaborate on the science, look for a common understanding of the challenges, and keep the summit process going.

But it was unclear just how co-ordinated the group of countries would be. US Vice President Kamala Harris used a speech in London to confirm the White House would set up an AI Safety Institute, which duplicates a British initiative.

Elon Musk is among the high-profile attendees at the AI summit in the UK. Bloomberg

Ms Harris said the US version would “create rigorous standards to test the safety of AI models for public use”. This would be part of a voluntary set of “minimum baselines” on safety and security, which AI tech companies would be asked to follow.

And although she said the US wanted to work with countries such as China on developing international rules and norms for AI, Ms Harris was clear that America wanted to call the shots.

“We intend that these [American] domestic AI policies will serve as a model for global policy,” she said.

The Bletchley Declaration did seem to sign up China to some unusually liberal language.

The signatories concurred on the need to “foster public trust and confidence in AI systems”. This meant addressing issues such as “the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection”.

“We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation,” the officials agreed.

Britain and the US also seem to have steered the EU into some market-friendly language that may temper Brussels’ zeal for a wide-ranging law governing the nascent sector.

“Countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI,” the statement said.

But Mr Husic said that during the summit he was “hearing more and more … a recognition of rights” – a reference to the EU model that emphasises the protection of citizens and consumers.

He also said the US had this week taken a step back from “a laissez-faire, self-regulatory, let-the-market-sort-it-out” approach, because the White House’s executive order set tougher expectations on companies.

The Bletchley Declaration did put the onus on technology firms to step up, even if only on a voluntary basis.

“We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks,” the 29 governments said.

And they committed to a shared agenda of scientific research into the benefits and risks of AI. This would build a shared understanding, and also develop “evaluation metrics, tools for safety testing, and developing relevant public sector capability” – meaning tougher government oversight.

Mr Husic acknowledged that the gaps between different regulatory models, particularly China's, had not been entirely papered over.

“There may be, I think, regardless of those differences, a willingness to co-operate. How that plays out? We’ve taken some good positive first steps; we’ll ultimately be judged on experience – how it plays out in the real world.”

The participants also agreed to continue the summit process: South Korea will host a virtual follow-up meeting in six months, and France will stage a second in-person AI Safety Summit next year.
