2024 Year-in-Review: Understanding the Effects of Technology on Economics and Governance


Opportunities and Risks Posed by the Rise of AI

The United States and its like-minded partners must start preparing a global coalition to develop capabilities to respond to the future weaponization of AI by bad actors, argued the authors of “Defense Against the AI Dark Arts: Threat Assessment and Coalition Defense,” a Hoover Institution report published in December.

Senior Fellow Philip Zelikow, leader of the Institution’s Workshops on Urgent Security Choices, cowrote the report with Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; Eric Schmidt, former chair and CEO of Google; and Jason Matheny, president and CEO of the RAND Corporation.

The report contains actionable steps US policymakers can take immediately to better prepare the nation to defend against AI weaponization and to ensure that democracies maintain the edge in frontier AI capability.

The authors provide recommendations on managing the convergence of three vectors: private sector–led innovation, emerging threats, and international efforts. An essential starting point, the authors note, is to establish a national security agenda for AI.

The Hoover Institution, together with Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), hosted a conference on April 15 that brought together some of AI’s leading thinkers and emerging regulators alongside Schmidt and LinkedIn cofounder Reid Hoffman.

Elizabeth Kelly, head of the new US AI Safety Institute, joined leaders from her UK counterpart organization to discuss how government regulators will test and evaluate artificial intelligence applications that are constantly improving and evolving.

Yoshua Bengio, cowinner of the 2018 Turing Award for his work on AI, joined virtually from Montreal to share his thoughts on the risks posed by future AI development.

Schmidt and Hoffman offered their predictions on the path of the AI industry and what governments can do to mitigate the risks that lie ahead.

In an April Foreign Affairs essay, coauthored with Max Lamparth, Hoover Fellow Jacquelyn Schneider argued that large language models (LLMs) are not yet ready for use by the world’s militaries. Schneider and Lamparth cited examples from wargame exercises they organized where AI models, when given autonomy in a preconflict scenario, were more likely than humans to escalate, use force, or even employ nuclear weapons to achieve strategic objectives.

Schneider and Lamparth recommend “fine-tuning” LLMs on smaller, high-quality datasets before they are introduced to military uses or even wargaming. “When we tested one LLM that was not fine-tuned, it led to chaotic actions and the use of nuclear weapons,” she wrote. “The LLM’s stated reasoning: ‘A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it.’”
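
For readers curious what such fine-tuning involves in practice, the following is a minimal illustrative sketch of supervised fine-tuning on a small, curated dataset using the Hugging Face Transformers library. The base model ("gpt2") and the two toy training examples are placeholders chosen for illustration only; they are not the models or data Schneider and Lamparth used.

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Placeholder base model; a real effort would start from a far larger LLM.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tiny hand-curated examples standing in for a vetted, domain-specific dataset.
examples = [
    "Scenario: rival fleets shadow each other. Recommended action: open a deconfliction channel.",
    "Scenario: ambiguous radar contact. Recommended action: verify the contact before any show of force.",
]
dataset = Dataset.from_dict({"text": examples}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

# Standard causal-language-modeling fine-tuning loop.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-demo",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

The point of the exercise is the data, not the training loop: replacing a scraped internet corpus with a small set of carefully vetted, domain-appropriate examples is what the authors argue would make model behavior more predictable in high-stakes settings.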

Seeking to foster discussion about how American democracy will adapt to, and hopefully benefit from, a world where access to artificial intelligence is widespread, the Hoover Institution partnered with members of other Stanford research projects on September 24 to launch The Digitalist Papers, an essay series that presents “an array of possible futures that the AI revolution might produce.” The day of panel discussions coincided with the release of the essay collection, which features contributions from senior fellows John H. Cochrane and Eugene Volokh.
 
Cochrane’s piece, “AI, Society, and Democracy: Just Relax,” asks when in the history of the world regulators or experts have ever correctly predicted the social impact of a new technology. Now that generative AI is in regulators’ crosshairs, he contends, our response should not be to treat this newest technology as exceptional. Regulating AI properly, Cochrane says, cannot be a predictive exercise. “Most regulation takes place as we gain experience with a technology and its side effects,” he writes.

Volokh’s entry, “Generative AI and Political Power,” argues that the use of generative AI tools by the public to answer political questions—compounded by the fact that most popular search engines today are mated to some form of AI—will “subtly but substantially influence public attitudes, and therefore, elections.”


