Unintended consequences: US election results herald reckless AI development

While the 2024 US election focused on traditional issues such as the economy and immigration, its impact on AI policy could prove even more transformative. Without a single debate question or major campaign promise about AI, voters inadvertently tipped the scales in favor of accelerationists, those who advocate rapid AI development with minimal regulatory hurdles. The implications of this shift are profound: it ushers in a new era of AI policy that prioritizes innovation over caution and signals a decisive turn in the debate over the potential risks and opportunities of AI.

President-elect Donald Trump’s pro-business stance leads many to believe his administration will favor those who develop and commercialize AI and other advanced technologies. His party platform has little to say about AI, but what it does say emphasizes repealing AI regulations, particularly targeting what it describes as “radical left-wing ideas” in the outgoing administration’s executive orders. In contrast, the platform supports AI development aimed at promoting free expression and “human flourishing,” and calls for policies that enable AI innovation while rejecting measures that could hinder technological progress.

Early indications from appointments to leading government positions underline this direction. However, there is an even bigger story unfolding: the resolution of the intense debate over the future of AI.

An intense debate

Since ChatGPT’s release in November 2022, there has been a heated debate in the AI field between those who want to accelerate AI development and those who want to slow it down.

Notably, in March 2023 the latter group proposed a six-month pause in the development of the most advanced systems in an open letter warning that AI tools pose “profound risks to society and humanity.” The letter, coordinated by the Future of Life Institute, was prompted by OpenAI’s release of the GPT-4 large language model (LLM) a few months after the launch of ChatGPT.

The letter was originally signed by more than 1,000 technology leaders and researchers, including Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang, podcaster Lex Fridman, and AI pioneers Yoshua Bengio and Stuart Russell. The number of signatories to the letter eventually rose to over 33,000. Collectively, they became known as “Doomers,” a term expressing concern about potential existential risks posed by AI.

Not everyone agreed. OpenAI CEO Sam Altman did not sign, nor did Bill Gates and many others. Their reasons varied, although many expressed concern about possible harms from AI. The debate sparked many conversations about the potential for AI to run amok and end in disaster. It became fashionable for many in the AI field to estimate the probability of doom, often expressed in shorthand as p(doom). Nevertheless, work on AI development was not interrupted.

For the record: my p(doom) in June 2023 was 5%. That may seem low, but it was not zero. I felt that the major AI labs were making a serious effort to rigorously test new models before release and to provide important guardrails for their use.

Many observers concerned about the dangers of AI have put the existential risk higher than 5%. AI safety researcher Roman Yampolskiy has assessed the likelihood of AI ending humanity at over 99%. That said, a study published earlier this year, well before the election and reflecting the views of more than 2,700 AI researchers, showed that “the median prediction for extremely bad outcomes, such as human extinction, was 5%.” Would you board a plane if there were a 5% chance of it crashing? This is the dilemma facing AI researchers and policymakers.

Must go faster

Others openly dismissed these concerns, pointing instead to what they saw as the technology’s great promise. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (professor of computer science and engineering at the University of Washington and author of “The Master Algorithm”). They argued that AI is part of the solution. As Ng put it, there are indeed existential threats, such as climate change and future pandemics, and AI can be part of how these are addressed and mitigated.

Ng argued that AI development should not be halted but accelerated. This utopian view of technology is shared by others known collectively as “effective accelerationists,” or “e/acc” for short. They argue that technology, and AI in particular, is not the problem but the solution to most, if not all, of the world’s problems. Garry Tan, CEO of startup accelerator Y Combinator, along with other prominent Silicon Valley executives, added “e/acc” to their usernames on X to show alignment with this vision. Reporter Kevin Roose of The New York Times captured the essence of these accelerationists, saying they take a “full throttle, no brakes” approach.

A Substack newsletter from a few years ago described the principles underlying effective accelerationism. Here is the summary offered at the end of that article, along with commentary from OpenAI CEO Sam Altman.

AI acceleration ahead

The 2024 election outcome could be seen as a turning point, enabling the accelerationist vision to shape US AI policy for the next several years. For example, the President-elect recently named technology entrepreneur and venture capitalist David Sacks as “AI czar.”

Sacks, a vocal critic of AI regulation and an advocate of market-driven innovation, brings his experience as a technology investor to the role. He is one of the leading voices in the AI industry, and much of what he has said about AI aligns with the accelerationist viewpoints of the new party platform.

In response to the Biden administration’s AI executive order in 2023, Sacks tweeted: “The US political and financial situation is hopelessly broken, but we have an unparalleled advantage as a country: cutting-edge innovation in AI, fueled by a completely free and unregulated software development market. That just ended.” While Sacks’ influence on AI policy remains to be seen, his appointment signals a shift toward policies favoring industry self-regulation and rapid innovation.

Elections have consequences

I doubt the majority of voters gave much thought to the implications for AI policy when casting their vote. Nonetheless, as a result of the election, accelerationists have made concrete gains, potentially sidelining those who advocate for a more cautious federal government approach to mitigating the long-term risks of AI.

As accelerationists plot the path forward, the stakes couldn’t be higher. Whether this era ushers in unprecedented progress or unintended catastrophe remains to be seen. As AI development accelerates, the need for informed public discourse and vigilant oversight becomes increasingly important. How we navigate this era will determine not only technological progress, but also our shared future.

To counterbalance the lack of action at the federal level, one or more states could step in with their own regulations, as has already happened in California and Colorado. California’s AI safety bills, for example, focus on transparency requirements, while Colorado’s law addresses AI discrimination in hiring practices; both provide models for state-level governance. Now all eyes will be on the voluntary testing and self-imposed guardrails of Anthropic, Google, OpenAI and other AI model developers.

In summary, the accelerationists’ victory means fewer restrictions on AI innovation. While that may produce faster progress, it also increases the risk of unintended consequences. I’m now revising my p(doom) to 10%. What is yours?

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.



