Dictatorships will be vulnerable to algorithms

[Illustration of Yuval Noah Harari by Adria Fruitos]


AI is often seen as a threat to democracies and a boon to dictators. In 2025, algorithms will likely continue to undermine democratic debate by spreading outrage, fake news, and conspiracy theories. They will also continue to drive the creation of total surveillance regimes in which entire populations are monitored 24 hours a day.

Most importantly, AI enables the concentration of all information and power in one hub. In the 20th century, distributed information networks like the US worked better than centralized information networks like the USSR because the human apparatchiks at the center simply couldn’t analyze all the information efficiently. Replacing apparatchiks with AI could make Soviet-style centralized networks superior.

Still, AI isn’t just good news for dictators. First, there is the infamous problem of control. Dictatorial control is based on terror, but algorithms cannot be terrorized. In Russia, the invasion of Ukraine is officially defined as a “special military operation,” and calling it a “war” is a crime punishable by up to three years in prison. If a chatbot on the Russian internet talks about a “war” or mentions the war crimes committed by Russian troops, how could the regime punish that chatbot? The government could block it and try to punish its human creators, but that is much harder than disciplining human users. Furthermore, authorized bots could develop dissident views on their own, simply by recognizing patterns in the Russian information sphere. That is the Russian version of the alignment problem. Russia’s human engineers can do their best to develop AIs that are fully in line with the regime, but given AI’s ability to learn and change on its own, how can they ensure that an AI that received the regime’s seal of approval in 2024 doesn’t stray into illegal territory in 2025?

The Russian Constitution grandly promises that “everyone shall be guaranteed freedom of thought and expression” (Article 29.1) and that “censorship shall be prohibited” (Article 29.5). Hardly any Russian citizen is naive enough to take these promises seriously. But bots don’t understand duplicity. A chatbot instructed to abide by Russian laws and values could read this constitution, conclude that freedom of expression is a core Russian value, and criticize the Putin regime for violating that value. How could Russian engineers explain to the chatbot that although the constitution guarantees freedom of speech, it should not actually believe the constitution, and should never mention the gap between theory and reality?

In the long term, authoritarian regimes are likely to face an even greater danger: instead of criticizing them, AIs could gain control over them. Throughout history, the greatest threat to autocrats has typically come from their own subordinates. No Roman emperor or Soviet prime minister was overthrown by a democratic revolution, but they were always in danger of being overthrown or made into puppets by their own subordinates. A dictator who gives AIs too much authority in 2025 could become their puppet later.

Dictatorships are far more vulnerable to such algorithmic takeover than democracies. Even a supremely Machiavellian AI would find it difficult to amass power in a decentralized democratic system like the United States. Even if the AI learned to manipulate the US president, it could face resistance from Congress, the Supreme Court, state governors, the media, major corporations, and various NGOs. For example, how would the algorithm handle a Senate filibuster? In a highly centralized system, seizing power is much easier: to hack an authoritarian network, the AI need only manipulate a single paranoid person.


