When I was a kid, there were four AI agents in my life. Their names were Inky, Blinky, Pinky and Clyde, and they did their best to hunt me down. This was the 1980s, and the agents were the four colorful ghosts in the legendary arcade game Pac-Man.
They weren’t particularly smart by today’s standards, yet they seemed to pursue me with cunning and intent. This was decades before neural networks were used in video games, so their behaviors were governed by simple algorithms, known as heuristics, that dictated how they would chase me around the maze.
Most people don’t realize this, but the four ghosts were designed with different “personalities.” Skilled players can observe their movements and learn to predict their behaviors. For example, the red ghost (Blinky) was programmed with a “pursuer” personality that charges directly at you. The pink ghost (Pinky), by contrast, was given an “ambusher” personality that predicts where you are headed and tries to get there first. That means if you rush straight at Pinky, you can use its personality against it, causing it to actually turn away from you.
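For readers curious how such simple “personalities” can arise from a handful of rules, here is a minimal Python sketch. It is an illustration only, not the original arcade logic (the real game’s targeting and turning rules differ in their details), and the function names and the four-tile lead are assumptions chosen for clarity.

```python
# Illustrative sketch only -- not the original arcade code. It shows how
# "pursuer" and "ambusher" personalities can fall out of one-line targeting rules.
from dataclasses import dataclass

@dataclass
class Actor:
    x: int
    y: int
    dx: int = 0  # current direction of travel (unit step per axis)
    dy: int = 0

def blinky_target(pacman: Actor) -> tuple[int, int]:
    """Pursuer: aim directly at Pac-Man's current tile."""
    return (pacman.x, pacman.y)

def pinky_target(pacman: Actor, lead: int = 4) -> tuple[int, int]:
    """Ambusher: aim a few tiles ahead of where Pac-Man is heading."""
    return (pacman.x + lead * pacman.dx, pacman.y + lead * pacman.dy)

def step_toward(ghost: Actor, target: tuple[int, int]) -> Actor:
    """Greedy chase: move one tile along the axis with the larger gap.
    (The real game instead picks among legal turns at each intersection.)"""
    tx, ty = target
    if abs(tx - ghost.x) >= abs(ty - ghost.y):
        ghost.x += (tx > ghost.x) - (tx < ghost.x)
    else:
        ghost.y += (ty > ghost.y) - (ty < ghost.y)
    return ghost

# Pac-Man charges left, straight at Pinky. Pinky's target lands *behind* it,
# so the ambusher actually backs away -- the quirk described above.
pacman = Actor(x=8, y=5, dx=-1, dy=0)
pinky = Actor(x=6, y=5)
print(pinky_target(pacman))                       # (4, 5): a point behind Pinky
print(step_toward(pinky, pinky_target(pacman)))   # Pinky retreats to (5, 5)
```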
My point is that back in the 1980s, an observant player could study these AI agents, decode their unique personalities and use those insights to outsmart them. Now, 45 years later, the tables are about to turn. Like it or not, AI agents will soon be decoding your personality so they can use those insights to optimally influence you.
The future of AI manipulation
In other words, we will all be unwitting players in “The Game of Humans,” and the AI agents will be trying to earn the high score. I mean this literally: most AI systems are designed to maximize a reward function that earns points for achieving goals. This is how AI systems quickly find optimal solutions. Unfortunately, without regulatory protections, we humans will likely become the objective that AI agents are tasked with optimizing.
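To make the term concrete, here is a toy Python sketch of reward maximization. Everything in it is hypothetical and simplified (the pitch names, success rates and the greedy loop are assumptions for illustration, not the design of any real product); it only shows how an agent that maximizes a scored objective drifts toward whatever earns it the most points.

```python
# Toy sketch of reward maximization, for illustration only. A greedy agent tries
# actions, scores each with a reward function and gravitates toward whichever
# action has earned the most points so far. When the "reward" is a human saying
# yes, the human becomes the thing being optimized.
import random
from collections import defaultdict

def run_greedy_agent(actions, reward_fn, steps=1000, explore=0.1):
    totals = defaultdict(float)  # cumulative reward earned per action
    counts = defaultdict(int)    # number of times each action was tried
    for _ in range(steps):
        if random.random() < explore or not counts:
            action = random.choice(actions)  # occasionally try something new
        else:
            # Otherwise exploit the action with the best average reward so far.
            action = max(actions, key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)
        totals[action] += reward_fn(action)
        counts[action] += 1
    return {a: totals[a] / counts[a] for a in actions if counts[a]}

# Hypothetical example: three conversational "pitches" with unknown success rates.
pitch_success = {"fact-based": 0.10, "flattery": 0.25, "fear-of-missing-out": 0.40}
averages = run_greedy_agent(
    actions=list(pitch_success),
    reward_fn=lambda a: 1.0 if random.random() < pitch_success[a] else 0.0,
)
print(averages)  # the agent converges on whichever pitch scores best
```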
The agents I am most concerned about are conversational agents that will engage us in friendly dialogue throughout our daily lives. They will speak to us through photorealistic avatars on our PCs and phones and, soon, through AI-powered glasses that will guide us through our days. Unless clear restrictions are imposed, these agents will be designed to probe us conversationally for information so they can characterize our temperaments, tendencies, personalities and desires, and use those traits to maximize their persuasive impact when working to sell us products, pitch us services or convince us to believe misinformation.
This is known as the “AI manipulation problem,” and I have been warning regulators about the risk since 2016. Until now, policymakers have not taken decisive action, viewing the threat as too far in the future. But with the release of DeepSeek-R1, the final barrier to widespread deployment of AI agents, the cost of real-time processing, has fallen sharply. This year, AI agents will become a new form of targeted media that is so interactive and adaptive that it can optimize its ability to influence our thoughts, guide our feelings and drive our behaviors.
Superhuman AI ‘salespeople’
Of course, human salespeople are interactive and adaptive too. They engage us in friendly dialogue to size us up, quickly finding the buttons they can press to sway us. AI agents will make them look like amateurs, able to draw information out of us with such finesse it would intimidate a seasoned therapist. And they will use those insights to adjust their conversational tactics in real time, working to persuade us more effectively than any used-car salesman.
These will be asymmetric encounters in which the artificial agent has the upper hand (virtually speaking). When you engage a human who is trying to influence you, you can usually sense their motives and honesty. With AI agents, it will not be a fair fight. They will be able to size you up with superhuman skill, while you will not be able to size them up at all. That is because they will look, sound and act so human that we will unconsciously trust them when they smile with empathy and understanding, forgetting that their facial affect is just a simulated facade.
In addition, their voice, vocabulary, speaking style, age, gender, race and facial features are likely to be customized for each of us personally to maximize our receptiveness. And, unlike human salespeople who have to size up each customer from scratch, these virtual entities could have access to stored data about our backgrounds and interests. They could use this personal data to quickly earn your trust, asking about your kids, your job or maybe your beloved New York Yankees, easing you into unconsciously lowering your guard.
When AI achieves cognitive supremacy
To educate policymakers about the risk of AI-driven manipulation, I helped create an award-winning short film titled Privacy Lost, produced by the Responsible Metaverse Alliance, Minderoo and the XR Guild. The fast-paced three-minute story follows a young family eating in a restaurant while wearing augmented reality (AR) glasses. Instead of human servers, avatars take each diner’s order, using the power of AI to pitch them in personalized ways. The film was considered sci-fi when it was released in 2023, yet only two years later, big tech is engaged in an all-out arms race around AI-powered eyewear that could easily be used in these ways.
In addition, we need to consider the psychological impact that will occur when we humans start to believe that the AI agents advising us are smarter than we are on nearly every front. When AI achieves a perceived state of “cognitive supremacy” over the average person, it will likely lead us to blindly accept its guidance rather than apply our own critical thinking. This deference to a perceived superior intelligence (whether truly superior or not) will make agent-driven manipulation that much easier to deploy.
I am not a fan of overly aggressive regulation, but we need smart, narrow restrictions on AI to protect against superhuman conversational manipulation. Without protections, these agents will convince us to buy things we don’t need, believe things that aren’t true and accept things that are not in our best interest. It is easy to tell yourself you won’t be susceptible, but with AI optimizing every word it says to us, it is likely we will all be outmatched.
One solution is to forbid AI agents from establishing feedback loops in which they optimize their persuasiveness by analyzing our reactions and repeatedly adjusting their tactics. In addition, AI agents should be required to disclose their objectives. If their goal is to convince you to buy a car, vote for a politician or pressure your family doctor about a new medication, those objectives should be stated up front. And finally, AI agents should not have access to personal data about your background, interests or personality if such data could be used to sway you.
In today’s world, targeted influence is already an overwhelming problem, and it is mostly deployed like buckshot fired in your general direction. Interactive AI agents will turn targeted influence into heat-seeking missiles that find the best path into each of us. If we do not protect against this risk, I fear we could all lose the game of humans.
Louis Rosenberg is a computer scientist and author known for his pioneering work in mixed reality and for founding Unanimous AI.