AI is changing how businesses operate. While much of that shift is positive, it introduces some unique cybersecurity concerns. Next-generation AI applications like agentic AI pose particularly notable security risks for organizations.
What is agentic AI?
Agentic AI refers to AI models that can act autonomously, often automating entire roles with little to no human input. Advanced chatbots are among the most prominent examples, but AI agents can also appear in applications like business intelligence, medical diagnoses and insurance adjustments.
In every application, this technology combines generative models, natural language processing (NLP) and other machine learning (ML) functions to complete multi-step tasks independently. It is easy to see the value in such a solution. Understandably, Gartner predicts that a third of all generative AI interactions will use these agents by 2028.
The unique security risks of agentic AI
Agentic AI adoption will rise as businesses seek to complete a wider range of tasks without a larger workforce. As promising as that is, giving an AI model so much power has serious cybersecurity implications.
AI agents typically need access to vast amounts of data. Consequently, they are prime targets for cybercriminals, as attackers could focus their efforts on a single application to expose a considerable amount of information. It would have a similar effect to whaling, which led to $12.5 billion in losses in 2021 alone, but it may be easier, as AI models could be more susceptible than experienced professionals.
Agentic AI's autonomy is another concern. While all ML algorithms introduce some risks, conventional use cases require human authorization to do anything with their data. Agents, by contrast, can act without clearance. As a result, errors like AI hallucinations can slip through without anyone noticing.
That lack of human oversight makes existing AI threats like data poisoning all the more dangerous. Attackers can corrupt a model by altering as little as 0.01% of its training dataset, and doing so is possible with minimal investment. That is damaging in any context, but a poisoned agent's faulty conclusions would reach much further than those of a model whose outputs humans review first.
How to improve AI agent cybersecurity
In light of these threats, cybersecurity strategies must adapt before businesses implement agentic AI applications. Here are four critical steps toward that goal.
1. Maximize visibility
The first step is to ensure security and business teams have full visibility into an AI agent's workflow. Every task the model completes, every device or app it connects to and all the data it can access should be clear. Revealing these factors makes it easier to spot potential vulnerabilities.
Automated network mapping tools may be necessary here. Only 23% of IT leaders say they have full visibility into their cloud environments, and 61% use multiple identification tools, which leads to duplicate records. Admins must address these issues first to gain the necessary insight into what their AI agents can access.
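To make that inventory concrete, here is a minimal sketch of flattening everything an agent can touch into a single reviewable list. The configuration structure and all names in it are hypothetical assumptions for illustration, not drawn from any specific framework or mapping tool.

```python
# A minimal sketch of an access inventory for an AI agent.
# "agent_config" and its fields are hypothetical; a real deployment
# would pull this data from its orchestration framework or a
# network mapping tool rather than hard-coding it.

agent_config = {
    "name": "support-chatbot",
    "tools": ["crm_lookup", "ticket_create", "kb_search"],
    "data_sources": ["customers_db", "kb_articles", "ticket_history"],
    "endpoints": ["https://crm.internal/api", "https://kb.internal/api"],
}

def build_access_inventory(config: dict) -> list[dict]:
    """Flatten everything an agent can touch into one reviewable list."""
    inventory = []
    for category in ("tools", "data_sources", "endpoints"):
        for resource in config.get(category, []):
            inventory.append({
                "agent": config["name"],
                "category": category,
                "resource": resource,
            })
    return inventory

# Print one row per resource so security and business teams can
# review exactly what the agent is able to reach.
for row in build_access_inventory(agent_config):
    print(row)
```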
2. Use the principle of least privilege
Once it is clear what the agent can interact with, businesses must restrict those privileges. The principle of least privilege, which holds that any entity can only see and use what it absolutely needs, is essential.
Any database or application an AI agent can interact with is a potential risk. Consequently, organizations can minimize relevant attack surfaces and prevent lateral movement by restricting these permissions as much as possible. Anything that does not directly contribute to an AI's value-driving purpose should be off-limits.
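As an illustration, a deny-by-default permission gate could sit in front of every tool call an agent makes. The sketch below is a hypothetical example under that assumption; the agent and action names are invented, not part of any real product.

```python
# A minimal sketch of least-privilege enforcement for agent tool
# calls. Everything not explicitly granted is denied by default.
# Agent and action names are illustrative assumptions.

ALLOWED_ACTIONS = {
    "support-chatbot": {"kb_search", "ticket_create"},  # no direct CRM writes
}

class PermissionDenied(Exception):
    pass

def authorize(agent: str, action: str) -> None:
    """Deny by default: raise unless the action was explicitly granted."""
    if action not in ALLOWED_ACTIONS.get(agent, set()):
        raise PermissionDenied(f"{agent} may not perform {action}")

# Usage: gate every tool invocation before it runs.
authorize("support-chatbot", "kb_search")       # permitted
try:
    authorize("support-chatbot", "crm_delete")  # blocked
except PermissionDenied as err:
    print(err)
```

The deny-by-default design matters here: adding a new tool to the agent grants nothing until someone deliberately allowlists it, which keeps the attack surface from growing silently.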
3. Limit sensitive information
Similarly, network administrators can prevent privacy breaches by removing sensitive details from the datasets their agentic AI can access. Many AI agents' work naturally involves private data. More than 50% of generative AI spending goes toward chatbots, which may gather information on customers. However, not all of those details are necessary.
While an agent should learn from past customer interactions, it does not need to store names, addresses or payment details. Programming the system to scrub unnecessary personally identifiable information from AI-accessible data minimizes the damage in the event of a breach.
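A hedged sketch of that scrubbing step follows. The regular expressions below only illustrate the idea and would miss many PII formats; a production system would rely on a dedicated PII-detection service rather than a few hand-written patterns.

```python
import re

# A minimal sketch of scrubbing obvious PII from text before an
# agent stores or learns from it. These patterns are illustrative
# assumptions, not a complete PII detector.

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d[\d\s-]{7,}\d\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace detected PII with placeholder tokens like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub_pii("Reach me at jane@example.com or 555-123-4567."))
# -> "Reach me at [EMAIL] or [PHONE]."
```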
4. Pay attention to suspicious behavior
Companies also have to keep watch over their AI agents once deployed. Monitoring activity for anything unusual helps catch suspicious behavior that slips past other defenses.
Real-time responsiveness is crucial in this monitoring, as agentic AI's risks mean any breach could have dramatic consequences. Fortunately, automated detection and response solutions are highly effective, saving an average of $2.22 million in data breach costs. Organizations can slowly expand their AI agents after a successful trial, but they must continue to monitor all applications.
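For illustration, even a simple baseline-deviation check can flag anomalous agent activity in real time. The window size and three-standard-deviation threshold below are assumed values for the sketch, not recommendations from the article, and a real deployment would use a proper detection and response platform.

```python
from collections import deque
from statistics import mean, stdev

# A minimal sketch of real-time anomaly flagging on an agent's
# data access volume. Thresholds are illustrative assumptions.

class AccessMonitor:
    def __init__(self, window: int = 50):
        # Rolling window of recent records-per-request counts.
        self.history = deque(maxlen=window)

    def record(self, records_accessed: int) -> bool:
        """Return True if this access looks anomalous vs. the baseline."""
        anomalous = False
        if len(self.history) >= 10:  # need some history before judging
            baseline, spread = mean(self.history), stdev(self.history)
            anomalous = records_accessed > baseline + 3 * spread
        self.history.append(records_accessed)
        return anomalous

monitor = AccessMonitor()
for count in [12, 9, 14, 11, 10, 13, 8, 12, 11, 10, 950]:
    if monitor.record(count):
        print(f"Alert: agent touched {count} records, far above baseline")
```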
As AI advances, cybersecurity strategies must, too
AI's rapid progress holds significant promise for modern businesses, but it also raises the cybersecurity stakes.
Agentic AI will take ML to new heights, but it carries related vulnerabilities with it. That does not make the technology too unsafe to invest in, but it does warrant extra caution. Businesses must follow these essential security steps as they roll out new AI applications.
Zac Amos is Features Editor at ReHack.