Invisible, autonomous and hackable: the AI agent dilemma that nobody saw coming

This article is part of VentureBeat's special issue, "The Cyber Resilience Playbook: Navigating the New Era of Threats." Read more from this special issue here.

Generative AI raises interesting security questions, and as enterprises move into the agentic world, those security problems multiply.

When AI agents enter workflows, they must be able to access sensitive data and documents to do their job, making them a significant risk for many security-minded enterprises.

"The rise of multi-agent systems will introduce new attack vectors and vulnerabilities that could be exploited if they aren't properly secured from the start," said Nicole Carignan of AI security firm Darktrace. "But the impact and harm of those vulnerabilities could be even greater because of the increasing volume of connection points and interfaces that multi-agent systems have."

Why AI agents represent such a high security risk

AI agents, or autonomous AI that executes actions on behalf of users, have become extremely popular in recent months. Ideally, they can be plugged into tedious workflows and can perform any task, from something as simple as finding information based on internal documents to making recommendations to human employees.

However, they present an interesting problem for enterprise security professionals: agents must gain access to the data that makes them effective, without accidentally opening or sending private information to others. As agents take over more of the tasks human employees used to do, questions of accuracy and accountability come into play, potentially becoming a headache for security and compliance teams.

Chris Betz, CISO of AWS, told VentureBeat that retrieval-augmented generation (RAG) and agentic use cases offer "a fascinating and interesting perspective" on security.

"Organizations need to think about what default sharing looks like in their organization, because an agent will find anything that supports its mission," said Betz. "And if you overshare documents, you need to think about the default sharing policy in your organization."

Security experts then have to ask whether agents should be treated as digital employees or as software. How much access should agents have? How should they be identified?

AI agent vulnerabilities

Gen AI has made many enterprises more attentive to potential vulnerabilities, but agents could open them up to even more problems.

"Attacks that we see today impacting single-agent systems, such as data poisoning, prompt injection or social engineering to influence agent behavior, could all be vulnerabilities within a multi-agent system," said Carignan.

Enterprises must pay attention to what agents are able to access to ensure data security remains strong.

Betz pointed out that many of the security issues surrounding human employee access can extend to agents. Therefore, "it comes down to making sure that people have access to the right things, and only the right things." He added that when it comes to agentic workflows with multiple steps, "each of those stages is an opportunity" for hackers.

Give the agents an identity

One answer could be to give agents specific access identities.

A world in which models reason through problems over the course of days is "a world in which we need to think much more about the identity of the agent, as well as the identity of the human responsible for that agent's request, across our organization," said Jason Clinton, CISO of model provider Anthropic.

Identifying human employees is something enterprises have been doing for a long time. Employees hold specific jobs; they have an email address they sign in with and that IT administrators track; they have physical laptops with accounts that can be locked. They receive individual permission to access certain data.

A variation of this kind of employee access and identification could be deployed for agents.
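
As a minimal sketch, an agent identity could mirror an employee account: a unique ID, a human owner accountable for its requests, a set of data scopes, and the ability to be revoked like a locked account. All names below are illustrative assumptions, not any vendor's API; a real deployment would back this with an IAM service.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """A machine identity modeled on an employee account (illustrative)."""
    agent_id: str                 # unique ID, analogous to an employee email
    owner: str                    # the human accountable for this agent's requests
    scopes: frozenset = field(default_factory=frozenset)  # data it may touch
    revoked: bool = False         # accounts can be locked; agents can be revoked

def authorize(identity: AgentIdentity, resource: str) -> bool:
    """Allow access only if the identity is active and the resource is in scope."""
    return not identity.revoked and resource in identity.scopes

# A hypothetical HR agent owned by a named employee, scoped to HR documents only.
hr_bot = AgentIdentity("hr-bot-01", owner="alice@example.com",
                       scopes=frozenset({"hr/policies", "hr/handbook"}))

print(authorize(hr_bot, "hr/policies"))      # True: in scope
print(authorize(hr_bot, "finance/payroll"))  # False: denied by default
```

The design choice mirrors the article's point: access is deny-by-default, every grant is explicit, and every agent maps back to a responsible human.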

Both Betz and Clinton believe this process can prompt enterprise leaders to rethink how they provide users with information access. It could even lead organizations to overhaul their workflows.

"Using an agentic workflow actually offers you the opportunity to scope down the access for each of those steps along the way to the data it needs as part of the RAG, but only the data it needs," said Betz.

He added that agentic workflows "can help address some of the concerns around oversharing," because companies must consider what data is being accessed to complete each action. Clinton added that in a workflow designed around a specific set of operations, "there's no reason why step one needs access to the same data that step seven needs."
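
A sketch of that per-step scoping idea, under the assumption that each workflow step declares up front which records it needs (all names here are hypothetical): the runner hands each step only its declared slice of the data store, so step one never sees step seven's data.

```python
# Minimal per-step least-privilege sketch for an agentic workflow.
# Illustrative only; a real system would enforce this with an IAM layer.

def run_workflow(steps, data_store):
    """Run each step with visibility limited to its declared resources."""
    results = []
    for name, needed, fn in steps:
        # Filter the store down to just what this step declared it needs.
        visible = {k: v for k, v in data_store.items() if k in needed}
        results.append(fn(visible))
    return results

data_store = {
    "customer_record": {"name": "Ada"},
    "payment_history": [42.0, 17.5],
}

steps = [
    # Step 1 only needs the customer record...
    ("lookup", {"customer_record"}, lambda d: d["customer_record"]["name"]),
    # ...step 2 only needs payment history. Neither sees the other's data.
    ("total", {"payment_history"}, lambda d: sum(d["payment_history"])),
]

print(run_workflow(steps, data_store))  # ['Ada', 59.5]
```

Because each step's view is built from an explicit allowlist, a prompt-injected step can only leak what it was already scoped to see.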

The old-fashioned audit is not enough

Companies can also look for agentic platforms that let them see inside agents' work. For example, Don Schuerman, CTO of workflow automation provider Pega, said his company helps ensure agentic security by telling users what the agent is doing.

"Our platform is already being used to audit the work humans are doing, so we can also audit every step an agent is taking," Schuerman told VentureBeat.

Pega's newest product, AgentX, allows human users to toggle to a screen outlining the steps an agent is carrying out. Users can see where along the workflow timeline the agent is and read its specific actions.
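
That kind of step-by-step visibility rests on recording every agent action as it happens. A minimal sketch of such an audit trail, with all identifiers invented for illustration (this is not Pega's implementation):

```python
import datetime
import json

audit_log = []

def record_step(agent_id, action, resource):
    """Append a timestamped entry for every action an agent takes.
    Illustrative only; production systems would use an append-only store."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "resource": resource,
    }
    audit_log.append(entry)
    return entry

# A hypothetical agent performs two workflow steps, each recorded as it runs.
record_step("hr-bot-01", "read", "hr/policies")
record_step("hr-bot-01", "summarize", "hr/handbook")

# A reviewer can replay exactly what the agent did, step by step.
for e in audit_log:
    print(json.dumps(e))
```

The payoff is the same one Schuerman describes: a human can reconstruct the agent's timeline after the fact instead of trusting a black box.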

Audits, timelines and identification are not perfect solutions to the security problems presented by AI agents. But as enterprises explore agents' potential and begin to deploy them, more targeted answers may emerge as AI experimentation continues.


