AI systems with “unacceptable risk” are now prohibited in the EU

As of Sunday in the European Union, regulators in the bloc can ban the use of AI systems they determine pose an “unacceptable risk” or harm.

February 2 is the first compliance deadline for the EU AI Act, the comprehensive AI regulatory framework that the European Parliament finally approved last March after years of development. The act officially went into force August 1; what follows now is the first of the compliance deadlines.

The specifics are laid out in Article 5, but broadly, the law is designed to cover a wide range of use cases where AI might appear and interact with individuals, from consumer applications to physical environments.

Under the bloc’s approach, there are four broad risk levels: (1) minimal risk (e.g. email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have light-touch regulatory oversight; (3) high risk – AI for healthcare recommendations is one example – will face heavy regulatory oversight; and (4) unacceptable risk applications – the focus of this month’s compliance requirements – will be prohibited entirely.

Some of the unacceptable activities include:

  • AI used for social scoring (e.g. building risk profiles based on a person’s behavior).
  • AI that manipulates a person’s decisions subliminally or deceptively.
  • AI that exploits vulnerabilities like age, disability, or socioeconomic status.
  • AI that attempts to predict people committing crimes based on their appearance.
  • AI that uses biometrics to infer a person’s characteristics, like their sexual orientation.
  • AI that collects “real-time” biometric data in public places for the purposes of law enforcement.
  • AI that tries to infer people’s emotions at work or school.
  • AI that creates – or expands – facial recognition databases by scraping images online or from security cameras.

Companies found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million) or 7% of their annual revenue from the prior fiscal year, whichever is greater.

The fines won’t kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch.

“Organizations are expected to be fully compliant by February 2, but … the next big deadline that companies need to be aware of is in August,” Sumroy said. “By then, we’ll know who the competent authorities are, and the fines and enforcement provisions will take effect.”

Preliminary pledges

The February 2 deadline is in some ways a formality.

Last September, over 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the pact, signatories – including Amazon, Google, and OpenAI – committed to identifying AI systems likely to be categorized as high-risk under the AI Act.

Some tech giants, notably Meta and Apple, skipped the pact. French AI startup Mistral, one of the AI Act’s harshest critics, also opted not to sign.

That’s not to suggest that Apple, Meta, Mistral, or others that didn’t agree to the pact won’t meet their obligations – including the ban on unacceptably risky systems. Sumroy points out that, given the nature of the prohibited use cases, most companies aren’t engaged in those practices anyway.

“For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time – and, crucially, whether they will provide organizations with clarity on compliance,” Sumroy said. “However, the working groups are, so far, meeting their deadlines on the code of conduct for … developers.”

Possible exceptions

There are exceptions to several of the AI Act’s prohibitions.

For example, the act permits law enforcement to use certain systems that collect biometrics in public places if those systems help carry out a targeted search – say, for an abduction victim – or help prevent a specific, substantial, and imminent threat to life. This exemption requires authorization from the appropriate governing body, and the act stresses that law enforcement can’t make a decision that “produces an adverse legal effect” on a person based solely on these systems’ outputs.

The act also carves out exceptions for systems that infer emotions in workplaces and schools where there’s a “medical or safety” justification, such as systems designed for therapeutic use.

The European Commission, the executive branch of the EU, said it would release additional guidelines in “early 2025,” following a consultation with stakeholders in November. However, those guidelines have yet to be published.

Sumroy said it’s also unclear how other laws on the books might interact with the AI Act’s prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches.

“It’s important for organizations to remember that AI regulation doesn’t exist in isolation,” Sumroy said. “Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges – particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself.”


