
AI Act: Regulated Intelligence

All questions at a glance:

  • Will Europe become a role model worldwide?
  • What exactly is regulated in the AI Act?
  • Why is an AI law needed?
  • What regulations apply to particularly powerful AI?

Will Europe become a role model worldwide?

The EU is hoping for the “Brussels effect,” which could reach even the large American corporations. The calculation: the corporations do not want to forgo their customers in the European Union, and since it would be expensive and time-consuming to build products to different standards, they end up following the rules of the most strictly regulated market – i.e. the European one. A positive example of this effect is the EU General Data Protection Regulation, which Microsoft adheres to worldwide.

How much the AI Act influences AI developers outside the EU, however, will depend on where their systems are used. It will have little impact on local applications tailored to a specific country – such as systems for assessing the creditworthiness of Americans, or AI applications used by British authorities.

The situation is different for consumer products such as image generators or chatbots, which are offered on international platforms. Here, the standards of the AI Act could take hold internationally. The same applies to applications that the EU classifies as “high risk” and for which it sets particularly strict safety requirements.

This could bring contracts and funding to the relevant research. Checking whether AI systems comply with EU rules will require considerable scientific effort in the future – as will developing AI systems whose workings are intelligible to third parties, that make fair decisions, and that are robust against cyberattacks.

And then there are the new transparency requirements: the AI Act, for example, requires companies to disclose their training data. Until now, the big tech companies have been able to expand their market power by keeping their data sets secret – independent developers had little chance of training their systems on anywhere near as much data. The EU wants to put an end to this secrecy. Whether and how the transparency obligation will have a concrete impact, however, will probably only become clear once the AI Act comes into force.

What exactly is regulated in the AI Act?

The core of the law is that AI applications are divided into risk classes. Low-risk programs are barely regulated at all, while special rules apply to high-risk applications.

A high-risk case, for example, would be an AI that helps decide whether a person receives unemployment benefits. If this program gives incorrect advice or disadvantages certain groups over others, the consequences for those affected could be serious. High-risk applications also include systems used to process asylum applications or to help human resources departments screen application documents for suitable candidates.

Companies that develop or use such programs must meet requirements to minimize the risks. For example, they need to ensure that the data used to train the AI adequately represents the people it affects. In addition, humans must be able to monitor and review the AI’s decisions.

Of course, this cannot guarantee that nothing will ever go wrong with such technology. Nevertheless, many experts consider the approach correct: not to regulate artificial intelligence as such, but only certain applications of the technology. After all, even a very simple AI system can cause great damage if used in a critical area.
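
For illustration, the tiered logic can be sketched in a few lines of code. The tier names and the use-case mapping below are simplified illustrations drawn from the examples in this article, not the Act’s legal definitions:

    from enum import Enum

    class RiskTier(Enum):
        """Simplified risk tiers, loosely following the AI Act's approach."""
        UNACCEPTABLE = "prohibited"      # e.g. emotion recognition at the workplace
        HIGH = "strict obligations"      # e.g. benefits decisions, CV screening
        MINIMAL = "barely regulated"     # e.g. a simple spam filter

    # Illustrative mapping: the use case, not the underlying technology,
    # determines which obligations apply.
    HIGH_RISK_USES = {"unemployment_benefits", "asylum_processing", "cv_screening"}

    def tier_for(use_case: str) -> RiskTier:
        if use_case == "workplace_emotion_recognition":
            return RiskTier.UNACCEPTABLE
        if use_case in HIGH_RISK_USES:
            return RiskTier.HIGH
        return RiskTier.MINIMAL

    print(tier_for("cv_screening"))  # RiskTier.HIGH

The point the sketch makes is the one above: the same underlying technology can land in different tiers depending on where it is deployed.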

Why is an AI law needed?

An AI Act is better than no AI Act – that, somewhat pointedly, is the argument of the proponents of the new EU law regulating artificial intelligence (AI). Last week, business associations, researchers and civil society organizations urged that this AI law, the AI Act, not be allowed to fail. A “missing legal framework” would be “risky for fundamental rights protection and innovation in Europe,” one of several open letters warned. The concern proved unfounded: last Friday, the Council of the European Union adopted the AI Act. The European Parliament still has to approve it, but an important step has been taken toward the world’s first comprehensive AI law, which had been wrangled over for years.

The law has been in the works since 2021. But it wasn’t until ChatGPT triggered the global AI hype in November 2022 that many people realized that artificial intelligence – similar to the Internet – could change every area of life in the future. This spurred the desire for rules to govern the use of the technology.

How far these rules should go, however, remained controversial until recently. In Germany’s federal government, Digital Minister Volker Wissing in particular voiced concern that too much regulation could prevent European companies from catching up with their American competitors. Wissing was unable to prevail in the cabinet, however; like all other member states, Germany ultimately agreed to the AI Act.

The law primarily concerns rules for companies that develop or use AI systems. Not everyone is happy with the obligations coming their way; nevertheless, even some companies spoke out in favor of the law in order to have legal certainty.

Among other things, the AI Act is intended to ensure that automated decisions affecting people are fair and understandable. Some AI applications are simply banned outright. For example, employers are not allowed to install systems that automatically detect how employees are feeling based on their facial expressions or tone of voice. That such rules make sense should be clear to many people.

What regulations apply to particularly powerful AI?

Until recently, there was debate about how to regulate the technology behind large language models such as ChatGPT. Because it has so many possible applications, it is difficult to assign to a single risk class. These so-called foundation models can not only power chatbots but also be built into the software of law firms or human resources departments, for example, to make critical decisions.

If such a system makes a mistake, the cause could lie in incorrect application – or in the underlying foundation model. To make this traceable, the providers of such models must give users the relevant information. In other words: large AI developers such as OpenAI or Google must disclose enough technical detail to German medium-sized companies for them to develop “a good understanding of the possibilities and limitations” of the underlying AI systems.

Even stricter rules apply to providers of particularly large and technically sophisticated AI systems: among other things, they must take cybersecurity measures. A number of AI researchers have vehemently called for such rules because they believe the most powerful models pose the greatest risks.

Whether these strict rules apply to a system depends, among other things, on the computing power used to train the AI. The threshold is set so high that hardly any system currently on the market exceeds it.
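
As a rough sketch of what such a threshold means in practice: the figure widely reported for the final text is 10^25 floating-point operations (FLOPs) of training compute. Using the common back-of-the-envelope estimate of about 6 FLOPs per model parameter per training token – a rule of thumb from the research literature, not part of the law – one can compare a hypothetical model against that limit:

    # Back-of-the-envelope sketch. The 1e25-FLOP figure is the widely
    # reported AI Act threshold; the 6 * parameters * tokens estimate
    # is a common approximation, not a legal definition.
    AI_ACT_FLOP_THRESHOLD = 1e25

    def estimated_training_flops(parameters: float, tokens: float) -> float:
        """Rough estimate: ~6 FLOPs per parameter per training token."""
        return 6 * parameters * tokens

    # Hypothetical model roughly the size of GPT-3: 175 billion parameters
    # trained on 300 billion tokens -> about 3.2e23 FLOPs, well below 1e25.
    flops = estimated_training_flops(175e9, 300e9)
    print(f"{flops:.1e} FLOPs, systemic-risk rules apply: {flops >= AI_ACT_FLOP_THRESHOLD}")

By this estimate, a GPT-3-scale model falls short of the threshold by more than an order of magnitude, which illustrates why hardly any current system is caught by the strictest tier.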

During the negotiations, the German government, among others, argued that no special regulations should apply to foundation models. One concern was that the rules from Brussels could put European companies at a massive disadvantage compared to competitors from the USA or China. Both the French AI company Mistral AI and the German start-up Aleph Alpha – two of Europe’s most promising AI hopefuls, which develop foundation models themselves – pointed this out in advance. They worry that the AI Act could slow down their work before it has really gotten going. In that case, Europeans would once again be leaving AI development (and future dominance in this field) to large American corporations.
