
AI Act: Europe and the Regulation of Artificial Intelligence

ARTICLE | 17 May 2024

An article by Francesco Berlucchi

The newspaper La Repubblica recently called her "the most influential person in Europe in the field of artificial intelligence." She is a woman, she is Italian, and she is an alumna of the Università Cattolica, with a degree in Political Science from Milan. Lucilla Sioli, Director for "Artificial Intelligence and Digital Industry" within the Directorate-General for Communications Networks, Content and Technology (DG Connect) of the European Commission, led the work on the AI Act under the political guidance of Commissioner Thierry Breton.

The AI Act is the world's first set of rules on artificial intelligence and its use. "On one hand, we are talking about an extremely useful technology, considering its potential in health, transport, and manufacturing sectors," explains Sioli during the third event of the series "Humane Intelligence: Transforming Artificial Intelligence into Positive Technology," promoted by the Humane Technology Lab (HTLab), the Laboratory of the Università Cattolica that explores the relationship between human experience and technology.

"On the other hand, this technology is a black box, basing its behavior on probabilistic calculations, making it difficult to predict its outputs," continues Sioli. "Moreover, artificial intelligence uses data derived from our human experience, so the risk of violating fundamental human rights is real. It is therefore necessary to ensure that the technology is designed and trained in a way that minimizes these types of risks." In the event "AI Act: The European Approach to AI Regulation," Sioli explains that "writing this regulatory framework has been very complicated" because "there are tensions between defending our rights and having access to innovative technologies." These are the technologies that Europe must "develop and utilize," says Sioli, to avoid dependency on technologies from other parts of the world.

The example, of course, is ChatGPT. "Let's think about generative AI," she says. "It was created in the United States based on data often derived from that context. If we Europeans are not able to develop similar models, we will depend on those provided by others. This brings risks, both in terms of data and cultural implications." Daniele Bellasio, deputy director of Il Sole 24 Ore, points out that "artificial intelligence is based on eight language models, six of which are American. If we only talk about generative artificial intelligence, investments in 2024 are estimated at around 37 billion dollars, and almost a third of this market is American."

One of the risks is that Europe's leadership in terms of rights protection might drive investments toward "less regulated markets, such as China but also the United States," comments Bellasio. "The hope is that European governments find ways to invest and support companies in this sector so that Europe can complement its regulatory leadership with leadership in research and development." Sioli explains that for this reason, Europe co-finances a network of public supercomputers with member states, such as Leonardo in Bologna, to support the training of large AI models by European startups and the scientific community of the Old Continent.

"The etymology of cyberspace comes from the Greek κυβερνάω, 'to steer the helm.' So, who is at the helm, and where is the ship headed?" asks Gabriele Della Morte, professor of International Law and expert in AI Law. "Nathan Roscoe Pound, dean of Harvard Law School, wrote a striking sentence in his 1923 essay 'Interpretations of Legal History': 'Law must be stable and yet it cannot stand still.'" Della Morte refers to the tension between "the movement that the legal system must make to adapt to the changing needs of society" and its "need for rigidity," which ensures "legal certainty." This is "an acrobatic game to which the jurist has always been subjected, a difficult acrobatic test that becomes exacerbated when the law confronts technology."

The solution, as often in the world of law, lies in balancing interests. "We must look favorably upon innovation," explains Lucilla Sioli. "We have based the AI Act on the concept of risk. The rules are proportional to the level of risk that artificial intelligence can generate. And this level depends on the context in which the technology is used." The legislative approach is familiar to European citizens because it is already used in relation to the safety of products marketed within the European Union. We are talking about the CE mark, which certifies that the product meets the safety, health, and environmental protection requirements set by the legislator.

"There are cases where the European Union does not tolerate the use of artificial intelligence," continues Sioli. "One example is social scoring. Then there are high-risk uses, such as applications for hiring people or medical devices. These applications must be certified before being placed on the European market." And again, there are transparency-related risks. "Think of a chatbot: it should make it very clear to the user that they are interacting with a machine; or, in the case of a deepfake, it should be labeled to avoid the spread of misinformation." The thought inevitably turns to its use during election campaigns, as recalled during the first event of this series of meetings.

"Only a few months passed between the arrival of tools like chatbots accessible to almost everyone and the approval of this European regulation," comments Bellasio. "This primacy is not because the structure directed by Lucilla Sioli wrote a law on such a complex matter in little time; it is the result of years of work by this European structure, which studied how to regulate artificial intelligence while the sector's major multinational companies were developing it. Then there is certainly the advantage of having existing models to draw on in Europe for regulating this sector as well, such as the CE mark and the rules on privacy protection."

Certainly, we are facing a revolution. "Revolution is the recurring word in scientific literature regarding artificial intelligence," says Professor Della Morte. "There are two orientations. Some think that we already have all the tools to adapt the old legal categories to new problems. I think, on the contrary, that digital transformation is genuinely revolutionary because it impacts the categories through which we interpret the world and legal phenomenology, which are space and time." According to Della Morte, by trying to "anthropomorphize artificial intelligence," we mask the problems, but two major issues remain: "how to regulate these innovations and what to regulate concretely."

"The choice adopted by the AI Act is very wise," comments Antonella Sciarrone Alibrandi, judge of the Constitutional Court, former Undersecretary of the Dicastery for Culture and Education of the Holy See, who spoke during the meeting. "Beyond the European primacy, it is virtuous precisely for the clear distinction between AI systems that are potentially harmful to fundamental rights and, on the other hand, all those applications that concern sectors such as banking and finance, where there is no impact on fundamental rights."

This was also evident when analyzing many uses of technology in the corporate environment during the second meeting of this series. Uses that are already very frequent. "As director of the Humane Technology Lab, I am very happy with the success of this trio of events," concludes Giuseppe Riva. "I think artificial intelligence is a crucial issue. Our goal is to continue exploring the most advanced topics in this sector. That's why a whole new series of events will start after the summer."

The interview is published on Secondo Tempo.
