How Europe wants to regulate artificial intelligence

On Wednesday 21 April, the European Commission presented a comprehensive proposal to regulate the use of artificial intelligence (AI) systems, with the aim of defining permitted and prohibited uses in order to protect the privacy and other rights of European citizens. The proposal was widely anticipated and is considered the most ambitious attempt yet to regulate a rapidly expanding sector whose boundaries are still blurred. To enter into force, the new regulation will have to be discussed and voted on by the European Parliament and the member states, a process that will take several years to complete.

The Commission's initiative covers several areas and applications of AI, from corporate hiring systems to the algorithms that make self-driving cars work, through to facial recognition used by law enforcement. The regulation establishes what is and is not allowed with AI and provides for fines of up to 6 percent of the annual turnover of the companies involved, with enforcement mechanisms similar to those of the GDPR, the privacy regulation that has been in force in the European Union for some years.

Facial recognition
One of the central themes, already widely discussed before the proposal was officially presented, concerns systems for automatically recognizing individuals in security camera footage. The proposal provides for a general prohibition on the “real-time” use of these systems in public spaces, even for police activities. However, the regulation provides for numerous exceptions, including the possibility of using facial recognition in police searches for suspects in criminal investigations.

In the latter case, authorization from the judicial authorities is required, a requirement that several critics argue will not act as a deterrent against excessive use of facial recognition. It is rare for such permits to be denied, especially in emergencies and when one or more persons suspected of a crime must be found.

As had been reported in recent months, when drafts of the text emerged, the new regulation seems to take a rather ambiguous position towards technologies that could violate citizens' right to privacy. While on the one hand it prohibits their use, citing the risks, on the other it provides for various exceptions that will in practice allow police forces to carry out mass surveillance activities, without many guarantees on data processing.

The ban also refers specifically to “real-time” surveillance, which seems to indicate that searches with facial recognition technologies on images already acquired will always be allowed. If that were the case, individual member states could use technological solutions that have been available for some time and are increasingly used in the United States and China.

Other prohibitions
The regulation provides for other prohibitions on the use of AI: for example, it will not be allowed to use technologies to calculate a “social score” for each individual, a practice increasingly tested in China, where citizens are awarded points based on their behavior, giving them access to particular services that are denied to those with low scores.

It will also be forbidden to develop algorithms that can cause physical or psychological harm to individuals, or that can manipulate their behavior, even subliminally.

Risk scale
AI systems are in any case already used in numerous fields and bring innumerable benefits, on which the Commission is not willing to intervene. The regulation identifies different levels of risk for these technologies, indicating what companies must do in order to use them without incurring fines and other penalties.

The lowest risk level includes systems such as filters against unwanted emails or phone calls, for which no particular problems are identified, apart from future developments to be evaluated over time. The next level includes “limited risk” technologies, such as automated answering systems that provide online assistance, for example to book a flight or a medical appointment; these will be subject to closer scrutiny because they still involve the use of personal data.

The third level, “high risk”, includes technologies that have a direct and tangible impact on the population and that are beyond the control of individuals. It is a vast and constantly growing area, ranging from algorithms that determine who can access a loan to the monitoring and forecasting of digital transactions, as well as systems that manage the autonomous driving of vehicles or medical devices.

Controls
For these systems, the Commission has established different levels of controls to guarantee the privacy and other rights of the population. The regulation requires that AI systems be trained on data free of bias (for example regarding gender or geographic origin), that they have control mechanisms easily accessible by humans, and that their functioning be described in a detailed and clear way, both to the competent authorities and in a form that users can understand.

The proposal also provides for the creation of a sort of European register of high-risk artificial intelligence systems, accessible to all and containing information on the characteristics of the various AI systems developed and used by private and public companies and institutions. The idea was welcomed by observers, because it should encourage transparency.

Third way
Presenting the project, the European Commissioner for the Digital Agenda, Margrethe Vestager, explained that the European initiative seeks to distinguish itself from the approaches taken so far in the United States and China. This European “third way” aims to create legislation that covers all the main aspects of technologies that, for better or worse, will be at the center of developments in the coming decades.

This approach, however, has not been welcomed by the United States, which would have preferred a more coordinated effort, especially to counter rapid Chinese progress in the sector. The US approach has so far involved adopting few rules, leaving the large technology companies responsible for organizing themselves to develop solutions that respect citizens' rights and the laws that protect them. A similar approach has been followed in the field of privacy, marking a sharp difference between the rules in force in the United States and the much stricter ones in the European GDPR.

Overcoming the competition from large US companies such as Google, Amazon, Apple and Facebook will not be easy, however, especially since these companies possess gigantic amounts of data, essential for training artificial intelligence systems (machine and deep learning).

According to critics, the new European regulation on AI could discourage companies from developing their technologies in the European Union, precisely because of its excessive constraints. The Commission, however, plans to invest around one billion euros per year to encourage initiatives and programs related to digitization, financing companies and startups in the sector. The plan aims to attract up to 20 billion euros of investment in the sector by the end of the decade.
