The European Parliament approves the AI Act: what will change for our companies?

The European Parliament passed the AI Act today, moving the regulation into its final phase. Final approval from the EU is expected at the end of the year (according to sources close to the European institutions), and the regulation will enter into force in 2024.
As already known, the rules follow a risk-based approach and establish obligations for providers and for those who deploy AI systems, according to the level of risk the AI can generate. AI systems that present an unacceptable level of risk to people’s safety, such as those used for social scoring (classifying people based on their social behavior or personal characteristics), will therefore be banned outright.
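To make the tiering concrete, here is a minimal sketch in Python of the four risk levels the regulation distinguishes (unacceptable, high, limited, minimal). The names and string values are entirely hypothetical: the AI Act defines these tiers only in legal prose, not in code.

```python
from enum import Enum

class AIRiskLevel(Enum):
    """Hypothetical encoding of the AI Act's risk tiers."""
    UNACCEPTABLE = "banned"          # e.g. social scoring: prohibited outright
    HIGH = "conformity assessment"   # strict obligations before market entry
    LIMITED = "transparency"         # e.g. must disclose that AI is being used
    MINIMAL = "no new obligations"   # everything else
```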

What’s new in today’s text

Today’s novelty (compared with the previously known text) is the total ban on “real-time” remote biometric identification systems in publicly accessible spaces: the amendments that provided for exceptions were rejected. “Subsequent” (ex post) remote biometric identification systems remain possible, but only for the prosecution of serious crimes and only with prior judicial authorisation.

The race for compliance begins

In short, the text has now definitively cleared its first EU institution in plenary and can be considered fairly consolidated; the areas still open to change are few and concern almost exclusively facial recognition, where there is intense debate over possible exceptions to the ban. It is fair to say that the race for compliance by companies that build and use AI has already started. “From entry into force there is a two-year grace period, but companies will need all of it to build their compliance processes”, explains Massimo Pellegrino, partner at Intellera, a specialized consultancy firm.

Generative AI alone (the technology behind ChatGPT and similar systems) could bring 4.4 trillion dollars in value to the global economy and save workers 60-70 percent of their time, according to a report released today by McKinsey: a value that companies are preparing to seize. According to the latest data from the Milan Polytechnic, 61 percent of large Italian companies have launched AI projects and 34 percent are already adopting them, while interest and awareness among SMEs are growing rapidly. But adopting AI without the corresponding compliance work is a big risk for companies: it means exposure to liability for damages caused by AI and to privacy penalties, among other things. The EU regulation explains how to adopt AI well from a compliance point of view.

How to adopt AI well, with the AI Act

«Europe mainly regulates the companies that produce AI and, in cascade, those that use it; the latter must make the best use of the information available to them showing that the AI systems developed by the manufacturers comply with European rules», explains Stefano da Empoli, president of the I-Com research institute. Da Empoli and Pellegrino agree that the first step is a catalog of the AI apps a company uses or wants to use, so as to understand the level of risk attached to each according to the model set out in the AI Act. «The main impact on companies lies in the mechanism envisaged by the risk-based approach, which is the cornerstone of the measure», explains da Empoli. «Companies must first check whether the AI component of the products they want to bring to market is contained in the much larger list of high-risk applications», he adds. «If this is the case, a conformity assessment must be carried out, either internally or by a certified third party. Given the scarcity of qualified expertise and the lack of standardized procedures, it is easy to foresee that, at least initially, this second option will be used more rarely», da Empoli continues.
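As a rough illustration of the cataloguing step da Empoli and Pellegrino describe, a compliance team might inventory its AI apps and triage each one against the high-risk list. The sketch below is a minimal Python example: the app names, the classify helper and the use-case strings are hypothetical, and the real high-risk list is Annex III of the regulation, not this three-item set.

```python
from dataclasses import dataclass

# Annex III of the AI Act lists the high-risk use cases; this subset is
# illustrative only, not the legal text.
HIGH_RISK_USES = {"credit scoring", "cv screening", "critical infrastructure"}

@dataclass
class AIApp:
    name: str
    use_case: str
    vendor: str

def classify(app: AIApp) -> str:
    """First-pass triage: is the app's use case on the high-risk list?"""
    if app.use_case in HIGH_RISK_USES:
        return "high risk: conformity assessment required"
    return "lower risk: at most transparency obligations"

catalog = [
    AIApp("LoanBot", "credit scoring", "in-house"),
    AIApp("ChatWidget", "customer support", "third party"),
]
for app in catalog:
    print(f"{app.name}: {classify(app)}")
```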

Products vetted through the assessments required by the AI Act will appear on the market with the CE conformity mark (physical or virtual). The manufacturer’s commitment does not end with the conformity assessment (or self-assessment): it must be followed by a robust post-market monitoring system to identify any critical issues that were not foreseen or were initially underestimated. If the manufacturer makes a substantial change to a high-risk AI application (beyond the simple continuous learning already accounted for), the application will need to be re-evaluated for compliance. Companies will have to keep all the documentation produced for the conformity assessments and make it easily accessible to the national authorities in charge of supervision.

“The advice is to apply governance to all AI apps, not just the high-risk ones, to get better output and better performance,” says Pellegrino. Companies must adopt a new risk management process for the development and adoption of AI apps, and a data governance process for data management, “above all to satisfy the non-discrimination obligation envisaged by the regulation. A structured AI governance process is also needed: modify the company’s internal compliance process and adopt technological solutions to meet the requirements of algorithmic transparency, interoperability, cybersecurity and non-discrimination”, adds Pellegrino. There are also pitfalls in the adoption phase, “which must be handled while maintaining compliance. If a company retrains the models, perhaps to customize them with its own data, it will have to reproduce the documentation explaining how that process was carried out,” he continues.

All this, as noted, applies to high-risk AI apps. According to Pellegrino, two high-risk AI applications in particular will be widely adopted by companies: those for calculating credit/banking risk and those for analyzing résumés for recruitment purposes. For low-risk apps (all those that do not directly impact people’s lives), at most transparency obligations apply, such as warning customers that artificial intelligence solutions are being used. On the privacy side, it is also necessary “to draft increasingly transparent notices, also in relation to the legal basis, and to implement suitable procedures allowing data subjects to exercise their rights”, explains the lawyer Anna Cataleta, of P4i.
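As a closing illustration of the obligations above (re-assessment after substantial changes, documentation kept available for the authorities), here is a minimal Python sketch of how a team might log model changes and flag the ones that re-trigger a conformity assessment. The ComplianceRecord class and the is_substantial flag are hypothetical conveniences; the regulation’s actual test for what counts as a substantial modification is a legal one, not a boolean.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelChange:
    description: str
    is_substantial: bool  # beyond the continuous learning foreseen in the docs
    when: date

@dataclass
class ComplianceRecord:
    app_name: str
    last_assessment: date
    changes: list[ModelChange] = field(default_factory=list)

    def log_change(self, change: ModelChange) -> None:
        """Record every change; substantial ones re-trigger assessment."""
        self.changes.append(change)
        if change.is_substantial:
            # Keep this trail easily accessible to the national authorities.
            print(f"{self.app_name}: re-run the conformity assessment "
                  f"({change.when}: {change.description})")

record = ComplianceRecord("LoanBot", last_assessment=date(2024, 1, 15))
record.log_change(
    ModelChange("retrained on in-house customer data", True, date(2024, 6, 1))
)
```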
