AI Act: Securing a fair and pro-innovative AI regulation

New technologies are undoubtedly shaping our daily lives - in business, at home, at school and in the public sector. It is therefore necessary to enact regulations that protect consumers, market and financial stability, fair competition, labor standards, public safety and health, the environment and other parts of society. At the same time, it is important to strike a balance, avoid overregulation and adopt rules that support innovation and further development.

SAPIE has long been closely watching evolving digital policies and regulations. Currently, the European Commission, the European Parliament and the Council of the EU are negotiating the regulation on AI, known as the AI Act. The AI Act is a comprehensive regulatory framework for AI proposed by the European Commission in April 2021. Since then, dozens of technical meetings and negotiations have taken place on how to regulate the use of AI across sectors. SAPIE has been monitoring the process and coordinated a joint statement of EU associations, NTAs, startups and innovators calling for the proposed AI regulation to become more balanced and proportionate, so that startups and SMEs can continue to innovate and develop their business across the EU.

As we draw closer to 6 December, when the whole EU will be watching to see whether a political agreement on the AI Act can be reached, we initiated an updated joint letter co-signed by 22 EU associations and NTAs to advocate for collaborative approaches that ensure the most advanced developments are reflected and broadly accessible.

In addition to these recent efforts, we would like to touch upon further topics that will shape how AI technologies are developed, deployed and used within a given jurisdiction. The AI Act may also set standards for transparency, accountability and fairness in AI systems and define the responsibilities of developers and users of AI technologies.

A risk-based approach to foundation models

The term foundation model refers to AI models that can transfer what they have learned in one context to another using machine learning techniques. Foundation models are not limited to advanced AI systems; they also power creative applications, such as building apps or websites with little or no technical expertise. As other sources note, the amount of data these models require is considerably greater than what a person needs to transfer understanding from one task to another. Alongside the huge potential benefits of foundation models, it is essential to correctly understand not only those benefits but also the harms they can cause. In this regard, a risk-based approach seems like the right solution. We also believe that the recently proposed tiered model of regulation would not be suitable for addressing current developments and needs on the AIA front: it would create significant legal uncertainty and raise doubts about the proportionality and effectiveness of the proposed measures with regard to the objectives of the AI Act.

Biometric categorisation

Yet another important discussion concerns the regulation of biometric and sensitive data and how to strike the right balance within the AIA. We share many of the widely discussed concerns over the misuse of biometric and sensitive data. At the same time, biometric categorisation enables many beneficial uses that can coexist with effective protection of fundamental rights. We suggest that, since fundamental rights protection is already addressed in specific regulations such as the GDPR, positive use cases of biometric categorisation and emotion recognition should be allowed within dedicated frameworks and under specific rules. This would mean, for example, that a blanket ban on biometric categorisation or emotion recognition tools would not be necessary. Socially beneficial applications could be implemented, while protecting privacy rights, if they are properly addressed in the regulation.

Copyright

Generally speaking, the AIA is a horizontal piece of legislation covering a wide range of topics. Aspects such as copyright protection and rights holders' relationship to new technologies, however, are already addressed in dedicated legislation, such as the European Copyright Directive, which we believe could serve as a solid framework even for new technologies such as Large Language Models (LLMs) that may use materials found on the internet.

As an additional activity supporting our work on this topic, we are sharing insights from the CEE business community and innovators to underline the need for predictable and pro-innovation regulatory frameworks. The video campaign is available on SAPIE's social media channels - LinkedIn and Facebook - and via the hashtag #AI for Innovation.
