AI Regulations for Tech Startups

On Tuesday, May 30, 2023, SAPIE, the League for Digital Boost, in cooperation with Google, held a panel discussion on AI Regulations for Tech Startups.

Read the key outcomes of the discussion in the article below.

We’ve been witnessing a regulatory storm in the EU over the last weeks, months and years, especially in the areas of digital technology and innovation. At SAPIE, we have been following several of these legislative pieces, such as the DSA and the DMA.

It is important to engage startups, scale-ups and SMEs in the discussion of these issues with policymakers and to take a bottom-up approach. After the session, we will prepare a joint position paper addressed to EU and national policymakers to highlight the concerns and positions of startups and SMEs.

What is the current outlook for AI regulation in Europe? 

  • EU initiative on AI in spring 2018

  • 10 April 2018: 25 European countries signed a Declaration of cooperation on AI (Croatia joined on 16 July 2018). The Declaration is available in the languages of all EU member states, so people can read it in their own language. 

  • The Commission selected 52 experts for the High-Level Expert Group on AI (AI HLEG), giving them a mandate to deliver guidance on trustworthy AI:

  • White paper: February 2020 (ecosystem of trust and ecosystem of excellence)

  • Draft AI Act (Regulation): 21 April 2021, with entry into force expected in December 2023

It is crucial to raise awareness of AI: its potential and benefits, but also its threats and risks. Where there is a human, there is a risk as well. 

Stakeholders from all parts of society should be involved - public, private and academic; policymakers, professionals, mothers and fathers - because this is a huge shift that is already underway, and all voices need to be heard.

There are seven requirements AI must meet to be considered trustworthy, and we should be ready to implement them from day one. We are now at the final stage before the AI Act is adopted - perhaps by the end of this year, perhaps in spring next year - but either way it is coming and we will have to implement it.

AI is such a pervasive technology that almost everybody has something to do with it, which makes it very hard to give founders, startups or scale-ups straightforward metrics or guidelines on how to implement compliance in their daily business. The key challenge is to ensure that the burden of the compliance and regulatory framework does not kill the innovation process, and that Europe remains a technology leader in its internal market. 

Perspectives from big tech companies 

AI will be a revolutionary field that changes how we work and live and how companies deploy tools in their daily operations. Big companies are trying to help people - citizens and businesses alike - get easy access to technologies through their products and services. The real challenge is that people do not understand the practical implications of these systems in their applications (e.g. different use cases across different types of AI). Startups and SMEs probably do not have legal teams following these regulations closely and assessing what the implications mean in practice. Compliance will be much harder for SMEs than for big companies. 

There is a real possibility that General Purpose AI and Generative AI systems will be regulated as high-risk from the very beginning. That would be problematic, because it goes against the main idea of reserving the high-risk category for use cases that pose a real threat to the protection of human rights. Depending on the use case, the same system can be used without risk, or it can be misused to produce inaccurate information. We therefore need to have these discussions now and be mindful of the implications, because this is the moment when we have a chance to be involved. We can raise concerns with national stakeholders and decision makers and explore how innovation can be encouraged rather than stopped. 

We need to see how we can work together, enhance the capabilities we have, and identify the needs to be addressed in designing this field. One idea is the regulatory sandboxes the EU has proposed, which give innovators space to test their products before launching them and to see their exact impact on the market. But it is not easy for a small startup to set up such a sandbox. These discussions are timely and we need to be aligned. 

Perspectives from startups/AI consultancies 

There is great interest in AI; it is becoming a strategic space, and nobody wants to be left behind. Consultancies receive many kinds of practical requests from startups and SMEs, and one recurring question is how to navigate the regulatory landscape. 

The most important thing in the initial phases of launching an AI innovation is getting data regulation right, starting with the GDPR. There is also sectoral regulation - for example, in the medical sector, the medical device regulation, which has similarities with the AI Act and its high-risk category. Companies usually already handle consent: they must obtain it at the point where customers provide their data. 

The question of what to do to stay on the safe side of regulation is becoming trickier. For now, the answer is: if a use case falls into the high-risk category, companies should be prepared to spend considerably more money on development, because the whole process carries that implication (as is already the case in the medical sector). Consultancies are therefore watching closely which types of applications will fall under the high-risk category and thus be regulated. 

Where do companies or citizens find up-to-date information?

It is often not easy to tell which information is accurate. ChatGPT may sound accurate and authoritative, but on a topic like the AI Act its answers are not necessarily true, because the environment is changing and developing extremely quickly. 

If you try to understand AI from a global perspective, it is really not easy: the EU, the European Parliament, the UK Parliament, the OECD, India, IBM and others all have their own definitions, and that is a problem. A startup just wants to know what it should do about it - it has limited time and resources and wants accurate information - so it turns to ChatGPT, which is a tool, not a miracle. 

The most reliable source of accurate information is the European Union, so it is recommended to follow the official EU bodies and institutions, which also publish an AI newsletter. 

How to navigate on a global level?

We are seeing different perspectives on navigating the AI regulatory landscape around the world. In the US, there is a stronger orientation toward innovation and competitiveness; in Europe, more emphasis on human rights protection; and China is a different area altogether. If we want consensus at the global level, we need to build partnerships, take a multi-stakeholder approach, and engage international organizations as well as academia and scientific councils to agree on minimum principles to adopt. 

For the first time, startups and scale-ups are invited to these discussions, with the aim of engaging everyone so that implementation can be done well from day one. 

It is all about what world we want to create. The world reflected in AI should be the world we have in real life, and we should not omit parts of society. This should be kept in mind: even in ordinary economic talks between competitors, they are sitting on the same values. 

For AI to be considered trustworthy, it has to be lawful, robust (state of the art and secure), and ethical. 

AI is too important not to regulate - and too important to overregulate

We have to be responsible for how we regulate it. 

The AI Act is still in Parliament, and the implementation period will probably be three years, so it will not happen overnight. The real challenge is existing AI systems that were not designed under the AI Act and are already on the market: how do we bring them into compliance? We should consider that as well. 

It is important to build educational capabilities: once the AI Act is implemented, it will be the role of stakeholders to have enough capacity to understand, comprehend and implement the regulation. These capabilities have to be built. Do we have stakeholders who understand the technology, can supervise it, and can provide guidance with good instructions and use cases? When it comes to startups and SMEs, if you explain the why and the reasoning behind AI compliance, they will go for it. Currently there is a gap between comprehension and implementation, so we need to speak the same language. That is one of the key challenges to address. 

An additional challenge is unintended consequences. Algorithms are designed to help us achieve what we want, and we need to work together to avoid negative behaviour. 

We can look at the complexity of the regulations from two perspectives:

  1. What it means for consultancies that provide advice and services: a bigger amount of work, as there are no established methods yet and they need to watch the final version of the documents closely. Companies may also more often be advised not to pursue a technology because of the high cost and effort involved.

  2. What it means for startups and companies that have to get to market as quickly as possible: naturally, fewer companies in the EU will start something new in this market, and those in Asia or the rest of the world will have the advantage.

It is also important to understand that the EU wants our values and rights to be protected, and EU citizens want that protection - that is why we need to regulate. We should see these systems as helping to protect us and view all the regulations in that light. To be in compliance, companies will have to do more work. 

Perhaps AI can learn from the pharma industry, which is highly regulated everywhere in the world yet still introduces innovations and makes money.
