CEO opinion piece on EU AI act published

From our CEO:

In light of the #EU's #AI Act, I wrote another opinion piece for UKTN on #AI #regulation in my position as founder of Decorte Future Industries and president of the Artificial Intelligence Founders Association (AIFA). Great to see it featured as the main article on #UKTN all morning.

👉 If you're an AI #startup wondering how the #AI Act will impact your product pathway and how you deploy, read the article below. While there are still many uncertainties regarding the interpretation and implementation of the Act, #startups should be aware of them and plan accordingly.

🔎 As mentioned in the article, there is currently a sword of Damocles hanging above #Europe's AI startups, and this is a unique moment for the #UK government to distinguish itself through a different, slower and more methodical approach to #AI #regulation.

Future standard pathways for #AI #startups are being actively delineated, and we should all pay attention: choices made now will have very long-term consequences.

Find the article on UKTN here or read below.

EU AI Act: what it means for AI startups like mine

As the EU parliament celebrates being the first to pass a comprehensive global framework for AI regulation, the global AI startup community cautiously awaits the interpretation and implementation of the Act's many unclear aspects. The danger that these aspects may ultimately push startups out of the bloc is real, and that would achieve the very opposite of what the Act was designed to do.

AI geopolitics

The EU parliament-endorsed Act is set to become law in the coming weeks. It will apply not just to companies based in the EU, but to all companies that deploy AI in EU member states, even those providing services that directly or indirectly impact companies based in the EU.

Like many recent initiatives regarding AI safety, the passing of the AI Act reflects a broader trend: major political powers – Europe, the UK, the US and China – are vying to establish themselves as the dominant force in AI regulation, seeking to control regulatory standards long-term. The stated aim of establishing a mature, safe and internally consistent market for AI companies is intertwined with states’ political goals of being the first to regulate AI.

Since the explosion of public interest in AI in 2023, virtually all major geopolitical powers have sought to address – and, above all, be publicly seen as addressing – the question of “controlling” AI. Most governments have not been shy, for better or worse, about their desire to claim the title of being the “first” to act, whether that is organising the first AI world summit (UK), bringing forward the first AI legislation (US), or being the first to introduce a wider AI legislative framework (EU).

It would be naïve not to acknowledge that the topic of AI safety has essentially become a diplomatic and geopolitical battleground. AI startups, meanwhile, which produce the majority of new AI technologies, are rarely included in the dialogue.

Safety classifications

Some industry voices, and some EU voices, claim the Act should not worry innovators and startups like mine and those I represent through the AI Founders Association, as it only restricts “specifically risky” applications. In my view, this is at best a prediction that the Act will be interpreted and implemented in the most favourable way possible for startups operating in the EU, and at worst a significant underestimation of the Act's latent scope.

Under the Act, startups operating in the edtech, healthtech, fintech, transport, recruitment and employment sectors are considered potentially “high risk”, and each may have to go through complex documentation, classification and auditing processes – requiring funds, time and resources most startups will not have.

Those six sectors cover the majority of AI startups that successfully raise funding each year. While it seems unlikely that Europe would require the majority of AI startups to bear the stringent and heavy administrative and regulatory burden set out under the “high risk” AI classification (and indeed the EU will likely seek to assess startups within these sectors on a case-by-case basis), this sword of Damocles hanging above innovators’ heads won’t help them sleep better at night.

EU startups building AI for healthcare could be particularly hard hit, even though the use of AI in medical contexts is generally acknowledged as one of the few ways to solve the fundamental unsustainability of modern healthcare systems.

The sector already filters for the toughest startups – those that can defend and differentiate their tech to compete with resource-heavy corporates in a costly and heavily regulated environment. Under the AI Act, however, EU healthtech startups will need to fund not just expansive clinical studies, regulatory consultants and long approval processes on the medical side, but somehow also find the resources to simultaneously pay for consultants, classification and regulatory approval on the AI side.

It seems inevitable that these two regulatory processes will need to be harmonised or streamlined into a single regulatory pathway – with some calling the current situation a “regulatory lasagne”. In the meantime, however, uncertainty regarding the implementation of the Act may well lead to more healthtech startups fleeing the EU for the US (or UK).
UK’s AI tortoise to Europe’s AI hare

A sustainable regulatory framework – one that enables a healthy, safety-conscious, internally consistent and well-delineated market for AI companies to operate and compete in – cannot, in my opinion, be built exclusively from the top down.

While virtually all governments – including the EU’s in its announcement of the Act – stress the importance of not stifling innovative startups, there is, as a rule, far too little actual engagement with the startup sector when designing regulation.

A bottom-up approach, working directly with startups, that addresses urgent current issues rather than primarily imagined future dangers, is, I believe, the only way to strike the right balance between maintaining AI safety and fostering innovation.

Many AI startups in my circle are encouraged by the UK’s more cautious approach, which since the AI summit has seemingly moved away from the political game of being “first”, towards seeking to regulate methodically, even slowly.

They fear the regulatory costs and burdens of the EU’s AI Act could mean only the largest companies can play. And if we describe a future where only a handful of established big tech companies control all applications of AI in Europe, with innovative newcomers unable to compete, aren’t we describing the very nightmare the EU is trying to avoid? If the careful approach remains a commitment of successive UK governments, it may well become the deciding factor in the UK becoming an AI powerhouse.
Roeland P-J E Decorte is the Founder & CEO of Decorte Future Industries and the president of the Artificial Intelligence Founders Association.
