AI ACT adopted by the EU Parliament
On Wednesday, 13th March 2024, the EU Parliament adopted the AI ACT, the first-ever legal framework on AI.
Under the AI ACT, AI applications in Europe are to be categorised into a system of risk groups. The higher the general risk of an AI application is considered to be, the stricter the requirements for safety measures and transparency.
In particular, AI applications that could violate EU values are completely prohibited in the EU under the AI ACT. These include:
- Social scoring - the evaluation of social behaviour
- Recognising emotions in the workplace and in educational institutions
- Facial recognition in public spaces (with exceptions for police and security authorities, which are allowed to use it to prosecute certain criminal offences such as human trafficking or terrorism)
For companies that offer AI applications, however, the security and transparency requirements of the AI ACT are of particular interest. What has been decided in this regard?
Transparency requirements
"General purpose AI (GPAI) systems and the GPAI models on which they are based must fulfil certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training", according to the press release published by the EU Parliament.
In practice, this raises further questions concerning copyright and training data. Answers are being sought in the courts:
In the US, the New York Times is suing OpenAI, and author Sarah Silverman is suing META over its AI: was the generative AI trained in violation of copyright? More...
And is there copyright protection for AI-generated works created without human intervention? There is already a judgement on this from the USA. More...
Is the training of an AI referred to in a patent claim comprehensibly disclosed in a European patent application? The European Patent Office has already ruled on this several times. More...
Risk rating of AI systems
The higher the risk rating of an AI system, the higher the requirements for risk management and cyber security.
This can also apply to GPAI models.
"For the more powerful GPAI models that could pose systemic risks, additional requirements apply, including the performance of model assessments, the assessment and mitigation of systemic risks and incident reporting," according to the EU Parliament's press release.
When do these requirements apply? It remains to be seen which GPAI models will be categorised as so powerful that they pose systemic risks.
An EU database is planned in which high-risk AI systems will be registered.
Labelling of AI-generated content
In addition, artificial or manipulated images, audio or video content ("deepfakes") must be clearly labelled as such, according to the press release published by the EU Parliament.
Outlook for AI innovation in Europe
The AI ACT also provides for support for AI innovation in Europe, in particular for start-ups and SMEs working on AI applications. Regulatory "sandboxes" and real-world testing are therefore to be set up at national level and made accessible to start-ups and SMEs. This should enable innovative AI applications to be developed to market readiness in Europe.
Here you'll find the latest version of the AI Act as a full draft text dated 24th January 2024 (source: EU Data Consilium).
What happens now? Yesterday's approval of the AI ACT clears the way for the law. It will now be finalised in detail and then published in the Official Journal of the EU. It will enter into force 20 days after publication and will be applicable throughout the EU 24 months later.
Any questions about innovation including AI applications?
Please contact our patent law firm Köllner & Partner; an enquiry is non-binding for you.
You can reach us by telephone on +49 69 69 59 60-0 or by email at info@kollner.eu.