AI ACT of the European Union comes into force in August 2024

On 12 July 2024, the EU's AI ACT, the legal act on artificial intelligence, was published in the Official Journal of the EU. It will therefore automatically enter into force on 1 August 2024.

What does the final version of the EU's AI ACT say?
The publication of the AI ACT in the Official Journal of the EU can be found HERE. Below, we briefly summarise some of its key aspects.

AI-generated content must be labelled as such


Providers of AI systems, including general-purpose AI systems, that generate synthetic audio, image, video or text content shall ensure that the output of the AI system is labelled in a machine-readable format and can be recognised as artificially generated or manipulated.

The term synthetic content refers to artificial content created by generative artificial intelligence (AI) or learning algorithms without direct human involvement in the actual generation process. It can be assumed that this applies to a large number of well-known generative AI tools such as ChatGPT and AI-based translation systems, as well as generative AI tools that output video and audio files.
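The AI ACT itself does not prescribe a specific technical standard for this machine-readable labelling; watermarking and embedded metadata are approaches commonly discussed for the purpose. As a purely illustrative sketch (not a method prescribed by the AI ACT, and with field names of our own choosing), a provider of an image generator could attach a provenance marker to a PNG file's metadata, for example with the Python library Pillow:

# Illustrative sketch: embed a machine-readable "AI-generated" marker in PNG
# metadata using Pillow. The key and value names below are our own assumptions
# for illustration; the EU AI ACT does not prescribe them.
from PIL import Image, PngImagePlugin

def label_as_ai_generated(input_path: str, output_path: str, generator: str) -> None:
    """Copy a PNG file and attach metadata marking it as artificially generated."""
    image = Image.open(input_path)
    metadata = PngImagePlugin.PngInfo()
    metadata.add_text("ai_generated", "true")   # machine-readable flag
    metadata.add_text("generator", generator)   # e.g. the name of the AI system
    image.save(output_path, pnginfo=metadata)

def is_labelled_ai_generated(path: str) -> bool:
    """Check whether a PNG file carries the illustrative AI-generation marker."""
    return Image.open(path).info.get("ai_generated") == "true"

if __name__ == "__main__":
    # Hypothetical file names, used here only for demonstration purposes.
    label_as_ai_generated("generated.png", "generated_labelled.png", generator="ExampleGAN")
    print(is_labelled_ai_generated("generated_labelled.png"))  # expected output: True

In practice, more robust schemes such as cryptographically signed content credentials or watermarks embedded in the image data itself are likely to be preferred, since plain metadata can easily be stripped.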

No requirements for open source AI applications


One such AI tool is StyleGAN from Nvidia. For precisely this AI tool, no labelling or transparency obligation applies to the AI-based content: Nvidia already published the associated source code as open source in 2019, and the code can be viewed and used by anyone via GitHub. According to Article 2(12), open source AI systems are generally exempt from the regulations of the EU's AI ACT.
However, this does not apply to high-risk AI systems or prohibited AI practices.

What are prohibited AI systems and AI practices?


Put simply, the EU bans AI applications that have a (covert) manipulative or discriminatory effect or that exploit personal data, such as facial recognition in public spaces or emotion recognition in schools, education and the workplace. Biometric categorisation is not permitted, nor is social scoring, i.e. the evaluation of social behaviour. The ban on the use of AI for subliminal influencing does not require that the person or group of people affected actually suffers significant harm; it is sufficient that significant harm is reasonably likely to occur (see Article 5 of the EU AI ACT).

High-risk AI models - which AI models are included?


The classification of an AI system as a so-called high-risk AI model entails stricter requirements for transparency regarding the AI application and, in particular, for risk management and cybersecurity. It remains to be seen which of the current AI application providers will be categorised in this way.

According to the AI ACT, the EU has given itself until 2 February 2026 to define the guidelines for practical implementation (see Article 6 in the EU's AI ACT). By then, a comprehensive list of practical examples of high-risk use cases of AI systems and of AI systems that do not pose a high risk should also be available.

AI sandboxes in the EU - safe AI application?


The AI ACT also provides for real-world application of AI in the EU: tests of AI systems under real conditions are to take place in so-called AI sandboxes, i.e. regulatory sandboxes. Companies and start-ups can register tests of their AI applications for an EU AI sandbox. With these sandboxes, the EU Parliament intends to promote innovation in the field of artificial intelligence (AI) throughout the EU.

It is mandatory for EU member states to have their competent authorities set up at least one AI sandbox at national level. By 2 August 2026, every EU Member State must have an AI sandbox ready for operation.

This amounts de facto to supervised monitoring of new AI applications. A regulatory sandbox is an instrument that gives companies the opportunity to test new and innovative products and services under real-life conditions, under the supervision of an EU AI regulatory authority. According to the AI ACT, AI sandboxes are intended to provide tools and infrastructure for testing, benchmarking, evaluating and explaining relevant dimensions of AI systems, such as accuracy, robustness and cybersecurity, as well as measures to mitigate risks to fundamental rights and society as a whole.
Personal data processed in the sandbox may not be shared outside the sandbox. All personal data processed within the sandbox, e.g. for training the AI application, must be protected by appropriate safeguards and deleted as soon as participation in the sandbox has ended or the retention period for the personal data has expired.
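As a purely conceptual illustration of this retention rule (the data model and all names below are hypothetical and not taken from the AI ACT), a sandbox operator might express the deletion condition roughly as follows:

# Conceptual sketch of the retention rule described above. All names and the
# data model are hypothetical illustrations, not requirements of the EU AI ACT.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SandboxDataset:
    participant: str
    participation_ended: Optional[date]  # None while participation is ongoing
    retention_until: date                # end of the retention period for the data

def must_be_deleted(dataset: SandboxDataset, today: date) -> bool:
    """Delete once sandbox participation has ended or the retention period has expired."""
    participation_over = (
        dataset.participation_ended is not None
        and dataset.participation_ended <= today
    )
    retention_expired = dataset.retention_until < today
    return participation_over or retention_expired

if __name__ == "__main__":
    dataset = SandboxDataset("Example GmbH", date(2026, 9, 1), date(2027, 1, 1))
    print(must_be_deleted(dataset, date(2026, 10, 1)))  # True: participation has ended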

AI ACT of the EU comes into force on 1 August 2024


The AI ACT will enter into force 20 days after publication in the Official Journal of the European Union, i.e. on 1 August 2024.

Companies that violate the AI Regulation face heavy fines: depending on the type of infringement, fines of up to EUR 35 million or 7 % of total worldwide annual turnover are possible (see Article 99 of the EU AI ACT).

Are AI tools or the use of AI an issue for you?


Our patent law firm offers expertise in patents for AI-implemented inventions. Please contact us by phone at +49 69 69 59 60-0 or send us an email at info@kollner.eu.




