What does the EU law on artificial intelligence mean for businesses?
In 2024, the European Union passed a comprehensive law on artificial intelligence — officially known as the EU AI Act — to establish a unified legal framework for the development and use of AI systems. As the first legislation of its kind globally, this regulation aims to promote the safe adoption of AI while minimizing its risks.
In comparison, the United States has yet to adopt a comprehensive federal law on artificial intelligence. Instead, AI development is governed by a patchwork of sectoral and state-level rules, supported by the NIST AI Risk Management Framework and recent executive orders.
Why did the EU introduce a law on artificial intelligence?
The EU’s law on artificial intelligence was introduced to establish a clear and unified legal framework for the use of artificial intelligence across Europe. The European Commission presented the first draft in April 2021, and the final version was formally adopted in 2024, entering into force on August 1, 2024. The regulation was prompted by rapid advancements in AI technology, which bring both opportunities and serious risks. Societal and ethical challenges, such as algorithmic bias, lack of transparency in automated decisions, and the potential misuse of AI for mass surveillance, made it clear that legal regulation was urgently needed.
The aim of the law, officially known as the EU Artificial Intelligence Act (EU AI Act), is to encourage innovation without compromising fundamental European values such as data protection, security, and human rights. The EU has taken a risk-based approach, imposing strict regulations or outright bans on high-risk AI applications. At the same time, the law aims to strengthen European companies in the global market by fostering trust and legal certainty.
The EU AI Act is part of a broader legal ecosystem. Businesses engaging with the EU market, including US companies, should be aware of other applicable rules and regulations, such as the Geo-blocking ban, the ePrivacy Regulation, and the Cookie Directive.
How the law classifies AI systems by risk category
The EU’s law on artificial intelligence categorizes AI systems into four risk levels based on their potential impact:
- Unacceptable risk: This category includes AI systems that are considered a threat to safety, livelihoods, or individual rights. These systems are prohibited. Examples include social scoring systems, where government bodies assess the behavior or personality of individuals, and real-time facial recognition in publicly accessible spaces, which is permitted only under narrow exceptions.
- High risk: These systems are permitted but subject to strict requirements that impose significant obligations on providers and operators. This category includes AI systems used in critical infrastructure (e.g. for safety in transport), as well as AI used in HR, where decisions on hiring or firing must meet specific standards to protect both employees and applicants.
- Limited risk (transparency risk): This third risk category covers AI systems designed for direct interaction with users. These systems have specific transparency requirements, meaning users should be informed when interacting with AI. Most generative AI falls into this category.
- Minimal risk: Most AI systems fall into this category and are not subject to specific obligations under the EU law on artificial intelligence. Examples include spam filters or AI-driven characters in video games.
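The four-tier scheme above can be expressed as a small, purely illustrative sketch. The tier names follow the Act, but the example use cases and the lookup logic are simplified assumptions for illustration, not legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative (simplified) mapping of example use cases to tiers.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI-assisted hiring decisions": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Look up the (simplified) obligation level for an example use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_USE_CASES:
    print(obligations_for(case))
```

In a real compliance exercise the classification depends on the concrete context of use, not on a static lookup table.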
What should AI developers and providers keep in mind?
The EU’s law on artificial intelligence establishes a set of requirements for developers and providers of AI systems, particularly high-risk ones, to ensure these technologies are used responsibly. The requirements cover various aspects, including transparency, security, accuracy, and the quality of the underlying data. They are designed to ensure the safety and trustworthiness of AI technologies without unduly hindering innovation.
Risk management
Companies must implement a continuous risk management system to identify, assess, and minimize potential risks. This includes regularly reviewing their AI system’s impact on individuals as well as on society as a whole. Focus areas include the prevention of discrimination, unintended biases in decision-making, and risks to public safety.
Data quality and bias prevention
The training data used to develop an AI system must meet high quality standards. This means the data must be representative, as free of errors as possible, and sufficiently diverse to avoid discrimination and biases. Companies are required to establish mechanisms to detect and correct these biases, especially when AI is used in sensitive areas such as personnel decisions or law enforcement.
Documentation and logging
Developers must create and maintain comprehensive technical documentation for their AI systems. These documents should not only describe the structure and functionality of the system but should also make the AI’s decision-making processes understandable. Additionally, companies must keep records of their AI systems’ operations to allow for future analysis or potential troubleshooting.
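As a rough illustration of the record-keeping idea, the hypothetical sketch below logs each automated decision with enough context to reconstruct it later. The field names and model identifier are invented for the example; the Act does not prescribe a specific log format:

```python
import json
import logging
from datetime import datetime, timezone

# Structured logger for append-only AI decision records.
logger = logging.getLogger("ai_decisions")
logging.basicConfig(level=logging.INFO)

def log_decision(model_version: str, inputs: dict, output: str,
                 confidence: float) -> dict:
    """Record one AI decision so it can be analyzed or audited later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    logger.info(json.dumps(record))
    return record

# Hypothetical example: a recruitment-screening model shortlists a candidate.
rec = log_decision("screening-model-1.4", {"years_experience": 7},
                   "shortlist", 0.86)
```

Keeping the model version in every record makes it possible to tie a contested decision back to the exact system that produced it.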
Transparency and user information
The EU AI Act requires that users be clearly informed when they are interacting with an AI system. For instance, chatbots or virtual assistants must disclose that they are not human counterparts. In cases where AI systems make decisions with significant impact on individuals (e.g., regarding loan or job applications), affected persons have the right to an explanation of how the decision was made.
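In practice, the disclosure duty can be as simple as prefacing a conversational system’s first response. This minimal sketch (the greeting wording is invented, not mandated by the Act) shows the idea:

```python
def start_chat_session(assistant_name: str) -> str:
    """Return an opening message that discloses the user is talking to an AI."""
    return (
        f"Hi, I'm {assistant_name}, an automated AI assistant - "
        "you are not chatting with a human. How can I help?"
    )

print(start_chat_session("SupportBot"))
```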
Human oversight and intervention
High-risk AI systems must not operate fully autonomously. Companies must ensure that human control mechanisms are integrated so that humans can intervene and make corrections if the system behaves erroneously or unexpectedly. This is particularly important in areas such as medical diagnostics or autonomous mobility, where wrong decisions can have severe consequences.
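One common way to implement such a control point is a human-in-the-loop gate: the system acts automatically only when it is confident, and routes everything else to a person. The threshold and labels below are invented for illustration:

```python
def decide_with_oversight(score: float, threshold: float = 0.9) -> str:
    """Auto-approve only high-confidence results; route the rest to a human."""
    if score >= threshold:
        return "auto-approved"
    return "escalated to human reviewer"

print(decide_with_oversight(0.95))  # confident case handled automatically
print(decide_with_oversight(0.40))  # uncertain case goes to a person
```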
Accuracy, robustness, and cybersecurity
The EU AI Act mandates that AI systems be reliable and robust to minimize the risk of erroneous decisions and security threats. Developers must prove that their systems function stably under various conditions and cannot easily be affected by external attacks or manipulations. This includes measures for cybersecurity, such as protection against data leaks or unauthorized manipulation of algorithms.
Conformity assessments and certification
Before a high-risk AI system can be brought to market, it must undergo a conformity assessment to verify that it meets all regulatory requirements. In some cases, an external audit by a notified body is required. The regulation also provides for continuous monitoring and regular re-evaluations of the systems to ensure that they continue to meet the standards.
What are the implications for businesses?
The EU’s AI Act provides businesses with a clear legal framework, aiming to promote innovation and trust in AI technologies. However, it also brings new compliance burdens, technical adjustments, and strategic planning requirements. Companies that develop or use AI technologies must study the new requirements carefully to avoid legal risks and remain competitive in the long term.
Increased compliance burdens and costs
One of the biggest challenges for companies is the additional costs associated with complying with the new regulations. For providers and users of high-risk AI systems, extensive measures are required, which may involve investments in new technologies, skilled personnel, and potentially external consultants or auditing bodies. Small and medium-sized enterprises (SMEs), in particular, could face difficulties raising the financial and personnel resources needed to meet all regulatory requirements.
Companies that fail to comply with the regulations risk heavy fines of up to €35 million or 7% of global annual turnover, whichever is higher, a penalty structure similar to the one already faced under the EU’s General Data Protection Regulation (GDPR).
Opportunities for innovation
Despite the additional regulations, the law could help strengthen trust in AI systems and promote innovation in the long term. Companies that adapt early to the new requirements and develop transparent, safe, and ethical AI solutions could gain a competitive advantage.
By introducing clear rules, a unified legal framework has been established within the EU, reducing uncertainty around AI development and use. This makes it easier for companies to market their technologies throughout the EU without dealing with different national regulations.
The EU AI Act is also one of the first of its kind worldwide and sets high standards. Companies that meet these can position themselves as trusted providers in the marketplace, giving them an advantage over competitors adhering to less stringent rules.
Global reach and impact on US companies
The EU’s law on artificial intelligence does not only apply to companies based in the EU. It also covers international firms that offer AI systems in the European Union or whose systems’ output is used within the EU. For example, a US-based company offering AI-powered recruitment software in the EU must comply with European regulations.
This global reach forces many companies outside the EU, including US ones, to adjust their products and services to meet the new standards if they want to access the European market. While this could lead to a more globally uniform approach to AI regulations, it could also pose a barrier for non-European companies seeking to enter the EU market.
However, there are concerns that European companies could fall behind internationally due to these regulations. While countries like the United States and China push AI innovations forward with fewer restrictions, the strict EU regulation could slow down the development and implementation of new technologies in Europe. This could be particularly challenging for startups and SMEs in Europe, as they compete with tech giants that have significantly larger resources.

