The EU AI Act: How Your Business Is Affected (And What to Do About It)
Artificial Intelligence (AI) is no longer a futuristic concept. From email filters and automated scheduling to the ‘shadow AI’ your employees use to draft reports, AI is already embedded in everyday work, and its adoption has outpaced oversight.
The problem is the disconnect: while leaders significantly underestimate how much AI their teams already use, regulation is tightening. The EU AI Act is not just a set of guidelines for tech giants; it sets the standard for digital risk management. To navigate it, you must move beyond passive use of AI tools and toward active governance.
It’s not just for developers
A common misconception is that the EU AI Act only targets the engineers who build artificial intelligence tools. In reality, the law draws a sharp line between Providers (those who build AI) and Deployers (those who use AI in their work).
If your business uses an AI tool for recruitment, HR filtering, marketing or customer service, you are a Deployer. It's important to understand how the law affects your organisation and the way your employees use generative AI platforms, such as ChatGPT or Gemini, as well as other AI-powered tools.
Why your location might not matter
Much like the GDPR, the EU AI Act has 'extraterritorial reach', meaning you may have to comply even if you're not based in the EU. A headquarters in London, New York, or Singapore provides no immunity if the output of your AI system is used within the EU.
This creates a standard under which the EU effectively regulates global tech behaviour. Any firm serving EU customers must align its strategy with the requirements set out in the European Union Artificial Intelligence Act (EU AI Act).
Navigating risk
The EU AI Act classifies AI systems into four tiers based on the risk they pose to fundamental rights. Understanding this hierarchy is your first task, because it determines which obligations apply to the tools you use:
Unacceptable risk: Anything in this classification is strictly prohibited. This includes social scoring, manipulative AI, and emotion recognition systems in the workplace — a common pitfall for companies exploring productivity monitoring.
High risk: This is heavily regulated and covers AI in critical infrastructure, education, employment, and recruitment. If an algorithm ranks candidates for a job, for example, it is high risk.
Limited risk: This category requires transparency. Article 50 of the EU AI Act mandates that people are told when they are interacting with AI. Crucially, AI-generated outputs must also be marked in a machine-readable format so they are detectable as artificially generated. For example, if you use an AI chatbot for customer service, this must be made clear to your users.
Minimal risk: These tools pose little risk and are largely unregulated. Everyday examples include spam filters.
The financial stakes are significant. Deploying a prohibited practice can trigger fines of up to €35 million or 7% of your global annual turnover, whichever is higher, while breaches of the high-risk requirements can reach €15 million or 3%.
AI literacy is now mandatory
Since 2 February 2025, Article 4 of the EU AI Act has placed a legal obligation on all providers and deployers to ensure a "sufficient level of AI literacy" among their staff. A literate workforce is your best protection against risk (and those big fines).
In this context, literacy means the ability to make informed decisions and to understand the risks before they manifest as corporate liability. This isn't about crafting better prompts in tools such as ChatGPT, nor is it about learning to code and train AI models.
Instead, an AI literacy programme should include:
Technical understanding: Employees must grasp that generative AI is probabilistic (predicting the next word or pixel) rather than deterministic (knowing facts). It does not think. They must also understand that these models have a limited context window and can lose track of earlier details in long conversations, leading to operational errors.
Practical application: This involves iterative refinement in prompting and, most importantly, identifying when NOT to use AI. If the cost of an error is high and verification is impossible, AI is the wrong tool.
Ethical awareness: Staff must be trained to spot algorithmic bias, particularly in areas such as recruitment where AI can inadvertently replicate human prejudice. They must also understand the risk of data leaks and know what not to share with AI tools.
Compliance is not a hurdle; it is an opportunity for innovation. A formal corporate AI governance policy, alongside AI literacy training, protects your brand and empowers your team to use AI tools safely and productively.
Interested in AI literacy training designed for non-technical teams?