
Navigating the EU AI Act: Key Takeaways from Deel’s Expert Webinar


Author

Jemima Owen-Jones

Last Update

April 30, 2025

Published

February 14, 2025

Table of Contents

What Is the EU AI Act?

Organizations’ biggest AI compliance concerns

Everyday use of AI and what the act applies to

The AI Act’s risk classification system

Role classification under the AI Act

How to prepare for the AI Act

How Deel mitigates ethical risks

Will the AI Act go global?

AI and job loss

Will the AI Act impact innovation?

Getting support

Get AI compliant with Deel

Key takeaways
  1. The AI Act is a new EU regulation that ensures AI is used safely and ethically. As AI technology grows, businesses must be aware of these rules to avoid legal issues and protect fundamental rights.
  2. To prepare, businesses should assess their AI systems and ensure they meet the AI Act’s requirements, like risk assessments and transparency. Staying proactive will help avoid fines and maintain a strong reputation.
  3. Deel can help you achieve AI Act compliance with its AI consulting services and Deel AI-powered platform, which simplifies global hiring, payroll, HR, and compliance for your international teams.

The EU AI Act is set to reshape how businesses develop and deploy artificial intelligence, with full enforcement expected by 2026. To help companies prepare, Deel hosted a webinar featuring insights from its in-house experts on AI compliance and risk mitigation.

Emily Johnson, Senior Privacy and AI Compliance Manager at Deel, and Tima Anwana-Gangl, Privacy and AI Manager, shared their firsthand experience working across teams to ensure Deel’s compliance with the upcoming regulations. They discussed key provisions of the EU AI Act, practical steps businesses can take now, and strategies for navigating compliance challenges.

This recap highlights the most important insights from the discussion, equipping you with the knowledge to stay ahead of regulatory requirements and mitigate AI-related risks.

Alternatively, watch the full webinar on-demand here: Navigating the EU AI Act: Ensuring Compliance and Mitigating Risks

What Is the EU AI Act?

The EU AI Act is the first comprehensive law regulating artificial intelligence. Introduced in response to the rapid expansion of AI tools, the Act aims to ensure that AI development and usage are safe and transparent and that fundamental rights are respected.

Scope and timeline

The AI Act applies to EU-based entities and non-EU companies offering AI systems in the EU market. Initial requirements, such as AI literacy obligations, began rolling out in February 2025, with most provisions becoming fully applicable by August 2026.

If you can imagine a US-based tech firm putting an AI system on an EU market and making it available to EU companies, then they would also be subject to this act.

—Emily Johnson,

Senior Compliance Manager, Deel

Compliance and overlapping regulations

Companies must align their AI systems with existing regulations, including the GDPR, as AI applications often involve processing personal data. For example, AI-driven candidate tracking and interview analysis tools may trigger additional privacy obligations.

It might be the case that if you’re using some of the examples of systems [...] like candidate tracking systems, for example, or interview analysis, you’re also likely to be processing personal data at the same time. So the GDPR will also apply.

—Emily Johnson,

Senior Compliance Manager, Deel

Why compliance matters

Non-compliance carries significant risks, with penalties reaching up to €35 million or 7% of global annual turnover—surpassing even GDPR fines. Beyond financial consequences, companies face potential operational suspensions and reputational damage. Ensuring compliance is essential for maintaining trust and business continuity.

See also: Get Global HR Compliance Consulting with the AI Assistant

Unlock Continuous Compliance™ with Deel
Keep your finger on the pulse of global compliance issues like never before. Our Compliance Hub provides access to the latest regulatory updates and risk warnings, offering guidance and actionable alerts to enhance compliance—all in a single place.

Organizations’ biggest AI compliance concerns

One of the biggest challenges for companies implementing AI is gaining a full understanding of their AI ecosystem. Compliance starts with mapping all AI systems in use—both those developed internally and those provided by third-party vendors. At Deel, this required collaboration across teams to identify every AI tool in use before determining the necessary compliance measures.

At Deel, we’re in a unique position because we provide AI systems to our clients. We’re developing AI systems internally, but we also use vendors who are providing us with AI systems. So it was really the first step of mapping all of that, going to the different teams, understanding what tools we’re using, what we’re developing, and getting the whole picture before we started understanding what we need to implement from a compliance perspective.

—Tima Anwana-Gangl,

Data Privacy Compliance Manager, Deel

Beyond visibility, securing company-wide buy-in is another common hurdle. Ensuring teams understand the importance of AI compliance and follow the necessary protocols is critical. Building trust—both internally and with clients—requires clear policies on AI tool use, strict data privacy measures, and thorough vetting of AI systems to align with regulatory requirements.

Everyday use of AI and what the act applies to

AI is increasingly integrated into everyday business operations, and the EU AI Act applies to many of these uses. Here are some key examples:

  • Language and conversational AI: Tools like ChatGPT and Deel AI are examples of general-purpose AI, capable of tasks such as text and image generation, idea structuring, and more. These tools serve multiple functions across various industries
  • Employment and worker management: AI is commonly used in recruitment and HR for tasks like CV screening, candidate ranking, and interview analysis. These tools help automate and streamline the hiring process
  • Monitoring and validation: AI is also used in transaction monitoring to detect suspicious behavior or fraud, and in document validation to ensure employees receive the correct documents on time, adding an extra layer of security
  • Assessments and automation: AI tools are becoming more common in performance reviews, analyzing employee data and comparing performance across time to assist in evaluating and guiding reviews

I think it’s important to note that if you’re using any of these AI tools and you’re based within the EU, then the EU AI Act will apply to you.

—Emily Johnson,

Senior Compliance Manager, Deel

Get global HR insights fast with Deel AI
From Spain’s maternity leave policy to your August payroll spend, ask Deel AI anything to navigate your global workforce.

The AI Act’s risk classification system

The EU AI Act categorizes AI systems based on their potential risk and assigns regulatory requirements accordingly. This approach ensures higher-risk applications face stricter compliance measures.

Unacceptable risk (prohibited AI systems)

AI systems that pose severe risks to individuals or society are outright banned. These include AI-driven deception, manipulation, or exploitation of vulnerabilities, and predictive policing or crime risk assessments.

High-risk (strictly regulated AI systems)

High-risk AI systems are allowed but must meet stringent compliance requirements due to their potential for significant harm. Examples include AI in medical devices, children’s toys, education, and—crucially—HR and employment. AI tools used for hiring, promotions, terminations, and worker management fall into this category.

Limited risk (transparency requirements)

AI systems that generate content, such as chatbots and image-generation tools, are classified as limited risk. These must include transparency measures, such as informing users that they are interacting with AI and labeling AI-generated content.

Minimal risk (no additional regulations)

Minimal-risk AI systems, such as grammar checks, translation tools, and spam filters, can be used freely without additional restrictions, as they present little to no risk.

The AI Act ensures that regulations are proportionate to potential harm by structuring AI compliance around these risk levels.
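The four tiers above can be encoded as a simple lookup table for an internal inventory. The sketch below is purely illustrative: the tier names follow the Act’s four levels, but the obligation summaries paraphrase this article, not the legal text.

```python
# Illustrative only: obligations are simplified summaries from this article,
# not the Act's legal wording.
RISK_TIERS = {
    "unacceptable": {"permitted": False, "obligations": []},
    "high": {"permitted": True,
             "obligations": ["conformity assessment", "human oversight",
                             "registration in the EU database"]},
    "limited": {"permitted": True,
                "obligations": ["disclose AI use", "label AI-generated content"]},
    "minimal": {"permitted": True, "obligations": []},
}

def obligations_for(tier: str) -> list[str]:
    """Look up the simplified obligation list for a risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return RISK_TIERS[tier]["obligations"]
```

A table like this can anchor an internal review: each system in your inventory gets a tier, and the tier determines which compliance workstreams apply.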

The EU is really acknowledging there are certain benefits to AI systems. But there is also the potential for very serious risks, whether related to the user, other individuals, or even society more broadly.

—Emily Johnson,

Senior Compliance Manager, Deel

See also: AI in HR Management: How Does It Mesh with European AI Regulation?

Role classification under the AI Act

The AI Act also assigns compliance responsibilities based on an entity’s role in developing, distributing, or using AI. This ensures that all parties involved in AI systems are accountable under the law.

Providers (AI system developers)

Providers develop AI systems or market them under their brand. They are primarily responsible for ensuring compliance with the AI Act. For example, OpenAI is the provider of ChatGPT.

Downstream providers (integrators & white-label users)

Downstream providers integrate or white-label an AI model developed by another entity. They must ensure compliance for their specific implementation.

A great example of this is Deel AI. So Deel AI is built on ChatGPT functionality. So here, Deel is a downstream provider of OpenAI.

—Emily Johnson,

Senior Compliance Manager, Deel

Deployers (AI system users & implementers)

Deployers use AI systems within their organizations and are responsible for applying compliance measures, such as defining AI usage policies.

At Deel, we have a list of approved AI systems which is a great practice […] we have an AI usage policy for those systems. And we outline what’s allowed in terms of what data can be used from a personal data perspective, from a business confidentiality perspective, and so on.

—Emily Johnson,

Senior Compliance Manager, Deel

Organizations must determine their role under the AI Act to understand their obligations and ensure compliance.

See also: Bridging the Talent Gap in AI Industry: Hiring and Managing Global Contractors

How to prepare for the AI Act

Deel has implemented a three-step approach to comply with the EU AI Act. Organizations can follow a similar process to ensure readiness.

Step 1: AI systems mapping

Begin by identifying all AI systems your organization develops and uses. Work with different teams to get a complete inventory.

  • Check for tools that may have added AI functionality over time. Example: Google Workspace now includes Gemini AI, even if it didn’t when first adopted

Step 2: Role identification & stakeholder mapping

  • Determine your organization’s role in the AI supply chain: Are you a provider, downstream provider, or deployer?
  • Identify internal stakeholders responsible for AI systems. Different teams manage different AI applications

Example: Engineering oversees AI development, while Talent Acquisition manages AI-powered recruitment tools.

With Deel, we have our engineering team that’s responsible for the development of AI systems. But we also have other teams, like our talent acquisition team, who are responsible for the AI-powered recruitment tools that we’re using in the hiring process. It's important to identify these [...] stakeholders because they will essentially be responsible for actually implementing the compliance requirements on a day-to-day basis.

—Tima Anwana-Gangl,

Data Privacy Compliance Manager, Deel

Step 3: Risk classification

Assess each AI system against the EU AI Act’s risk categories outlined above, determining which classification applies to each tool.

Recruitment tools, Ashby, those kinds of tools that are in the [...] recruitment and employment sphere [are likely high-risk], and other systems might be lower risk, like [...] Grammarly and things like that. So it’s important that you’re able to identify where each system fits.

—Tima Anwana-Gangl,

Data Privacy Compliance Manager, Deel

By following these steps, organizations can proactively comply with the AI Act and manage AI-related risks effectively.

Create an AI Action Plan

Once you have mapped AI systems, identified roles, and classified risks, the next step is to develop a comprehensive AI action plan. This plan serves as a roadmap for achieving compliance with the EU AI Act by outlining compliance gaps, risks, mitigation measures, and implementation priorities.

Outline your risks

A key component of your AI action plan is identifying and documenting risks associated with AI systems. This involves:

  • Classifying AI systems according to their risk levels (e.g., minimal, limited, high, or unacceptable risk)
  • Evaluating potential risks, including bias, discrimination, data privacy violations, cybersecurity threats, and legal non-compliance
  • Developing risk mitigation measures, such as human oversight, bias detection, and regular system audits
  • Identifying gaps in governance, including missing policies, accountability structures, or insufficient technical documentation

By addressing these risks early, organizations can prevent compliance failures and ensure ethical AI deployment.

Implementation in accordance with Act requirements

Human oversight

The EU AI Act mandates that high-risk AI systems must have human oversight to prevent harmful decisions and ensure accountability. Organizations must:

  • Establish supervisory controls to monitor AI systems and intervene when necessary
  • Ensure human approval of AI-generated decisions, especially in critical areas like recruitment, credit scoring, and law enforcement
  • Implement a “human-in-the-loop” approach, ensuring AI outputs are reviewed and validated by trained personnel before final action is taken

At Deel [...] we have humans involved in supervisory controls [...] who can intervene if an AI system malfunctions or if it starts making harmful decisions. [...] We also have humans involved in approvals and reviews. So we don't make final decisions purely based on automated processing. We make sure that AI outputs which are used to inform final decisions [...] are also reviewed by humans.

—Tima Anwana-Gangl,

Data Privacy Compliance Manager, Deel

Transparency and explainability

To comply with the EU AI Act, organizations must ensure that AI systems are transparent and explainable. This means:

  • Providing clear documentation on how AI systems work, including training data sources and decision-making logic
  • Ensuring users understand AI-driven decisions, particularly when these decisions affect individuals’ rights or opportunities
  • Publishing transparency policies, incorporating AI-specific terms in privacy policies, and maintaining AI usage FAQs

At Deel, we have developed transparency policies. We’ve implemented specific terms into our existing privacy policies. We’ve developed AI terms of use for all of our AI-facing products. We have FAQ pages that outline to our clients exactly how each system is working. So for Deel AI, we outline how it processes personal data [...] and then, we also have the AI Usage policy.

—Tima Anwana-Gangl,

Data Privacy Compliance Manager, Deel

Conformity assessments

High-risk AI systems must undergo conformity assessments to verify compliance with EU regulations. These assessments:

  • Evaluate AI models for technical, ethical, and legal compliance
  • Differentiate between human rights impact assessments (focused on social risks) and conformity assessments (focused on product compliance)
  • Document compliance efforts, ensuring regulators and stakeholders can verify adherence to legal standards

Organizations deploying AI must maintain detailed records of conformity assessments and be prepared for external audits.

Technical measures assessments

Technical compliance involves implementing safeguards to ensure AI systems operate safely and ethically. This includes:

  • Data governance frameworks ensuring training data is unbiased and representative
  • Cybersecurity protections preventing AI system manipulation or hacking
  • Regular testing and validation to confirm system accuracy and fairness
  • Maintaining up-to-date technical documentation for internal and external compliance reviews

By proactively addressing these technical requirements, organizations can avoid penalties and ensure AI systems function as intended.

Registration requirement

High-risk AI system providers must register their models in an EU-wide database. This involves:

  • Submitting detailed system documentation outlining functionality, intended use, and risk mitigation strategies
  • Regularly updating records to reflect changes in system capabilities and compliance efforts
  • Ensuring transparency in AI deployment, allowing regulators to monitor AI applications across industries

Organizations that fail to register applicable AI systems risk regulatory fines and restrictions on system use.

Literacy and training

From February 2025, organizations must ensure that employees receive AI literacy and compliance training. Training should cover:

  • Legal and ethical obligations under the EU AI Act
  • AI risk assessment and mitigation strategies
  • Incident response protocols for AI-related failures or violations
  • Tailored training for different teams (e.g., engineers receive technical training, while HR focuses on ethical AI use)

Deel has implemented AI-specific training programs to ensure employees across departments understand responsible AI practices.

See also: AI in HR: How Employers Can Close the Readiness Gap

Supplier assessments

Companies using third-party AI systems must conduct supplier assessments to ensure compliance. This involves:

  • Reviewing contracts to verify that AI vendors meet EU regulations
  • Conducting independent audits of supplier AI models, focusing on bias, transparency, and security
  • Requiring AI providers to complete compliance questionnaires, ensuring their systems align with regulatory standards

Deploying AI without proper supplier assessments can expose organizations to legal and financial risks, making due diligence a critical compliance step.
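A supplier questionnaire of the kind described above can be reduced to a pass/fail checklist where any failed or unanswered item flags the vendor for follow-up. A hedged sketch, with checklist items paraphrased from the bullets above:

```python
# Illustrative checklist items drawn from this article, not a standard form
SUPPLIER_CHECKLIST = [
    "Contract includes EU AI Act compliance commitments",
    "Training data provenance documented",
    "Bias testing results available",
    "Security certifications provided",
]

def assess_supplier(answers: dict[str, bool]) -> list[str]:
    """Return the checklist items the supplier failed or left unanswered.

    Treating a missing answer as a gap keeps the default conservative:
    the vendor must affirmatively satisfy every item.
    """
    return [q for q in SUPPLIER_CHECKLIST if not answers.get(q, False)]

gaps = assess_supplier({
    "Contract includes EU AI Act compliance commitments": True,
    "Training data provenance documented": False,
})
```

Here the two unanswered items count as gaps alongside the failed one, so this hypothetical vendor would need three follow-ups before deployment.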

By following these structured steps, organizations can ensure AI compliance, mitigate risks, and align with the EU AI Act’s regulatory requirements.

How Deel mitigates ethical risks

At Deel, mitigating ethical risks in AI tools, particularly in recruitment, is a top priority. Tima Anwana-Gangl shared that Deel ensures a strong “human-in-the-loop” policy to prevent potential biases in automated processes, such as racial or gender-based bias.

We may use AI tools to filter CVs [...], but [...] the decisions about who’s being hired and who's not is made by the hiring manager. We’re not solely relying on our AI recruitment tools to tell us who to hire [...] we’re using them to [...] make the process more efficient.

—Emily Johnson,

Senior Compliance Manager, Deel

In addition, Deel carefully reviews third-party AI tools to ensure ethical standards are met. This includes scrutinizing the training data used and understanding how personal data is processed. The focus is on using AI as a supportive tool, not a decision-maker, to preserve human judgment in the recruitment process.

Deel also handles ethical risk assessments in-house, leveraging the expertise of professionals like Emily Johnson and Tima, who bring years of experience and advanced qualifications in the field.

Deel also offers AI compliance services, so you could outsource this kind of work to a company that has the expertise to do the assessments for your AI systems.

—Emily Johnson,

Senior Compliance Manager, Deel

See also: Human in the Loop: Leverage Crowdsourcing for AI Data Labeling

Will the AI Act go global?

The EU AI Act is groundbreaking, but it’s not an isolated development. Tima Anwana-Gangl highlighted that similar initiatives are emerging worldwide.

We see developments happening in the US. For example, on a state level. We see that states like California have implemented laws specific to generative AI, which are [a] kind of mirror of the EU requirements for transparency and explainability.

—Tima Anwana-Gangl,

Data Privacy Compliance Manager, Deel

Emily Johnson pointed out that while the EU AI Act is the first of its kind, countries like South Korea and Singapore are following suit with their own AI regulations, often aligning with the EU’s framework.

Though no longer bound by EU law, the UK is also developing principles around fairness, transparency, and human rights in AI use, reflecting many of the same concerns.

At Deel [...], we've really taken the EU AI act [...] as our golden standard. We are aiming to be fully compliant with this law because [...] there’s a lot of mirroring in different jurisdictions. [...] of course, some jurisdictions have little nuances [...] but overall compliance with the EU AI act does put you in good standing to be compliant with laws in other parts of the world.

—Tima Anwana-Gangl,

Data Privacy Compliance Manager, Deel

AI and job loss

Concerns about AI replacing jobs are common, especially in HR. Tima Anwana-Gangl noted that many clients worry about AI taking over their roles. However, Emily Johnson pointed out that technological advancements historically shift job markets, often creating new opportunities rather than eliminating jobs.

If we look back in history, whenever there’s some big change or technological development, there will always be changes in the job market. [...] I’m seeing already from our team’s position [...] if we look 5 years ago, this wasn’t a job role. No one was talking about AI compliance or AI ethics [...], and now it’s a core part of our job.

—Emily Johnson,

Senior Compliance Manager, Deel

Tima agreed, adding that new positions like AI Ethics officers are emerging. These roles focus on ensuring AI development aligns with ethical standards. Both Tima and Emily see AI as an opportunity for professionals to adapt and thrive with new career paths in AI management and ethics.

I think we’re going to see a lot of [...] talent being developed in that area. And I think it’ll be a good opportunity for those of us who are kind of in this space already to really leverage our knowledge and be able to be assets to our companies.

—Tima Anwana-Gangl,

Data Privacy Compliance Manager, Deel

See also: Is the AI Hype Translating to AI Jobs?

Will the AI Act impact innovation?

A common concern about the EU AI Act is how it might affect innovation. Emily Johnson noted that some customers worry it could slow the uptake of AI systems.

Tima addressed this concern by explaining that while new laws regulating emerging technologies can be seen as stifling, the AI Act provides a structured framework for the responsible development of AI tools.

Perhaps I'm biased because I'm a lawyer, but I think, rather than stifling innovation, what the act does is it really provides a structured framework for the responsible development of these tools [...] creating trust amongst users and reducing legal uncertainty that existed before.

—Tima Anwana-Gangl,

Data Privacy Compliance Manager, Deel

Tima highlighted that the Act imposes requirements that overlap with existing laws and guidelines, particularly concerning data management, quality standards, data governance, and the need for human oversight.

Additionally, Tima noted that the AI Act incorporates measures such as regulatory sandboxes, which enable companies to trial their AI tools in controlled environments.

You see the lawmakers trying to accommodate innovation and not wanting European companies to be left behind while the rest of the world is quickly innovating. [...] they’re trying to strike this balance between balancing human rights and also thinking about the needs of each industry.

—Tima Anwana-Gangl,

Data Privacy Compliance Manager, Deel

Getting support

As businesses navigate the complexities of AI regulation, several key support options are available to help ensure compliance.

Deel’s AI Consulting Services

Deel offers AI consulting services led by Tima Anwana-Gangl, Emily Johnson, and their team. They provide in-house developed templates, like human rights impact assessments, to help clients meet AI compliance. For businesses lacking expertise, Deel also offers hands-on consulting to guide clients through the assessment process and share valuable knowledge.

The European Commission’s guidance and resources

Businesses can also rely on resources from the European Commission, the European AI Office, and data protection authorities. These organizations offer guidelines on AI Act compliance and the intersection of the law with AI. Staying updated with these resources is key for businesses navigating AI regulations.

Deel’s Compliance Monitor

Deel’s Compliance Monitor provides organizations with real-time updates on legal changes and new guidelines globally. It helps companies stay informed about AI regulations and ensures compliance, especially when determining if an AI system is high-risk or falls under unacceptable use categories.

Get AI compliant with Deel

To dive deeper into the insights shared during the webinar, watch the full session here. Or, to learn how Deel can help you achieve AI Act compliance while embracing Deel AI for seamless global hiring, payroll, HR, and compliance solutions for your international teams, book a call with the team.


About the author

Jemima is a nomadic writer, journalist, and digital marketer with a decade of experience crafting compelling B2B content for a global audience. She is a strong advocate for equal opportunities and is dedicated to shaping the future of work. At Deel, she specializes in thought-leadership content covering global mobility, cross-border compliance, and workplace culture topics.
