Navigating the EU AI Act: What Business Leaders Need to Know

The EU AI Act is setting a new standard for responsible AI, and Workday is committed to leading the way. Learn how our innovative approach to AI development and deployment aligns with the act’s requirements while empowering customers to build trust and drive ethical AI adoption.

The European Union’s Artificial Intelligence Act (EU AI Act) is poised to become the global standard for regulating artificial intelligence (AI). This groundbreaking legislation aims to foster innovation while mitigating the potential risks associated with AI systems. By taking a risk-based approach and prioritizing ethical considerations, the EU AI Act sets a precedent for responsible AI development and deployment worldwide. 

Given the significant implications of this legislation, many of our customers have been asking how Workday is approaching the EU AI Act and how we’re preparing for its requirements. We understand that navigating this new regulatory landscape can be complex, so we want to share our approach in the spirit of transparency and collaboration. By outlining our framework and the steps we’re taking, we hope to offer insights and best practices that others can adapt to their own situations.

At Workday, responsible AI is a core aspect of our development approach that guides everything we do. We’ve long been committed to responsible AI by design, and the EU AI Act provides a valuable framework for strengthening and formalizing these efforts.

Understanding the EU AI Act

The EU AI Act is a legal milestone with far-reaching implications for anyone who develops or deploys AI technology in the EU. The act sorts AI systems into risk-based categories and lays out the roles and responsibilities for developing and deploying AI technology within each category.

The EU AI Act introduces four risk-based AI categories, each with requirements that scale with the complexity and potential impact of the system; a simplified code sketch follows the list:

  • Prohibited-risk AI (PRAI): AI systems banned outright under the act, such as systems that manipulate people’s decisions or predict the likelihood that someone will commit a criminal offense.
  • High-risk AI (HRAI): AI systems that are strictly regulated because of their potential impact on people. HRAI systems include tools that support hiring or healthcare decisions.
  • Transparency-risk AI (TRAI): AI systems that interact directly with people or produce generated text, audio, or video, and whose use of AI must be disclosed. Examples include AI-powered chatbots and AI-generated content.
  • General-purpose AI (GPAI): AI models that can perform a broad range of tasks and produce many kinds of output, such as the models behind GPT-style tools.
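
To make this taxonomy concrete, here is a minimal, purely illustrative sketch in Python of how a team might encode the four tiers when triaging use cases. The class name and the example mappings below are simplified assumptions for illustration, not text from the act or part of any Workday product:

```python
from enum import Enum

class AIRiskTier(Enum):
    """Simplified mirror of the EU AI Act's four risk-based categories."""
    PROHIBITED = "prohibited-risk AI (PRAI)"        # banned outright
    HIGH = "high-risk AI (HRAI)"                    # strictly regulated
    TRANSPARENCY = "transparency-risk AI (TRAI)"    # disclosure required
    GENERAL_PURPOSE = "general-purpose AI (GPAI)"   # broad-capability models

# Hypothetical examples of how use cases might map onto the tiers.
EXAMPLE_CLASSIFICATIONS = {
    "social-scoring system": AIRiskTier.PROHIBITED,
    "resume-screening tool": AIRiskTier.HIGH,
    "customer-support chatbot": AIRiskTier.TRANSPARENCY,
    "general text-generation model": AIRiskTier.GENERAL_PURPOSE,
}
```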

Workday’s Approach to Roles and Responsibilities in AI Technology

The EU AI Act introduces a framework of accountability by defining distinct roles for providers, deployers, and downstream providers within the AI lifecycle. To effectively navigate this framework, organizations should first clearly identify their roles within the AI value chain. Organizations may also find themselves occupying multiple roles, requiring adaptable compliance strategies and governance frameworks. 

At Workday, we recognize the importance of understanding and fulfilling our responsibilities across all three of these roles: 

  • Provider: We develop AI technologies for the market, including our own machine learning (ML) models and AI-powered features within our products.
  • Deployer: We deploy and use AI systems within our own operations, leveraging AI to enhance our internal processes and services.
  • Downstream Provider: We integrate third-party AI models, such as those from the GPAI category, into our offerings to enhance functionality and provide greater value to our customers.

To ensure we meet the highest standards of responsible AI, we’ve adopted a comprehensive approach to compliance. As an AI provider, Workday is building on its already robust responsible AI governance framework to fulfill the EU AI Act’s compliance requirements, taking a forward-thinking approach that we have been developing since 2019.

Workday’s responsible AI program already serves our customers above industry standards, for example through our alignment with the NIST AI Risk Management Framework and through testing and documentation that go beyond what is legally required for lower-risk AI features. Now we’re taking the same forward-thinking approach to ensure alignment with the EU AI Act.

Furthermore, we have several dedicated internal programs underway to bolster our already strong practices to align with the EU AI Act. These include efforts around enhancing transparency, expanding role-based AI literacy, updating AI-related development and deployment policies, and improving explainability in our AI systems. 

Our AI Development Approach

While we adhere to responsible AI principles across all these areas, I’d like to focus this piece on our approach to AI development, where we hold significant responsibility for shaping the technology and mitigating potential risks.

Developing and deploying AI responsibly requires a structured and proactive approach. By embedding responsible AI considerations into every stage of our development lifecycle, we ensure our AI systems are not only compliant with regulations like the EU AI Act but also aligned with our core values.

Our approach is grounded in a formalized risk-based methodology. We’ve updated our AI risk evaluation process to specifically reflect EU AI Act requirements, ensuring that our assessment procedures align with the act’s risk categories and principles.

Here’s a high-level overview of how we integrate responsible AI into our development process:

Every new AI use case goes through a risk evaluation during the ideation stage. The evaluation assigns the appropriate risk category based on the use case’s characteristics and intended use, and protocols, or required practices, are then applied to address the associated risks.
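
As a rough illustration of that triage step, the hypothetical sketch below (reusing the AIRiskTier enum from the earlier example) shows how an assigned risk tier might translate into protocols. The protocol names and the evaluate_use_case function are simplified assumptions, not our actual internal process:

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseEvaluation:
    """Hypothetical record produced by an ideation-stage risk review."""
    name: str
    risk_tier: AIRiskTier  # enum from the earlier sketch
    protocols: list[str] = field(default_factory=list)

# Illustrative protocol assignments keyed by risk tier; the act's real
# obligations are far more detailed than this simplified mapping.
PROTOCOLS_BY_TIER = {
    AIRiskTier.HIGH: ["human oversight", "bias testing", "technical documentation"],
    AIRiskTier.TRANSPARENCY: ["disclose AI interaction", "label generated content"],
    AIRiskTier.GENERAL_PURPOSE: ["model documentation", "downstream usage guidance"],
}

def evaluate_use_case(name: str, risk_tier: AIRiskTier) -> UseCaseEvaluation:
    """Assign a risk tier and the protocols that address its risks."""
    if risk_tier is AIRiskTier.PROHIBITED:
        raise ValueError(f"{name}: prohibited AI practices may not be built at all")
    return UseCaseEvaluation(name, risk_tier, PROTOCOLS_BY_TIER.get(risk_tier, []))
```

For example, evaluate_use_case("candidate-screening assistant", AIRiskTier.HIGH) would come back tagged with oversight, testing, and documentation protocols, while a prohibited use case would be rejected at the ideation stage.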

Our responsible AI practices are intended to ensure our products are safe and compliant across the broadest range of needs.

Empowering Customers in the Age of AI 

We believe that responsible AI requires a partnership with our customers. Because of this, we’re committed to empowering our customers with the tools and resources they need to make informed decisions about how they use AI and manage their data.

We provide a range of capabilities that give customers control and transparency:

  • Feature enablement: Our customers decide which features to use (and which not to use), including features that leverage Workday AI.
  • Data contribution control: Customers control whether their data is used to improve Workday ML models.
  • Transparency: Workday provides resources ranging from documentation to in-tenant tools that allow customers to review the data used to improve ML models.
  • Granular access controls and security: Customers can set up security groups composed of their chosen stakeholders, including AI governance, security, privacy, legal, and compliance teams. These groups can make data-use decisions in Workday AI, and Workday provides tools to help ensure data contributions and decisions work for each customer’s needs; a simplified sketch follows this list.
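
As a purely hypothetical sketch, and not Workday’s actual product or API, a security group with authority over data-contribution decisions might be modeled along these lines:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityGroup:
    """Hypothetical security group with authority over AI data-use decisions."""
    name: str
    members: tuple[str, ...]  # stakeholder roles, not real identities
    can_decide_data_contribution: bool

# A customer-defined governance group drawing on the stakeholders named above.
ai_governance = SecurityGroup(
    name="ai-governance",
    members=("ai_governance_lead", "security", "privacy", "legal", "compliance"),
    can_decide_data_contribution=True,
)

def may_change_data_contribution(group: SecurityGroup) -> bool:
    """Only groups explicitly granted authority may change data-use settings."""
    return group.can_decide_data_contribution
```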

Our focus on transparency leads us to provide a range of resources that keep customers informed and empowered. These include detailed AI fact sheets describing our generally available AI features; each fact sheet covers an overview of the feature, its model inputs and outputs, how the model is trained, and privacy considerations. Furthermore, we regularly enhance these tools and resources to help customers better control their use of Workday AI.

An Ongoing Journey of Collaboration

Aligning with the EU AI Act not only ensures compliance but also fosters trust among our customers and stakeholders. We are dedicated to ethical innovation and contributing to a future where AI is used for good.

We view responsible AI as an ongoing endeavor at Workday, one that requires us to continually anticipate changes to AI standards and evolve to meet and exceed customer needs. While the EU AI Act is a landmark in the world of ethical AI, it won’t be the last regulation to shape how AI is built and used. When changes do come, we’ll be ready.

To explore key principles, best practices, and real-world examples of responsible AI, download the whitepaper Responsible AI: Empowering Innovation with Integrity.
