While the EU AI Act Stalls, Workday Leads on Trusted, Compliant AI

Workday champions a strong, workable EU AI Act while already operating ahead of the curve with independently verified governance that defines the standard for trusted AI.


While the EU Artificial Intelligence Act (EU AI Act) faces targeted implementation delays, Workday is moving ahead – operating to ambitious governance standards today so our customers have the transparency and controls they need to meet their own EU AI Act obligations with confidence.

Workday is already operating to an EU AI Act-ready standard, backed by a mature, risk-based AI governance program.

At Workday, we’ve always believed technology must be built on a foundation of trust, and in this new era of AI, that principle matters more than ever. While we continue to support a strong, workable EU AI Act that combines smart safeguards with the flexibility to innovate, the current timeline provides a vital window for our industry to align around consistent, high-integrity standards. Clear regulation strengthens AI governance by giving the technology consistent grounding.

Drawing on years of practical implementation experience, Workday is already aligning to EU AI Act-ready standards and stands ready to collaborate with customers, peers, and policymakers to make trustworthy, innovation-friendly AI the norm. Since 2019, we have engaged directly with EU policymakers – advocating for a nuanced, risk-based approach, clarifying the high-risk scope, supporting alignment with the General Data Protection Regulation (GDPR), and promoting practical approaches to requirements such as logging.

Workday’s Governance Foundation for EU AI Act Readiness

Workday has operated a dedicated responsible AI program since 2022, with clear principles and cross‑functional oversight across legal, product, security, and ethics teams. This program governs both Workday AI development for customers and AI deployed internally at the company.

Workday’s risk‑based AI evaluation processes are mapped to the EU AI Act’s Annex III high‑risk categories and prohibited AI practices. This means AI systems are assessed against EU‑inspired criteria before launch and throughout their lifecycle.

Workday’s AI management system is certified to ISO 42001, the international standard for AI management systems, and independently verified to be aligned with the NIST AI Risk Management Framework. You can learn more about ISO 42001 and the NIST framework on the Workday Blog.

Independent certification shows that responsible AI at Workday is baked into our systems, not bolted on. 

Making High-Risk AI More Manageable

We apply systematic evaluations to identify potential high-risk AI systems, including those that may fall under Annex III of the EU AI Act, such as hiring and recruiting technologies. That mapping informs a tiered control framework that scales safeguards based on the potential impact of a given system.

A central part of our approach is human oversight where it matters most. Human-in-the-loop design requires human decision-making at critical points in a workflow, which ultimately means people remain accountable. For higher-risk AI systems in particular, this means humans stay in control through meaningful oversight, the ability to contest outputs, and clear escalation paths.

We have also strengthened logging and traceability across relevant AI systems to support audits, incident response, and post-market monitoring. These capabilities are designed to align with the EU AI Act’s expectations around technical documentation, record-keeping, and ongoing monitoring, while making high-impact AI more auditable, traceable, and easier to govern in practice.

Human-in-the-loop design and strong logging make Workday’s high-impact AI auditable, traceable, and easier to govern.

Building AI Literacy and Transparency

At Workday, we treat AI literacy as a core capability. To ensure our team is ready to lead in this new era, we’ve rolled out "EverydayAI" learning programs and established a dedicated network of AI champions. These communities of practice allow us to share patterns and guardrails across the organization, giving our teams a shared understanding of how to build tools that serve the greater good.

This commitment to transparency extends directly to our customers. We provide AI Fact Sheets, in-product notices, and generative AI disclosures to make our technology understandable and controllable. By focusing on transparency and clear communication, we empower our customers to make smart, informed decisions about how they use AI—ensuring that innovation never comes at the cost of integrity or trust.

Policy Priorities for a Strong, Workable EU AI Act

Workday has consistently supported balanced AI policy in Europe and around the globe, playing a constructive role – for example, by supporting workable timelines that align with the availability of the necessary technical standards. We advocate for risk-based approaches, clear responsibilities along the AI value chain, and proven accountability tools. Joining the EU AI Pact in 2024 gave Workday an important opportunity to demonstrate the concrete steps we are taking to operate to an EU AI Act-ready standard, and to promote our best practices for AI literacy.

To help ensure the AI Act delivers both trust and innovation, we believe three priorities are essential.

First, better guidance is needed on high-risk classification, transparency measures, and the interplay between the AI Act and the GDPR. Clear guidance will be critical to providing legal certainty and enabling consistent implementation across sectors.

Second, watermarking and content integrity requirements should remain risk-based and proportionate. Policymakers should distinguish between materially deceptive content, such as deepfake video, and lower-risk business-to-business text content.

Third, the EU must avoid layering on additional overlapping AI-specific regulation that creates complexity without improving outcomes. A strong AI Act should be implemented clearly and enforced consistently across member states and through the EU AI Office.

Clear guidance and proportionate rules are essential to fostering the confidence needed to make the EU AI Act workable in practice for businesses of all sizes.

The Bottom Line

The EU AI Act is about more than compliance alone. At its best, it can help create the conditions for trust at scale and give businesses the confidence to innovate responsibly.

By combining our years of direct engagement with EU policymakers with our own mature, internal governance frameworks, Workday is already operating at an EU AI Act-ready standard. We believe the best path forward is clear guidance, proportionate rules, and accountable governance that makes responsible innovation practical. 

While we look forward to additional regulatory guidance that further enables consistent implementation across the industry, we are innovating responsibly so that our customers can be confident Workday has the capabilities they need to meet their own compliance obligations under EU law. We aren’t just waiting for standards to settle – we are delivering the transparency and accountability our customers need to lead the next era of AI.

More Reading