Trust by Design: How Workday Builds AI That Puts People First
Workday shares its ongoing commitment to responsible AI.
At Workday, we believe AI should help people do their best work, make better decisions, and spend more time on what matters most. This approach requires technology designed with people in mind, to be used with care and intent.
That belief guides our human-centric approach to building AI-powered technology. Workday’s AI capabilities are designed to support customers, their employees, and job candidates by unlocking opportunity, delivering clear and responsible insights, and using AI to support—not replace—human decision-making. From how our systems are designed to how they’re governed and deployed, trust isn’t an afterthought—it’s built in from the start.
For many years, Workday has taken a proactive, enterprise-wide approach to responsible AI. This work spans product development, governance, and operational practices, reflecting our long-standing view that trust must be built in, not added on later.
Workday helps organizations manage their most important assets—their people and their money—across everything from recruiting and payroll to workforce planning, accounting, and more. From day one, we’ve taken that responsibility seriously, building technology designed for trust, efficacy, and accountability.
Within that context, Workday builds AI to support people, not replace them, and this is of particular importance when it comes to hiring. Our AI does not make employment decisions, automatically reject candidates, or determine who gets a job. Instead, it’s designed to help recruiters and hiring teams manage high-volume processes more efficiently, surface relevant information, and reduce administrative work so teams can spend more time applying their expertise and judgment to hiring decisions.
Importantly, customers remain in control of how Workday’s AI is used. Hiring managers and recruiters decide how to engage with AI-supported insights. This human-centric approach is intentional: it helps ensure technology is used responsibly, transparently, and in a manner that respects both job seekers and employers.
Workday doesn’t ask customers to simply take our word for it. We have voluntarily subjected our responsible AI framework to rigorous, independent third-party evaluation and alignment with respected global standards.
Our approach aligns with the National Institute of Standards and Technology (NIST) AI Risk Management Framework, a widely recognized set of best practices developed by the U.S. Department of Commerce to help organizations identify, assess, and manage AI-related risks. We have also achieved ISO/IEC 42001 certification, the international standard for AI management systems, reinforcing our commitment to fairness, privacy, and fundamental human rights. These assessments are independently verified by leading assessors Schellman and Coalfire, underscoring Workday's dedication to developing AI responsibly and ethically. In addition, Workday has been named one of the World’s Most Ethical Companies by Ethisphere for five consecutive years.
Responsible AI requires more than just principles—it requires clear governance and accountability. At Workday, oversight of our AI systems is built into how we operate, with defined roles, review processes, and leadership engagement.
Our internal Responsible AI Advisory Board, which I manage as Workday’s Chief Responsible AI Officer, meets regularly to review and guide our responsible AI program, including new capabilities and use cases. In addition, the Workday Board of Directors provides oversight of our use of emerging technologies, including generative and agentic AI, as part of its broader governance responsibilities. Together, these structures help ensure our AI is developed and used thoughtfully, consistently, and in line with our values.
Workday believes responsible AI also requires thoughtful public policy. Beyond our own internal framework and practices, we actively engage with policymakers and regulators to help shape smart safeguards that build trust while supporting innovation.
In the United States, we have been a strong supporter of the NIST AI Risk Management Framework from the beginning, and we work constructively with lawmakers on responsible AI approaches at the federal and state levels. Globally, we have engaged with regulators on the development and implementation of the EU AI Act and participate in ongoing policy conversations in regions including the United Kingdom and Ireland. We are also proud members of Singapore’s AI Verify Foundation and continue to engage with governments in countries such as Australia and Japan as they consider their approaches to guiding AI governance.
Four individuals, Derek Mobley, Faithlinh Rowe, Sheilah Johnson-Rocha, and Jill Hughes, have filed a lawsuit against Workday alleging that our AI tools discriminate against job applicants based on race, age, disability, and gender. These claims are false, and we stand behind the integrity of our products and our approach to responsible AI.
As of January 2026, the case remains ongoing. Importantly, the court has made no substantive findings against Workday and has dismissed all claims of intentional discrimination.
To be clear about how Workday’s AI recruiting tools operate:
Workday AI does not make hiring decisions and is not designed to automatically reject candidates.
Customers retain full control and human oversight throughout their hiring processes.
Workday’s AI recruiting tools are not designed to identify—nor are they trained on—protected characteristics such as race, age, disability, or gender.
There is no evidence that Workday’s AI technology results in harm to protected groups.
We invest significant resources in proactively identifying and mitigating the risk of bias.
We remain committed to accountability, transparency, and trust as we continue building AI capabilities that help organizations solve real business problems while keeping people at the center of work.