We’re almost back to the same playbook from two decades ago. If you’re going to adopt a company’s AI and their agents, it’s really important to be able to trust that vendor, respect that vendor, and know they have high integrity and operate on a set of principles. Given our 20-year track record as a reliable and trusted partner to our amazing customers, we have a real and meaningful advantage in this race.
Q: Earlier this year, Workday announced it had achieved two top AI certifications for its commitment to responsible AI. Can you share practical examples of how Workday applies its ethics principles to responsible AI and AI agents?
Sauer: We have a great set of principles that guide our AI development and deployment, but principles are the easiest part. Everybody has them, and they all look more or less the same. The hard part is implementing them and putting them into practice.
When I got here 6 ½ years ago, we started a Responsible AI team. We now have a team staffed by data scientists and social scientists, and that’s been critical. We created an intake process where we risk-rate the features that Workday is developing. We make sure we understand what they are, and anything that’s making consequential decisions that will impact human lives is something we pay very close attention to. We curate the development process, we test, and we do everything we can to make sure the results are high integrity and fit for purpose.
Q: Some people see integrity and ethics as “brakes” on innovation. How can a strong ethical foundation actually help companies move faster, especially with AI?
Sauer: It’s true that innovation and engineering groups often see the legal department as slowing them down. That’s not how our engineering department thinks. They understand the importance of what we’re doing, and they themselves want to make sure that what they’re developing is ethical and meets high standards.
Our customers expect high standards in the innovation we’re releasing, so in some ways, it’s self-policing. We do this well because we demand it of ourselves, and we’ve got to do this well because it’s what the market demands.
Q: How do you think about human judgment as you automate more processes with AI?
Sauer: We rate risk as low, medium, medium-high, or high. When something is medium to high risk and involves what we consider a consequential decision, we have a principle that says humans must be in the loop.