Workday’s Approach to Roles and Responsibilities in AI Technology
The EU AI Act introduces a framework of accountability by defining distinct roles for providers, deployers, and downstream providers within the AI lifecycle. To effectively navigate this framework, organizations should first clearly identify their roles within the AI value chain. Organizations may also find themselves occupying multiple roles, requiring adaptable compliance strategies and governance frameworks.
At Workday, we recognize the importance of understanding and fulfilling our responsibilities across all three of these roles:
- Provider: We develop AI technologies for the market, including our own machine learning (ML) models and AI-powered features within our products.
- Deployer: We deploy AI systems within our own operations, using them to enhance our internal processes and services.
- Downstream Provider: We integrate third-party AI models, such as general-purpose AI (GPAI) models, into our offerings to enhance functionality and provide greater value to our customers.
To ensure we meet the highest standards of responsible AI, we’ve adopted a comprehensive approach to compliance. As an AI provider, Workday is building on its already robust responsible AI governance framework to meet the EU AI Act’s requirements, a forward-thinking approach we have been developing since 2019.
The Workday responsible AI program already serves our customers above industry standards, for example through our alignment with the NIST AI Risk Management Framework and by performing testing and documentation beyond what is legally required for lower-risk AI features. We are now applying that same approach to ensure alignment with the EU AI Act.
Furthermore, we have several dedicated internal programs underway to strengthen our existing practices and align them with the EU AI Act. These include efforts to enhance transparency, expand role-based AI literacy, update AI-related development and deployment policies, and improve explainability in our AI systems.
Our AI Development Approach
While we adhere to responsible AI principles across all these areas, I’d like to focus this piece on our approach to AI development, where we hold significant responsibility for shaping the technology and mitigating potential risks.
Developing and deploying AI responsibly requires a structured and proactive approach. By embedding responsible AI considerations into every stage of our development lifecycle, we ensure our AI systems are not only compliant with regulations like the EU AI Act but also aligned with our core values.