Responsible AI Takes Two: Developers and Deployers Must Partner

AI developers and the companies that deploy and use their technology must work together to ensure AI is leveraged for good. All parts of the AI value chain have a role to play in the responsible development and use of AI.

If we want to ensure that AI has a positive impact on society, we need shared responsibility and accountability between those who build AI and those who use it in the course of business. Responsible AI is a team activity in which every actor in the AI value chain plays an important role in creating the future we all want to see.

Here are some best practices for AI technology developers, such as Workday, and the deployers of that technology, such as our customers.

Best Practices for Developing Responsible AI

The following tenets are foundational for responsible AI; they are best practices we follow and encourage other developers to adopt. This approach is multipronged and requires buy-in and support at all levels of the organization. Responsible AI is not the job of just one person, team, or executive steering committee; it is a company-wide imperative that leadership must recognize and propagate throughout the company.

Perform consistent AI risk evaluation. Every new AI product that we build goes through our scalable risk evaluation, which was developed to align with the risk categories defined by the EU AI Act. This analysis enables us to efficiently determine the risk level of each new use case based on both its context and its characteristics, including technical and model design, potential impact on individuals’ economic opportunities, and surveillance concerns.

“High risk” use cases differ from “disallowed” use cases. “High risk” means the use case requires more guidelines and safeguards, along with the application of more thoughtfulness and care. “Disallowed” use cases, such as intrusive productivity monitoring and biometric surveillance, are contrary to our values, and therefore we choose not to build solutions that fall into this category.
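
To make the distinction concrete, here is a minimal sketch of how such a risk evaluation might categorize a proposed use case. The tiers, field names, and decision rules are illustrative assumptions for this article, not a description of Workday’s actual evaluation.

from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    DISALLOWED = "disallowed"   # contrary to our values; not built
    HIGH = "high"               # allowed, with the strictest safeguards
    LIMITED = "limited"         # allowed, with baseline guidelines

@dataclass
class UseCase:
    name: str
    affects_economic_opportunity: bool          # e.g., hiring, promotion, or pay decisions
    involves_biometric_surveillance: bool
    involves_intrusive_productivity_monitoring: bool

def evaluate_risk(use_case: UseCase) -> RiskTier:
    # Illustrative rules: disallowed categories are never built; consequential
    # decision support is treated as high risk; everything else is lower risk.
    if use_case.involves_biometric_surveillance or use_case.involves_intrusive_productivity_monitoring:
        return RiskTier.DISALLOWED
    if use_case.affects_economic_opportunity:
        return RiskTier.HIGH
    return RiskTier.LIMITED

print(evaluate_risk(UseCase("budget anomaly detection", False, False, False)).value)      # limited
print(evaluate_risk(UseCase("candidate screening assistant", True, False, False)).value)  # high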

Adhere to appropriate responsible AI guidelines. For allowable use cases, the result of the risk evaluation specifies required risk mitigations in the form of guidelines, so the development team can understand and document adherence to them as the solution is built. Higher-risk technologies, such as those our customers will use to assist with consequential decisions that may affect workers’ economic opportunities, require more responsible AI guidelines than lower-risk technologies, such as those intended to help identify budget anomalies. These guidelines cover areas including transparency, fairness, explainability, human-in-the-loop, data quality, and robustness. The goal is twofold: to provide quality, trustworthy AI technologies to our customers without introducing unintended consequences such as bias, and to keep pace with rapidly evolving best-practice frameworks and regulations.
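
As a rough illustration of how requirements might scale with risk, the following sketch maps a risk tier to a set of required mitigations and reports what a team has left to document. The tier names and guideline sets are assumptions for illustration, not Workday’s actual requirements.

# Hypothetical mapping from risk tier to required responsible AI guidelines.
REQUIRED_GUIDELINES = {
    "high": {"transparency", "fairness", "explainability",
             "human-in-the-loop", "data-quality", "robustness"},
    "limited": {"transparency", "data-quality", "robustness"},
}

def adherence_report(tier: str, completed: set[str]) -> dict:
    # Compare the mitigations a team has documented against what the tier requires.
    required = REQUIRED_GUIDELINES[tier]
    return {
        "required": sorted(required),
        "completed": sorted(required & completed),
        "outstanding": sorted(required - completed),
    }

# A high-risk feature with fairness testing still outstanding:
print(adherence_report("high", {"transparency", "explainability",
                                "human-in-the-loop", "data-quality", "robustness"}))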

Provide transparency to customers. Workday provides our customers with AI fact sheets for each AI tool that we offer. This gives customers transparency into how our AI offerings are built, tested, and trained; known limitations for their usage; and summaries of risk mitigations for each feature. We also share how we lead and educate our workers on the importance of responsible AI development.
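
The shape of such a fact sheet might look something like the following; the fields simply mirror the categories of information described above, and the structure and values are hypothetical rather than Workday’s actual format.

# Hypothetical fact sheet entry for a single AI feature (illustrative only).
fact_sheet = {
    "feature": "example AI feature",
    "how_it_was_built_and_trained": "summary of model design and training data",
    "how_it_was_tested": ["accuracy evaluation", "fairness testing"],
    "known_limitations": ["conditions under which quality degrades"],
    "risk_mitigation_summary": ["human-in-the-loop review", "ongoing monitoring"],
}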

Best Practices for Deploying AI Responsibly

The following best practices are key to the responsible deployment of AI technologies.

Understand distinct roles and responsibilities in the AI value chain. Regulations and best-practice frameworks are helpful for identifying AI risk management responsibilities not only for developers such as Workday but also for deployers. For example, Article 26 of the EU AI Act specifies obligations for deployers of high-risk AI systems. Another resource worth reviewing is the Future of Privacy Forum’s “Best Practices for AI and Workplace Assessment Technologies.”

In addition, recently adopted legislation in Colorado includes both developer and deployer obligations; this legislation mirrors the emerging framework Workday has been advocating for at the state level. Deployers have an important role to play, and resources are available to help clarify their AI risk management responsibilities.

Work with AI developers you can trust. Trustworthy AI developers will be familiar with existing and (just as importantly in such a fast-moving field) developing regulations and best practices. And they will have proactively built responsible AI-by-design and risk-mitigation frameworks that align to the dynamic regulatory environment.

When choosing an AI developer, ask about its understanding of and alignment with the EU AI Act, the NIST AI Risk Management Framework, and other regulatory guidance and best practices. The developer should be prepared to share a description of its risk identification and mitigation practices with you. Advanced developers will have programs in place, including dedicated teams focused on responsible AI, and will practice responsible AI-by-design, building trust directly into their AI products and technologies.

Ensure responsible use and effective oversight of AI systems. Deployers of AI systems are vital to the AI value chain because they most directly occupy the space between the system and the end user. Deployers must first determine the business challenge they wish to address and whether a given vendor-supplied AI technology provides an effective solution. Then they must consider what responsible AI guidelines should look like for their AI usage. For example, while developers should engage in fairness testing on aggregate data samples, deployers should complete this type of testing on their own local data.
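
For instance, a deployer might run a simple selection-rate comparison on its own data before relying on a recommendation feature. Below is a minimal sketch using the common four-fifths-rule heuristic; the column names, sample data, and threshold are assumptions for illustration only.

import pandas as pd

def selection_rate_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    # Ratio of the lowest group selection rate to the highest (1.0 means parity).
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical local results: 1 = recommended, 0 = not recommended.
local_results = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "recommended": [1, 0, 1, 1, 1, 1],
})
ratio = selection_rate_ratio(local_results, "group", "recommended")
print(f"selection-rate ratio: {ratio:.2f} (flag for review if below 0.80)")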

While developers should design systems to allow for effective human oversight, deployers must provide that oversight as they consider how the system best fits within the process they wish to optimize. Deployers must also configure the system to support that process, and then oversee and monitor the system’s operation once it is in place.
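
One common pattern for this kind of oversight is to route consequential or low-confidence outputs to a human reviewer rather than applying them automatically. The sketch below illustrates that pattern; the threshold and field names are assumptions, not a specific product configuration.

from dataclasses import dataclass

@dataclass
class Recommendation:
    subject: str
    action: str
    confidence: float
    consequential: bool   # e.g., could affect someone's economic opportunity

def route(rec: Recommendation, confidence_threshold: float = 0.90) -> str:
    # Consequential or low-confidence outputs always get a human decision.
    if rec.consequential or rec.confidence < confidence_threshold:
        return "queue for human review"
    return "apply automatically, with an audit log entry"

print(route(Recommendation("anomaly-42", "flag budget line for review", 0.97, False)))
print(route(Recommendation("candidate-7", "advance to next stage", 0.97, True)))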

Moving Forward, Together

As responsible AI system providers, we at Workday understand and respect our responsibility to develop trustworthy AI systems. We are also mindful that, as developers, our role occupies a specific space in the larger AI value chain. Only when all parties in that value chain commit to working together can we ensure that these technologies are used to amplify human potential and positively impact society.

Get more details on our responsible AI governance program in the “Responsible AI: Empowering Innovation with Integrity” whitepaper, where we describe the principles, practices, people, and public policy positions that drive our approach.
