If we want to ensure that AI has a positive impact on society, we need shared responsibility and accountability between those who build AI and those who use it in the course of business. Responsible AI is a team activity in which every actor in the AI value chain plays an important role in creating the future we all want to see.
Here are some best practices for AI technology developers, such as Workday, and the deployers of that technology, such as our customers.
Best Practices for Developing Responsible AI
The following tenets are foundational for responsible AI and are best practices that we follow and encourage other developers to adopt. This approach is multipronged and requires buy-in and support at all levels of the organization. Responsible AI is not the job of just one person, team, or executive steering committee; it is a company-wide imperative that leadership must recognize and propagate throughout the organization.
Perform consistent AI risk evaluation. Every new AI product we build goes through our scalable risk evaluation, which was developed to align with the risk categories defined by the EU AI Act. This analysis lets us efficiently determine a new use case’s risk level based on both its context and its characteristics, including technical and model design, potential impact on individuals’ economic opportunities, and surveillance concerns.
“High risk” use cases differ from “disallowed” use cases. “High risk” means the use case requires additional guidelines and safeguards, and calls for greater thoughtfulness and care. “Disallowed” use cases, such as intrusive productivity monitoring and biometric surveillance, are contrary to our values, so we choose not to build solutions that fall into this category.
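To make the distinction between tiers concrete, here is a minimal, purely illustrative sketch of how a risk triage along these lines might be expressed in code. The tier names, use-case attributes, and rules are assumptions for illustration only, not a description of Workday’s actual evaluation process.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the EU AI Act's categories."""
    DISALLOWED = "disallowed"   # e.g., intrusive monitoring, biometric surveillance
    HIGH = "high"               # requires additional guidelines and safeguards
    STANDARD = "standard"       # routine review applies


@dataclass
class UseCase:
    """Hypothetical attributes a risk review might capture for a new AI feature."""
    name: str
    involves_biometric_surveillance: bool = False
    involves_intrusive_monitoring: bool = False
    affects_economic_opportunity: bool = False  # e.g., hiring, promotion, or pay decisions


def triage(use_case: UseCase) -> RiskTier:
    """Assign a risk tier: disallowed uses are rejected outright, and uses that
    touch individuals' economic opportunities receive heightened safeguards."""
    if use_case.involves_biometric_surveillance or use_case.involves_intrusive_monitoring:
        return RiskTier.DISALLOWED
    if use_case.affects_economic_opportunity:
        return RiskTier.HIGH
    return RiskTier.STANDARD


# Example: a feature that influences hiring decisions lands in the high-risk tier.
print(triage(UseCase("candidate screening assistant", affects_economic_opportunity=True)))
# RiskTier.HIGH
```

In practice, a real evaluation weighs far more context than a few boolean flags, but the shape is the same: disallowed uses are ruled out entirely, and high-risk uses proceed only with extra safeguards attached.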