8 Ways to Help Ensure Your Company’s AI Is Ethical
Based on our experiences, here are eight lessons for technology companies looking to champion a commitment to ethical artificial intelligence (AI) across their organization.
Keeping up with artificial intelligence (AI) and data privacy can be overwhelming. While there's enormous promise and opportunity, there are also concerns about data misuse and risks to personal privacy. As we evaluate these topics and as the Fourth Industrial Revolution unfolds, questions arise about the promise and peril of AI, and how organizations can put steps in place to better realize its value.
Integrating “ethics” into technology products can feel abstract for engineers and developers. While many technology companies are independently working on initiatives to do this in concrete and tangible ways, it is imperative that we break out of those silos and share best practices. By working collaboratively to learn from each other, we can raise the bar for the industry as a whole. A good place to start? Focusing on the things that earn trust.
Trust has been foundational to Workday since day one. Our customers know we take their privacy and security seriously and have done so for years. That’s because privacy protections are central to Workday services through our privacy principles, our approach to GDPR, and our robust privacy program. And as we continue moving towards a people-centric, machine learning (ML)-enabled future, we are leveraging our privacy- and security-first approach to ensure we design and deliver ML in an ethical way.
Many companies are releasing high-level principles about their approach to designing and deploying AI products. But principles are only valuable if they actually get implemented. Workday recently published our commitments to ethical AI to show how we operationalize principles that build directly on our core values of customer service, integrity, and innovation. Based on our experiences, here are eight lessons for technology companies looking to champion those principles across their organization:
1. Define a common agreement of what AI ethics means. This definition needs to be specific and actionable for all relevant stakeholders in the company. To Workday, AI ethics means our machine learning systems reflect Workday’s commitments to ethical AI: We put people first; we care about society; we act fairly and respect the law; we are transparent and accountable; we protect data; and we deliver enterprise-ready machine learning systems.
2. Build ethical AI into the product development and release framework. These cannot be separate processes that create more work and complexity for developers and product teams. Workday has built our principles into the fabric of our product development and created processes that drive continued compliance with them. New ML controls have been incorporated into Workday’s formal control framework to serve as additional enforcement of our ML ethics principles. Our development teams examine every ML product through an ethical lens by asking questions about data collection and data minimization, transparency, and values-based questions. We have a long history of this in the privacy space, including privacy-by-design processes as well as third-party audits against our controls and standards. Workday has embraced a set of ethics-by-design controls for ML, and has in place robust review and approval mechanisms for the release of new technologies, as well as any new uses of data. We are committed to ongoing reviews of our processes, and evolving them to incorporate new industry best practices and regulatory guidelines.
3. Create cross-functional groups of experts to guide all decisions on the design, development, and deployment of responsible ML and AI. Early in this journey, Workday established a Machine Learning Task Force composed of internal experts spanning Product and Engineering, Legal, Public Policy and Privacy, and Ethics and Compliance. Bringing these diverse skills and perspectives together to examine future and existing uses of ML in our products has been powerful, enabling us to identify potential issues early in the product lifecycle.
4. Bring customer collaboration into the design, development, and deployment of responsible AI. Workday engages our Customer Advisory Councils from a broad cross-section of our customer base during our product development lifecycle to gain feedback around our development themes related to AI and ML. And through our Early Adopter program, we work closely with a handful of customers who act as design partners to test out new ML models and features through our Innovation Services. This enables us to understand and address customers’ ideas and concerns around AI and ML early on as we co-develop people-centric ML solutions.
5. Take a "lifecycle approach" to bias in machine learning. Machine learning tools represent a phenomenal opportunity to help our customers leverage data to enhance human decision-making. With that opportunity comes the responsibility to build enterprise-ready tools that maintain the incredible trust our customers place in us, which is why one of the focal points of Workday's commitments to ethical AI is mitigating harmful bias in ML. Workday takes a lifecycle approach, with checkpoints where we perform bias assessments and reviews from a product's initial concept through post-release.
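To make the idea of a bias checkpoint concrete, here is a minimal, hypothetical sketch (not Workday's actual tooling) of one common fairness measure such a review might compute: the demographic parity difference, i.e. the largest gap in positive-prediction rate between any two groups.

```python
# Illustrative bias checkpoint (hypothetical example, not a real product control):
# compare positive-prediction rates across groups.

def demographic_parity_difference(predictions, groups):
    """Return the max gap in positive-prediction rate between any two groups.

    predictions: sequence of 0/1 model outputs
    groups: sequence of group labels, parallel to predictions
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + pred)
    per_group = [pos / tot for tot, pos in rates.values()]
    return max(per_group) - min(per_group)

# Example: a model whose positive rate is 0.75 for group "a" but 0.25 for "b"
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 -> 0.50
```

A lifecycle gate might flag any model whose gap exceeds a review threshold for closer human examination; a single metric like this is a starting point for discussion, not a verdict on fairness.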
6. Be transparent. The ethical use of data for ML requires transparency. Because machine learning algorithms can be so complex, companies should go above and beyond in explaining what data is being used, for what purpose, and how it's being used. We explain to customers how our ML technologies work, the benefits they offer, and the data content needed to power any ML solutions we offer, and we hold ourselves accountable to customers for the ML solutions we deliver.
7. Empower your employees to design responsible products. We do this through required ethics training modules, toolkits, seminars, employee onboarding, and workshops to ensure Workday employees are trained in how to uphold our AI ethical commitments. For example, a human-centered design thinking workshop uses different scenarios and personas to help Workday employees understand our commitments to ethically creating ML technologies.
8. Share what you know and learn from others in the industry. We do this through participation in industry groups and peer meetings such as the World Economic Forum Steering Committee for Ethical Design and Deployment of Technology to help develop an ethical framework for the tech industry. In addition, Workday makes it a priority to monitor and contribute to emerging standards. In the United States, Workday has engaged heavily with lawmakers and agency officials on ethical AI, including developing and participating in a Congressional AI Caucus staff briefing on "Industry Approaches to Ethical AI," and playing the role of convener between industry and policymakers in multiple venues. In addition, we provided support for the National Science Foundation's update to the National Artificial Intelligence Research and Development Strategic Plan and the National Institute of Standards and Technology's development of its report "Artificial Intelligence Standards and Tools Development," and we continue to advocate for an expanded role for NIST in the development of AI ethics tools. In Europe, Workday participated in a pilot program to evaluate the European Union High-Level Expert Group's (HLEG) Trustworthy Artificial Intelligence Assessment List.
As we navigate this evolving world of ethical AI, it will be more important than ever to share practices and identify what we've learned along the way. We are eager to hear from others on what approaches have been effective for scaling and implementation, and we welcome the opportunity to share. In fact, the aim of Workday's collaboration with the World Economic Forum is to encourage others to join us in sharing their best practices for championing responsible and ethical tech. The pursuit of responsible, ethical artificial intelligence and technology is critical—and is greater than any single company or organization.
Together, we should be building goodwill and trust through our actions, allowing us to realize the benefits of these powerful new technologies.