Safeguarding Privacy While Innovating With AI at Workday

Workday Chief Privacy Officer Barbara Cosgrove discusses the growing focus on the convergence of data privacy and artificial intelligence (AI). Learn how Workday balances privacy with the need to maximize value from AI technology.

May 25 marks the five-year anniversary of the General Data Protection Regulation (GDPR), and what an amazing ride it’s been! GDPR ushered in a new wave of transparency and accountability requirements in data processing, and mandated that businesses embrace privacy-by-design frameworks. In my interactions with customers and regulators, conversations are increasingly focused on the convergence of data privacy and artificial intelligence (AI) and machine learning (ML). Depending on whom you ask, the future of AI and ML may seem bright, bleak, or both. To me, this comes as no surprise; historically, innovation has garnered that kind of response.

Data privacy is of paramount importance in AI and ML technology. Models are highly dependent on the quality and quantity of data they receive. However, businesses are faced with the challenge of maximizing value from AI and ML solutions without compromising privacy. Business leaders want the security advancements that help keep their company’s information safe while automating mundane tasks so that their employees can focus on more meaningful work.

The good news is that innovating responsibly and protecting the personal rights of individuals do not have to be mutually exclusive. At Workday, we take a balanced approach that allows us to leverage the latest advancements in AI and ML technology, while preserving our commitment to privacy and AI ethics principles.

AI Innovations at Workday

Proper data privacy measures can help build trust and confidence in AI and ML. This, in turn, encourages greater adoption and use of these technologies. Just as we look to innovate with our products and services, we find innovative ways of embedding privacy into the development of our AI and ML solutions.

By definition, innovation is the introduction of something new. Traditional compliance standards may not be easily applied, or a perfect fit, for a new idea, a new product, or a new way of doing things. At Workday, we don’t wait for someone to hand us the standard. We proactively create internal standards, and we collaborate and advocate externally to support the standards that help us meet our compliance goals. We also help our customers understand how we define and demonstrate compliance. Here are some examples of how:

We anticipate customer needs. When GDPR took effect, there wasn’t an approved audit or certification for compliance. Workday quickly added a mapping in our SOC 2 report so that customers could understand our compliance. We then engaged with Scope Europe to help create the EU Cloud Code of Conduct, which demonstrates compliance with the GDPR. We were the first company to certify adherence.

Building upon our experience with GDPR, we understood that customers would need information about our ML capabilities to trust that we’re living up to our values. To provide transparency to customers, we provide data sheets containing descriptions of how our AI solutions operate. This level of transparency helps customers conduct impact assessments to identify possible risks associated with AI and ML.

We support the development of standards, frameworks, and best practices. Workday is a leader in the development of trustworthy and responsible AI. From the framework’s inception, Workday has supported and contributed to the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF). We work with regulatory authorities and policymakers around the globe to help establish national and global standards for the development of trustworthy and responsible AI.
Furthermore, our Co-President Sayan Chakraborty, in his personal capacity, is a member of the National AI Advisory Committee (NAIAC), which is the nation’s highest-level team of cross-disciplinary AI experts, directed by Congress to advise U.S. President Joe Biden on AI matters. I also co-chair the HR Working Group of the Responsible AI Institute, a nonprofit organization aimed at providing tools to buy, sell, or build safe and trustworthy AI systems, and I’m on the board of the International Association of Privacy Professionals, which just announced a new AI Governance Center. There’s still a lot of work to be done as standards, frameworks, and best practices mature, but we’re working with lawmakers, regulators, and industry leaders to help create viable and sustainable solutions.

We don’t reinvent the wheel; we rely on strong fundamentals. Our ML Trust team leverages Workday’s long history of privacy-by-design principles in the development and management of our ML governance program. ML Trust works closely with me and the privacy team on key aspects of the program, including principles, policies, standards, and guidelines. These teams are also closely knit, which helps ensure no one works in silos.

As an example, the teams partner on data minimization. A cross-functional group evaluates, and even challenges, whether the data requested for a feature or product is absolutely necessary. This scrutiny continues even after data collection begins, and data that’s no longer deemed necessary is promptly removed.
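To make the idea concrete, here is a minimal sketch of how a data-minimization allowlist might be enforced in code. This is a hypothetical illustration, not Workday’s actual implementation; the field names and the `minimize` helper are invented for the example.

```python
# Hypothetical data-minimization sketch: only fields explicitly approved
# for a given feature survive; everything else is dropped at ingestion.
# The approved field set below is an invented example, not a real policy.
APPROVED_FIELDS = {"employee_id", "job_title", "tenure_months"}

def minimize(record: dict) -> dict:
    """Return a copy of the record containing only approved fields."""
    return {k: v for k, v in record.items() if k in APPROVED_FIELDS}

raw = {
    "employee_id": 42,
    "job_title": "Analyst",
    "home_address": "123 Main St",  # not needed for this feature
    "tenure_months": 18,
}
clean = minimize(raw)  # home_address is dropped
```

An allowlist (rather than a blocklist) reflects the "challenge the need" posture described above: a field is excluded unless someone has affirmatively justified collecting it.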

We provide guidance, training, and education so employees can innovate responsibly. At Workday, it’s a top priority to properly train and educate our Workmates who work with AI and ML technology on the potential risks and ethical considerations associated with its use. That’s why we establish clear guidelines for the use of these technologies, such as generative AI. These guidelines include specific use cases and scenarios where the technology should be used and where it should be avoided. This helps us mitigate potential risks and ensure that our use of AI and ML aligns with our company values and ethics. We want to empower our developers to move fast and responsibly—to build the best innovative and trustworthy products for our customers, in line with our privacy commitments.
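Guidelines like these can also be expressed as policy-as-code so tooling can check them automatically. The sketch below is a hypothetical illustration of that pattern; the use cases and decisions shown are invented and do not reflect Workday’s actual guidelines.

```python
# Hypothetical policy table mapping generative-AI use cases to a decision.
# Use cases and decisions are invented for illustration only.
GENAI_POLICY = {
    "draft_internal_docs": "allowed",
    "summarize_public_content": "allowed",
    "process_customer_personal_data": "requires_review",
    "automated_employment_decisions": "avoid",
}

def check_use_case(use_case: str) -> str:
    # Anything not explicitly listed defaults to human review,
    # mirroring a cautious, privacy-first posture.
    return GENAI_POLICY.get(use_case, "requires_review")
```

Defaulting unknown use cases to review, rather than silently allowing them, keeps the guidelines enforceable as new scenarios emerge.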

Unwavering Commitment to Privacy

Innovation is moving at lightning speed, but even as technologies like AI advance, privacy will remain a priority for us. Workday remains an advocate for risk-based regulatory approaches that balance privacy with innovation. We also continue to monitor the privacy, AI, and ML landscapes to anticipate changes that may impact our customers or Workday technologies. Our ultimate aim is to make sure our customers can continue to benefit from the productivity, insights, and elevation of human potential that Workday products offer.