May 25 marks the five-year anniversary of the General Data Protection Regulation (GDPR), and what an amazing ride it’s been! GDPR ushered in a new wave of transparency and accountability requirements in data processing, and mandated that businesses embrace privacy-by-design frameworks. In my interactions with customers and regulators, conversations are increasingly focused on the convergence of data privacy and artificial intelligence (AI) and machine learning (ML). Depending upon whom you ask, the future of AI and ML may seem bright, bleak, or both. To me, this comes as no surprise: historically, innovation has garnered that kind of response.
Data privacy is of paramount importance in AI and ML technology. Models are highly dependent on the quality and quantity of the data they receive. However, businesses face the challenge of maximizing value from AI and ML solutions without compromising privacy. Business leaders want security advancements that keep their company’s information safe while automating mundane tasks so that their employees can focus on more meaningful work.
The good news is that innovating responsibly and protecting the personal rights of individuals do not have to be mutually exclusive. At Workday, we take a balanced approach that allows us to leverage the latest advancements in AI and ML technology, while preserving our commitment to privacy and AI ethics principles.
AI Innovations at Workday
Proper data privacy measures can help build trust and confidence in AI and ML. This, in turn, encourages greater adoption and use of these technologies. Just as we look to innovate with our products and services, we find innovative ways of embedding privacy into the development of our AI and ML solutions.
By definition, innovation is the introduction of something new. Traditional compliance standards may not be easily applied, or may not be a perfect fit, for a new idea, a new product, or a new way of doing things. At Workday, we don’t wait for someone to hand us the standard. We proactively create internal standards, and we collaborate and advocate externally to support the standards that help us meet our compliance goals. We also help our customers understand what compliance means in practice. Here are some examples of how:
We anticipate customer needs. When GDPR took effect, there wasn’t an approved audit or certification for compliance. Workday quickly added a mapping in our SOC 2 report so that customers could assess our compliance. We then engaged with Scope Europe to help create the EU Cloud Code of Conduct, which demonstrates compliance with GDPR. We were the first company to certify adherence.
Building upon our experience with GDPR, we understood that customers would need information about our ML capabilities to trust that we’re living up to our values. To provide that transparency, we publish data sheets describing how our AI solutions operate. This level of transparency helps customers conduct impact assessments to identify possible risks associated with AI and ML.
We support the development of standards, frameworks, and best practices. Workday is a leader in the development of trustworthy and responsible AI. From its inception, Workday supported and contributed to the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF). We work with regulatory authorities and policymakers around the globe to help establish national and global standards for trustworthy and responsible AI.