Custom Content from WSJ is a unit of The Wall Street Journal Advertising Department. The Wall Street Journal news organization was not involved in the creation of this content.

In 1959, a little-known Swedish engineer developed a product that would revolutionize transportation. Until then, automobile drivers had little to protect them in a crash (the standard lap belt of the era was ineffective and often went unused). His invention, the three-point seat belt, changed that. The technology was quickly adopted by car manufacturers around the world and is one of the primary reasons the rate of driving fatalities has plummeted in the past few decades. Although some crashes are inevitable, the benefits of driving, combined with rising safety standards, have made it a risk worth taking for billions around the globe.

This, argues Shane Luke, vice president of AI and machine learning (ML) at Workday, is how we should think about artificial intelligence. “Every time you get in a car, there’s a chance you can get in an accident,” says Luke. “But you mitigate potential risks, and then you can take those risks knowingly. That’s how we think about deploying AI.”

In this analogy, Luke is one of the people building the seat belts. In his role at Workday, he oversees the company’s efforts to develop AI applications responsibly, particularly when it comes to data. His work is animated by the belief that technology at enterprise scale can be nothing short of transformative, helping a business become more efficient in everything from employee retention to auditing, as long as the safety architecture is in place to ensure clean, transparent and secure data sets.

“We have built our company around protecting end-user data and hosting it on behalf of our customers.”

Jim Stratton, Chief Technology Officer, Workday

Where the Wild Data Are

According to research from Workday, while 98% of CEOs see some immediate business benefit from implementing AI and ML, uncertainty about data privacy and a lack of trust are holding business leaders back from fully embracing and adopting the technology. Much of the fear around data privacy and security in AI comes from consumer-facing generative AI models. In these applications, the data pool is opaque. “To a large extent, ethical AI means being transparent about what goes into the system,” says Jennifer King, a privacy and data policy fellow at Stanford University’s Institute for Human-Centered AI. “The data supply chain is very important.” With these public AI platforms, the inputs can be problematic, she says, ranging from discriminatory information that may violate civil rights to fictitious data sets.

But for enterprises, AI is an entirely logged-in experience. “You know who the user is. It provides accountability versus something wide open on the internet,” Luke says. So a common threat like prompt injection (where malicious actors craft inputs that override a model’s instructions to produce harmful output) can be audited and traced back to a specific user.

“There is some inherent risk with AI. But we’re committed to building out the safety technologies for these systems that allow businesses to mitigate those risks and fully take advantage of the benefits.”

Shane Luke, Vice President of AI and Machine Learning, Workday

Accuracy of output is another issue. According to Workday research, 61% of business leaders cite potential errors as a top concern. With public applications, more data means more noise, which means more room for error in machine learning systems. That’s less of a concern when relying on an enterprise technology partner, which uses smaller, cleaner data sets for purpose-built AI. Take Workday. The company has a machine learning application that allows customers to map their workforces based on skills, making it easy to see gaps or growth opportunities. The tool works not because it has access to the far reaches of the public internet, but because Workday has visibility into specialized information. “The data set might not be as big as that of a consumer app, but we have very specific data that is highly relevant and tailored to the business,” Luke says.

Not that enterprise-level AI isn’t robust. Workday, for instance, has access to over 60 million users representing nearly 450 billion transactions a year. Given that scale, a key question for many executives becomes, what happens to that employee data outside of the organization? The concern is fueled by the shoddy protections offered by most public AI tools. These open models, King notes, “violate a lot of people’s expectations of what they think should happen to their data.”

In enterprise applications, however, robust privacy controls are built into the system. Workday, for instance, centralizes its architecture, which ensures customers know where and how their data is used at all times. “We have built our company around protecting end-user data and hosting it on behalf of our customers,” says Jim Stratton, chief technology officer at Workday. “We maintain the same treatment of data with the same protections when it comes to AI and machine learning.”

The Role of Regulation

Risk mitigation for AI isn’t solely the responsibility of the companies that deploy it. Policymakers play a key role in creating guardrails for data usage. King is helping to lead the charge in Washington, D.C. “One of the accountability measures that I and others are arguing for is what we call data provenance,” she says. “Companies need to document where they’re getting data from and be able to trace it.”

While policymakers are responsible for regulation, ultimately the onus of data accountability lies with companies and their AI partners. Not only is it important for business leaders to protect their proprietary data and their employees’ personal data, but companies must also be able to show that they’re leveraging the technology in a safe and ethical way.

“At Workday, our work is built on the fundamental principles of privacy and security of data,” Luke says. In other words, the safety belts have been created—businesses just have to buckle up.
