Workday Podcast: Charting a Smarter Future for AI in the Enterprise

Jason Albert, deputy general counsel, and James Cross, vice president of product strategy at Workday, discuss why enterprise artificial intelligence (AI) and machine learning will be used much differently than in the consumer space, how we can shape the future we want to see, and how companies can quickly ramp up their own AI efforts.

Listen on SoundCloud: Workday Podcast: Charting a Smarter Future for AI in the Enterprise

Listen on Apple Podcasts: Workday Podcast: Charting a Smarter Future for AI in the Enterprise

 

Josh Krist: When it comes to what the future will look like, we’re not just spectators, we’re participants. Whenever I’m able to spend some time talking to my next guest, I find myself thinking a lot about this idea. The future doesn’t just happen to us. It’s not—or at least it doesn’t have to be—a spectator sport. Jason Albert is deputy general counsel here at Workday. James Cross is the VP of product strategy. Today on the Workday Podcast we’re going to talk about where public policy and the future of the intelligent enterprise intersect, and what you can do today to help shape our tomorrow. I’m Josh Krist. Welcome, gentlemen.

Jason Albert: Thank you, it’s great to be here.

James Cross: Yeah, great to be here, Josh.

Krist: So let’s start by first defining what we’re talking about when we talk about AI in an enterprise context. James, would you like to start?

Cross: Yeah, absolutely. So that’s a really great question to start with. I think AI is a bit of a nebulous term, and there’s so much hype around it in the media and in movies and TV shows today. When you think of AI, you think of this kind of all-knowing sci-fi computer with human traits that has feelings and emotions. But really what we’re thinking about is much more narrowly defined in the enterprise. It’s much more about machine learning and deep learning, and it’s really about algorithms and models that are very good at doing very specific and narrow tasks.

So things like identifying an anomaly in an expense report, or identifying who’s a high-potential or who might be a good fit for which role—we’re thinking about these kind of very narrow machine-learning capabilities that are driven by reams of data that they’ve learned to dissect patterns in and make predictions based upon. So that’s the way I like to think about it in the enterprise.
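[Editor’s note: as an illustration of how narrow these tasks are, here is a minimal Python sketch of the expense-anomaly idea Cross mentions: flag any line item that sits far from the batch average. It is a toy z-score check with made-up numbers, not how any Workday model actually works.]

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of amounts more than `threshold` sample standard
    deviations away from the mean of the batch."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# Hypothetical expense report: team lunches cluster around $40,
# with one $2,500 entry that clearly stands out.
expenses = [38.50, 42.00, 41.25, 39.90, 40.10, 2500.00, 43.75, 37.60]
print(flag_anomalies(expenses, threshold=2.0))  # prints [5]
```

A production system would learn per-category and per-employee baselines from historical data rather than a single batch statistic, but the spirit is the same: a narrow model answering one narrow question.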

Albert: From my perspective, AI is a very interesting term because a lot of things fall under it. People use it very expansively, whether in the policy space or in the marketing space because it’s a very sexy term, it’s very high profile. But it really encompasses a number of different technologies like the things James mentioned. Things like big data analytics, like machine learning, correlations, things like that. And a lot of people think about AI in the context of automation. Like it’s going to take away jobs or we’re going to have robots or robot-like things replacing us in the future.

When really in the enterprise, a lot of the AI is about helping humans make better decisions. Providing more insights, providing more analysis than we’re necessarily capable of on our own, but always having that human touch at the end to be able to figure out: How do you apply that? How do you exercise judgment around that? And so to me, AI just encompasses such a broad range of things and it’s important when we have a conversation about it to make sure that we focus on the promise that it holds and not just the idea that we’re going to see a bunch of Terminators walking down the street at some point in the future.

Krist: Right. So then, where does policy come in if it’s happening anyway, right?

Albert: Yes. And like any sort of new technological development, you start by having the new technologies, then business models develop around that. But the fact is, we as a society always decide how we deploy technology. We decide how technology is used. Technology isn’t this inexorable force that acts upon us. And so, like any other technology, it is going to be regulated. We don’t know enough about the promise of AI, we don’t know about the possibilities. But at some point we’re going to get to a situation where it’s going to be time for regulators to act. 

And so from a policy perspective, the approach we’ve really been taking has been to try to educate policy makers. People are moved by what they experience or what they read about in the paper, so you hear a lot about consumer services, about how AI is being deployed by large consumer companies, how we might have self-driving cars, or we might have automated hamburger servers. And those are things that are really easy for people to relate to, much more than perhaps some of the enterprise examples, but we can’t let them drive the AI policy debate.

Krist: Right.

Albert: Because if we do, then you’re going to end up with regulations based on one narrow slice. And so our goal has been to try to promote efforts that—and I’m happy to talk about this in more detail—help educate policy makers about the breadth of AI, about its promise, and how it’s being deployed today.

Krist: Right. And then there’s also the worker/workforce policies. I mean, this is not just going to impact businesses—this is going to impact people. 

Albert: I’ll start and I know James is going to have a lot to contribute here as well. When you think about it from the workforce [perspective], people are concerned about automation. But the thing to remember is that AI offers all sorts of possibilities for helping workers too. One of the great things in [the] Workday [product] is the opportunity graph. Somebody on my team or I can go look and think, “All right, I’m ready for my next role. What have people in my job done? Where have they gone? What skills do they need to get there?” And they can go look and see, “All right, for the next role a lot of people have gone into this position, a few have gone into that position. And oh, I need these five skills. I have four of them but the fifth one I need to get through training or some on the job experience.” 

And the other thing that Leighanne Levensaler, our SVP of Corporate Strategy, is always fond of saying is, “AI can help us get rid of a lot of repetitive tasks.” People want to have rewarding jobs where they’re contributing, and so AI helps relieve them of that type of burden, helps them move up the value chain, do more interesting work. My boss Jim Shaughnessy, our GC [now Executive Vice President for Corporate Affairs], is fond of telling an anecdote that he heard in a speech: ATMs came in, and now we can all go to an ATM and get our money without going into the bank or interacting with a teller. And people thought this was going to destroy bank teller jobs, but it hasn’t. There actually are more bank tellers now than there were before ATMs. I’m very grateful for this because my wife’s a bank teller. And so he’s talking about this and I go home and I ask Katie [wife], “What do you do all day?”

Krist: Yes.

Albert: Because people can get stuff from the ATMs, we do all of our banking online. And she’s like, “Well there’s certain things you can only do at a desk like getting a cashier’s check or counter check or something like that. But a lot of people want to talk to someone, want a lot more complex transactions, and the value that the tellers add is introducing people to more bank services, cross-selling services, helping people deepen their relationship with the bank.” So that’s a job that’s been transformed by AI, but far from being eliminated, it’s actually grown. 

Krist: So they’re actually value-add jobs too, they’re not just transactional.

Albert: Exactly.

Cross: That’s a great example. And it reminds me of when we were building the Workday Learning product, one of our design partners was a large bank in the U.S. And they wanted to deploy Workday Learning because they saw that their people were having to do new types of tasks. People who were bank tellers were previously doing very transactional things, but now they suddenly had to be cross-selling and working very consultatively with customers. And they wanted a platform to help them enable those people to develop those new kinds of consultative skills. So it’s really great that our customers today are using Workday solutions to help reskill people as their roles change.

Talking about automation—you mentioned Leighanne’s great point about taking the drudgery away from jobs. Have you ever noticed when you check into a hotel that there’s a lot of typing and clicking happening behind the desk for what is a very simple transaction? I always wondered about this, and the reason is that the employee behind the desk—and the same goes for call centers and even finance transactions—is having to work within multiple applications. Often, there will be legacy green-screen applications, some cloud applications in there too, and they’re having to jump between multiple applications and multiple windows. That’s why all that kind of frantic typing is happening back there and it takes 10 minutes to check in to a hotel. So we recognize lots of our customers are in this mixed ecosystem today of legacy applications and cloud applications, and it’s up to that person at the desk to kind of jump between them.

But new technologies, such as robotic process automation, can actually take a lot of that drudgery away. They can interact with all those different applications on behalf of that employee, and they can actually make it much more seamless for them so they can get the job done quicker. So RPA is another really interesting area to us today too. And you can also start to apply some automation and intelligence as—

Krist: RPA? 

Cross: Robotic Process Automation. 

Krist: Okay.

Cross: It’s a new breed of applications that help to bridge the divide between all of these different apps and allow that person behind the desk or in the call center to do their job, which is helping the customer rather than navigating and traversing lots of different applications and copying and pasting data today.
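[Editor’s note: to make the RPA idea concrete: real RPA products drive the user interfaces of existing applications, but the toy Python sketch below captures the shape of it, one automation moving a hotel reservation from a stand-in legacy system into a stand-in billing app so the clerk doesn’t re-key it. All the names and records here are invented for illustration.]

```python
# Two dicts stand in for the systems the desk clerk would normally
# bridge by hand: a legacy reservation system and a cloud billing app.
legacy_reservations = {
    "R-1001": {"guest": "A. Smith", "room": "412", "nights": 2},
}
cloud_billing = {}

def check_in(reservation_id):
    """Copy a reservation into billing in one automated step,
    instead of the clerk re-keying it across multiple windows."""
    record = legacy_reservations[reservation_id]
    cloud_billing[reservation_id] = {
        "guest": record["guest"],
        "room": record["room"],
        "nights": record["nights"],
        "status": "checked-in",
    }
    return cloud_billing[reservation_id]

print(check_in("R-1001")["status"])  # prints checked-in
```

The value is not in the copying itself but in removing the swivel-chair work, which is exactly the inefficiency Cross describes at the hotel desk.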

Krist: Right. And I know, as a former educator, you’re very passionate about the workforce of the future and how they will be impacted by automation. So what are your thoughts there?

Cross: Absolutely. So, I think it’s a really good argument that a lot of the drudgery is going to be taken away from some jobs and people are going to be reskilled within the companies. But I think we do need to recognize, too, that there are going to be some people who are displaced, maybe more so than others. So if you think about retail workers for instance, if you look at some of the developments we’re seeing in automated retail stores and automated checkout, that’s probably going to impact a lot of jobs for a lot of people in the future.

And I think it’s up to all of us to be proactively thinking about this and planning ahead and building bridges between industry and community colleges and education institutions—making sure that these people are able to access learning opportunities that help them develop different sets of skills for the different types of jobs they’ll be doing in the future, too. So I think it’s super important that we start that conversation as well.

Krist: Okay.

Albert: Yeah, I think that’s an incredibly important conversation and an incredibly difficult problem. One of the things that’s interesting about today’s political environment is that, as hyper-partisan as it is, there is tremendous interest from both political parties, the Democrats and the Republicans, in worker retraining. I think this is going to be one of the most important policy conversations that we have over the next 5 or 10 years. And there’s tremendous room for greenfield, innovative thinking in that area.

Krist: Right. And you had actually mentioned earlier that we are doing what we can to be a part of the conversation. Can you talk about some of the efforts that Workday is making—I know that we published a blog reaffirming our privacy principles—especially in the face of data powering all these algorithms. So can you talk to that for a few minutes?

Albert: Sure. I’d love to talk about that. So one of the things to think about in the context of AI/machine learning is that these technologies will grow and we will only realize their benefit if people trust them. And fundamental to that trust is having trust that their personal data isn’t going to be misused or used in a way that isn’t anticipated or somehow captured in a way that they lose control over it. 

And we’ve certainly seen some recent, very prominent examples in the news that have raised questions about whether there are sufficient protections. That’s one of the reasons why Workday supports comprehensive federal privacy legislation. But we wanted to take a further step as a company and say that as we deploy AI, as we think about machine learning, as we build this into our products, we’re going to reaffirm the core privacy principles on which the company has always been built. We’ve focused on privacy from the very beginning—it’s been critical to our business success, given the types of data that are in our HCM systems. We’ve had to protect privacy, our customers have demanded it, and it’s the right thing to do for them and for their employees.

So we announced three core privacy principles that will apply to everything we do, including our use of AI and machine learning. One is to put privacy first. We’re always going to be transparent about our data uses, we’re always going to make sure that our customers and their employees aren’t surprised by data usage.

The second is to innovate responsibly. We’re going to make sure that we’re transparent about how we design systems and products, we’re going to work and partner with our customers around that, as we have from the very beginning. And we’re going to make sure that we address concerns around bias and around data use.

And the third is that we’re going to safeguard fairness and trust. And we do that by proving out what we say through our standards, through our adherence to Privacy Shield and Binding Corporate Rules, and all those mechanisms that we’ve put in place to ensure that we continue to maintain the trust of our customers. And this is important because without trust people won’t let their data be used and the world won’t realize the benefits of these technologies. 

And then another thing is we’re trying to take that same message further on to Capitol Hill. One of the things that we’ve been supporting this year was a bill to create a federal study commission around AI. And again this goes back to the point I was mentioning earlier—

Krist: A federal study commission around AI? Is that what you said?

Albert: Around AI, yes.

Krist: Okay.

Albert: Because we want to make sure that as policy makers and regulators think about this, that they think about the broad scope of it. That they’re not just influenced by what’s in the paper or what happens to be top of mind or their own experience. 

Krist: Right.

Albert: And we think that’s an important effort, and that it’ll help serve as the baseline for thinking about regulation that’s consistent with the principles we’ve outlined in our blog and with our support for federal privacy legislation. So for us this is all part of the same thing: What are we advocating from a policy perspective? What are we going to do as a company? How are we going to treat our customers and their employees? What are the protections that we think people should have? And people shouldn’t have to rely on a company’s good will to get them.

Krist: Right. So James, if I’m a business leader listening to this and I realize, “You know, I haven’t been thinking about this.” Or I’ve been watching the news thinking, “Oh my gosh, what’s going to happen? What can I do? What should I do to get my business ready to take advantage of this and take care of my people?”

Cross: So I think one of the best places to start is by using smart applications that have machine learning and deep learning and AI built in. And so it’s not like you have to go out and hire a bunch of machine learning developers and build your own models and tools. If you speak to a lot of your cloud vendors like Workday and people like Salesforce, you’ll find that they’re actually infusing their products with AI and machine learning. That can help to bring those technologies into your enterprise. So that’s a really good place to start. 

Another great place to start is services like Amazon Web Services and Microsoft’s Azure Machine Learning [Studio], which expose lots of really great machine learning tools and features in a way that’s easy to consume and to build applications from. So it’s a combination of smart applications and using these pre-made building blocks to solve problems. But then when you think about the business problems you can start to solve, the first place to look is probably any inefficiencies that exist today. So, that person behind the hotel desk who has to jump between all those applications—that’s low-hanging fruit in terms of an inefficiency that machine learning and AI can solve.

And I think it’s a bit of a spectrum. You start by making your enterprise more efficient, then you maybe start incorporating more use of data and more use of predictions, pushing data to your people managers to allow them to use data to make decisions in the field and empowering them. And I think later on you get to a point where, really, driven by competitive dynamics, by things that other competitors in the market are doing, you end up in a place where your whole business is being reshaped around these new technologies. 

So if you look at a company like Stitch Fix, for instance, which entered the clothing retail market—a really difficult market to enter—by using machine learning: they offer curated clothes-as-a-service boxes that get sent to you, a machine combined with a human decides what’s in your box of clothes, and the service learns your preferences over time. They’ve been able to come into this market with a completely new business model enabled by these technologies.

And we’re seeing the same with some of the automated shopping places, and with automated food places. I know in San Francisco there’s an automated burger restaurant that’s just opened, which I’m quite excited to check out. But they’re doing things that were impossible before, and entering very competitive markets by doing things in a fundamentally new way. And I think whichever industry you’re in, eventually you’re going to be facing these kind of AI- and machine-learning-native competitors doing things in different ways. And so that’s probably going to force your hand eventually. You’re probably going to have to really think differently in the future, so the time to start thinking about this is right now.

Krist: Right. Much in the same way that cloud is now a matter of fact. Because even if you don’t adopt it at first, for the most part, eventually you have to.

Cross: Absolutely. Because if you’re not using cloud, and your competitor is and they’ve got more efficient operations because of it, then you’re at a disadvantage. And that’s going to be exactly the same case with AI and machine learning too.

Krist: Right. And then Jason, how about for your legal counterparts listening, I mean what do they need to start doing that maybe they’re not?

Albert: That’s a great question, Josh. The way that I think about it is that we all tend to analogize to our experience. And so we think, “All right, I’m going to develop an AI-based product or I’m going to adopt an AI-based product—that’s similar to the development of any past technological product or any other adoption decision.” But it really isn’t. You have to build on what you’ve done in the past but also think about a whole bunch of new areas. Privacy is going to be much more important, in particular when you’re developing insights from data, and especially with the increase in privacy rules around the world—the EU having adopted GDPR, bills pending in other countries. You have to really think that through: how it’s going to apply, and how you’re going to protect individuals’ rights.

And then you have to expand beyond that and look at all the other issues. So bias is a big issue in AI, you have to make sure that you’re not inadvertently introducing some into the system. It’s not just enough to take a bunch of data and throw it at a problem, because the data set itself may have issues. So you have to think: How do I curate the data? How do I make sure I have good data? How do I test that data against the algorithm—particularly if the data is coming from one source and the algorithm’s coming from another? How do you put those together? How do you figure that out?
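[Editor’s note: one concrete form that data curation can take is auditing the training set before any model sees it. The sketch below, using entirely made-up records, computes a label’s positive rate per group, a crude first check for the kind of skew Albert warns about, and no substitute for a real fairness review.]

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key, label_key):
    """Rate of positive labels per group, e.g. promotion rate by office."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        if r[label_key]:
            positives[r[group_key]] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical training data for a "who gets promoted" model.
data = [
    {"office": "east", "promoted": True},
    {"office": "east", "promoted": True},
    {"office": "east", "promoted": False},
    {"office": "west", "promoted": False},
    {"office": "west", "promoted": False},
    {"office": "west", "promoted": True},
]
rates = positive_rate_by_group(data, "office", "promoted")
print(rates)  # east about 0.67 vs. west about 0.33: a skew worth investigating
```

A lopsided rate does not prove bias on its own, but it is exactly the kind of question to ask of the data before an algorithm from somewhere else is trained on it.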

You have to figure out transparency and explainability, because if something spits out a result or a prediction or a number, do you understand how it was arrived at? How would you explain it? You can’t act upon it, you can’t exercise the judgment to make the decision, if you don’t have some understanding of the process by which that prediction was arrived at.

Krist: The black box problem.

Albert: Exactly. The range of legal issues and advice that you have to think about is really much broader than with any sort of traditional product development or adoption. And so thinking broadly, and trying to identify new things to consider, is probably the most important thing.

Krist: Right. Okay great. Was there anything I should’ve asked that I didn’t?

Cross: Just on that last topic.

Krist: Yes?

Cross: We’ve talked about people being displaced from their jobs and having to develop new skills and reskilling. But I think also, if you think about your managers and just your employees today, if they’re working in a very data-driven environment in the future and they’re working alongside these machine predictions, they’re going to need to develop new skills for their own jobs to be able to effectively use these predictions. And I know that a lot of companies here in the [Silicon] Valley actually send their employees through data boot camp. I know Facebook does this. 

When you join Facebook in any role, you have to go through a two-week data boot camp where you learn how to use data to make effective decisions in your day-to-day job. And I think we’re probably going to see much more of that in the future. Greater access to data and predictions means that you now need to know how to work with that data and those predictions. And that’s really a new skill set for managers.

Krist: Yes, I would imagine just a basic understanding of statistics will almost be mandatory. Because we hear all these numbers but—you know, I personally love statistics, that’s the only math I was good at—what’s the difference between a mean, a median, and a mode?

Cross: Yup.

Krist: Right?

Albert: Exactly. I was sitting here thinking, James is telling me I should’ve paid a lot more attention to fundamentals of engineering statistics when I was in college.
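[Editor’s note: for readers who want the distinction Krist raises spelled out, Python’s standard library computes all three measures; the salary figures below are arbitrary.]

```python
from statistics import mean, median, mode

salaries = [40, 40, 50, 60, 200]  # one large outlier at 200

print(mean(salaries))    # 78.0 -> average, pulled up by the outlier
print(median(salaries))  # 50   -> middle value, robust to the outlier
print(mode(salaries))    # 40   -> the most common value
```

With one large outlier the mean shifts noticeably while the median and mode do not, which is exactly why a basic grounding in statistics helps when reading machine-generated numbers.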

Cross: And we’re also seeing lots of new technologies emerge that make this data much more manageable and accessible to end users. We invested, through Workday Ventures, in a company called data.world. They expose all of the data sets that exist within an organization to end users and employees. And I know some companies see a ton of usage, where end users and employees are actually going and accessing data they’ve never had access to before. So I think we’re seeing lots of innovation in broadening access to this data too.

Krist: Wow. Great. All right, well thank you both. Thanks Jason, thanks James. That’s all the time we have for today. This is Josh Krist for the Workday Podcast, signing off.
