Jeremiah Barba: Generative AI exploded onto the market with the release of ChatGPT. As we all started experimenting with it, there was a mixed bag of reactions. Some feared their jobs were now in jeopardy. Many educational institutions banned it. Others saw it as a potential paradigm shift, helping them solve problems they couldn't before. Whatever your experience with it, generative AI is an innovation that can bring instant, tangible impact, which has prompted huge interest in applying it in measurable ways. One area where generative AI is poised to make a massive impact is the workforce. So what are the opportunities and challenges for HR leaders as they look to harness this potentially transformative technology? Today on the Workday Podcast, it's a conversation between Mike Stamback, senior product marketing director for AI at Workday, and Kim Morick, global leader, data and technology talent transformation at IBM. Hope you enjoy the episode!
Mike Stamback: Kim, thanks for joining us.
Kim Morick: Thank you.
Stamback: Explain a little bit about what you do.
Morick: So I'm part of IBM Consulting, and we work with all sorts of clients: small, medium, extra-large global clients. My charge is driving AI, data services, and automation into HR processes to drive efficiencies, mainly working with the ecosystem of technology that already exists. And obviously, Workday is one of our clients' largest ecosystems. We're a Workday Partner for IBM Consulting. Driving data-driven decision-making is what I've been focusing on for the last 10 years. Now I'm focusing on AI-enabled assistants within HR to increase the effectiveness of all of the HR workers out there.
Stamback: That sounds like a really small charter.
Morick: Yeah, yeah, you know.
Stamback: So as we dive in, let's set the stage a bit and talk about the workforce landscape. How has the workforce changed in the last few years?
Morick: Yeah, it's so interesting, right? Across all sorts of roles, from frontline workers to the manufacturing space to professionals, COVID has changed everybody's expectations. Some folks on the front line were forced out of their jobs because of the shutdown and had to find different jobs, including IT-enabling jobs. People now have the ability to do so many different things. The appetite and the ability to upskill yourself are there. And so for some of these more admin or repetitive-type jobs that existed in the past, where individuals may have thought that was their charge in life, they now realize that's not where they need to be. They can accelerate themselves, and they're expecting to do so. They're expecting their employers to provide paths for them, even if they do start out doing that kind of task. So it's a changing landscape. Obviously, the hybrid notion is difficult for organizations to figure out: how they write job reqs, how they source roles, how they create mobility for employees when you're hiring a remote workforce and maybe you used to need to be in the office to be that type of manager. It's really changing the mindset of HR operating models, as well as the business functions, and how you need to attract and retain these individuals. Power has shifted to the employee, right? We can see the power slowly shifting back to organizations as more and more do this back-to-the-office thing, but that's human nature: once you see that you have the power, you're going to stand up and want to enable that change. The digitization that was forced on organizations now leaves them primed for what we're talking about with generative AI. They've got the infrastructure set up. They've got a majority of their employees using digital platforms.
And so it really has accelerated the capabilities of what you can do next, I think.
Stamback: So what are organizations doing to address those changes you mentioned?
Morick: They're playing, and they're experimenting. I think the economic studies say that by 2030, there's going to be a $13.7 trillion lift in GDP as a direct result of artificial intelligence. AI has been contributing to GDP for the last 15 years, but this is going to be a massive shift. All the consultancies and all the big platform players are building it natively into their platforms to do the sort of manual jobs that free up humans to work with other humans collaboratively, to go out and attract new talent, come up with strategies, develop new revenue streams. These are the things people are focusing on, the things that typically give employees excitement and validation that they're contributing to something good for the organization, that make them happy. And so that's where people are trying to say: some of these more laborious tasks, let's not even try to outsource them anymore. Let's try to automate them.
Stamback: So if you can address some of these changes in the workforce using AI, how do HR leaders really feel about that? How do they feel the impact of generative AI is going to transform their organization?
Morick: At IBM, we have the Institute for Business Value. A lot of organizations do these sorts of independent research studies. Back in April, right after the OpenAI and Microsoft splash with GPT, we did a study asking HR organizations: how many of you are focused on using generative AI, or talking about it from a strategic lens? The same study was conducted again in June, and 65% of the respondents were saying, "Yes, this is in our charter. This is something we're focusing on. It's coming from board-level pressure down through the CEO, down through all of the enabling and supporting functions." With HR specifically dealing with humans, your employees or your candidates, it's very important that there's trust and transparency in AI, that it's unbiased, and that there's a human in the loop, which I know Workday is also very focused on, especially for these talent decisions. And so I think there's some skepticism about automating the human out of HR. HR is a very humanistic function, and we don't view it that way. We view it as giving them a co-pilot. You're creating an assistant for them so they can do the things that are important, be that guidance to the business, and provide that continuity. So I know folks are a little nervous about that, and they want to make sure the human is still part of the decision-making process.
Stamback: Right. I think, partially, the market is not quite ready for us to go full-bore and automate entire tasks. But at the same time, a lot of the risks you hear about with generative AI right now, potential bias, hallucinations, and those kinds of things, require humans to still be in the loop. My colleague likes to say, "I've seen that movie before." Can you talk a little bit about that? Why did previous efforts around this not succeed?
Morick: Yeah, so it's interesting. There was obviously a lot of hype around artificial intelligence. Everybody jumped on board 10 or so years ago thinking it was going to solve all of the problems. And in some cases, where data is very deterministic, like we've seen in recommendation engines, it can be super helpful. But even there, people thought, "Oh, this is cool. This is helping everybody," and didn't realize there was bias built in, because by ranking automatically on number of views or number of likes, they were automatically discounting something new that came into the catalog. So there have been a lot of lessons learned about how bias creeps into AI decision-making through data inclusion or data exclusion. I think the majority of the platform players out there are very aware of this. All of the consultancies are very aware of this now and know that artificial intelligence applications are no longer just a data science experiment. They involve the human who does the process, along with the data scientists, the architects, and the end users, looking at this from a holistic perspective. Do we have all the data we need in order to make a human-level decision? Does that exist?
Pretty much every organization now has AI ethics principles. I know we at IBM do, and we follow them very strictly for all of our clients. I know Workday does too. So it's something you bring to the forefront. The other part of these ethical principles is that somebody might have a great idea for a use of artificial intelligence to solve a specific use case, but then you've got to bring in some other people who aren't data scientists or engineers to ask, "What's the secondary or possible tertiary effect of unleashing this into our organization? Did you think about all of these other things?" So I think we're now at the stage where that contemplation is done up front. And because I build data platforms too, I can tell you a lot of people's data landscapes were not where they needed to be to drive AI-based decisions at light speed. They just didn't exist. We're seeing that now, even more than ever with generative AI: people need to get their data landscape in place where it's trusted. You've got to clean it. You've got to make sure you're feeding the machine proper, trusted data. So I think people know what needs to happen. The rules are there. There's enough caution from the buyers that they're not just fully believing, "Yeah, this is great." A lot of what has come out of ChatGPT with the hallucinations is actually helpful, because a healthy bit of skepticism is important when you're buying and testing a solution.
Stamback: So what do you think is different this time? You're talking about responsible AI. There's still no standards. There's no industry standards in place.
Morick: I know.
Stamback: I mean, we're still trying to figure out regulation and policies.
Morick: I know.
Stamback: What's going to be different this time?
Morick: I know. What's a little different this time? Reputational risk. People are concerned about that. What's different this time is that we at IBM are putting a human in the loop, especially when it involves a decision made about an individual. You can fall back on the human decision, and so that's what we're making different this time: that is just part of the process.
Stamback: Well, just like with any other new trend in technology, there can be an over-rotation. So a lot of people can think that generative AI can solve everything. So what should HR leaders be reminded of that generative AI cannot do? What are some of the things that you shouldn't be applying it to?
Morick: So a lot of people think it's a magic bullet, but anybody who developed old-school AI knows it always goes back to the data. AI is not going to fix your data mess. I've had a couple of clients where we've been talking about policies, where people think it would be a good use of generative AI to summarize all of their various policies. Well, if you're an organization that has policies scattered all over your internal website, you haven't done a very good job of updating them or taking old policies down, and you come tell us, "Just web scrape our entire intranet and use that to answer questions about these policy documents," you're mistaken, because the model is not going to be able to differentiate which policy is current and which is out of date. There's still logic involved to be deterministic about which policies I need to see as a US employee versus what somebody else might need to see as a European employee. There are rules and data that still need to go in place. So they need to understand that you can't just implement a solution and pray that it's going to do everything right. There are still rules-based things that need to occur to ensure that folks are getting the information that is deemed appropriate for them.
Stamback: So this transformation seems like it's way more than just implementing AI. It sounds like there's a lot of cultural aspects to it that need to occur too. What are some of the biggest challenges that HR leaders are going to face as they try to implement it? I think you've been talking about data. You've been talking about rules. You've been talking about responsible AI. Is there more?
Morick: So it's interesting. We've seen this, right? From the top down, there's a vision of how this technology is going to change organizations and the way they function. Leaders want to drive these efficiencies and create space for employees to grow. HR leaders, when they start thinking about it, need to figure out what that space is their people are going to grow into, because there's change involved in this. This is operating model change. Some of it might require you to provide new education, because if 30% of your job was admin and now we're asking you to do something different, to be more of a strategic partner to, say, the IT group you support, well, you probably need to learn a lot more about that group's processes and functions so you can be a good, collaborative business partner to the folks you're supporting. They need to think about change, and they need to iterate. This is not a silver bullet. The human is in the loop. Trust but verify. That's always part of the way we're designing it, and I think you guys are too: the generative AI output is your first draft. It's your first draft, right? You need to know that, review it, and edit it, because ultimately your name is on the final draft, even though you didn't write the first draft.
You may have been following what's going on in public media and seen some of the hallucinations. Obviously these models are now fine-tuned significantly better, and we can ground them in your own corporate policy documents, but at the end of the day: trust but verify.
Stamback: So how should HR leaders be thinking about using generative AI to help with the skills gap issue, upskill their workforce, grow their talent pool that they have available within the organization? Can generative AI actually help them do that?
Morick: There's some thought work being done on this, and it starts a little bit with job descriptions. In most organizations, skills are not classified within job descriptions. There's no linkage between a skill and a job, or between a job's skills and a human's skills. So it's very fuzzy. Some of the organizations we're partnering with are trying to close this gap. So you start with job descriptions: you write all the text you have to have in a job description when you're posting it and when it goes internal in your system, but then you list all the skills. The skills can then go into a relational database, and then you can start doing some really cool things, because those skills were extracted by generative AI, but now you're storing them against a job role. And then again with learning: going through your learning catalog, using generative AI to pull out the skills embedded in each course, and then you can link them and move forward that way. Now you can basically create learning paths. This is why it's been so fuzzy: it's such hard work for a human to go through, in most large organizations, 5,000 job roles and 200,000 learning courses in their catalogs. They were never able to get down to that detail and make that linkage. But that'll be coming quickly, because people are really trying, as we talked about at the beginning, to attract and retain. And in order to do that: what skills do you have in your current role? Where do you need to get? And you build that path.
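Once skills are stored against both job roles and courses, the learning-path linkage Kim describes becomes a straightforward set computation. A minimal sketch, assuming a generative model has already extracted the skill lists (the extraction step is stubbed out here, and all role, course, and skill names are made up for illustration):

```python
# Hypothetical extraction output: skills per job role and per learning course,
# as they might be stored in the relational database Kim mentions.
job_skills = {
    "Data Analyst": {"sql", "python", "data visualization"},
    "HR Business Partner": {"employee relations", "workforce planning"},
}
course_skills = {
    "Intro to SQL": {"sql"},
    "Python for Analysts": {"python", "data visualization"},
    "Workforce Planning 101": {"workforce planning"},
}

def learning_path(current: set[str], target_role: str) -> list[str]:
    """Recommend courses that close the gap between an employee's current
    skills and a target role, greedily picking the course that covers the
    most still-missing skills first."""
    missing = set(job_skills[target_role]) - current
    path = []
    while missing:
        best = max(course_skills, key=lambda c: len(course_skills[c] & missing))
        covered = course_skills[best] & missing
        if not covered:
            break  # no course in the catalog covers the remaining skills
        path.append(best)
        missing -= covered
    return path

# An employee who already knows SQL and wants to become a Data Analyst:
print(learning_path({"sql"}, "Data Analyst"))
```

The hard part at enterprise scale is not this lookup, it's populating `job_skills` and `course_skills` across thousands of roles and hundreds of thousands of courses, which is exactly where the generative extraction step earns its keep.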
Stamback: In fact, my colleague I was just talking to earlier made a really good point that skills have been talked about for years as the common currency that HR wanted to get to.
Morick: Oh, yeah.
Stamback: But it wasn't until AI came about that we could actually start to practically apply it. So creating those linkages you were talking about, only AI is going to help us be able to do that, because it's too hard of a task for a human to do.
Morick: It's too hard of a task for a human to do. For any organization that is large enough that it needs to be a skills-based organization, it's not a task a human can do. And once they get it done, they're going to have to do it again.
Stamback: Right. Right. It's going to be a constantly iterative process.
Morick: It's constantly iterative, and they're constantly adding new job descriptions and things, so yeah.
Stamback: We've been talking about generative AI and HR with Kim Morick from IBM. Kim, thank you for joining us and sharing your insights.
Morick: Thank you, everyone. And thanks, Mike.
Barba: Thanks for tuning in to this episode of the Workday Podcast. If you enjoyed what you heard today, be sure to follow us wherever you listen to your favorite podcasts. And remember, you can find our entire catalog at workday.com/podcasts. Have a great workday!