Chandler Morse: The future of work isn't just an abstract idea somewhere in the far-off future. It's actually taking shape right at this moment, largely thanks to AI. I'm Chandler Morse, Workday's Vice President of Public Policy, and today on the Workday Podcast, I'm joined by Josh Lannin, our Vice President of Productivity Technologies. Just recently, Josh had the privilege to testify before the US Senate's Health, Education, Labor, and Pensions Committee, or the HELP Committee. Today, we're gonna get a glimpse into his valuable experience, both in front of Congress and at Workday. Together, we'll discuss the pivotal role that AI plays in molding talent strategies, closing the skills gap, and redefining the workplace. Let's get started.
Before we dive into these details, can you explain a little bit about your position and the crucial issues you addressed during your testimony before Congress on AI and the future of work? Just give us a little overview of what you shared with them.
Josh Lannin: Absolutely. We talked a lot about the need to partner between Workday and companies like us with the government to put in place AI regulation that's smart and effective to help enable people to manage through this changing future time. And we talked a lot about the relevance of a skills-based economy and why skills are the future of work and how AI is a big driver and accelerator of that.
Morse: So I wanna do a few personal interest-style questions, because I'm sure you, like many others, have seen testimony on CNN or different news outlets, so you probably weren't a stranger to it. But you got to actually be in the witness chair, looking at the members. What were the things that were a surprise to you in the whole process?
Lannin: Yeah. I mean, it was fascinating 'cause you have an impression from watching the news, and then when you're there, it hits you in different ways. So we were in the Russell Senate building, and it's this really stately, large room with heavy wood furniture. There's a lot of gravity to the feeling. And one of the things that struck me right away was when you sit at the witness table, in front of you is a semicircle, and that's where the senators are, and they're a few feet above you. And so you're sort of looking up at them, and they're looking down at you. So there's a kind of magnitude to it, just from the sheer presence in the room. And then one of the things we had to work through, in front of each witness, is this 1950s-era microphone. In order to talk, you have to remember, and practice, pressing this mute button on and off as you're giving your testimony or answering a question. So there are little things you don't think about when you imagine walking into this situation that suddenly become really important, like remembering to unmute yourself when you talk.
Morse: I think we were successful not only because there were no hot mic moments, but also, I feel like the hearing itself was an unbelievable success because of the tone and tenor. A lot of times there's sort of an adversarial relationship in hearings, and there was a complete absence of that. What did you think the senators were there for?
Lannin: Yeah. That's right. And I feel like they were there to learn. In that environment, with all this gravity, it was so nice to have the ranking member, Braun, and the chair of the subcommittee, Hickenlooper, from here in Colorado, just be so welcoming. They were gracious and really appreciative of the presence of all the witnesses. They wanted to learn from us. And they had a lot of fun back and forth between them. It's clear that they're close, and they have an interesting shared background as entrepreneurs who are now in the Senate. So that really gave the whole thing a more welcoming atmosphere. And I also felt like all the different witnesses had a lot of common points of view. We were coming from very different perspectives, but collectively, we were all talking about the importance of AI in the workplace and what it was gonna mean. I felt like we really landed some valuable points with the senators that they can use as they think about crafting legislation and doing their jobs for all of us going forward.
Morse: And we'll circle back to that legislative point in a minute. The Senate Health, Education, Labor, and Pensions Committee, or the Senate HELP Committee, is pretty well-known for its bipartisanship, and I think we certainly saw that on display. It definitely seemed like the members were really looking to get beyond the talking points and into how AI was going to affect the workforce and what sort of issues they needed to know about. So along those lines, you focused in your testimony on how AI can help support a shift towards a skills-focused talent strategy. Can you talk a little bit about Workday's role there, and why we wanted to highlight that in the hearing?
Lannin: Yeah. Absolutely. Well, as I mentioned in the testimony, we play a very large role in the US, and actually, the worldwide economy. You know, at one point earlier this year, one third of all open job requisitions were run on our platform, right. And so we have a unique viewpoint on what employment looks like and how all the companies we serve think about attracting the talent they need to go where they need to go in the future. And what we understand is that the future is gonna be very different when it comes to the skills you need to be successful, and the skills are changing quickly. There are a lot of reasons for that. There are new technologies coming to the fore. AI is part of that, but it's not the only technology that's changing what you need to know in order to be successful at your job. But uniquely, when you're thinking about hiring and finding the right talent, or taking the talent you already have in your organization and making sure they're adaptable to the things they need to do, it's all about having a skills-based mentality. And for us at Workday, we're providing that platform, as I said, for recruiting people, bringing them on board to their companies, helping them get new skills. And in order to do that at scale for millions of people, AI actually plays a critical role in helping figure out what skills you need for a job and how to match people to those skills.
Morse: I think that message landed with the senators in that, you know, we shared the dais with Accenture, a close Workday partner, who shared quite a bit of information around how they were starting to think about, or blueprint out, what the roles would be. But there was a general consensus that we're in the early days of AI being implemented at scale, and it's not really clear what the impact will be. And so it all comes down to agility. How can you give workers and employers what they need in order to meet the changes? And for us, skills is the way to go. We also talked about a particular federal role, which is, at the end of the day, knowing what's in demand, knowing what's changing. We're gonna need labor market data that indicates what the trends are. And in some ways, the federal labor market data resources that are available aren't quite fit for purpose, and Workday's pushing to get them modernized. Can you talk a little bit about why we're so focused on modernizing labor market data?
Lannin: Yeah. I mean, with good data, you can make predictions. You can make predictions using machine learning algorithms about what the future holds. But if your data is stale, your predictions aren't gonna be so good. And so if you look at other areas of how we work with the federal government, like financial trading arenas, there are a lot of standards in place. And so we can understand where things are going. We can make predictions about the economy. But right now, the federal labor market data that we get is often pretty stale. And really, our customers would like us to be able to drive insights for them based on what's happening. Who is hiring? In what regions, and for what particular skills? Are those seasonal behaviors? How is it shifting over time? And if we can get quality data in a standardized way, which the federal government is uniquely positioned to help enable, then we can really help employers and employees get where they wanna go and learn. You know, if you're waking up in central Pennsylvania where I grew up, and you're thinking, "Well, what are the jobs of the future in my area?" you wanna be able to find that out. And right now, it's not as easy as you'd think it should be in the times we're living in.
Morse: I think that was another thing, and we talked about this a little in the days after the hearing, that the issue piqued the interest of a number of senators who were in the room, as well as staff. Congress is starting to look at the Workforce Innovation and Opportunity Act reauthorization, or WIOA, and we think we'll see some conversations there around modernizing federal labor market data, which I think is going to be great. Testifying, this was a milestone for Workday, for our government affairs program, for our company. This was our first testimony experience. And I wanna take a point of personal privilege and say you knocked it out of the park. You did a really great job, a tremendous effort. Can you shed a little light, from what you know, on how Workday goes about collaborating with policymakers and stakeholders to advocate for the regulation and legislation that you talked about in your opening comments?
Lannin: It was a real honor and privilege, and a milestone in my career. And you know, I have a lot of people to thank for helping support me in giving this testimony and in all the work that we're doing. How we approach this is, we have been using machine learning and AI for many years now at Workday. And in supporting our customers, we've recognized that there's a whole set of things that we and they want to see in terms of how we leverage this technology successfully, how we don't put them or their employees at risk. And so we've developed a lot of best practices around what a good implementation of technology looks like in this area. And it often starts with things like assessing risk or impact. That's a big part of developing software for the enterprise: understanding, what's the implication of rolling that out? Who's it gonna touch? And in doing that, we've recognized that there's a role for standardizing a lot of that work. And so there are areas like the NIST AI Risk Management Framework where we've partnered to say, "Here, on the ground, are the things that we know we can do when we start building a feature to assess risk, assess the impact, and decide what kinds of safeguards we want to put in place." And it's much better for us if those safeguards and standards are uniform everywhere we operate, everywhere our customers operate. If we have to do that in lots of different ways and lots of different locations, it just doesn't work that well. So it's to everyone's advantage if we can take what we've learned on the ground supporting our customers and then partner with governments to make that happen. And so that's how we're thinking about approaching this problem. And it's sort of like things snowball.
So we started working on the NIST AI Risk Management Framework some years ago, and now a lot of those core principles are being applied in a lot of the legislation that you're seeing come out in the United States, and frankly, in the EU as well. Those are common, well-accepted practices that we can rely on now.
Morse: 100%. An agreement was reached on that just in the last week. Workday's advocacy stretches from Capitol Hill, with outings like your testimony, to the states. We're working on legislation in California, Washington State, and Connecticut, as well as in Europe. And we are starting to see, as you mentioned, this convergence of core principles, which we think will play a meaningful role. Okay. I've dragged you into my world a little bit on the government affairs side. So now we're gonna cross the Rubicon over into your world, and I'll be a lot less confident in the questions I'm asking.
Lannin: Does that mean fewer acronyms?
Morse: Fewer acronyms, and a whole lot less name-dropping. I'm self-aware. But I do wanna shift the conversation to talk about your experience in actually building the AI technology at Workday. So as a product development team leader, can you describe your team's experience in developing AI, and how AI governance is incorporated into that process?
Lannin: So as I sort of mentioned, the organization I support is made up of product managers, developers, and user-experience experts, including user researchers. It's a cross-disciplinary team that's been looking at how we take some of this AI technology, especially all these advancements lately around generative AI, and really apply it in the flow of work in our software. And when we do that, we like to do a lot of listening to customers and users and understand the role they play in interacting with AI. And for us, it's really about the notion that AI always needs to keep a person in the loop around the most significant decisions they're making. That's easy to say, but putting it into practice is really the focus of our organization. So if you're using AI in our software, being in the loop means you're aware you're using AI. You know when it's being used. It literally might show up in different graphics or different colors. There'll be indicators showing when a recommendation is coming back from an AI system, and there will be an opportunity for people to take that, modify it, or just ignore it entirely and continue with their work. And so there are all these little details that matter when we're building software that includes AI, but it's really around that core principle: people in the loop. AI is your co-pilot, not your replacement.
Morse: From my side of the issue, we often see "human in the loop" as a bullet in a long list of things that people are working to support. And I always love talking to you because it gets past the bullet, past the sound bite, and into how it's actually being implemented in our product in a meaningful way, because of how much of a priority we put on it.
Lannin: You know, I'll give you another example. We talked a lot about this sort of anti-pattern in software, which is the legal terms of service, right. We've all seen that on websites. There's a legal disclaimer, and everyone scrolls, scrolls, scrolls, hits Next, and never considers what was on the page. With AI, we need to build a different experience, right? It needs to be an intuitive experience, one where users are really made aware that they're working with AI systems, especially when those systems are making recommendations. So when you first start using AI in some of our products, we recognize that's your first experience. We'll bring up a first-time user experience that introduces you to the purple buttons on the page that add AI to the experience, so that you're effectively training people on the fly around what they're doing. So it's really a meticulous approach to these design principles. We're now documenting those design principles, applying them in a uniform way across Workday, and sharing them with customers as well, because we think there's so much power in having a standards-based approach. It can't look different every place you go. It's gotta be consistent. It's gotta be understandable.
Morse: Clearly, on the policy development side, we're working to move the ball on meaningful AI safeguards. But at the same time, we have our own responsible AI program. We have a responsible AI board. We've got company-wide buy-in. We have documented policies and procedures. We have tools, like a risk assessment tool. You and I have talked before. You know, we sit in different organizations. I sit in the legal organization. You sit in the P&T organization. And we've talked about your interaction with the responsible AI program. You've mentioned that a little bit, but can you talk about how that comes into your line of work?
Lannin: Let's take one example, risk assessment, right. Because I think this is one that's really valuable. Leaders and cross-disciplinary people from across Workday have sat down and identified how we should think about risk. One defining principle there is that when we're making decisions that have an impact on people, like hiring someone or promoting them, AI needs to be very carefully considered. We don't want people letting go of their responsibility in that key area. So say I'm on a product development team, and I'm working on a recruiting product feature that will help inform who we're hiring. Right out of the gate, as I'm starting to design that feature and think about what customers are asking me to build, I've got that risk management framework. It's a set of questions that you fill out and then review with a board of people who really go over it: "Okay. This is touching on something relating to the hiring practice. Should we do this at all? If we were to do it, what would be the risks? How would we mitigate those?" And there's a big back and forth around that. And so it really opens up everyone's eyes in the development organization around being sensitive to this area, versus, "Let's build it, throw it out there, and see how it lands." It starts with a process that's well-defined and involves lots of people who have thought hard about these issues.
Morse: I love our conversations about that, and it's stuck with me because it was very much not command and control. You described a really collaborative process that led to outcomes that helped you think about ways of building the product in a super-productive way, which is a real success story. In many ways, we've heard about AI proliferating everywhere, and it can be a little tough sometimes. I'm oftentimes thinking back to when I was in college, and there were CDs or disks, and sort of like, "Do we ship, you know, an envelope with the AI in it?" And that's clearly not the way we deliver AI to our customers. So can you talk a little bit about how we actually deliver AI tools to our customers, even if it's in rather simple terms for everyone to understand?
Lannin: There are a lot of ways, but some good examples of what you're gonna see from us, now and into the next year: we've looked across the product, and there are lots of areas where people are creating content, and today, they do that from a blank sheet of paper. That's actually something that's easy to get stuck on. You want help authoring an invoice, and when you have to start from scratch, it's a lot of effort. If AI can know something about what you're writing about and write a good first draft that you can then edit, that can save you hours a day. And if we multiply that across millions of people, it's a huge time savings that people can put back into the core part of their work. So we've identified about 12 use cases that we're gonna deliver in the next year. Some of those are now rolling out to early adopters, and they really take that focus: when people are spending time at work, and they're professionals, how can we find the things that are really mundane or laborious and automate those? There are other, simpler examples. Like when I do a search within Workday, how do I find the right report, the right person? AI can just help pattern match and get you that information. So there are some tiny, subtle ways that make for micro improvements, but now we're really also focused on things that are just gonna save huge chunks of time for people and allow them to be much more effective.
Morse: So we've talked about successful testimony, successful product development, successful responsible AI. It can't all be roses and rainbows. So can we talk about it? Have there been challenges or lessons learned in developing and implementing AI, and how have we addressed those challenges?
Lannin: Yeah. Well, you know, especially with some of these new AI capabilities, they're really amazing at working with written language, but they also have a chance to confabulate, or the word hallucinate is used a lot, and make things up, right. And so identifying when that can happen and making sure it doesn't, or if it does, that it has very limited impact, is really key. And so we have a really good understanding of how to use AI in a smart way. We've had examples of asking it to do basic calculations, and the numbers are just wrong. And our numbers can't be wrong at Workday. So that's maybe not always the right place to use certain forms of AI. We use that word AI very broadly. There are lots of different algorithms, and there are all kinds of times when we just have to try this stuff out and figure out, well, where is it really gonna work for us, and where won't it? On a more serious note, bias is something that is a big concern. A lot of these models are trained on the wide internet, right. And what we found is that that brings some good, in terms of a wealth of knowledge, and some bad, because there's a lot of bias in what people have written on the internet. And so we have to be very thoughtful about which models we use for particular use cases to make sure that we're not introducing bias, and then to really test everything out quite a bit. On the positive side, I think there's a lot we can do to actually help people avoid their own bias by having AI give feedback when we bring bias into what we're doing. Say we're writing a performance review. How are we using language? AI can actually help us out a bit. But this is just an area where we have funny stumbles and serious stumbles that we have to really consider before we roll things out.
Morse: As we wrap up, Workday's done some pretty amazing research over the last several months. We know that among our customers, among corporate leaders and managers, there's a real, strong desire to bring AI into companies and into the workplace. We also know budgets will probably be strong, so that'll likely happen. We are likely to see a growing use of AI in the workplace. So do you have any advice for product leaders or business leaders as they start to think about how to incorporate AI into their business?
Lannin: Yeah. I mean, thoughtful experimentation is where we're at. And I think for everyone, it's all about the data. What we've seen, and Workday's core strength, is that we have a lot of really well-structured data that lets us use AI to great effect. So if you have not done some work to clean up your data, start there. It's hugely important. And I think you're gonna see just amazing things come out in the next 12 months on the product side. Finding ways of responsibly tackling this stuff is really key. And Chandler, I want to turn the tables on you a little bit. This has been a big year. I'm curious for you, aside from my testimony, of course, what was a high point?
And what do you see happening in your arena next year? Like if you had to make a prediction for what will be really key for us next year, what is that gonna look like?
Morse: It's a great question, and one we've thought a lot about. I think the biggest trend we're going to see, and what my team is preparing for the most, is activity at the state level in the US. We've seen this movie before. It was called GDPR. Europe passed a comprehensive privacy regulation. It was adopted. Many eyes turned to Congress and said, "Okay. Congress, it's time for you to act. We need a US counterpart to GDPR." And in many ways, we're still staring at Congress, hoping they'll do a comprehensive privacy bill. Workday supports it, and we're hoping we'll see it, but we've been waiting a while. In that case, the states didn't wait. They moved. More than half of the states in the US either have adopted privacy legislation or have drafts on the table. We're seeing the exact same thing on the AI side. Europe is likely gonna finish up the EU AI Act early next year. Congress is, as we discussed, very much in learning mode, and to be honest, not very much in action mode at the moment, and the states aren't waiting. And so we think there's gonna be an absolute tsunami of state-level activity on AI, and we're there for all of it. We're going to heavily engage, and we've already seen a quickening pace. A member of my team is in Connecticut, testifying at a roundtable; they're very much looking at AI policy. We just testified at the state level in Washington State. We've been active in California, and we're not even out of 2023 yet.
Lannin: Wow. You're gonna have your hands full with your team. It's exciting. And you know, it's just fascinating to hear about this from the policy side. I'm so glad for our partnership, 'cause it's multi-dimensional, right, how we navigate this together.
Morse: 100%. I feel very grateful to work for a company where we have a North Star. We have our values. We're a values-driven company, and our customers have come to trust and rely on us. On your side of the equation, in product development, our values drive what we do. And on my side of the equation, in terms of advocacy, it's all consistent with our values. We show up in those arenas, like we do for our customers, sleeves rolled up, looking to play a constructive role, looking to build trust in technology that we think can help. So with that, we'll wrap. Thanks for joining me today, Josh.