Workday Podcast: Could AI Be the Best Thing to Happen to Your Business?

Audio also available on Apple Podcasts and Spotify.

There’s little doubt that artificial intelligence (AI) and machine learning (ML) are set to have a massive impact on organizations globally. And yet the details on how that will manifest itself are often missing from the discussion. 

In this special episode of the Workday Podcast, guest host Meg Wright, head of audio and innovation at FT Longitude, is joined by three global emerging technology thought leaders as we dive headlong into the topic of AI and ML in the workplace. 

Here are a few highlights from the discussion, edited for clarity. You can also find our other podcast episodes here.

  • “There is near limitless potential if this technology is channeled and used appropriately and correctly. I think that last part is what we’re trying to figure out. Good governance, responsible use, trustworthiness. These are all at the core of good innovation.” —Dr. Rumman Chowdhury, responsible AI fellow at Harvard’s Berkman Klein Center for Internet and Society and co-founder of the nonprofit Humane Intelligence

  • “If I were watching this and trying to anticipate how fast things will happen, I would be very focused on just watching the innovative leaders. We don’t have to look at how fast all the companies will change. All we have to look for is the singular leader, and that will then set the pace for everyone else.” —Ajay Agrawal, professor at the University of Toronto’s Rotman School of Management and founder of the Creative Destruction Lab

  • “AI and ML are a game-changer for business. The thing that’s dawning on everyone is that it’s tough to see any sector in the economy that isn’t going to be adopting these tools.” —Chandler Morse, vice president of corporate affairs at Workday

Tune in to hear how AI and ML are reshaping both employee experience and organizational performance, why the global conversation on AI and ML policy is so critical, and what these technologies will mean for current and future business leaders. 

To learn about the three things holding business leaders back in their AI and ML adoption efforts, read “AI IQ: Insights on Artificial Intelligence in the Enterprise.” And stay tuned for the largest study we’ve ever conducted—“AI Global Indicator”—set to launch September 2023. 

Ajay Agrawal: People only get to live through something like this once in their career. So I would say to every listener who is, let's say, older than 40: you'll remember what it felt like in the very early days of the internet. It may have felt like just a technology, and you might have been in a business where you said, well, the internet is not going to affect me. Yet it's hard to think of a business today that's not impacted by the internet. And it seems like this will be at least as big as the internet, if not significantly bigger.

Meg Wright: Artificial intelligence. Machine learning. There is little doubt that these technologies are set to have a massive impact on organizations globally.

But with everything we know - and don't yet know - about AI and ML, it can be difficult to understand what this true potential looks like.

Dr. Rumman Chowdhury: There is near limitless potential if this technology is channeled and used appropriately and correctly. I think that last part is what we're trying to figure out. Good governance, responsible use, trustworthiness. These are all at the core of good innovation.

Wright:  For business leaders, the opportunities must outweigh the risks and challenges.

And, critically, the details on how AI and ML will manifest for organizations need to be a bigger part of the conversation. 

Chandler Morse: The one thing I would say is that we need to get to a meaningful point in the conversation where we're not only talking about the unintended consequences these technologies can have, but also the unbelievable upsides - the responsiveness and the nuanced approaches to talent that can be unlocked through them.

Wright:  So: could AI and ML be the best - or worst - thing that ever happened to your business?  

I’m Meg Wright, head of audio and innovation at FT Longitude.

And in this special episode of The Workday Podcast, we take a deep dive into the world of AI and ML in business - what we know of it today, where we think it might lead, and what we are yet to uncover. 

Chowdhury: My name is Dr. Rumman Chowdhury. I am one of the founders of the field of responsible AI in practice. I am currently a responsible AI fellow at Harvard's Berkman Klein Center for Internet and Society, and I'm also co-founder of the nonprofit Humane Intelligence.

In less than a year, Generative AI has become the topic of conversation. What has been most innovative is not that this technology came into existence - large language models have existed for some years. The big innovation has actually been the no-code accessibility: the ability to create realistic-looking text, images, video, and audio from these models without having to code.

Almost everyone who's listening to this podcast is probably familiar with no-code access to technology. Today you can interact with ChatGPT or a Lensa or a Stable Diffusion simply by typing in a very human-like prompt. You tell it to make a picture of a cat wearing a party hat, and it will send you a picture of a cat wearing a party hat. You can then refine this to say, "I want it to be a black cat. I want it to wear a pink party hat." Instead of having to do this via code - programming skill, which most previous versions of AI required and which was a barrier to entry for most people - this plain-text way of communicating, which mimics human behavior, is actually one of the biggest revolutions in this new wave of artificial intelligence.
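
To make that shift from code to plain language concrete, here is a minimal sketch of what a prompt-then-refine interaction looks like. The `ImageModel` client below is a hypothetical stand-in written purely for illustration; it is not the API of ChatGPT, Lensa, Stable Diffusion, or any other product.

```python
# Hypothetical stand-in for a hosted text-to-image model; real services
# expose something broadly similar behind their own client libraries.
class ImageModel:
    def generate(self, prompt: str) -> str:
        # A real client would return image bytes or a URL; here we just echo the request.
        return f"<image generated from prompt: {prompt!r}>"

model = ImageModel()

# The "program" is now plain language: an initial request...
first_try = model.generate("a cat wearing a party hat")

# ...followed by a refinement, expressed the same way a person would say it.
refined = model.generate("a black cat wearing a pink party hat")

print(first_try)
print(refined)
```

The point is that refinement happens by restating the request in ordinary language rather than by editing code.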

Wright: So as AI and ML start to infiltrate the world of business, what skills will become more sought after? Are there particular skills people will need in order to ensure they can work effectively with AI?

Agrawal: Another way to frame your question is: what skills won't you need? Over the last half century, as we've introduced more and more computers into the workforce, computers have come to seem very benign. People feel anxious around AI, but very few people feel anxious around a computer sitting on the desk at the office, or even their phone.

My name's Ajay Agrawal. I'm a professor at the University of Toronto's Rotman School of Management, and I'm the founder of a not-for-profit program called the Creative Destruction Lab. Our mission is to enhance the commercialization of science for the betterment of humankind.

Before navigational AI, in London, say, people had to go to school for three years - three years to learn how to really be a knowledgeable driver and navigate around the City of London. What the AI did is take a person who knew nothing about the City of London - I could fly into Heathrow, rent a car, having never set foot in the city - and let them drive through it as efficiently as a pro.

So that made driving a lot more accessible. If you thought there was any kind of systematic bias in who got into those driving schools to learn the Knowledge in London, those barriers are to a large extent removed. As long as you can safely drive a car, the AI upskills you to be able to navigate.

We had some colleagues, economists in Japan, who studied this in Tokyo. They gave half the drivers a navigational AI and half did not get it, and they looked at their productivity before versus after they got the navigational AI.

In their case, there are two predictions. One is just the optimal route between two places. The second, for taxi drivers, is where to go after dropping off a passenger to minimize the time until they pick up the next one - because in taxi driving, productivity is measured by the number of minutes with a passenger in your car versus without one. What they found was that among the drivers who got the navigational AI, the less experienced drivers got a 7% productivity boost and the more experienced drivers got a 0% boost - they already had a good instinct for where to go to minimize their time to the next pickup. So this was yet another case of AI leveling the playing field between the more experienced and the less experienced.
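
As a rough illustration of the productivity measure Agrawal describes - minutes with a passenger as a share of total driving minutes - here is a small sketch with invented numbers. The figures below are not taken from the Tokyo study; they only show how a roughly 7% relative gain would be computed.

```python
def occupancy_rate(minutes_with_passenger: float, total_minutes: float) -> float:
    """Taxi productivity as the share of working time spent with a passenger on board."""
    return minutes_with_passenger / total_minutes

# Illustrative, made-up eight-hour shift.
before = occupancy_rate(minutes_with_passenger=240, total_minutes=480)  # 0.50
after = occupancy_rate(minutes_with_passenger=257, total_minutes=480)   # ~0.54

relative_gain = (after - before) / before
print(f"Relative productivity gain: {relative_gain:.1%}")  # roughly the 7% boost described
```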

Morse: I think AI and ML are for sure an actual game changer for business.

I think the thing that's dawning on everyone is that it's tough to see any sector in the economy that isn't going to be adopting these tools.

Wright: Meet Chandler Morse, vice president of corporate affairs at Workday.

Chandler agrees that AI has the power to transform the workforce from the inside out - both in terms of business performance and career development.

Morse: I get really passionate about the skills conversation, because for the bulk of my career I have worked with populations for whom I think it is essential and critical. I always come back to this example: pick any town, USA - something's changed and now you're out of work. And the question is, okay, here are some resources. The federal government in the US provides quite a bit of resources for workforce development. Here are some resources to go develop some skills. And the answer is: in what? What do I go develop skills in? I believe the technology exists to know what's moving in the economy, what's coming and going, and where the opportunities lie.

But for a lot of people it's: how do I pay my rent? How do I open up an opportunity that's meaningful, that can provide for my family, that can take me to the next level? And I am passionate about the fact that a skills-based approach can start to open those doors in, frankly, a faster, more efficient, more effective way, leading to better economic opportunities for people. And I think AI is a really important part of that.

And I think we're at a really interesting time in the US economy, and frankly the global economy - coming out of the pandemic, with new technologies, changing rapidly. What everyone should be getting a sense of is that things in the economy can change really quickly, and we know that firsthand. And if there is this change in the economy, how do we prepare workers and employers to respond to it in an agile way?

I do think that skills are the way - but skills backed by a thoughtful, ethical, responsible implementation of AI that has safeguards, regulatory safeguards, that help facilitate trust. I think it's incredibly exciting.

Wright: AI and ML can be powerful tools to improve employee experience, workplace efficiency and business performance. But with such limitless potential comes the question of trust. 

In Dr. Rumman Chowdhury’s own words: “technologists don’t always understand people, and people don’t always understand technology.” So what does this mean when it comes to regulating AI and ML for business?

Chowdhury: The culture of data science is actually by definition very scrappy and very decentralized, and I think it's a beautiful thing. We have investment and development in open-source technologies. That is actually how most people learn to become data scientists and AI engineers. We continually up-skill ourselves by staying on top of papers, and I think all of that needs to be embraced. So rather than trying to regulate artificial intelligence by keeping it in a cage behind locked doors, my suggestion actually is more openness and more transparency.

Wright: This begs the question: How do we steer a middle path to avoid over-regulation and at the same time ensure that AI is used safely? And what can businesses do to engage with policymakers in a productive way?

Here’s Rumman again… 

Chowdhury: A lot of these problems are actually the same problems that we have seen in platforms, and having worked at Twitter, I definitely have some familiarity with what the challenges are. I think there are a lot of parallels. So ultimately, what I will say is so much of what regulation is, is the answer to the question, who gets to be the arbiter of truth? Who decides what is and isn't correct? Who decides what should and shouldn't be seen? Who decides how it should and shouldn't be seen? And who decides what is good and what is bad? 

At the core of all of this conversation is actually choosing the parties who get to be the arbiters of truth. So with Generative AI, when we think about GDPR and the parallels to Generative AI - the EU AI Act just passed, we have the Digital Services Act, we have the Digital Markets Act - we do have regulation coming.

In my opinion, these new laws have actually learned from some of the critiques of GDPR, which were that it was very onerous on companies and didn't actually understand how companies stored and collected data. As a result, mandates that seemed "simple," like the right to not be found or the right to your own information, were actually quite a difficult task for many companies.

Frankly, a world in which we have bad regulation on AI is just as bad as a world in which we have zero regulation or standards on artificial intelligence. So I've applauded a lot of the efforts to invest in all kinds of governance - and governance does not just mean regulation. There is quite a focus on regulation because of everything coming out of the European Union and similar efforts now happening in other parts of the world, such as the UK and increasingly the United States. But governance takes many forms, and all of them are helpful in different ways. In part, innovation is driven by good governance. That would mean having standardized ways of assessing the technologies you're investing in, so you actually understand whether or not they're delivering what you think they will deliver, and you're also able to compare across different technologies to choose which one is best suited to your product and your needs.
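
One lightweight way to act on that advice about standardized assessment is to score every candidate system against the same task-specific test set before committing to one. The sketch below is a generic, hypothetical harness: the vendor functions and test cases are invented for illustration and do not represent any real product or benchmark.

```python
from typing import Callable

# Hypothetical candidate systems, each exposing the same simple interface.
def vendor_a(text: str) -> str:
    return "positive" if "good" in text else "negative"

def vendor_b(text: str) -> str:
    return "positive"  # a naive baseline, included for comparison

candidates: dict[str, Callable[[str], str]] = {"vendor_a": vendor_a, "vendor_b": vendor_b}

# A fixed, shared test set so every candidate is judged on identical cases.
test_cases = [("the product is good", "positive"), ("the rollout went badly", "negative")]

def accuracy(predict: Callable[[str], str]) -> float:
    correct = sum(predict(text) == label for text, label in test_cases)
    return correct / len(test_cases)

# Comparable scores make "is it delivering what we think?" an answerable question.
for name, predict in candidates.items():
    print(f"{name}: accuracy {accuracy(predict):.2f}")
```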

Wright: Critically, as business uses of AI and ML scale rapidly, leaders must address questions of trust, safety and ethics.

If history has taught us one thing, it’s that these conversations are central to scaling technology responsibly, as Workday’s Chandler Morse explains. 

Morse: We're going to see a lot of implementations of these technologies. There are some concerns in some cases around some use cases, around some applications. And those concerns need to be addressed and they need to be addressed in a policy context.

Wright: What lessons can businesses take from other emerging technologies? And, in particular, how can they avoid that dangerous trap between over-regulation hindering progress and a lack of regulation eroding public trust?

Morse: The way I get asked that question all the time is like, "Your industry, we don't trust that you're really asking for regulation."

We believe in the power of AI to unlock human potential. And we say that as a human capital management service provider for half of the Fortune 50 and 50% of the Fortune 500, with 60 million employee records in our system. We know how these technologies can benefit economic opportunities for people - that's our business. But people won't use technologies that they don't trust.

I was on Capitol Hill for most of my career, and all I wanted to know when people came in was, “Tell me your motive. Don’t make me guess your motive. What do you want? Can we work together?” And our motive is clear: we want people to use these technologies. We are a provider of these services, and people won't use them if they don't trust them - technologies that can unlock their potential, set up these talent marketplaces, drive meaningful conversations around careers, look at what's needed in the economy and where resources need to go, and develop skills.

We just see a lot of benefit from these technologies. And so our goal is to develop a level of comfort and we think that the way we get to that level of comfort is meaningful regulation. 

Wright: How then will AI and ML enable a bold, new vision for business? And are we even ready for a world of limitless potential? 

Agrawal: The technology will get there quite quickly. The harder part will be the change management for people and their organizations. What's going to drive that change is competition: as soon as one company in an industry does it and can suddenly offer a service that's much better for its customers at a much lower price, all of the resistance that has slowed others down will either be addressed very quickly or the company will just become less and less relevant.

If I were watching this and trying to anticipate how fast things will happen, I would be very focused on just watching the innovative leaders. It's the same way you would have watched Netflix in the early days in the US, for example: people were watching it like a curiosity while they were still getting in their cars and driving to Blockbuster to rent a video. But once you saw how it worked, it was hard not to imagine that it was inevitable.

We don't have to look at how fast all the companies will change. All we have to look for is the singular leader, and that will then set the pace for everyone else.

Wright: Critically, those leading businesses must be a vocal part of the conversation. 

Morse: We very much view and are bullish on the potential for AI to unlock human potential. At the same time, there are potential unintended consequences around these technologies that need to be addressed.

When we started these conversations in 2019, the first thing we were saying was, "Hey, let's have a risk-based approach. We're not sure HR AI use cases and Netflix recommendations of the next season of a show you want to watch deserve the same level of scrutiny." So it was really about parsing out where the focus should be. I think the Europeans have landed that and have frankly now made it a foregone conclusion - it's sort of table stakes in the conversation now. It's no longer a novel concept.

We also think they've done a fairly good job of taking a nuanced approach. One of the things we were suggesting was that, in that risk-based triangle from low risk up to "you absolutely can't use AI for these use cases," they avoid the temptation of putting entire sectors into categories. The risk-based approach needed to be nuanced enough to separate out, even in our sector, the things that will have dramatic impacts on employment opportunities from the things that are maybe a little bit less impactful.

Wright: Along with managing change effectively and taking a nuanced approach to risk, business leaders will also need to understand how AI could start to reshape the wider business landscape. Here’s Rumman to explain… 

Chowdhury: So, for example, Duolingo is an app that teaches you different languages. What they are doing at Duolingo is taking a core model built by OpenAI and refining it to fit their purpose. So in this new world, where companies are using a core algorithm built by another company and refining it for their purpose, they too have a responsibility for trust and safety - and that responsibility is about their specific fine-tuned use case. There is an expectation that these giant AI companies, the Anthropics and OpenAIs of the world, have a responsibility for identifying egregious harms, utilizing red teaming, and ensuring that, in general, they are promoting responsible use.

But then the secondary party that's refining this for their purpose also has a responsibility. So what companies need to think about is: what are these two tiers of trust and safety, and what is their expectation for their customers? While a decent part of the technology can be outsourced, because all they're doing is fine-tuning a core model, I will also add that the responsibility component cannot be outsourced.

I am an advocate, a very vocal advocate, of having global governance for some of these problems, for which we need moral oversight. This concept of global governance has evolved to have many different arms and legs, but for me, what this governance body should do is have a mission of enabling human flourishing. That sounds very vague and nebulous - but so does the concept of artificial general intelligence, right? So if we are investing billions of dollars in a concept that sounds completely unachievable, such as artificial general intelligence, then I think we should also spend a lot of time, effort, and money on enabling human flourishing based on these technologies.
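
A rough way to picture the two tiers of trust and safety Chowdhury describes - the base-model provider's general safeguards plus the fine-tuning company's use-case-specific checks - is sketched below. Both classes and their blocklists are hypothetical illustrations, not any vendor's actual safety stack.

```python
# Tier 1: the base-model provider's general-purpose safeguards (hypothetical).
class BaseModel:
    GENERAL_BLOCKLIST = {"how to build a weapon"}

    def generate(self, prompt: str) -> str:
        if prompt.lower() in self.GENERAL_BLOCKLIST:
            return "[refused by base model]"
        return f"<draft response to: {prompt!r}>"

# Tier 2: the downstream company's use-case-specific checks (also hypothetical),
# e.g. a language-learning app filtering requests outside its intended use.
class LanguageTutorApp:
    APP_BLOCKLIST = {"write my exam answers for me"}

    def __init__(self, base: BaseModel):
        self.base = base

    def respond(self, prompt: str) -> str:
        if prompt.lower() in self.APP_BLOCKLIST:
            return "[declined: outside this app's intended use]"
        # The core capability is outsourced; the responsibility for this use case is not.
        return self.base.generate(prompt)

app = LanguageTutorApp(BaseModel())
print(app.respond("Translate 'good morning' into Spanish"))
print(app.respond("Write my exam answers for me"))
```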

Wright: There’s a lot of work to be done, but there’s no denying that the outlook for AI and ML is bright. But what does this mean for business leaders? How should today’s organizations prepare for tomorrow's world of work? I put this question to Ajay… 

Agrawal: Thing number one is to point AI at real business problems. People get kind of mesmerized by the magical, science-fiction part of it, but every AI initiative inside the company should be focused on a key business metric - so it should be very measurable. AIs are optimizers, and they need to be pointed at a thing they're optimizing. I would avoid handing all of your AI over to your chief data scientist; make sure it's under the auspices of a business unit lead who has a very clear key performance indicator or some kind of metric, and that the AI is pointed at a business goal that ultimately either increases revenue or reduces cost.

Point number two is that there are many areas where you can now apply AI, especially since we can handle language. So many things that were not feasible this time last year are now feasible because we can read contracts, standard operating procedures, employment agreements, emails - all of that unstructured data that we couldn't process very effectively this time last year is now very manageable.

Prioritize: pick whichever one, two, or three projects will have the highest lift in terms of increased revenue or decreased costs on one dimension, and are feasible to build on the other. In other words, apply your typical ROI calculation and just pick one, two, or maybe three projects, but don't try to boil the ocean and attack everything at once.

Finally, point three: I would highly encourage every company to lean in on something. In other words, pick whatever is your most valuable AI initiative and get started now, as opposed to waiting to see what happens. The reason is that AI learns. Unlike any tool our civilization has used before, AI learns, so it gets better with use. The people who sit on the sidelines are missing all the learning time that those building their AIs now are getting the advantage of. The faster you get in, the faster your AI starts to learn.
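
Agrawal's advice to prioritize by expected lift and feasibility can be read as a simple two-dimensional ranking. The sketch below uses invented project names and numbers purely to illustrate the shape of that ROI-style calculation.

```python
# Candidate AI projects with made-up estimates: annual lift (revenue gained or
# cost saved, in dollars) and a rough feasibility score between 0 and 1.
projects = [
    {"name": "contract review assistant", "annual_lift": 400_000, "feasibility": 0.8},
    {"name": "demand forecasting", "annual_lift": 900_000, "feasibility": 0.5},
    {"name": "fully autonomous support", "annual_lift": 2_000_000, "feasibility": 0.1},
]

# A crude expected-value proxy: lift discounted by how likely the build is to succeed.
for p in projects:
    p["expected_value"] = p["annual_lift"] * p["feasibility"]

# Pick the top two or three rather than attacking everything at once.
shortlist = sorted(projects, key=lambda p: p["expected_value"], reverse=True)[:2]
for p in shortlist:
    print(f"{p['name']}: expected value ${p['expected_value']:,.0f}")
```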

Wright: People, performance, policy, progress. The future potential that AI and ML hold for businesses is undeniable. 

And yet it’s the steps business leaders take today that will ultimately determine how this potential will unfold - and, crucially, the value AI and ML will yield. 

So, could AI and ML be the best thing that ever happened to business? 

Well, I’ll let you decide. 

I’m Meg Wright. Thanks for listening.
