Workday Podcast: Why Culture Trumps Strategy on the Road to AI

Are digital teams ready for an AI-enabled future? Dave Mackenzie, managing principal for digital at Aurecon, joins guest host Megan Wright, head of innovation at FT Longitude, to talk about why digital culture is so important.

Audio also available on Apple Podcasts and Spotify.

There’s little doubt about the profound potential that AI and machine learning (ML) hold for business. But in the face of so much promise, are we in danger of putting too much pressure on our technology teams? 

Now more than ever, the onus is on technology leaders to ensure their teams are equipped with the right skills and culture to support organization-wide digital change, explains Dave Mackenzie, managing principal for digital at design, engineering, and advisory firm Aurecon. In this special episode of the Workday Podcast, Mackenzie shares insights from years of experience leading business-wide AI and ML transformation. 

Here are a few highlights from Mackenzie, edited for clarity. You can also find our other podcast episodes here.

  • “If you’ve got the right culture around generative AI—and culture trumps strategy every day of the week—you’re halfway there in terms of your people being upskilled enough to understand the technology, to be able to explain it, to be able to talk confidently about it with stakeholders. That’s two-thirds of the battle, as far as I’m concerned.”

  • “The reality is, when you’ve got human intuition, deep domain expertise, and the technology, that’s when you really unlock something special. And to that end, I think [AI] is about people doing more. It’s actually about amplifying their potential rather than taking something away from them.”

  • “It’s amazing how quickly things like large language models, foundation models or transformers, and generative AI have been embedded into the business. It’s changing how we work, how we think about work, how we think about solving problems. There’s a starting position, which is: ‘I’m about to do something. Can AI help me?’”

Megan Wright:

From analyzing contracts to writing code and everything in between, AI and ML are already proving to have game-changing potential for the world of business. But in the face of so much promise, are we at risk of putting too much pressure on our technology teams to deliver extraordinary outcomes?

New Workday research reveals that many IT leaders feel underprepared for and overwhelmed by the task at hand. And more than one-third admit that they’re going to adopt a wait-and-see approach when it comes to AI.

I’m Megan Wright, head of innovation at FT Longitude, and I’m delighted to be joined today by Dave Mackenzie, managing principal for digital at design, engineering, and advisory firm Aurecon. Dave, thanks for joining me.

Dave Mackenzie:

I’m happy to be here.

Wright:

New Workday and FT Longitude research reveals that one-third of tech leaders say they’ll be under pressure to make difficult decisions on where to apply AI and machine learning, even in areas where they might lack domain expertise. Those same leaders admit to feeling pressure to find trustworthy approaches to AI, and worry that they might be held accountable if any issues arise. So, I’m wondering, do you think technology teams are at risk of becoming the scapegoat for AI and ML challenges within businesses?

Mackenzie:

No, I don’t think so. I think there is pressure to deliver and pressure to have an impact, and you’re right that you do need a degree of domain expertise to get these solutions working and really singing. But in terms of risk, look, you have to approach this technology as though it’s explainable and defensible, and I think that defensible posture is really important. If you’re using this technology as part of a process or a workflow, can you put your hand on your heart and say, “Yes, it made the right decision. We’re comfortable with the output”? If the answer is no, because you’ve totally novated responsibility to a piece of technology, then you’re in risky territory.

The other thing I think of is that lots of organizations get stuck on a risk-based approach, or on defining a strategy for how, what, and where they’re going to apply this technology. But we often forget about having the right culture. If you’ve got the right culture around generative AI, and culture trumps strategy every day of the week, you’re halfway there in terms of your people being upskilled enough to understand the technology, to be able to explain it, to be able to talk confidently about it with stakeholders. That’s two-thirds of the battle as far as I’m concerned. There are risk measures and things you should put in place, yes, absolutely, of course, but taking your people on the journey and getting them to where they need to be is the real battle. And if you get that right, you’re really primed for success.

Wright:

I wonder if you could give any examples of how you’ve implemented that culture within Aurecon.

Mackenzie:

We’ve had a machine learning program for a number of years now, right back from when we first set up our machine learning co-lab. But we’ve also been on a digital journey for five or six years. Part of that journey has been to establish organizational-level KPIs, so we’ve set out to ensure that 100% of our people have digital skills. Building that digital literacy across the business has left our people much better placed to explore and use this technology.

We’ve scaled solutions across the business that are targeted at more specific use cases, but everyone using those tools has come to them with a degree of enthusiasm and a degree of understanding: enough understanding to ask the right questions about how this works, and, when they’re not comfortable, to know who or where to go to for more information. I think that really speaks to that foundational culture of being comfortable with technology, and it’s a place organizations really should strive to be.

Wright:  

We have talked a little bit about the people side of this. I wanted to dive a bit deeper actually. How, if at all, do you think AI and ML could change the way that technology and IT teams are perceived within the wider business, if we’re talking about the way that the culture permeates the wider organization?

Mackenzie:

I think this technology can enable technology, digital, or IT teams to get a bit more tightly coupled with the business. It’s about working a bit more closely together, in a more tight-knit kind of way.

A good example from just today: I had someone from our asset management business, who deals with large, resource-intensive organizations on a daily basis, working with an architect who has a PhD in artificial intelligence. They’re a real multidisciplinary group of people coming together to solve a content generation and automation problem.

Likewise, another person was recently experimenting with these tools while working with a water utility, just exploring the potential, then reaching back into our digital team to pull in some expertise, and over to another part of the business for domain expertise, bringing it all together to prototype a little tool they can deliver with their client. So, I see that sort of collaboration being unlocked all the time.

I think the reality is, when you’ve got human intuition, deep domain expertise, and the technology, that’s when you really unlock something special. And to that end, I think it’s about people doing more. It’s actually about amplifying their potential rather than taking something away from them.

Wright:

What impact is this shift having on your people in terms of the skills that they will need? Are you seeing a new skill set, a new, I guess, way of working emerging within the technology team that people are going to need to adapt to?

Mackenzie:

This is definitely going to impact future ways of working; there’s no question in my mind that that’s the case. There was a quote, “AI will never take a job, but someone who can use AI might,” and I think that underscores my mindset. So, I think there’s a definite imperative for people to learn how to use this technology, and there’s a whole raft of new skills. It sounds a bit hype-y, but even the prompt engineer kind of role, or set of skills, which is really just about how to use this technology, matters. Getting a good understanding of how to prompt these models to get the outcome you want is really important.

There’s also a new hybrid role emerging: the AI engineer. It’s a bit of data science and data engineering, a bit of full-stack development, plus a good understanding of these large language models and generative AI, and it’s becoming a niche of its own. So, there’s a tremendous number of new skills required, and it’s been interesting for us because we’ve been on this digital journey and it’s almost like, well, here’s the next tranche of transformation, the next set of skills we need to help our people navigate.

Wright:

And I wonder too, as someone who is the leader of a technology team, is that having an impact on how you operate within the business?

Mackenzie:

Yes, it is, first and foremost because it’s a very topical question at the moment, and I’ll give you a quick anecdote. I did a presentation to 100 or so grads about leading your career, being a leader, digital skills, and why they matter. When I got to the end of the presentation, every question was about ChatGPT. There’s a cohort of people for whom this is front and center of their minds on a daily basis, and no matter where I go in the business, there’s a generative AI question or an AI question or a ChatGPT question. So, it has changed my approach.

It’s also changed the language I’ve started to use. It’s amazing how quickly things like large language models, foundation models or transformers, and generative AI have been embedded into the business. So, there’s a base-level understanding that’s taking hold quite quickly, but it’s changing everything. It’s changing how we work, how we think about work, how we think about solving problems. There’s a starting position, which is: “I’m about to do something. Can AI help me?” And I think a lot of people have adopted that as a first check in how they’re working right now.

Wright:

Are there particular areas that you think technology leaders will need to be really adept at navigating in order to avoid some of AI’s biggest challenges?

Mackenzie:

The one I don’t hear much about is a degree of obsession with the technology. I love that people are passionate, but don’t get hung up on it: know when a solution isn’t actually going to help you right now, and just walk away from it. So, I do see this obsession building, and it’s something that needs to be managed.

Aside from that, there are all the usual risks around content filtering, data, data management, and data sovereignty. They’re all very real issues. I think the regulatory frameworks and legislation being rolled out globally will have an impact, and that needs to be managed; leaders certainly need to be mindful of that, particularly in finance or if you’re handling PII. It will be interesting to navigate, or even just to see what governments decide they want to regulate or not, and how much they could potentially constrain innovation. I think that’s a risk for us as well.

The real risk, for me, is partly mitigated just by having the right culture, keeping a human in the loop, and ensuring that there’s a degree of verification of the outputs. I mean, we’re an engineering firm, so we have a culture of verifying things, double-checking, and making sure that when we assert something is accurate and good, it actually is. And that doesn’t change for us whether we use this technology or a different piece of technology.

Wright:

Workday research also revealed that tech leaders are among the most enthusiastic when it comes to the potential of AI and ML. They’re specifically looking at its role in strategic value creation, in improving their jobs, and in improving collaboration, which we’ve touched on today. What are you most excited about with AI and ML, and why?

Mackenzie:

That’s really interesting. Obviously, the tech leaders would be the ones who are excited, I guess. Actually, I heard a good joke the other day: that machine learning’s written in Python and AI’s written in PowerPoint, which maybe speaks to tech leaders’ enthusiasm. I probably fall into that camp, and I’m excited about a raft of things.

One is just the democratization of the technology and access to it. I think that’s really exciting, because it can mean something at scale. I’m also cautiously excited about the future, because where we are right now, I feel like a large language model or a foundation model is, in simplistic terms, as good as most people at most things. In the next 18 months to 3 years, maybe 5 years depending on how bullish you are, these models might be better at everything that anyone can do, including our experts. That’s really exciting, but it’s also a challenge. And as organizations and leaders, we need to be prepared for that. It might happen on that horizon, or it might take much longer. My gut feeling is it’s going to happen sooner and be quicker, harder, and faster than we’re anticipating.

Wright:       

Absolutely, I couldn’t agree more. Look, Dave, this has been a very wide-ranging but certainly insightful conversation. So, thank you very much for joining us on the Workday Podcast.

Mackenzie:          

Happy to be here.
