Workday Podcast: AI Transforms the Finance-Tech Alliance into the Ultimate Power Couple

The CTO and the SVP of finance at CAI tell guest host Megan Wright, head of innovation at FT Longitude, how AI and machine learning are transforming their teams from the inside out.

Audio also available on Apple Podcasts and Spotify.

AI and machine learning (ML) create so many opportunities for businesses to generate value, but what about their impact on the organization itself?

These tools are transforming the way technology and finance teams grow, interact, and collaborate with one another, explains Matt Peters, CTO at technology services firm CAI. In this special episode of the Workday Podcast, Peters is joined by his colleague and SVP Finance Derek Sager to discuss how AI is transforming the finance-technology alliance for the better.

Here are a few highlights from the conversation, edited for clarity. You can also find our other podcast episodes here.

  • “I am always equal parts frightened and excited about where we are with AI right now.”—Matt Peters, CTO, CAI

  • “The finance team is not looking at AI as a component of ‘What I do is changing, and you’re taking work away from me.’ The team is looking at it as an extension of its ability to serve the needs of the function and the organization.”—Derek Sager, SVP Finance, CAI

  • “The demand in this space is so high, it’s nearly impossible for us to provide enough staff to do everything that the organization would like to do. But with AI and ML really coming into play for the IT organization, it’s forced us to find a lot of folks we can upskill. It’s been great for us because we don’t usually find something that allows us to give folks a new and big career opportunity that they’re equally excited about. This is a combination of new tech and new value at the same time. Usually, we have one chasing the other and that’s not the case this time.”—Matt Peters, CTO, CAI

Megan Wright:

Productivity, creativity, agility. When it comes to leveraging artificial intelligence and machine learning, there are so many possible business benefits and paths to value creation. But what about the impact of AI and ML on the business itself? Its structures, its processes, and perhaps most importantly, the impact it could have on teams and their leaders?

In fact, new Workday research reveals that collaboration could be one of the biggest benefits that business leaders will realize on the path to an AI future. I'm Megan Wright, head of innovation at FT Longitude, and joining me to discuss this are Matt Peters, chief technology officer at technical professional services firm CAI, and Derek Sager, senior vice president of finance, also at CAI.

Matt, Derek, welcome to the Workday podcast.

Matt Peters:

Thank you for having us.

Derek Sager:

Happy to be here.

Wright:

I'd say there's little doubt by now that the potential that technology transformation holds for businesses is immense, but I'm curious, what impact is the uptake of AI and ML in particular having on the relationship between finance and IT?

Are there particular use cases or maybe big-picture benefits that each of you is eyeing closely?

Sager: 

What I'm really excited about as we look ahead is being able to throw in-the-moment questions at these tools, which will look at the data and give me a more instantaneous response that says, "Okay, that's what I'm looking for, and I can keep digging, digging, digging." Right now it's like, "Write it down. I'll get back to it when I can." And this becomes a much more spontaneous, "Hey, let me ask the data that question real quick and see what I get." I think that will help us, from a financial analysis and more in-the-moment conversation standpoint, get those answers quicker as opposed to, "Let me get back to you." And I think that's where we're ultimately going to see the productivity in terms of the outcomes of these tools.

Peters:    

I am always equal parts frightened and excited about where we are with AI right now. And I say that for a few reasons. As we run through the organization right now, we're looking for opportunities to deploy AI. A lot of the things we run into are just elements of work that we should be doing today, or want to be doing today, but we don't because we simply don't have the time, the human power, or the priority behind it. But as AI starts to reduce that barrier to getting started, it's showing that we're able to pick up a lot more of that work.

We're able to turn those things around a lot more quickly. There's a huge amount of value in making it easier to at least get started, if not get done, with a larger portfolio of work. But at the same time, CAI as an organization does a lot of service and consulting work for state and local government agencies in the United States. They are under attack all the time. And as a result, so are we.

And it's not because we were never a target before, it's because it's just so much easier and cheaper to attack us so much harder and more often. And so at the same time that I'm seeing all the value of, yeah, AI is really helping us do these jobs quicker, better, easier, it's also creating a new substantial problem for us. But at the same time, the snake is eating its own tail. We're feeding our SecOps team with more and more AI tools so that they're able to respond and defend us better. And we're in a very interesting cycle right now where it seems like it's really just a race to the greatest degree of exploitation of an AI model that someone can get to. And we're in a race against attackers at the moment at the same time that we're racing to just make sure we're servicing our internal customers and the rest of our organization and our enterprise as quickly, accurately, and correctly as we can.

Wright: 

You've both touched a bit on the impact for your teams. I'm wondering how this is impacting each of you as leaders? Are there skills and capabilities that you are each looking at now that you potentially didn't need to think about quite so much in the past?

Peters: 

The demand in this space is so high, it's nearly impossible for us to provide enough staff to do everything that the organization would like to do. But with AI and ML really coming into play for the IT organization, it's forced us to find a lot of folks that we can upskill.

You can reach out into the market and try to find a lot of people who can do prompt engineering and stuff like that, and yeah, there's a little bit of a commodity quality to that right now, but not necessarily people who are good at it. So we've really taken a lot of the folks who we already had on staff with a focus on automation, because that's just been something we've been doing in many forms for many years, and this becomes an upskilling opportunity for them to get comfortable and familiar enough with a new technology to be of use to Derek and his team, or HR and that team, and so on. It's been really great for us, because we don't usually find something that allows us to really give folks a new and big career opportunity that they're equally excited about. This is a combination of new tech and new value at the same time. Usually we have one chasing the other, and that's not the case this time.

Sager:

Yeah, I think from a finance perspective, when I think about FP&A right now, the folks are comfortable with how they go find data and how they understand it. But at the same time, as Matt and I have been talking through this, one of the things I'm becoming more appreciative of is that it's not advisable to just dump all our company data into these tools and see what happens. We're very focused on the security of that data and how these tools work. So we're still learning, I think, how we optimize them, what data we should use, and how we use all these cool tools out there, but we're doing it at a measured pace to protect our organization.

Wright:     

One thing, Derek, I wanted to come back to: you talked a bit earlier about the impact of this on your teams and certainly on where you're looking at those skills gaps. Something I thought was interesting that came out of the research study Workday has just conducted was that finance leaders are fairly evenly split about whether AI and ML will make jobs in finance more or less rewarding. So I was interested to get your perspective on that. How do you think AI and ML are reshaping finance, and is it for better or worse from your perspective?

Sager:

In a positive way for us. What I'm finding is just the human element. They're not looking at it as a case of "What I do is changing, and you're taking work away from me." They're looking at it as an extension of their ability to serve the needs of the function and the organization. And that's not just with the current state of AI and machine learning. If we go back five, 10 years, as we started working with Matt and his team to understand some of the technologies back then, they fully embraced it. And for me, that was a positive. It's like, "Wow, they're excited about it."

And of late, I'd say probably in the last quarter here, as Matt and his team allowed the organization to opt in to understand some of the machine learning and these large language models, they gave us all a sandbox. It was really reassuring to see that several of my team members, my managers, jumped right in, and all of a sudden they were beating on the sandbox and saying, "How does it work?" And that was encouraging. You don't have to push them now; they're willing to jump in and figure it out, which is phenomenal as a leader and speaks well to how far our organization has come in accepting it as another tool in our toolbox.

Peters:

I have to couple that with the fact that in the US, at least, we're looking at a worker shortage for CPAs of only 10% of demand in the coming years. So that turns this into one of those scenarios where the value that we are trying to extract from AI right now is not supplanting Derek's team in any way. It's getting more out of all of them, and if we're going to have access to even fewer of them in the coming years, then AI is the only viable solution that allows the rest of us to still get that work done.

We were talking just a few days ago about a scenario within our organization: there's a forensic lens that Derek applies to financial data at CAI that's really just a reflection of his brain, right? He's formulaic about it. He's got a specific way that he goes through our data, but he can only do that for one entity or one project or one division at a time, and it takes a lot of time that he shouldn't have to spend that way. If AI is the way we can mechanize that piece of Derek's thinking, and then let Derek still live in the loop, confirm the outputs, agree with them, and keep the model headed in the right direction, I think that's what we have to be striving for right now, given the landscape of demand and the lack of individuals to meet that demand.

Wright: 

I'm curious too, is there an example that you would point to where you've done this successfully? Perhaps of how you've helped your team buy into the potential of this technology and overcome some of the barriers that each of you has spoken about?

Sager:      

From a finance perspective, I'd say some of the early days of AI for us were just a simple question of how we take large, volume-based tasks and add some type of machine assist. Today, I want to say we've probably got about 20 of those unique processes captured in that type of environment. So given the size of our organization, for me to say I've got one individual who does all of our cash application is a testament to what Matt and his team have enabled us to do. When she shows up for work, she logs in and fires up her assistant, if you will, and that assistant is fed data and then starts applying cash within the platform. And I think for us that's a really good example, because it's about how I support the organization by ensuring transactions are processed in a timely yet cost-effective manner. That is a simple, foundational layer that says, "Yeah, if I'm dealing with 6,000, 7,000 invoices a month and I can say I need basically one person to apply all that cash," it's pretty cool.

Peters:

I think we're also seeing that in examples all across the organization. Just for context, at CAI we have a gentleman named Chris who deployed a playground for us. It supports multiple models, they're all private in our Azure instance, and we've been slowly opening it up to the rest of the organization: you can come do anything you want with this. There's no risk, there's no fear. All the reasons we told you you can't go do this with ChatGPT don't apply in this environment, so do whatever you want. And people from Derek's team, as an example, were throwing financial data in there, looking for some kind of outlier identification, pattern recognition, smart things to do with an AI model. But we also saw that as a human user gets comfortable with the outcomes of a tool like that, one with a long runway of capabilities, and starts to request more and more complicated math from it, there is a point where a standard, out-of-the-box large language model stops doing math correctly and well.

And we found that people come in asking for simple summary analysis and things like that, because that's a great way to get started, and then, once they're interested, they want to do something more elaborate. When you try to do something statistically interesting with data in a large language model, say, plot an analysis of covariance over time, it falls down. It does hit a point where it's literally making things up. The popular term in the media is hallucinating. So we found that we hit those points. It's why a human needs to continue to be in the loop. It's why we rely on someone from Derek's team to say, "I tried to do this. The outcome doesn't look like I expected it to. That doesn't mean it's wrong, but I'd really like another set of eyes on it."

And then we start to realize, "Okay, we need to bring ancillary tools like WolframAlpha into our large language model deployment, because that gives the LLM a tool it can use to do the math for it instead of trying to make it up when it gets confused." If you think about how an AI solution like this works, it's really just a probability model. It's taking some kind of input from us and saying, "I bet this is next." And that's what it does. When you get into interesting math, it's wrong and it doesn't have a calculator, but it can be instructed or directed by a technical user: okay, when you don't really know and you're making it up, go use the calculator. And then it does, and that largely fixes the problem. But we're discovering those problems as we go, because I don't think many organizations at this very moment are completely comfortable with where the boundaries are, where an AI solution stops doing things that make sense. It's not like you suddenly jump off a cliff. It's a gradual slide toward wrong that I think makes it harder for a lot of human users to identify, and makes it a little bit more invisible to the model as well.
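For readers who want to see the shape of the pattern Peters is describing, here is a minimal sketch in Python of delegating arithmetic to a deterministic calculator instead of letting a language model guess at it. This is not CAI's implementation: the `call_llm` stub and the `answer_with_calculator` routing function are hypothetical placeholders, and a production deployment would more likely rely on a model platform's native tool- or function-calling support, or a service such as WolframAlpha, rather than hand-rolled routing like this.

```python
import ast
import operator
from typing import Optional

# Whitelist of arithmetic operators the calculator will honor.
_ALLOWED_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}


def evaluate_expression(expression: str) -> float:
    """Deterministically evaluate a basic arithmetic expression (no guessing)."""

    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _ALLOWED_OPS:
            return _ALLOWED_OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _ALLOWED_OPS:
            return _ALLOWED_OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")

    return _eval(ast.parse(expression, mode="eval").body)


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a privately hosted model endpoint."""
    raise NotImplementedError("Wire this to your organization's LLM deployment.")


def answer_with_calculator(question: str, expression: Optional[str] = None) -> str:
    """Compute the math exactly, then ask the model only to explain the result."""
    if expression is not None:
        result = evaluate_expression(expression)  # exact value, not a model guess
        prompt = (
            f"{question}\n"
            f"The calculated value of {expression} is {result}. "
            "Explain this result in plain language; do not recompute it."
        )
    else:
        prompt = question
    return call_llm(prompt)


if __name__ == "__main__":
    # The deterministic half works on its own: roughly 78,000 invoices a year
    # at the 6,000-7,000-a-month volume Sager mentions.
    print(evaluate_expression("6500 * 12"))
```

The design choice mirrors the point made above: the model never performs the arithmetic, it only narrates a result that was computed by conventional code, with a human still reviewing the output.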

Sager:

Yeah, and to Matt's point, Meg, you look at some of the business intelligence tools out there, and as they start to integrate some of this technology, you can say, "Hey, ask the data a question." And to Matt's point, you get an answer that makes you go, "Eh." So if I'm getting an answer that doesn't feel right, is that because I didn't ask the question the right way? How do we phrase our questions within these tools so they bring back a more logical answer? And how much of that answer is a byproduct of my not giving the tool all the data it needed to give me a much more focused answer? That's the other thing I think we're learning as we play with these tools: how much each of those factors contributes. But again, it comes back to this: I, the human, have to be able to interpret the answer, as Matt said, and say, "What's missing? What doesn't feel right? And how do I continue to train it to give me an answer that I'm much more confident in?"

Wright:

Finally, to wrap up, thinking about the impact of AI and ML on how the two of you are working together and also on the wider leadership in the business, which you've touched on, what sort of advice would you give to listeners as they start to think about how best to approach AI and ML within their own businesses?

Peters:

I'd say one thing that Derek and I benefit from automatically is that our boss, the CEO of our company, is very interested in AI and ML. He's interested in technology broadly speaking; he wants to be an early adopter, and he wants best-in-class technology. He also has a financial background. So Derek and I sit in a really interesting position where he understands Derek's world and he's excited about my world, and that makes it great for us. That's not exactly advice, but I would translate it to say that a lot of what Derek and I and our teams are able to accomplish right now is in no small part due to his understanding of and motivation for what we're doing.

So if you're a finance or technology officer out there with a CEO who doesn't get it and isn't able to appreciate the value, then socializing it there first is the right place to start, because you need support for these kinds of initiatives. Even when you want to start grassroots in an organization, that can only take it so far. There's an element where every AI deployment, if you're doing it safely, is organizational cultural change. It's a major investment in your security posture and in how you need to reposition the organization. None of that is trivial, and if you don't have top-level executive leadership promoting it, I'm not sure how anybody gets anything big done.

Sager:

Beyond that, Matt and I both benefit, I think, from the fact that I understand our data and he understands our data, but I also have an appreciation that there are certain limits to the technology, and Matt and his team are teaching me what those are. So I'd say it goes beyond just the two of us having a shared understanding and the ability to talk openly about things; I have a degree of confidence where I'm willing to pick up the phone, talk to his line manager, and say, "Hey, talk to me about this. How does it work?" And if you don't have that, it's going to be harder. It's going to be harder to work through the art of the possible and push through how we build out the model that's best positioned for our organization.

And I guess lastly, just as we talked about leadership at the top, it's about ensuring that our respective line managers embrace it as well. If there are folks who don't, it's going to make it even harder to deploy it within our respective towers, not just finance and technology but other towers across the organization. For us, we benefit from that: everyone at the leadership level within our organization fully embraces and understands the impact.

Wright: 

Couldn't agree more. You've certainly given our listeners a lot to think about, especially when it comes to the benefits of that collaborative partnership, as well as the steps that finance and technology leaders really need to start to take to ensure their workforces are ready for this AI future. Matt, Derek, thank you very much for joining me.

Peters:      

Thank you for having us.

Sager:   

Thank you.
