As CIOs consider the multitude of ways to tap into the power of AI in their organisations, one question is becoming increasingly important: how exactly should they treat AI agents? Are they applications, or are they digital employees? What are the opportunities and consequences of each approach?

According to Salesforce, 69% of C-Suite executives in Australia who have AI on their strategic agenda are focusing on agentic AI over the next 12 months, with 38% already implementing the technology. So if you're all in on AI agents but haven't decided how you’ll manage their governance, risk and compliance (GRC), it's a good time to make the call.

Is That Thing Really an Agent?

It's no secret that executives have struggled to get the ROI they expected out of generative AI. The hype was real, but the reality fell short, and many organisations are now left with generative AI tools that are struggling to realise value and, according to Gartner, are likely to be abandoned by the end of the year.

It turns out there's only so much efficiency you can gain from content generation, meeting summaries and chatbots. However, generative AI did give us something important: buy-in from executives. It seeded the idea that AI is acceptable to use in an enterprise context, and that has changed everything.

Generative AI has delivered a certain amount of productivity for enterprises. But as AI architectures progress, the main benefit frontier models will offer is to underpin agentic AI. Before we get to that, let's explore the concept of agents a bit.

While we often talk about 'AI agents' as a single technology, there are actually two different types of agents in the market right now: task-based agents, and what I think of as goal-oriented agents.

Task-Based Agents

This type of agent uses a Large Language Model (LLM) to interpret the outcome the user wants to achieve. It then executes a set of predefined tasks to deliver that outcome. In this case, the agent is reactive: it waits for you to make a request and can only follow the workflows you have previously defined. There is a thin facade of agency, but it's effectively a smarter form of Robotic Process Automation (RPA).
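To make the distinction concrete, here's a minimal Python sketch of a task-based agent. The `classify_intent` helper and the workflows are illustrative assumptions standing in for whatever LLM and automation tooling you actually use; the point is that the LLM only interprets the request, while the actions stay predefined.

```python
def classify_intent(text: str, intents: list[str]) -> str:
    # Placeholder: in practice this call would go to an LLM that maps the
    # user's request onto one of the known intents.
    return "reset_password" if "password" in text.lower() else "unknown"

def reset_password(text: str) -> str:
    # A predefined, scripted workflow: effectively smarter RPA.
    return "Password reset link sent."

PREDEFINED_WORKFLOWS = {"reset_password": reset_password}

def task_based_agent(text: str) -> str:
    intent = classify_intent(text, list(PREDEFINED_WORKFLOWS))
    workflow = PREDEFINED_WORKFLOWS.get(intent)
    if workflow is None:
        return "Sorry, that request is outside my predefined tasks."
    # Reactive: it only acts when asked, and only via known workflows.
    return workflow(text)

print(task_based_agent("I forgot my password"))
print(task_based_agent("Plan next year's vendor strategy"))
```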

Goal-Oriented Agents

This isn't an official industry term, but it's one I think illustrates the more useful alternative. With this kind of agent, you ask it to achieve an outcome without giving it any predefined workflows. Instead, you provide it with context on how your organisation operates (such as corporate policies, contracts and GRC frameworks) and then essentially say: "You work out how to get it done, as long as the outcome is achieved within those boundaries".

The agent then uses agency (as the label indicates) to determine a path of action that achieves the outcome, in whatever way it feels is best, learning along the way. There is more at stake here — you need to trust the agent a lot more, and the risk is higher. But… the payoff? It's much higher than traditional automation.
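As a contrast, here's a minimal sketch of the goal-oriented pattern, again with illustrative names rather than any particular vendor's API. The agent is handed a goal plus organisational context, and the stubbed `plan_steps` call stands in for an LLM working out its own course of action within those boundaries.

```python
ORG_CONTEXT = {
    "policies": ["Purchases over $10,000 need CFO approval"],
    "grc_rules": ["Log every action taken, with a reason"],
}

def plan_steps(goal: str, context: dict) -> list[str]:
    # Placeholder: a real implementation would ask an LLM to devise its own
    # steps to achieve the goal without breaching the supplied boundaries.
    return [
        f"Gather options that could achieve: {goal}",
        "Check each option against corporate policies",
        "Escalate anything requiring CFO approval",
    ]

def goal_oriented_agent(goal: str) -> None:
    for step in plan_steps(goal, ORG_CONTEXT):
        # Every step is logged, per the GRC rules supplied as context.
        print(f"[audit log] {step}")

goal_oriented_agent("Renew the data-centre contract under budget")
```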

So as you unpack all the noise from technology providers and the internet, it's important to first make sure you can discern between agents that simply interpret a request and follow predefined workflows (task-based agents) and those that determine and execute actions autonomously, with full context of their place in the organisation (goal-oriented agents).

Upskilling Goal-Oriented Agents for Enterprise Work

Let's say you decide to start using a goal-oriented agent. This means that not only do you need to have a goal defined for that agent with the desired outcomes, but you also need to provide it with the skills it needs to achieve the outcome. This is done by partnering two broad kinds of agents that work together:

  • The coordinating agent, which decides how to achieve the desired outcome, and
  • Delegate agents, each skilled in a specific area of expertise

This is where the large generative AI models play into agentic AI, as they are especially good at interpreting a request and laying out a course of action. They can act as the powerhouse for coordinating the process of achieving the outcome, even if they aren't the expert in actually solving it or don't have the power to take action themselves.

Separating out these 'responsibilities' and assigning them to delegate agents means that if you want to give your AI agent a new skill, you can simply add a new delegate agent. You don't need to rebuild the coordinating agent, which may already be deeply knowledgeable about your business. Likewise, delegate agents can be re-used across a number of coordinating agents that require those specific skills.
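Here's a rough sketch of that split, with purely illustrative delegate names: the coordinating agent sequences the work, the delegate agents each supply one skill, and adding a skill is just a matter of registering a new delegate.

```python
from typing import Callable

# Delegate agents: each owns one narrow skill and can be reused elsewhere.
DELEGATES: dict[str, Callable[[str], str]] = {
    "research": lambda task: f"[research] findings gathered for '{task}'",
    "drafting": lambda task: f"[drafting] document prepared for '{task}'",
    "approval": lambda task: f"[approval] routed '{task}' per policy",
}

def coordinate(goal: str) -> list[str]:
    # Placeholder plan: a real coordinating agent would use an LLM to choose
    # and sequence delegates based on the goal and its business context.
    plan = ["research", "drafting", "approval"]
    return [DELEGATES[skill](goal) for skill in plan]

# Giving the agent a new skill means registering a new delegate,
# not rebuilding the coordinator.
DELEGATES["translation"] = lambda task: f"[translation] localised '{task}'"

for result in coordinate("Prepare the quarterly vendor review"):
    print(result)
```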

Now you can onboard new coordinating agents as your business needs them, upskill them with the expertise they need, then augment those skills over time as your organisation changes and evolves. Meanwhile, your AI agents are well positioned to execute suitable actions on your behalf, all within the scope of the role you have defined for them and the policies and guardrails they need to adhere to.

Introducing New Career Pathways for Your People

You're probably starting to realise that I've painted a picture of AI agents that act more like digital employees than apps, right?

After all, you're essentially onboarding them with all the information they need to know. You're defining the outcomes they need to achieve, and setting goals for them. And on a regular basis, you're reviewing their performance. These AI agents are fulfilling objectives in your organisation, and have skills that can develop over time. Sounds a lot like a digital employee to me.

Goal-oriented agents can be working throughout the night, doing thousands of operations a second, leaving time for your team to focus on higher-level strategy and human connection. They don't require constant prompting and oversight, because they can achieve outcomes fairly autonomously.

As agents become more common and we delegate more work to them, managing, tracking and optimising their interplay with the human workforce will only become more important, and more of a business concern beyond the IT function.

For example, this new landscape can offer new management pathways for employees, providing a transition from individual contributors to what Microsoft is calling an 'agent boss', where employees can master the skill of delegating tasks and the vital responsibility of managing outcomes from their digital employees.

The potential to introduce new roles and career pathways is exciting, especially as Generation Alpha enters the workforce in the years ahead. In a way, it's not only a technology shift, but also a mindset shift in the way we think about work. If you treat AI agents as applications, I'm not sure you can leverage this opportunity.

Managing AI Agents Beyond Deployment

So now that you've decided how to treat AI agents, how can you get ready to leverage them in the next 12 months? Here are four things to keep in mind.

Get Clear on Ownership 

While IT will play a vital role in the selection, procurement and deployment of AI agents, it's important to understand whether you want to continue managing them centrally within your function, or allow them to be managed by the business. If you've decided to treat them as digital employees, it makes sense that HR will have a critical role to play in their scope, onboarding, governance and performance management.

Take a Platform Approach

Unlike with monolithic LLMs, the goal isn't to train an individual AI agent on all of the corporation's (or world's) knowledge. Instead, it's the interplay between different delegate agents, each with their own areas of expertise and responsibilities, that makes agentic AI better grounded, more multi-disciplinary, and potentially more secure.

This is why agents work best when they're built into platforms which have the data they need to run.

If your company's agents are distributed across systems, make sure they are all organised on a central platform, giving you the power of an ecosystem. This consolidation will help you better manage your integrations and reduce risk. Plus, your operating expenses will go down, because your team won't have to scramble every time there's an update to an individual application.

Unlock SaaS Features

The beauty of having all your apps and agents on a single platform is that you'll unlock the benefits of SaaS features as they're released, not when your function has time to deploy them. If you wait 3-5 years until your big 'digital transformation' is complete, you'll be well behind the competition. Instead, aim to evolve rather than transform, with quick wins and regular iterations that allow you to realise the full value of your investments.

Update Your GRC Policies

Are your current GRC policies designed for human speed and scale, or AI speed and scale? I thought so. The thing is, if a person makes an error calculating salary reviews, you may have to send 50 apology emails. If an agent performs thousands of reviews a minute and the calculation is wrong, that's front-page news. So review your policies through an AI lens, and make sure you feel confident in your GRC framework.

By carefully considering how you'll treat AI agents in your tech stack, you'll be ready to deploy them in your organisation in the months ahead, and start seeing value right away.

A remarkable 82% of organisations are already using AI agents. But is your team ready? Read our latest report to learn how businesses are maximising human potential with AI, featuring insights from nearly 3,000 global leaders.
