In the AI Age, Leaders Need to Build Trust… But How?

With great uncertainty surrounding AI, security and governance teams are operating in a sea of change. But one thing’s for certain: trust has never been more important.


By Martin Veitch, industry commentator


Trust, my trusty dictionary tells me, is “belief in the reliability, truth, or ability of someone or something”. That’s a decent starting point, but we all know trust can be hard to achieve. As children, it is drummed into us that “trust is earned, not given”. In adulthood, the importance of transparency, honesty and accountability is repeatedly proven. Sadly, however, we also become increasingly familiar with just how tenuous trust can be, how easily breached and lost. Nowhere is trust more valued than in the process of change, the uncomfortable and unfamiliar terrain that must be negotiated to get to a better place. And with AI, companies are undergoing the mother of all changes.


What Does Trust Mean?

The fact is that trust is a complex matter. Paul Thagard in Psychology Today has called it “a complex neural process [that is] rarely absolute, but … restricted to particular situations … a binding of current experiences, memories and concepts”. In other words, we aren’t like Professor Pangloss in Voltaire’s Candide, blithely assuming the best of everybody. Instead, we build up our sense of whom we trust, in what circumstances and to what degree, based on a lifetime’s body of direct and indirect experiences. And once that trust is exposed as misplaced, reinstating it is very hard.

All leaders need to invest in building trust and understand why it is a better route than blunt retention measures such as imposing golden handcuffs.

Paul J. Zak, writing in the Harvard Business Review, has gone further, measuring the production of the hormone oxytocin to show that trust can lead to dramatic positive shifts in stress levels, energy, productivity and engagement. Even without that sort of hard evidence, most of us will instinctively agree that trust is a good thing that leads to positive outcomes. But, particularly in the wake of the Great Resignation and quiet quitting, all leaders need to invest in building trust and understand why it is a better route than blunt retention measures such as imposing golden handcuffs. This is especially the case today for CIOs, where AI and machine learning (ML) will surely act as super-catalysts for massive changes in how we work.


Change Is Tough without Trust

Change is one of the toughest aspects of business and you may have heard a version of the old wisdom that while change is tough, not changing will end up being far tougher. Change requires a well-planned strategy of course, with lots of due diligence to show that the decision to enter a new market, geography or business model is correct. But much of the challenge in change lies in soft skills, the ability to lead, persuade and build consensus. And underpinning all of these is trust. 

Leaders need to win over employees, partners and customers. AI puts trust front and centre again because it works by collating large data sets, so it has never been more important to lay out clearly what data is being collected, how it is being obtained and how it will be used, together with transparency about risk factors and unknowns. For AI not to stand for ‘Angst Inducing’, we need to implement safeguarding policies now and make sure everyone can access and understand them.

70% of leaders welcome AI and 65% are confident their organisations will deploy it in a trustworthy manner.

To what extent do we trust AI today? Not much, which is perhaps no surprise when we think about generative AI issues such as hallucination – AI effectively making things up – or LLMs scraping data from unknown sources on the public internet without always respecting copyright. New research conducted by FT Longitude for Workday points to a clear gap between leaders and staff:

Mind the gap. 70% of leaders welcome AI and 65% are confident their organisations will deploy it in a trustworthy manner but for employees the numbers are just 46% and 51% respectively.

Manage, don’t dictate. Four out of five employees say their employer has neither collaborated with them on AI nor issued any usage guidelines.

Educate and pledge. One in four employees are not confident their organisation will place employees’ interests above its own. Also, 69% of leaders predict a future in which AI significantly reduces manual labour – but just 38% of employees agree.

This research aligns with a growing body of evidence on the importance of trust in AI. A 2023 report by KPMG and the University of Queensland, Australia, based on 17,000 global participants, found strikingly negative reactions, with 61% saying they feel ambivalent or wary towards AI. Looked at more closely, however, the data yielded more granular results: far more unwillingness to trust AI in HR, for example, than AI in medical diagnosis. It’s also worth noting that wariness about AI can’t simply be ascribed to Luddism, as 85% of respondents said they believed AI will deliver a range of benefits.

We also know that AI is one of the biggest and fastest-moving booms in technology history, making it difficult to predict the next twists and turns in how it operates, how it is applied and how it is policed. So, if your organisation is betting on AI for significant business change, it will be as well to explore the concerns of your people and partners, and then work out how to assuage them.

Few of us will build our own LLMs or feel the need to create bespoke core applications outside those that create direct competitive differentiation. That means selecting vendors and other partners that have strong records on data stewardship.

As with every major technology change, there are obstacles to be addressed. Think of cloud, SaaS, blockchain or e-commerce: there were tricky stages we went through before we felt comfortable. Many of us will feel that AI is still a Wild West, miles away from what Gartner calls the “Plateau of Productivity”. So, how do we get there?

Building Trust, One Brick at a Time

When we think about practical steps, a few stand out.

Use a Trusted Technology Platform

Few of us will build our own LLMs or feel the need to create bespoke core applications beyond those that create direct competitive differentiation. That means selecting vendors and other partners with strong records on data stewardship and rigorous measures to safeguard AI. Ask your suppliers what they are doing on AI governance and security, then ask them to prove it. Request access to customers who are peers. Seek evidence of architectures that mitigate bias through techniques such as ‘dynamic grounding’, which anchors LLM responses in the most reliable and up-to-date information. Look for strong access and retrieval controls and data-masking capabilities to protect sources. Insist on tight retention policies and the ability to identify and block toxic content. Beyond this, seek out vendors that are heavily involved in setting standards and putting safeguards in place for AI.


Draw the Line Between Automation and Augmentation

The elephant in the AI room is the widespread fear that it will massively disrupt white-collar work as certain human jobs come to be seen as unnecessary because machines can do them better. It’s critical that leaders explain that AI is about augmentation, not replacement. Handing chores over to machine intelligence liberates human beings to deliver what they are good at: empathy, creativity, collaboration and problem solving.

Sunlight is the best disinfectant, so keep people in the loop and give them a voice. Accenture, for example, has put its cards on the table, saying clearly that it does not plan job cuts but does expect massive productivity improvements through AI. That sort of clear messaging will go a long way towards reassuring staff who may be feeling vulnerable or exposed. And if your organisation is not willing to say that, and is eyeing the opportunity purely as a vehicle for mass job cuts and cost cutting, then maybe it’s time to consider your employment options.

Walk the Walk

The politician who promises no tax rises and then hikes them anyway immediately forfeits the public’s trust. The business that espouses ethical conduct and then exploits its workers or collaborates with unethical partners also burns through trust. Actions speak louder than words, so promises on AI need to be followed through.


AI Is a CIO Opportunity

In his book ‘The Open Organization’, former Red Hat CEO Jim Whitehurst commended the notion of democratising decision-making across the enterprise and even outside it. This included the idea that leaders should make it clear when they don’t know something rather than pretending to be omniscient.

Most modern CEOs should be aware of the implications of AI for their organisations, but they will urgently need the assistance of CIOs and other specialists to understand technical, legal and ethical complexities. By building trust and by getting ready now for the probability of massive technologically enabled disruption, smart leaders will be able to ride what promises to be one of the great business waves of our times.

As one CIO told me, “Trust is a two-way street and it is hard to negotiate. Nobody can put their finger on exactly what are the skeins and fibres that bond us to have faith in each other. But we know that once it’s gone, it’s gone, so the message has to be to handle with care.”
