How Companies Can Thrive with Trusted AI

Exactly where AI will deliver the biggest gains is still unknown, but privacy protections, human judgement and simpler system design can guide organisations to successful AI use.


From writing marketing content to delivering faster, more accurate financial predictions to streamlining supply chains, business leaders see opportunities to win big by implementing AI across the board. 

And no one wants to be left behind. In fact, nearly three-quarters of business leaders say they feel pressure to increase AI adoption, according to the 2023 AI IQ report from Workday. But where – and how – AI will deliver the biggest gains is still cloaked in uncertainty.

"What we would have done six months ago is not what we're doing today. What we're going to do in six months, we don't know today,"

Shane Luke Vice President, AI and Machine Learning (ML) at Workday

To reach the glittering potential on the horizon, CIOs must lead their organisations through a minefield of hidden privacy, security, bias and ethics issues. Yet, the policies, rules and best practices that would normally guide them are still being developed. With so much still up in the air, it’s no surprise that nearly half (49%) of CEOs say their organisation is unprepared to adopt AI and ML, the C-Suite Global AI Indicator Report by Workday found.

“If people don't trust technology, they won’t use it,” said Tom Girdler, Principal, Product Marketing at Workday. “At the same time, if we can build technology that is underpinned by a robust framework, we create an amazing dynamic where trust and AI can really thrive together.” 

By increasing confidence in AI, CIOs can amp up adoption and start delivering the exponential value business leaders expect. What will that take? It starts with adhering to three principles of trustworthy AI – and opening the door for teams to experiment responsibly with this transformative tech.

 

1. Assess Privacy Risk – and Plan for Compliance

Not every AI application comes with the same level of risk. To navigate evolving privacy issues and compliance concerns, IT teams must understand the unique issues each use case presents – and help the business prioritise projects accordingly. 

Some risks should be deemed unacceptable from the start, such as bias in AI models that could lead to unfair or discriminatory outcomes based on race, gender or other protected characteristics. Security vulnerabilities, unauthorised data collection and unreliable predictions are other examples of unacceptable risks that should stop an AI project in its tracks.

“It's about transparency, technical documentation, recordkeeping, human oversight, accuracy, robustness and cybersecurity.”

Jens-Henrik Jeppesen, Senior Director, Public Policy, Workday

Other risks are more manageable, but require close oversight. For example, many AI models learn as they’re used, which means new interactions could introduce new types of bias. IT teams must develop guidelines for the individuals providing new inputs – and monitor evolving outputs to ensure they remain fair and accurate.
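To make that concrete, here is a minimal, hypothetical sketch of what such output monitoring could look like in practice. The decision log, column names and 80% threshold (the common "four-fifths rule") are illustrative assumptions, not features of any particular product.

```python
# Illustrative only: a minimal sketch of periodic output monitoring, assuming a
# log of recent model decisions with hypothetical "group" and "outcome" columns.
import pandas as pd

def disparate_impact_check(decisions: pd.DataFrame, threshold: float = 0.8) -> dict:
    """Flag groups whose favourable-outcome rate falls below `threshold`
    (the four-fifths rule) relative to the best-performing group."""
    rates = decisions.groupby("group")["outcome"].mean()  # favourable-outcome rate per group
    ratios = rates / rates.max()                          # compare each group to the highest rate
    return {group: bool(ratio < threshold) for group, ratio in ratios.items()}

# Example: review last month's model outputs and surface any flagged groups for human review.
recent = pd.DataFrame({
    "group":   ["A", "A", "B", "B", "B", "A"],
    "outcome": [1,    1,   0,   1,   0,   1],  # 1 = favourable decision
})
print(disparate_impact_check(recent))  # e.g. {'A': False, 'B': True} -> group B warrants review
```

A check like this does not replace human oversight; it simply gives the people responsible for fairness a regular, auditable signal that the model's behaviour is drifting.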

CIOs must also keep an eye on compliance. While most privacy laws are still catching up with the latest advancements in AI, technology leaders should be prepared for what’s to come. Regulations will most certainly vary across borders, but there is broad agreement that a few key factors are essential to the development and deployment of trustworthy AI. 

 “It's about transparency, technical documentation, recordkeeping, human oversight, accuracy, robustness and cybersecurity,” said Jens-Henrik Jeppesen, Senior Director, Public Policy at Workday. “The idea is that technical standards are going to be developed to match each of these regulatory requirements and companies will certify to these standards.”

 

2. Keep Humans at the Helm

Science fiction writers love to imagine dystopian futures ruled by sentient AI. Of course, technologists know that AI can’t think – it can only come to conclusions based on its training data. But this can be dangerous in its own right. 

When unthinking machines make purely data-driven decisions, they often ignore crucial contextual factors. For example, an AI-driven financial model that relies on historical data to make projections may not account for current geopolitical conditions or recent shifts in market sentiment, which could significantly influence business outcomes.

For AI to inform sound business decisions, humans must remain involved each step of the way. From training and testing to implementation and adoption, organisations must use AI to amplify human potential – not the other way around.

“The real question is, how do you put that into practice?” asked Kelly Trindel, Chief Responsible AI Officer at Workday.

It takes open-minded, cross-disciplinary collaboration, she said. In these early days, CIOs must build the teams and organisational structures needed to develop the guidelines that will promote fairness, accuracy, reliability and robustness as the organisation brings new applications online.

77% of leaders worry that at least some of their data is neither timely nor reliable enough to use with AI and ML.

“The people who actually know how this stuff works, they really need to be involved in how you put together your AI governance,” Trindel said. “We're seeing it as a developing best practice to have separate lines of reporting for those who develop governance for AI systems and those who are frontline developers of AI systems.” 

 

3. Design Simpler Systems to Mitigate Bias

Bias in AI can’t be completely avoided. Every human has their own opinions – and humans train AI based on what they believe to be true. However, proactively working to mitigate bias from the start can go a long way toward building more ethical and equitable AI systems.

“The design of the system is by far the most important,” said Luke. “You can design the system to be very unlikely to produce something that you don't want. So that's the starting point.”

Because training data will determine AI outputs, CIOs must ensure all applications are built using trustworthy data that has been examined and validated by diverse human teams. While testing outputs is important to mitigate bias that sneaks into the model, this should be the organisation’s safety net – not its first line of defence, Luke said. “It's not about trying to check or police outputs. That's much harder to do and it's never definitive.” 

For example, large language models like ChatGPT are trained on large, general datasets that allow them to deliver long-form responses in convincing natural language. But these datasets often include low-quality content, such as misinformation found online. A substantial 77% of leaders worry that at least some of their data is neither timely nor reliable enough to use with AI and ML. As an alternative, CIOs and their system designers should consider building applications with a smaller scope, trained to complete very specific tasks. 

“They’re not as capable at doing very general things, so they're less mesmerising,” Luke said. “But they're very capable at the tasks they're supposed to do, while being less capable of doing something you don't want.”
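As a rough illustration of that narrower approach, the sketch below trains a small, single-purpose text classifier on a curated, human-reviewed dataset. The expense-categorisation task, the training examples and the choice of scikit-learn are assumptions made purely for illustration, not a description of any vendor's system.

```python
# Illustrative only: a narrowly scoped model trained for one specific task
# (a hypothetical expense-category classifier) rather than a general-purpose LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A small, curated training set the team has examined and validated.
texts  = ["taxi from airport", "team lunch with client", "flight to London",
          "hotel, two nights", "dinner after conference", "train ticket"]
labels = ["travel", "meals", "travel", "lodging", "meals", "travel"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model can only ever answer this one question, which keeps its
# failure modes narrow and its outputs easier to audit.
print(model.predict(["airport shuttle", "coffee with candidate"]))
```

The trade-off Luke describes is visible here: the model is far less impressive than a general chatbot, but its scope, training data and possible outputs are all small enough for a human team to inspect end to end.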
