3 Insights to Help CIOs Navigate Evolving AI Regulations

The age of AI is causing huge disruption to the way the world does business – and so we can expect a raft of new regulations to define how to use AI responsibly. The question is, how can CIOs prepare for and respond to those incoming regulations? Here are three key insights we think you need to know.

As global leaders shape AI policies, CIOs should keep transparency, data privacy and vendor accountability top of mind.  

AI is evolving at breakneck speed, but as is often the case with technology, policy is struggling to keep up.

Still, the next couple of years promise to be big ones for AI regulations as global leaders create policies aimed at governing next-gen AI applications, including large language models (LLMs) like ChatGPT. European Union policymakers, for one, have agreed on the basics of the AI Act, a sweeping set of laws meant to capture the technology’s potential while guarding against its risks.

The exact shape that AI regulations take is still evolving and will likely need constant updating. After all, ChatGPT is still in its infancy. A few of the key questions being considered are:

  • In a global economy, how will different countries enable and limit the use of AI? 

  • How can AI leverage data while ensuring sensitive information remains secure and protected?

  • What practices can best mitigate bias in AI applications and outputs?

  • What documentation will be required to prove that AI has been developed responsibly?

As government agencies and NGOs continue to grapple with these crucial questions, CIOs find themselves in the hot seat. While forging ahead amidst regulatory uncertainty comes with risks, delaying the development and deployment of AI applications could have long-term consequences for profitability and growth.

Risks aside, 60% of businesses are adopting AI and machine learning (ML) in some way, according to the C-Suite Global AI Indicator Report by Workday. Furthermore, the research found that IT leaders will most likely be the ones expected to make a company’s AI deployment a success. To stay ahead of the curve, CIOs must identify how the business can benefit from AI, define clear use cases, and introduce governance policies that will enable the responsible use of AI innovation.

“If you’re regulating AI, you can’t go about it by regulating the technology. Technology evolves. So you have to regulate the users and look at the context,” said Thomas Boué, Director General, Policy at Business Software Alliance (BSA). “In high-risk uses of AI, the idea is not to prevent them from happening, but to put the safeguards in place to ensure that AI can be used, developed and deployed for the benefits of society.”

Here are three insights CIOs can use to guide their AI practices as both the technology, and the regulations surrounding it, evolve.

 

Transparency Is King

No CIO wants to invest in innovation just to have regulatory changes block their organisation’s path forward. But the AI opportunity is too promising to pass up. To make meaningful progress in an unpredictable market, CIOs must ask for – and enable – transparency across the enterprise and the ecosystem.  

This starts with clearly communicating where and how AI will be used, as well as what the organisation aims to achieve. With so many unknowns framing the AI conversation, radical transparency helps set expectations about what applications will be able to achieve, alleviate stakeholder fears and demonstrate accountability.

CIOs can promote transparency by pulling back the curtain for internal and external stakeholders, including regulators. Outlining data handling practices and privacy measures in detail can help organisations prove that data has been used ethically and transparently. Providing insight into which algorithms were chosen and why can also showcase how bias is being considered and addressed.

While exactly what kind of documentation will be required by regulatory bodies is still being determined, “there's a tremendous consensus on transparency,” said Chandler Morse, Vice President, Public Policy at Workday. “For example, people believe that, if you use AI in HR, there should be full transparency on what's happening, how it's happening, what data is being collected and what inferences are being made.”  

When companies communicate openly about AI, employees are also more willing to ask questions about how and where to use new applications. This decreases the risk that teams will use AI inappropriately – and increases the likelihood that they will innovate confidently and responsibly.

 

Explainability Reduces Risk Exposure 

While compliance guidelines are being developed, explainability should be a CIO’s North Star. In the context of AI, explainability is concerned specifically with decision-making. Transparency provides visibility into how AI is developed and deployed, but explainability focuses on how the system thinks – and the logic it uses to come to conclusions.  

Regardless of whether governments require regulatory pre-approval or self-assessment of AI – the European Union has chosen self-assessment, putting the burden on software developers and AI providers, said Jens-Henrik Jeppesen, Senior Director, Public Policy at Workday – CIOs will need to be able to communicate the inner workings of these applications clearly. For example, businesses may be asked to prove that no copyrighted material was used to train their AI, even if a third party developed the model it is based on.

Employee management offers a case in point. When AI is used to inform hiring, promotion or termination decisions, CIOs will be asked hard questions about how the AI was trained, how bias was addressed and how private data was protected during implementation. That’s critical given that some countries are considering legislation that makes companies using high-risk AI models – such as those meant for health care or education – more responsible for any damage that results from that use.

General-purpose AI models, known as foundation models, can be fine-tuned to complete a variety of tasks. Companies often purchase these general models from vendors – but once a company incorporates a foundation model into its products or operations, its leaders will be responsible for assuring regulators that the technology complies with new rules. This means CIOs must ensure these models are delivered with comprehensive documentation, including background on model architecture, feature engineering, testing procedures and security measures.

“So companies ought to have very close conversations with their vendors to make sure that they have the emerging regulations firmly in hand, and that they have governance programs that are aligning with those emerging regulatory requirements,” Jeppesen said. 

CIOs must also track where internal teams use the foundation model, how it has been integrated into the company’s products and operations, and what additional data was used to fine-tune the application. As adoption increases, ensuring explainability will be an enterprise-wide assignment. “AI is no longer an off-the-shelf thing that you install on the system and it just works,” said Boué. “It's something that is negotiated, that is discussed and that changes all the time.”

 

Safeguards Encourage Innovation

Businesses rarely clamour for more regulation but, when it comes to AI, most industry players agree that better guardrails are needed.  

It’s not that companies need governments to tell them how to build the technology – they don’t. Rather, they need safeguards in place to assure their customers that these products and applications are safe. “There is a level of trust that comes with regulatory surety,” Morse said.

To build confidence while regulations are being developed, CIOs should examine what AI leaders are doing in this space. For example, early adopters that are rolling out AI for HR are focused on protecting the fundamental rights of individual employees, applicants and candidates each step of the way.  

Taking a proactive approach also prepares global companies for the inevitable variation in AI legislation around the world. Defining key terms and aligning with the core principles of responsible AI, including transparency, explainability, discrimination and bias mitigation, and privacy protection, can help companies go further faster – while also enabling collaboration across jurisdictions.

“There is a commonality of purpose, which is to have an interoperable environment for innovation and deployment of these technologies,” Jeppesen said. “Most countries have broadly similar objectives – to have all the benefits of this technology, while ensuring that it is safe, trustworthy and can be used with confidence.”
