CIOs can promote transparency by pulling back the curtain for internal and external stakeholders, including regulators. Outlining data handling practices and privacy measures in detail can help organisations prove that data has been used ethically and transparently. Providing insight into which algorithms were chosen and why can also showcase how bias is being considered and addressed.
While exactly what kind of documentation will be required by regulatory bodies is still being determined, “there’s a tremendous consensus on transparency,” said Chandler Morse, Vice President, Public Policy at Workday. “For example, people believe that, if you use AI in HR, there should be full transparency on what’s happening, how it’s happening, what data is being collected and what inferences are being made.”
When companies communicate openly about AI, employees are also more willing to ask questions about how and where to use new applications. This decreases the risk that teams will use AI inappropriately – and increases the likelihood that they will innovate confidently and responsibly.
Explainability Reduces Risk Exposure
While compliance guidelines are being developed, explainability should be a CIO’s North Star. In the context of AI, explainability is concerned specifically with decision-making. Transparency provides visibility into how AI is developed and deployed, but explainability focuses on how the system thinks – and the logic it uses to come to conclusions.
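To make that distinction concrete, here is a minimal sketch – not any vendor’s actual method – of decision-level explainability, assuming a simple scikit-learn classifier with illustrative feature names and synthetic data. For a linear model, each feature’s contribution to a decision can be read off directly; more complex models need dedicated tooling, but the goal is the same: showing the logic behind a single conclusion.

```python
# A minimal explainability sketch: for a linear model, each feature's
# contribution to a decision is its coefficient times its value, so the
# reasoning behind any single prediction can be inspected directly.
# Feature names and data are illustrative, not a real HR dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["years_experience", "certifications", "assessment_score"]

# Tiny synthetic training set: candidate attributes and past hire/no-hire labels.
X = np.array([[2, 0, 55], [7, 2, 80], [4, 1, 70],
              [10, 3, 90], [1, 0, 40], [6, 1, 75]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

def explain(candidate):
    """Break one decision into per-feature contributions."""
    contributions = model.coef_[0] * candidate
    score = contributions.sum() + model.intercept_[0]
    for name, c in zip(feature_names, contributions):
        print(f"{name:>18}: {c:+.3f}")
    print(f"{'intercept':>18}: {model.intercept_[0]:+.3f}")
    print(f"decision score: {score:+.3f} -> {'hire' if score > 0 else 'no hire'}")

explain(np.array([5, 1, 72]))
```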
Regardless of whether governments require regulatory pre-approval or self-assessment of AI – the European Union has chosen self-assessment, putting the burden on software developers and AI providers, said Jens-Henrik Jeppesen, Senior Director, Public Policy at Workday – CIOs will need to be able to explain the inner workings of these applications clearly. For example, businesses may be asked to prove that no copyrighted material was used to train their AI, even if a third party developed the model it is based on.
Employee management offers a case in point. When AI is used to inform hiring, promotion or termination decisions, CIOs will be asked hard questions about how the AI was trained, how bias was addressed and how private data was protected during implementation. That’s critical given that some countries are weighing legislation that would make companies using high-risk AI models – such as those meant for health care or education – more responsible for any damage that results from that use.
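Those hard questions often reduce to concrete audits. One widely used screen in US employment contexts is the “four-fifths rule”: the selection rate for any group should be at least 80% of the rate for the most-selected group. A short sketch, with purely illustrative numbers:

```python
# Adverse-impact check based on the four-fifths rule: each group's selection
# rate should be at least 80% of the highest group's rate.
# The counts below are purely illustrative.
selections = {           # group -> (candidates selected by the model, total candidates)
    "group_a": (40, 100),
    "group_b": (28, 100),
}

rates = {g: selected / total for g, (selected, total) in selections.items()}
benchmark = max(rates.values())  # rate of the most-selected group

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "OK" if ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```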
General-purpose AI models, known as foundation models, can be fine-tuned to complete a variety of tasks. Companies often purchase these general models from vendors – but once a company incorporates a foundation model into its products or operations, its leaders will be responsible for assuring regulators that the technology complies with new rules. This means CIOs must ensure these models are delivered with comprehensive documentation, including background on model architecture, feature engineering, testing procedures and security measures.
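As an illustration only – there is no standard schema yet – such documentation could be captured as a machine-readable “model card”. The fields below mirror the items named above; the structure and values are assumptions, not a regulatory format:

```python
# A sketch of a machine-readable documentation record ("model card") a CIO
# might require from a foundation-model vendor. The schema and example
# values are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    vendor: str
    architecture: str                  # e.g. transformer variant, parameter count
    training_data_sources: list[str]   # provenance, including copyright status
    feature_engineering: str
    testing_procedures: list[str]      # bias audits, red-teaming, benchmarks
    security_measures: list[str]
    intended_uses: list[str] = field(default_factory=list)
    prohibited_uses: list[str] = field(default_factory=list)

card = ModelCard(
    name="example-foundation-model",
    vendor="Example Vendor Inc.",
    architecture="decoder-only transformer, 7B parameters",
    training_data_sources=["licensed corpora", "public-domain text"],
    feature_engineering="tokenisation and preprocessing documented by vendor",
    testing_procedures=["bias audit across protected groups", "accuracy benchmarks"],
    security_measures=["encrypted training pipeline", "access-controlled weights"],
    intended_uses=["document summarisation"],
    prohibited_uses=["automated employment decisions without human review"],
)
```

Keeping such records machine-readable makes it easier to hand the same information to regulators, auditors and internal review boards without reassembling it each time.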
“So companies ought to have very close conversations with their vendors to make sure that they have the emerging regulations firmly in hand, and that they have governance programs that are aligning with those emerging regulatory requirements,” Jeppesen said.