To help ensure AI/ML technology can drive human progress, create job opportunities, and grow our economies, our co-CEO, co-founder, and chair Aneel Bhusri has highlighted the need for stakeholders to work together to develop meaningful policies that embrace responsible approaches to AI/ML. Workday has long advocated for policies that bolster trust in technology, enable growth, and ignite innovation for the future of work. This includes leading conversations focused on AI/ML in Europe, with the U.S. Congress and federal agencies, and at the state and local levels. So, on the heels of the Framework's release, we're sharing our thoughts on what it means for AI regulation.
What Is the NIST AI Framework?
In 2021, Congress directed NIST to develop a voluntary framework for trustworthy AI, with lawmakers on both sides of the aisle understanding that cultivating trust in AI would sustain U.S. technology leadership. Over the course of a year and a half, NIST led an open, inclusive, multi-stakeholder process to develop the AI Framework, collaborating closely with academics, civic organizations, and companies, including Workday. The result is a how-to guide for organizations of all sizes, industries, and geographies to develop and use trustworthy AI. The Framework outlines common characteristics of trustworthiness and a comprehensive approach for mapping, measuring, managing, and governing potential AI risks. Importantly, it is designed to balance these risks with AI’s immense opportunity for unlocking human potential at scale.
What Does This Mean for AI Governance?
With attention to AI regulation and policy development growing around the world, the NIST Framework is a much-needed and well-timed benchmark. NIST has a distinguished track record of developing frameworks that enable organizations to manage the complexity of technology risks and benefits. Its leading cybersecurity and privacy frameworks continue to inform best practices, technical standards, certifications, and regulations that are cornerstones of enterprise safeguards today. As AI governance continues to mature, the Framework promises to serve as a common language for thinking about and addressing AI risks.