December 14
On December 8, 2023, negotiators from the European Parliament, the Council of Ministers, and the European Commission announced a political agreement on the Artificial Intelligence Act, commonly referred to as the AI Act. The agreement marks a significant step toward the first comprehensive law regulating the development and use of artificial intelligence.
Over the past decade, Workday has been at the forefront of developing and delivering AI and machine learning (ML) capabilities to our clients. We’re excited about the many benefits AI and ML bring to both our clients and society as a whole. However, we understand that in order to realize their full potential, these technologies must first earn the trust of the public.
Recognizing the crucial role the European Union (EU) plays in setting the direction of AI policy, Workday has participated actively in the AI Act process since 2019, pressing for a nuanced, risk-based approach that includes impactful regulatory requirements and room for future international cooperation. We contributed to the work of the high-level expert groups and the subsequent government-led consultation, and we engaged with policymakers as they charted a path forward for this legislation. Most recently, we offered trilogue negotiators suggestions on the foundation model (FM) transparency requirements.
By enacting smart, risk-based, and proportionate regulation, we can build trust and mitigate potential harms while fostering innovation. The AI Act is designed to achieve precisely that: it will require AI technology providers to meet a set of requirements, encourage organizations to use these tools responsibly, and enable regulators to enforce the rules. The proposed requirements outline sensible objectives for managing potential risks, many of which align with Workday's ethical AI principles, which have guided our approach to responsible AI development and governance for years.
In a significant move, EU negotiators included meaningful requirements on FMs and general-purpose AI (GPAI), imposing regulatory obligations when these technologies are integrated into high-risk use cases. While the final text has yet to be drafted, under the agreement FM and GPAI providers will be required to provide transparency to downstream providers and deployers. Transparency is a crucial factor in enterprise use of FMs and GPAI, and we look forward to seeing the final language.
It may take several weeks to finalize the text of the agreement, but based on this announcement, it appears that negotiators have found practical solutions to a range of complex issues. We are hopeful that the AI Act will meet its dual objectives of promoting innovation in AI and creating a vibrant market for trustworthy technology that individuals and companies can use with confidence. We will continue to support this effort until the Act becomes law early next year.
While we’re pleased with the significant progress within the EU, we recognize that unified rules and standards for AI are crucial to unlocking its full potential and supporting responsible use worldwide. We urge lawmakers to pursue international harmonization as they develop trustworthy, innovation-friendly AI policy, providing a strong foundation for companies operating across borders. We remain committed to collaborating with policymakers globally to build trust in AI for everyone.