December 14

On December 8, 2023, negotiators from the European Parliament, the Council of Ministers, and the European Commission announced a political agreement on the Artificial Intelligence Act, commonly referred to as the AI Act. The agreement paves the way for the first comprehensive law regulating the development and use of artificial intelligence.

Over the past decade, Workday has been at the forefront of developing and delivering AI and machine learning (ML) capabilities to our clients. We’re excited about the many benefits AI and ML bring to both our clients and society as a whole. However, we understand that to realize their full potential, these technologies must first earn the public’s trust.

Recognizing the crucial role the European Union (EU) will play in setting the direction of AI policy, Workday has participated actively in the AI Act process since 2019. Throughout, our focus has been on ensuring a nuanced, risk-based approach that includes impactful regulatory requirements and enables future international cooperation. We contributed to the work of the high-level expert groups and the subsequent government-led consultations, and we engaged with policymakers as they charted a path forward for this legislation. Most recently, we offered trilogue negotiators suggestions on the foundation model (FM) transparency requirements.

By enacting smart, risk-based, and proportionate regulation, we can establish trust and mitigate the risk of potential harm while fostering innovation. The AI Act is designed to achieve precisely that. It will require AI technology providers to meet a set of requirements, encourage organizations to use these tools responsibly, and enable regulators to enforce the rules. The AI Act’s proposed requirements outline sensible objectives for managing potential risks, many of which align with Workday’s ethical AI principles, which have guided our approach to responsible AI development and governance for years.

In a significant move, EU negotiators included meaningful requirements on FMs and general-purpose AI (GPAI), imposing regulatory obligations when these technologies are integrated into high-risk use cases. While the final text has yet to be drafted, under the agreement FM and GPAI providers will be required to provide transparency to downstream providers and deployers. Transparency is crucial to FM and GPAI adoption in the enterprise, and we look forward to seeing the final language.

It may take several weeks to finalize the text of the agreement, but based on this announcement, it appears that negotiators have found practical solutions to a range of complex issues. We are hopeful that the AI Act will meet its dual objectives of promoting innovation in AI and creating a vibrant market for trustworthy technology that individuals and companies can use with confidence. We will continue to support this effort until the Act becomes law early next year.

While we’re pleased with the significant progress within the EU, we understand that unified rules and standards for AI are crucial to unlocking its full potential and supporting responsible use worldwide. We urge lawmakers to pursue international harmonization as they develop trustworthy, innovation-friendly AI policy, providing a strong foundation for international operations. We remain committed to collaborating with policymakers globally to build trust in AI for everyone.

June 20

The discussions on regulating artificial intelligence (AI) have intensified in the United States, coinciding with a significant development in Europe. The legislative process in Europe, which began in 2018, reached a major milestone this week as the European Parliament voted on amendments to the proposed Artificial Intelligence Act (AI Act). The vote sets up the so-called trilogue negotiations, the final phase of the European Union’s process, and paves the way for the likely adoption of Europe’s—and the world’s—first comprehensive AI regulatory framework in early 2024.

At Workday, we believe in the power of AI to unlock human potential. At the same time, we also believe these technologies demand a mature policy approach, which is why we’ve long advocated for smart regulatory safeguards that help build trust in AI. With momentum for the AI Act building, we’ve engaged collaboratively with policymakers in Brussels to ensure that the AI Act’s requirements are both meaningful and workable. 

We’re pleased to see the Parliament’s amendments reflect suggestions to create a more tailored definition of AI, maintain reasonable requirements for AI use cases, and support a nuanced risk-based approach. While we hope to see additional refinements as the process unfolds, we’re optimistic that the AI Act will play a foundational role in building an emerging global consensus on principles related to regulating AI.

For many in Washington, there may be a sense of déjà vu when it comes to Europe playing a leading role in the development of tech policy. In 2016, Europe adopted the General Data Protection Regulation (GDPR), privacy legislation that has gone on to play a significant role in global privacy regulation. As AI policy conversations heat up around the globe and Europe is poised to once again take a significant and early step, the GDPR’s passage and impact offer three key lessons.

  1. Congress should act. As a global leader in the technology sector, the U.S. has a crucial role to play when it comes to setting the direction of technology policy, including on AI. To date, Congress has made progress. It directed NIST to launch the AI Risk Management Framework, an important step for which Workday was an early champion. In addition, Congress created the National AI Advisory Committee, a group of experts tasked with providing well-timed advice to the White House—and that includes Workday Co-President Sayan Chakraborty in his personal capacity. However, as concepts turn to concrete policies in capitals around the globe, a lack of further congressional action will grow conspicuous. Now is the moment for Congress to pass legislation addressing the need for meaningful AI safeguards.

  2.  International cooperation is critical. In the case of Europe’s AI efforts, we are starting from a foundation of shared values and even some consensus on core elements of responsible AI. However, a world in which innovators are subject to contradictory regulatory regimes must be avoided. The U.S. has taken steps to drive cooperation on these issues with our European counterparts, including partnering to launch the U.S.-EU Trade and Technology Council and recently confirming Ambassador Fick, the inaugural U.S. Ambassador at Large for Cyberspace and Digital Policy. Beyond our EU and U.S. advocacy, Workday is currently engaged in ongoing or emerging AI-related policy conversations in Australia, Canada, Singapore, and the UK. We’re in a nascent moment on AI policy and, if the past is prologue, the pace of change after Europe adopts the AI Act will ramp up dramatically. Against this backdrop, the U.S. should increase its investment in AI-focused international cooperation.

  3.  State legislatures will not wait. In the absence of congressional action on privacy legislation, state governments moved relatively swiftly to fill the void. This trend has started even earlier with AI, for example, with New York City’s law focused on AI and employment going into effect next month. With state and local activity inevitable, Workday has leaned in to play a constructive role in processes like New York City’s, while also engaging with lawmakers in Sacramento, Albany, and elsewhere to drive effective and workable rules. We’re pleased to see thoughtful contributions to the debate like California’s AB 331, a bill that seeks to take a risk-based approach to AI regulation while embracing tried-and-true accountability tools like impact assessments. We anticipate a dramatic increase in the number of state proposals in the coming year.

The European Parliament’s adoption of its AI Act position marks the beginning of the endgame for what will represent a welcome and seismic shift in the global AI policy landscape. Much of the talk around technology is about the future. When it comes to Europe’s role in technology policy, it’s helpful to look to the past for cues on how to successfully navigate toward a harmonized approach to much-needed safeguards for AI that build trust and support innovation.

And that harmonized approach is needed now. Stakeholders and policymakers in the United States should work together to seize the opportunity presented by this momentum: to secure the future of responsible AI development and advance legislation that enables continued innovation while building trust.
