How the National AI Plan Will Balance Safety and Growth
Australia’s National AI Plan sets clear expectations for agencies and businesses to manage the AI transition through balanced regulation and proactive safety.
Across advanced economies, AI is becoming a foundational technology that can lift economic growth, improve public outcomes and help organisations do more with increasingly constrained resources.
In this context, Australia’s decision to place AI at the centre of its national productivity agenda is both timely and strategically important. The release of the National AI Plan in December 2025 signalled the Federal Government’s intent to harness AI as a catalyst for economic growth and societal benefit, while maintaining the strong safeguards Australians expect.
What distinguishes Australia’s approach is its focus on balanced and thoughtful regulation. At a time when many jurisdictions are introducing sweeping, standalone AI legislation, the Australian Government has chosen a different path.
Rather than treating AI solely as a regulatory challenge or an innovation opportunity, the Plan frames it as both – a powerful enabler that strengthens national capability while supporting responsible innovation. A major theme is the need for lifelong learning and broad AI capability uplift across the workforce.
For government and private sector leaders, the message is two-fold: AI will shape the next phase of Australia’s economic development; and an important part of realising its benefits will be aligning organisational AI strategies with national principles.
One of the most notable features of Australia’s National AI Plan is its calibrated regulatory stance. The decision not to introduce a dedicated AI Act at this stage reflects a recognition that premature or overly prescriptive regulation could inadvertently slow innovation. Instead, the Government has opted to uplift and rely on existing, technology-neutral laws – including those governing privacy, consumer protection, copyright and discrimination – while encouraging industry-led governance.
This approach reflects confidence in Australia’s existing legal and institutional frameworks to provide comprehensive legal oversight – overlaid with a commitment to monitor emerging risks closely. By allowing innovation to develop within familiar accountability structures, Australia is creating space for experimentation and adoption without abandoning safeguards.
For executives, this is an important distinction. The absence of a standalone AI Act places greater emphasis on organisational governance, risk management and ethical decision-making. Businesses and agencies are expected to understand how existing obligations apply to AI systems – and to demonstrate that they are doing so in practice. Leaders can also expect regulators to ask not only whether AI is used, but how it is governed.
To prepare, boards and executives should challenge their organisations to demonstrate how these obligations are being met in practice.
The National AI Plan places proactive safety at its core, using practical, multi-layered mechanisms to manage advanced AI risks.
Uplifting existing laws is a central tenet of this strategy. The Government has clarified how current regulation applies to AI, so accountability does not disappear when decisions are automated or augmented by algorithms. This provides continuity for organisations while ensuring protections remain enforceable.
Helping to oversee this is the new Australian AI Safety Institute (AISI), backed by $29.9 million in funding, as part of the National AI Plan’s 'keep Australians safe' pillar. AISI will monitor, test and share information about emerging AI capabilities, risks and harms – both upstream, at the level of model development, and downstream, in real-world deployment.
Importantly, AISI will be a central hub – not a silo. Its role includes informing ministers and regulators, supporting coordinated policy responses, publishing research and engaging with domestic and international partners. For industry, this creates a focal point for dialogue and shared learning.
The third element of this trust framework is practical guidance for organisations. In October 2025, the National AI Centre released updated adoption guidance, introducing the AI6 – six essential governance practices for AI developers and deployers. Replacing the earlier voluntary standard, the AI6 provides clear expectations across accountability, risk management, transparency, testing, human oversight and incident response.
Crucially, the guidance is tailored, with foundations for organisations at the early stages of AI adoption, and more detailed implementation practices to support complex or high-risk use cases. Organisations of all sizes will be able to align with national expectations.
While much attention is rightly focused on industry, the Australian Government is clear that agencies must 'lead the way'. The Australian Public Service (APS) AI Plan offers a framework for the public sector to model responsible AI adoption, structured around three pillars designed to ensure governance, capability and infrastructure advance together.
Together, these measures are intended to position the APS as a living demonstration of responsible AI in action, showing how AI can be adopted at scale, with rigour, transparency and workforce inclusion.
Australia’s National AI Plan represents a thoughtful and forward-looking response to one of the defining technologies of our time. It acknowledges AI’s transformative potential, commends innovation, and invests in capability while prioritising trust, safety and accountability.
Under its auspices, government agencies are expected to lead adoption responsibly, build internal capability and provide the frameworks that enable safe innovation.
For the private sector, alignment with national principles will be foundational to maintaining social licence, regulatory confidence and long-term value.
With the Productivity Commission pointing to a potential 4.3% lift in labour productivity and around $19 billion a year in additional public sector value, there is clear upside for both the private and public sectors in following the National AI Plan and getting this right.
As Australia works to position itself as a developer and adopter of trusted, world-class AI – supporting the 'Future Made in Australia' agenda – collaboration between government and industry will be essential. By aligning practices, sharing insights and building capability together, we can ensure that AI delivers not just productivity gains, but sustainable and inclusive growth.