05 June 2025
With productivity growth at a 60-year low and the Federal Government having recently committed an additional $900 million to address this challenge, artificial intelligence (AI) presents an opportunity to drive economic growth and foster innovation, and many organisations are already capitalising on its potential.
However, despite AI’s potential to boost productivity, a 2024 Ipsos survey found that 64% of Australians are still nervous about its use. This hesitation underscores the need for organisations to embed a responsible AI approach: one that goes beyond legal compliance to integrate principles of ethical AI into governance strategies, and puts practical safeguards in place to ensure the technology is trained, designed and deployed in ways that are safe, transparent and aligned with public expectations.
With the global regulatory landscape for AI in flux and comprehensive AI legislation in Australia still under consultation, Australian organisations cannot afford to take a passive approach to responsible AI governance. Proactively developing responsible AI governance frameworks is essential to unlocking AI’s productivity potential while managing risks (including exposure to reputational harm) and building stakeholder confidence.
The international AI regulatory environment has continued to evolve in markedly different directions.
The European Union (EU) has introduced the EU AI Act, which entered into force on 1 August 2024. It adopts a risk-based model that classifies AI systems by risk level, with corresponding obligations depending on a system’s classification. Despite its intention to ensure safe and transparent AI, the EU AI Act has been controversial, with major technology companies criticising it for hampering innovation. Consequently, EU lawmakers have begun discussing whether to exempt US tech giants from key regulatory requirements and whether to make compliance with parts of the EU AI Act voluntary. This, coupled with the withdrawal of the draft AI Liability Directive in February 2025, highlights internal debate and signs of regulatory fatigue within the EU.
In contrast, the United States’ (US) federal approach has shifted towards deregulation. The US currently lacks dedicated federal AI legislation, relying instead on existing technology-neutral frameworks such as those governing intellectual property, data protection and employment. Whilst proposals for federal legislation have emerged, such as the AI Research, Innovation and Accountability Act (which calls for greater transparency, accountability and security in AI) and the Algorithmic Accountability Act (which would require impact assessments for automated decision systems, including AI), the current federal administration has adopted a deregulatory stance towards AI, casting doubt on the likelihood of such legislation passing.
On 23 January 2025, the federal government issued the ‘Removing Barriers to American Leadership in Artificial Intelligence’ Executive Order, which mandates the reversal of the previous administration’s efforts to regulate AI, including revoking the ‘Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence’. Some state-level AI initiatives continue to operate independently; however, the federal government is currently seeking to impose a ten-year moratorium that would prevent state governments from enacting or enforcing AI laws until 2035.
Similarly, the United Kingdom (UK) lacks dedicated AI legislation and favours a light-touch, ‘pro-innovation’ approach, as reflected in its AI Opportunities Action Plan published in January 2025, which makes little mention of regulation. Rather than introducing broad legislative measures, the UK plans to adopt ‘highly targeted regulation’ aimed at the ‘safety risks posed by most powerful AI models’, out of concern that (over)regulation may hamper AI investment. Notably, both the US and UK declined to endorse ethical and responsible AI declarations at the AI Action Summit in February 2025, reinforcing their anti-regulatory stances towards AI.
The People’s Republic of China (China) does not currently have unified AI regulation but does regulate specific AI use cases, including deep synthesis technology (used for ‘deep fakes’), recommendation algorithms and generative AI (through its Interim Measures). This is alongside technical standards, frameworks and guidelines issued by the National Information Security Standardisation Technical Committee (TC260). As articulated in the New Generation Artificial Intelligence Development Plan (published July 2017), China endeavours to become the world leader in AI by 2030 from both a competitive and an ethical standpoint.
Consistent with most jurisdictions, Singapore lacks AI legislation and relies upon existing legal frameworks, supported by sectoral guidance from bodies such as the AI Verify Foundation and the Advisory Council on the Ethical Use of AI and Data. Its National AI Strategy (published in 2019 and updated in 2023) emphasises Singapore’s commitment to a responsible AI ecosystem that facilitates innovation under an ‘agile’ regulatory approach.
With the EU AI Act an outlier, the general absence of regulatory constraints globally, alongside the winding back of regulation in the US, raises questions about the prospects of Australia introducing its own AI legislative framework.
Australia currently lacks dedicated AI legislation, creating uncertainty for organisations about how to govern AI effectively under existing laws. However, in a proposals paper on introducing mandatory guardrails for AI in high-risk settings, published in September 2024, the Australian Government contemplated introducing a risk-based legislative framework to regulate AI (mirroring the EU’s approach, albeit with fewer risk categories).
Whilst the mandatory guardrails remain under consultation, the Australian Government and related bodies have published further guidance to inform best practice in the use and deployment of AI, such as:
Despite these developments, Australia’s path to comprehensive AI legislation remains uncertain, especially given the Federal Government’s recent calls to slash ‘thickets of regulation’ to bolster productivity and the concern that overregulation of AI would stifle those ambitions. AI regulation did not feature as a key issue in the recent federal election, and neither major party addressed the proposed mandatory guardrails, which casts doubt over the prospect of change, particularly amidst the shifts in global regulatory stances outlined above.
According to EY’s ‘Bridging the sentiment gap’ report, Australians harbour significantly greater distrust towards AI than the rest of the world, with only 37% of Australians believing that the benefits of AI outweigh its risks. Financial service providers and government are amongst the groups Australians trust least to manage AI in ways that align with their best interests. This is particularly problematic given the central role these entities play in the Australian economy.
With dedicated AI legislation still some time away and Australians exhibiting scepticism towards AI, organisations must act now to implement responsible AI governance frameworks that address both legal compliance and ethical deployment. At its core, responsible AI takes principles of ethical AI, such as fairness, transparency, accountability, privacy, safety and human-centred values, and converts them into actionable steps that guide the design, development, deployment and oversight of AI systems through governance, testing and risk mitigation strategies.
Some key considerations for organisations looking to embed a responsible AI framework include:
All of the above actions should be supplemented with transparent communication with customers and other relevant stakeholders to maintain a clear understanding of how AI systems are used and governed, manage expectations and build trust. Details of the organisation’s governance framework may be documented in a public-facing responsible AI policy, which will further promote transparency. However, to avoid claims of misleading and deceptive conduct, any processes and procedures that the organisation discloses in such a policy must be implemented in practice (in the manner they are described) and subsequently adhered to.
At a minimum, organisations must ensure that their AI governance complies with existing legislative frameworks such as copyright, privacy, anti-discrimination and consumer law. However, organisations should remain wary that, where consumer trust is already low, business decisions that are lawful but unethical will only deepen that distrust.
In the absence of binding AI-specific legislation, organisations should not treat responsible AI as optional. A purely compliance-driven approach may satisfy legal obligations but fail to address growing consumer scepticism and evolving ethical expectations.
By embedding responsible AI principles into their operations and adopting a forward-looking approach to AI governance, organisations can foster confidence among stakeholders, benefit from the productivity gains that AI offers, and ensure that AI use is clearly aligned with the organisation’s business objectives.
Authors
Head of Technology, Media and Telecommunications
Head of Responsible Business and ESG
Paralegal
This publication is introductory in nature. Its content is current at the date of publication. It does not constitute legal advice and should not be relied upon as such. You should always obtain legal advice based on your specific circumstances before taking any action relating to matters covered by this publication. Some information may have been obtained from external sources, and we cannot guarantee the accuracy or currency of any such information.