Home Insights Responsible AI governance: key considerations for Australian organisations
Share

Responsible AI governance: key considerations for Australian organisations

With productivity at a 60-year low and the Federal Government having recently committed an additional $900 million to address this challenge, artificial intelligence (AI) presents itself as an opportunity to drive economic growth and foster innovation, with many organisations already capitalising on its potential.

However, despite AI’s potential to boost productivity, a 2024 IPSOS Survey found that 64% of Australians are still nervous about its use. This hesitation underscores the need for organisations to embed a responsible AI approach, which goes beyond legal compliance to integrate principles of ethical AI into governance strategies and puts practical safeguards in place to ensure the technology is trained, designed and deployed in ways that are safe, transparent and aligned with public expectations.

With the global regulatory landscape for AI in flux and comprehensive AI legislation in Australia still under consultation, Australian organisations cannot afford to take a passive approach to responsible AI governance. Proactively developing responsible AI governance frameworks is essential to unlocking AI’s productivity potential while managing risks (including exposure to reputational harm) and building stakeholder confidence.

AI regulation around the globe

The international AI regulatory environment has continued to evolve in markedly different directions.

The European Union

The European Union (EU) has introduced the EU AI Act, which entered into force on 1 August 2024. It adopts a risk-based model that classifies AI systems by various risk levels (with corresponding obligations depending on the system’s risk classification). Despite its intention to ensure safe and transparent AI, the EU AI Act has been controversial, with major technology companies criticising it for hampering innovation. Consequently, EU lawmakers have begun discussing whether to exempt US tech giants from key regulatory requirements and to make the EU AI Act more voluntary. This, coupled with the withdrawal of the draft AI Liability Directive in February 2025, highlights internal discourse and signs of regulatory fatigue within the EU.

United States

In contrast, the United States’ (US) federal approach has shifted toward deregulation. Currently, the US lacks any dedicated federal AI legislation, instead relying upon existing technology-neutral frameworks such as those governing intellectual property, data protection and employment. Whilst proposals for federal legislation such as the AI Research, Innovation and Accountability Act (which calls for greater transparency, accountability and security in AI) and the Algorithmic Accountability Act (which obliges critical impact assessments for automated systems, including AI) have emerged, the current federal administration has adopted a deregulatory stance towards AI which casts doubt upon the likelihood of such legislation passing.

On 23 January 2025, the federal government issued the ‘Removing Barriers to AI Innovation’ Executive Order which mandates the reversal of the previous administration’s efforts to regulate AI, including overturning the ‘Executive Order for the Safe, Secure and Trustworthy Development and Use of AI’. Some independent state-level AI initiatives continue to operate independently however the federal government is currently seeking to impose a ten-year moratorium that will inhibit state governments from enacting or enforcing AI laws until 2035.

United Kingdom

Similarly, the United Kingdom (UK) lacks dedicated AI legislation and favours a light-touch, ‘pro-innovation’ approach, as noted in its AI Opportunities Action Plan published in January 2025, which makes little mention of regulatory discussion. Rather than introducing broad legislative measures, the UK plans to adopt ‘highly targeted regulation’ aimed at ‘safety risks posed by most powerful AI models’ out of concern that AI (over)regulation may hamper AI investment. Notably, both the US and UK declined to endorse ethical and responsible AI declarations at the AI Action Summit in February 2025, reinforcing their anti-regulatory stances towards AI.

People’s Republic of China

The People’s Republic of China (China) does not currently have unified AI regulation but does regulate specific AI use cases, including deep synthesis technology (used for ‘deep fakes’), recommendation algorithms and generative AI (through its Interim Measures). This is alongside technical standards, frameworks and guidelines issued by the National Information Security Standardisation Technical Committee (TC260). As articulated in the New Generation Artificial Intelligence Development Plan (published July 2017), China endeavours to become the world leader in AI by 2030 from both a competitive and an ethical standpoint.

Singapore

Consistent with most jurisdictions, Singapore lacks AI legislation and relies upon existing legal frameworks, supported by sectoral guidance from bodies such as the AI Verify Foundation and the Advisory Council on the Ethical Use of AI and Data. Its National AI Strategy (published in 2019 and updated in 2023) emphasises Singapore’s commitment to a responsible AI ecosystem that facilitates innovation under an ‘agile’ regulatory approach.

AI regulation in Australia: the current state of play

With the EU AI Act being an outlier, the global lack of regulatory constraints alongside the winding back of regulation in the US raises questions about the prospects of Australia introducing its own AI legislative framework.

Australia currently lacks any dedicated AI legislation, creating uncertainty for organisations on how to govern AI effectively under existing laws. However, in a proposals paper introducing mandatory guardrails for AI in high-risk settings published in September 2024, the Australian Government has contemplated introducing a risk-based legislative framework to regulate AI (mirroring the EU’s approach albeit with less risk categories).

Whilst the mandatory guardrails are currently in consultation, the Australian Government and related bodies have published further guidelines to inform best practice in using and deploying AI such as:

  • a Voluntary AI Safety Standard (Voluntary Standard) which closely aligns with the mandatory guardrails but applies to all, not just high-risk, AI systems (a revised version of the Voluntary Standard is also under consultation with submissions having closed on January 2025);

  • a national framework for the assurance, and a policy for the responsible use, of AI by government; and

  • policies published by the Office of the Australian Information Commissioner (OAIC) on privacy, the use of commercially available AI products and developing and training generative AI models.

Despite these developments, Australia’s path to comprehensive AI legislation remains uncertain, especially when considering the Federal Government’s recent calls to slash ‘thickets of regulation’ to bolster productivity and that the overregulation of AI would only stifle such ambitions. AI regulation did not feature as a key issue in the recent federal election, and neither major party addressed the proposed mandatory guardrails which casts doubt over the prospect of change, particularly amidst the shifts in global regulatory stances outlined above.

Responsible AI governance

According to EY’s ‘Bridging the sentiment gap’ report, Australians harbour significant distrust towards AI when compared with the rest of the world, with only 37% of Australians believing that the benefits of AI outweigh its risks. Financial service providers and government are amongst the groups Australians trust least to manage AI in ways that align with their best interests. This is particularly problematic given the central role these entities play in the Australian economy.

With dedicated AI legislation still some time away and Australians exhibiting scepticism towards AI, organisations must act now to implement responsible AI governance frameworks that address both legal compliance and ethical deployment. At its core, responsible AI takes principles of ethical AI such as fairness, transparency, accountability, privacy, safety and human-centred values, and converts them into actionable steps to guide the design, development, deployment and oversight of AI systems through governance, testing and risk mitigation strategies.

Some key considerations for organisations looking to embed a responsible AI framework include:

  1. Establishing clear roles and responsibilities for implementing, overseeing and monitoring AI systems. This includes considering dedicated governance forums such as an ‘AI Risk Committee’ and ensuring organisational accountability for any adverse impacts these systems may cause. If boards and organisations do not govern AI use, then ethical decisions will be made (or simply not considered at all) by engineers and coders who may not be well-positioned to adequately deal with these matters.

  2. Before putting an AI system into use, having a thorough understanding of how the system operates and its intended use cases within the organisation. This includes conducting (and regularly reviewing) responsible AI impact assessments and risk assessments early in the AI lifecycle to enable the identification of potential risks and the implementation of corresponding risk mitigation strategies. These assessments may be aligned with best-practice models such as the NIST AI Risk Management Framework or ISO/IEC 23894 which address risk mitigation in AI-specific use cases.

  3. Understanding the risks posed by ‘Shadow AI’, which refers to unauthorised AI use by employees for work-related purposes. A 2023 ISACA Poll found that 63% of Australian and New Zealand organisations observed unauthorised generative AI use by their employees, despite only 36% expressly permitting such use. Alarmingly, only 11% of organisations were reported as having a formal, comprehensive policy for generative AI. This gap exacerbates organisational and stakeholder risks, such as privacy and confidentiality breaches, especially where sensitive corporate data is inputted into AI systems (especially publicly accessible systems like ChatGPT). These risks should be considered when mapping AI use in organisations with the recognition that some technologies, which may not fit conventional perceptions of what AI is or are otherwise ‘non-obvious’ use cases, may still fall under the AI umbrella and should be accounted for.

  4. Assessing any decisions to develop or use AI from an ethical perspective. A number of existing Responsible AI policies already draw upon Australia’s AI Ethics Principles which are a set of voluntary principles designed to ensure AI systems are safe, secure and reliable. For instance, organisations may ask themselves whether the AI system benefits individuals, society and the environment and respects human rights, diversity, and individual autonomy and consider whether training data or algorithmic bias may result in unintended and negative consequences. Organisations should also consider alignment with global frameworks, such as UNESCO’s AI Ethics Recommendation, to maintain stakeholder expectations across jurisdictions.

  5. Ensuring rigorous testing prior to any AI deployment. This includes considering ‘red-teaming’ protocols which identify how the AI model could be misused and to assist in designing future mitigations or stress testing to confirm the AI system adheres to performance and operational requirements.

  6. Once deployed, ensuring AI systems are monitored and governed with a high degree of human oversight. This is particularly the case at critical decision points (‘human-in-the-loop’). Robust review, monitoring and compliance protocols will allow organisations to identify and mitigate harm early, correct course when needed, and ensure that the technology serves its intended purpose without unintended consequences. High standards of security and privacy protection are non-negotiable, particularly when noting that only 32% of Australians trust that companies adopting AI will protect their personal data (IPSOS AI Monitor Survey, 2024).

All of the above actions should be supplemented with transparent communication with customers and other relevant stakeholders to maintain clear understanding, manage expectations and create trust in how AI systems are used and governed. Details of the organisation’s governance framework may be documented in a public-facing responsible AI policy which will further assist with promoting transparency. However, to avoid claims of misleading and deceptive conduct, any processes and procedures that the organisation discloses in such a policy need to be practically implemented (in the manner they are described) and subsequently adhered to.

At a minimum, organisations must ensure that their AI governance is compliant with existing legislative frameworks such as copyright, privacy, anti-discrimination or consumer law. However, organisations should still be wary that, where consumer trust is already low, business decisions that are legal but still unethical will only deepen such distrust.

Key takeaways

In the absence of binding AI-specific legislation, organisations should not treat responsible AI as optional. A purely compliance-driven approach may satisfy legal obligations but fail to address growing consumer scepticism and evolving ethical expectations.

By embedding responsible AI principles into their operations and adopting a forward-looking approach to AI governance, organisations can foster confidence within stakeholders, benefit from the productivity gains that AI proffers, and ensure that AI use is clearly aligned with the organisation’s business objectives.


Authors

NORTH-james-highres_SMALL
James North

Head of Technology, Media and Telecommunications

WYNN POPE Phoebe SMALL
Dr Phoebe Wynn-Pope

Head of Responsible Business and ESG


Tags

Technology, Media and Telecommunications Responsible Business and ESG Board Advisory

This publication is introductory in nature. Its content is current at the date of publication. It does not constitute legal advice and should not be relied upon as such. You should always obtain legal advice based on your specific circumstances before taking any action relating to matters covered by this publication. Some information may have been obtained from external sources, and we cannot guarantee the accuracy or currency of any such information.