
The EU AI Act: a possible direction for Australian AI regulation?

The Australian Government is now calling for submissions from industry bodies to assist in formulating regulation on the use and deployment of AI in Australia.

As we mentioned in our recent Corrs insight article, the Australian Government is developing its policy on the regulation of artificial intelligence (AI) and has recently published its Safe and Responsible AI in Australia discussion paper.

In light of this background and the widespread discussions on how AI should be regulated in Australia, we examine the recent developments on the EU’s proposed Artificial Intelligence Act (AI Act). The EU is seen as leading the way internationally in the regulation of AI and may well strongly influence the regulation of AI in Australia.

Overview of EU AI Act

  • At a high level, the AI Act will create obligations for both providers and users of AI systems, with the level of risk of an AI system determining the restrictiveness of the obligations.

  • Following two years of consideration, the European Parliament adopted its negotiating position on the AI Act on 14 June 2023 (499 votes in favour, 28 against and 93 abstentions). This follows the Council of the EU determining its position in December 2022.

  • The penalties for failing to comply with the AI Act may be significant: the European Parliament has proposed a maximum fine of the greater of €40 million (approximately A$64 million) or 7% of a company’s annual global turnover.

Prohibited AI systems

The AI Act will ban AI systems with an unacceptable level of risk to people’s safety. In the AI Act, the European Parliament expanded the list of ‘unacceptable risk AI systems’ (i.e. banned systems) to include the following:

  • social scoring (classifying people based on behaviour, socio-economic status or personal characteristics);

  • cognitive behavioural manipulation of people or specific vulnerable groups (e.g. voice-activated toys that encourage dangerous behaviour in children);

  • real-time facial recognition in publicly accessible spaces. While delayed facial recognition will also generally be banned, there is a limited exception for law enforcement for the prosecution of serious crimes (with judicial authorisation);

  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;

  • biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion or political orientation);

  • predictive policing systems (based on profiling, location or past criminal behaviour); and

  • emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.

Generative AI regulations

The European Parliament would require generative AI (such as ChatGPT) to comply with transparency requirements including:

  • disclosing that the content was generated by AI;

  • designing the model to prevent it from generating illegal content; and

  • publishing summaries of copyrighted data used for training.

Further, generative AI developers would be required to submit their systems for review before releasing them commercially.

The AI Act will also classify AI systems that negatively affect safety or fundamental rights as ‘high risk’ and require each system to be assessed both before it is put on the market and throughout its lifecycle.

While an earlier draft of the AI Act would have classified generative AI systems as inherently high risk, the European Parliament’s adopted position is significantly less onerous (following lobbying by OpenAI).

Supporting innovation and protecting individuals’ rights

To boost AI innovation and support small and medium businesses, the European Parliament added exemptions to its draft of the AI Act for research activities and AI components provided under open-source licences. It also provided for regulatory sandboxes: controlled real-world environments, established by public authorities, in which AI can be tested before it is deployed.

The European Parliament’s position would also strengthen individuals’ right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly impact their fundamental rights.

What’s next?

The European Parliament, European Commission and the Council of the EU must now all agree on the contents of the AI Act so it can become law.

Trilogue negotiations between these parties have already begun with the aim of reaching an agreement on the legislation by the end of this year.


Authors

Kit Lee

Associate

Mark Salamy

Law Graduate


Tags

Technology, Media and Telecommunications Global Regulation

This publication is introductory in nature. Its content is current at the date of publication. It does not constitute legal advice and should not be relied upon as such. You should always obtain legal advice based on your specific circumstances before taking any action relating to matters covered by this publication. Some information may have been obtained from external sources, and we cannot guarantee the accuracy or currency of any such information.