
Brave New World or the end of it? Regulating artificial intelligence begins with understanding the real risks

“AI is a fundamental existential risk for human civilization.” Elon Musk (2017)[1]

When Tesla’s CEO, Elon Musk, addressed the US National Governors Association (NGA) summer meeting, it was his comments on AI that drew intense media scrutiny. He described AI as a fundamental risk to the existence of civilisation – the “scariest problem” – and suggested robots will ultimately be able to do “everything, bar nothing” and do it “better than us”.

The concern that he and others have about AI isn’t about programs that might, for example, better crunch or compare data or operate machinery or devices. The concern relates to what Musk refers to as deep intelligence. Deep intelligence is a difficult concept to grasp. Essentially, it refers to a point where AI is more intelligent than the smartest person on earth and can learn in an unstructured or unsupervised manner. Musk has likened it to a situation where humanity is being visited by a super-intelligent alien – only the AI is the super-intelligent alien.

An AI with deep intelligence will have access to enormous amounts of data in real time. It will be able to manipulate that data and possibly even falsify or distort it – potentially at incredible speed. Most importantly, it will have the capacity to manipulate people and institutions.

Musk illustrated his concern to the governors with a hypothetical Wag the Dog scenario involving an AI tasked with maximising the value of a stock portfolio. One way the AI might achieve this goal is to go long on defence stocks and short on consumer stocks and then, through misinformation, encourage sabre-rattling or even war.

With deep intelligence, the outcomes may not always be foreseeable and may be far-reaching, especially where the AI is wholly or partly autonomous, self-learning or otherwise uses an intelligence construct capable of dynamic and potentially unpredictable self-development and growth.

Of course, AI can’t and shouldn’t be put back in the bottle – the potential benefits to humanity are too enormous and, in any event, global research is too widely dispersed. But alongside those potential benefits come risks.

To manage the risks, society must appreciate them and take proportionate action before deep intelligence is developed. As the reasoning goes, once bad things happen as a result of deep intelligence, it may already be too late.

Governments are already starting to take action. Earlier this year the European Parliament passed a detailed and relatively comprehensive resolution on robotics and AI, with recommendations to progress a framework for robotics regulation.[2] More recently, the UK House of Lords Select Committee on Artificial Intelligence published its call for evidence and submissions on a wide range of matters concerning AI, including ‘pragmatic solutions to the issues’.

So, is talk of civilisation-ending AI irresponsible, alarmist and counterproductive? As with any major technological breakthrough, there are two sides to the coin. There are at least two opposing views on the risk of civilisation-ending AI, personified by the recent exchange between Elon Musk and Mark Zuckerberg on the subject.

However, the simple reality is that it may be impossible to reap the potential benefits without accepting some risk. Surely we must ask ourselves what is the best way to maximise the benefits, and the speed at which they might be realised, while also taking a precautionary approach to minimise risks and avoid destructive scenarios.

Regulation may be one tool that assists, but the right balance would need to be struck to obtain the benefits of regulation while minimising any potential disadvantages. In any event, while it may help us better understand and reduce risks, regulation won’t magically remove the threat of civilisation-ending AI.

AI is global

The development of AI is global, and any hypothetical threat in a digital world would not respect national borders. Ultimately, if regulation were seen as a solution to mitigate the risk of civilisation-ending AI, it would need to occur at a global level. Realistically, there is little prospect of global regulation any time soon: the global track record on coordinated action in the absence of an agreed clear and present danger is discouraging.

Effectiveness of regulation

Even if regulation were implemented, it may not eradicate the particular conduct or outcomes targeted.

After all, AI research essentially occurs in a ubiquitous digital medium, typically without geographically limited resource and infrastructure requirements, so it is largely unconstrained in terms of where it can be conducted. As a result, some jurisdictions may be more relaxed than others in implementing any regulation, leading to regulatory failure.

Also, realistically, some countries and corporations may not always play by the rules when it suits their purposes. Lines of AI research inconsistent with any regulatory approach might be pursued even in jurisdictions where that approach has been implemented. Regulation cannot remove all risk, and it would be irrational to think otherwise.

Reducing the risk of ‘thintelligence’

“They don’t have intelligence. They have what I call ‘thintelligence’. They see the immediate situation. They think narrowly and they call it ‘being focused’. They don’t see the surround. They don’t see the consequences.” Michael Crichton, Author

Sensibly, faced with the potential for civilisation-ending AI, the best outcome is to avoid developing that AI in the first place. This is not a call to stop AI research. Any such call would, by extension, be a call to give up the potential benefits AI has to offer, and society sorely needs those benefits. What is needed, however, is a sensible approach to mitigating the risk while we pursue the benefits.

A prudent approach would involve:

  • ex-ante measures – to understand, guide and implement an ethics and values-based framework for AI research to mitigate the risk of civilisation-ending AI; and

  • ex-post measures – to understand, design and implement countermeasures that operate should the ex-ante measures fail to prevent the creation of civilisation-ending AI.

Elon Musk indicated to the NGA that in respect of regulation: “The first order of business would be to try to learn as much as possible, to understand the nature of the issues, to look closely at the progress being made and the remarkable achievements of artificial intelligence.” This may involve assessing the state of AI research (potentially around the world) and its immediate, medium and longer term research objectives and trajectory.

To be clear, regulation could be pursued by the private sector (through self-regulation) or by government (through imposed regulation). Neither alone is likely to prove truly adequate.

Private sector AI researchers and developers may have competing commercial priorities or take an opt-in/opt-out approach. Government, on the other hand, may lack the technical understanding or capability to achieve efficient and timely regulatory outcomes in the dynamic and complex environment of AI research.

Especially given the lack of certainty as to precisely what is being regulated and how, it appears that any regulatory approach may need to draw on the regulatory strengths of both government and AI researchers and developers.

A case for AI ethics committees?

Little has been said about what concrete form of regulation would be appropriate to address the risk of civilisation-ending AI, or how it would operate.

Perhaps parallels might be drawn with human medical research. The ethical structures governing that research – established to monitor and manage human medical experimentation – are commonly understood.

While the risk of civilisation-ending AI and human medical research involve different issues, the ethics process governing human medical research might still be a helpful model to prompt policy discussion, or one that, with modification, might form the basis for a proportionate regulatory approach to understanding and managing the risk of civilisation-ending AI.

Such an approach might also help to subtly guide the path of research. Stephen Hawking, for instance, has called for a shift from undirected AI to beneficial AI: not simply developing an AI and letting it run (doing whatever it will) because it is, after all, smarter than us, but developing AI directed to beneficial ends and, so far as possible, without inadvertently imposing our flaws on it.

An AI research ethics committee approach might focus more on ‘should we?’ than on ‘could we?’. It may also give the AI industry greater transparency to society and government, which may benefit public acceptance of AI (especially deep intelligence) and the responsibility and accountability of AI researchers.

Importantly, it could be a dynamic and flexible form of regulation, based on reason rather than fear. It would be less restrictive and intrusive than mandatory one-size-fits-all requirements or a government-driven Big Brother approach.

However, as with unsanctioned human medical research, there may need to be consequences for unsanctioned or non-compliant AI research and, in some circumstances, potential for investigations, audits or other accountability mechanisms.

“Everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.” Stephen Hawking

There are always risks. With the advent of nuclear technology, the world has not become a radioactive wasteland. The Large Hadron Collider is yet to suck the planet into a black hole and rock and roll has not (so far) destroyed civilisation as we know it. The key is to objectively identify and acknowledge potential risks (without sensationalising them) and then to take sensible steps to understand and address them.

Our power to develop AI and deep intelligence comes with responsibility: realising the benefits for society, minimising risks and protecting society from disasters. The greater the benefits and risks, the greater that responsibility. As the American journalist and humorist Robert Quillen put it: “Progress always involves risk. You can’t steal second and keep one foot on first base.”


[2] For our analysis of that resolution, see our article: Preparing for life with robots: How will they be regulated in Australia? The European Parliament resolution itself is also available online.


Authors

Peter Schmidt

Senior Associate


Tags

Technology, Media and Telecommunications

This publication is introductory in nature. Its content is current at the date of publication. It does not constitute legal advice and should not be relied upon as such. You should always obtain legal advice based on your specific circumstances before taking any action relating to matters covered by this publication. Some information may have been obtained from external sources, and we cannot guarantee the accuracy or currency of any such information.


Key Contact

James North

Head of Technology, Media and Telecommunications

+61 2 9210 6734

+61 405 223 691

[email protected]
