The Australian Government’s Consultation on Banning ‘High-Risk’ AI
The Australian government has announced an unexpected eight-week consultation to assess whether “high-risk” artificial intelligence (AI) tools should be banned.
Other jurisdictions, including the United States, the European Union, and China, have also taken steps in recent months to understand and mitigate the risks associated with the rapid development of AI.
On June 1, Industry and Science Minister Ed Husic announced the release of two papers: a discussion paper on “Safe and Responsible AI in Australia” and a report on generative AI from the National Science and Technology Council.
This release was accompanied by an eight-week consultation that will conclude on July 26.
Seeking Feedback on Safe and Responsible AI
The government is seeking feedback on how to support the “safe and responsible use of AI.” The consultation paper asks whether voluntary approaches, such as ethical frameworks, are sufficient, whether specific regulation is needed, or whether a combination of the two is the best path forward.
One of the key questions in the consultation directly asks whether any high-risk AI applications or technologies should be completely banned and what criteria should be used to identify such AI tools.
The comprehensive discussion paper includes a draft risk matrix for AI models, on which it seeks feedback. The matrix categorizes AI in self-driving cars as “high risk,” while a generative AI tool used for purposes such as creating medical patient records is considered “medium risk.”
The discussion paper highlights both the positive and harmful uses of AI. It acknowledges the benefits of AI in the medical, engineering, and legal industries but also raises concerns about harmful applications such as deepfake tools, the creation of fake news, and instances where AI bots have encouraged self-harm.
Issues related to bias in AI models and the generation of nonsensical or false information, known as “hallucinations,” are also addressed in the discussion paper.
The paper states that AI adoption in Australia is relatively low because of weak public trust. It also references AI regulations in other jurisdictions and Italy’s temporary ban on ChatGPT.
The National Science and Technology Council report, for its part, highlights Australia’s strengths in robotics and computer vision but acknowledges that the country’s core capability in large language models and related areas is relatively weak. It also raises concerns that the concentration of generative AI resources within a small number of large, primarily US-based multinational technology companies poses potential risks to Australia.
The report further discusses global AI regulation, provides examples of generative AI models, and suggests that these models will likely impact various sectors, including banking and finance, public services, education, and the creative industries.