Government plans for ‘disinformation deluge’ when artificial intelligence hits stride

A world run by robots is often a top concern when discussing artificial intelligence, but the government is not worried about a takeover – it’s worried about a disinformation deluge.

Jan 17, 2024, updated May 22, 2025
Minister for Industry Ed Husic speaks to the media at a press conference at Parliament House in Canberra (AAP Image/Mick Tsikas)

The federal government is considering laws surrounding the use of AI to ensure stronger protections, particularly in high-risk industries, as the technology rapidly develops.

Generative AI, which creates text, images or other media by drawing from immense amounts of data, often taken from creators without consent, has taken the spotlight due to the threat it poses to copyright and creative jobs.

While Industry Minister Ed Husic says he is a firm believer in the value of AI and does not want to stifle innovation, the emerging technology presents a massive challenge that the government must confront.

“The biggest thing that concerns me around generative AI is just the huge explosion of synthetic data. The way generative AI can just create stuff that you think is real and organically developed but it’s come out of a generative model,” he told reporters in Canberra on Wednesday.

“The big thing I’m concerned about is not that the robots take over, but the disinformation does.”

Mr Husic warned that AI-generated media could be picked up and quickly spread through social media, potentially escalating into something that triggers a government response.

“We all recognise the threats, the perils that present themselves if a government response is based on something that’s not legitimate,” he said.

As such, the government’s interim response to industry consultation on the responsible use of AI, released on Wednesday, suggests introducing measures like voluntary labelling and watermarking of AI-generated material.

It is also considering safeguards for the use of AI in potentially high-risk settings, such as critical infrastructure like water and electricity, health and law enforcement. These could include measures regulating how products are tested before and after deployment, along with greater transparency about how they are designed and use data.

However, talks are focused on developing a voluntary safety standard before introducing mandatory safeguards.

Mr Husic said it was critical to put protections in place, noting that the “days of self-regulation are gone”.

The interim response paper said while many uses of AI did not present risks that required oversight, there were still significant areas of concern.


“Existing laws do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur,” the report said.

More than 500 groups responded to the government’s discussion paper on AI.

The Australian Information Industry Association welcomed the discussion paper, saying the government needed to work with international frameworks to ensure Australia did not get left behind.

A report from the association said 34 per cent of Australians were willing to trust AI technology, with 71 per cent believing guardrails needed to be put in place by government.

Chief executive Simon Bush said the government needed to take advantage of the growth of AI.

“The regulation of AI will be seen as a success by industry if it builds not only societal trust in the adoption and use of AI by its citizens and businesses, but also that it fosters investment and growth in the Australian AI sector,” he said.

SUGGESTIONS ON REGULATING AI FROM THE GOVERNMENT’S INTERIM RESPONSE TO INDUSTRY CONSULTATION

* Voluntary labelling and watermarking of AI-generated material

* Safeguards for using AI in potentially high-risk industries such as critical infrastructure like water and electricity, health and law enforcement

* Developing a voluntary safety standard for AI before implementing mandatory safeguards

* Other potential safeguards include improving transparency on how AI products are tested before and after use and the way they are designed and use data

* Measures to bolster accountability by requiring training for developers and deployers, and holding organisations responsible for any safety risks posed by AI.
