
Generative AI flagged as high-risk as Senate inquiry demands stricter laws

A special parliamentary inquiry into the use of AI in Australia has called for dedicated legislation to regulate high-risk technologies, including generative AI products such as ChatGPT and Google Gemini.
Tegan Jones

Independent Senator David Pocock. Source: AAP Image/Mick Tsikas


The 222-page report emphasises the necessity of protecting democratic processes, safeguarding workers’ rights, and ensuring fair compensation for creators while fostering trust and innovation within the AI sector. The inquiry, chaired by Labor Senator Tony Sheldon, ran for eight months and presented 13 key recommendations.

The report notes trust in AI remains lower in Australia compared to other countries, which has slowed adoption rates. It argues strong safeguards could help build public confidence while allowing innovation to thrive.

Addressing high-risk AI and worker protections

The committee identified general-purpose AI models and workplace surveillance tools as “high risk,” advocating for mandatory transparency and accountability measures. Independent Senator David Pocock highlighted significant risks to workers, stating:

Workers are facing an era of significant disruption. Legislation to mandate how AI is used in high-risk settings needs to be an urgent priority for government.

The inquiry recommended updating Australia's work health and safety laws to encompass AI-related risks and implementing training programmes to assist workers whose roles may be affected by automation.

Safeguarding democracy

The report highlighted AI's potential to spread disinformation and undermine public trust during elections. Drawing on international examples, the committee expressed concerns about the use of AI-generated deepfakes and other tools to mislead voters.

“AI and mis- and disinformation are threatening democracies around the world. Australia is not immune, but we are clearly underprepared,” Independent Senator David Pocock said.

The committee recommended updates to electoral legislation, including prohibitions on the production or dissemination of AI-generated political deepfakes during election periods.

The inquiry accused tech companies of committing “unprecedented theft” by using Australian copyrighted materials to train AI models without permission or compensation.

It recommended amending the Copyright Act to mandate transparency regarding training datasets and establishing mechanisms for creators to receive royalties for their work.

“There is no part of the workforce more acutely and urgently at risk of the impacts of unregulated AI disruption than the more than one million people working in the creative industries and related supply chains,” Senator Pocock said.

The report also criticised tech giants for their lack of cooperation during the inquiry, stating: "The notion that exploiting Australian content is for the greater good is farcical".

Building sovereign AI capabilities

The report encouraged Australia to develop sovereign AI capabilities to reduce reliance on foreign technologies and recommended focusing on tailored solutions in areas like healthcare and climate science.

Senator Pocock supported the establishment of a National AI Safety Centre.

“We need to ensure that Australia remains at the forefront of AI research and development while prioritising safety and ethics,” he said.

Environmental concerns with AI

The environmental effects of AI were also highlighted, with the report recommending updates to building standards for data centres, stronger incentives for renewable energy use, as well as benchmarks for energy and water efficiency.

The Greens called for a "comprehensive roadmap to address environmental impacts of AI", including standards for hardware reuse and recycling, arguing these would be essential to managing the technology's footprint.

Unclear timelines for implementation

The recommendations have garnered support from creative and human rights groups but faced pushback from some industry stakeholders who warn that over-regulation could stifle innovation.

While the committee urged prompt government action, the timeline for implementation remains uncertain.

“With the growing uptake of AI, legislation to mandate how AI is used in high-risk settings needs to be an urgent priority for government,” Senator Pocock said.

Never miss a story: sign up to SmartCompany's free daily newsletter and find our best stories on LinkedIn.