Australian workers are secretly using generative artificial intelligence (Gen AI) tools without their boss's knowledge or approval, a new report shows.

The “Our Gen AI Transition: Implications for Work and Skills” report from the federal government’s Jobs and Skills Australia points to several studies showing that between 21% and 27% of workers (particularly in white-collar industries) use AI behind their manager’s back.

Why do some people still hide it? The report says workers commonly reported that they:

  • “feel that using AI is cheating”
  • have a “fear of being seen as lazy”
  • and a “fear of being seen as less competent”.

What’s most striking is that this rise in unapproved “shadow use” of AI is happening even as the federal treasurer and Productivity Commission urge Australians to make the most of AI.

The report’s findings highlight gaps in how we govern AI use at work, leaving workers and employers in the dark about the right thing to do.

As I’ve seen in my work – both as a legal researcher looking at AI governance and as a practising lawyer – there are some jobs where the rules for using AI at work change as soon as you cross a state border within Australia.

Risks and benefits of AI ‘shadow use’

The 124-page Jobs and Skills Australia report covers many issues, including early and uneven adoption of AI, how AI could help in future work and how it could affect job availability.

Among its most interesting findings is that some workers are using AI in secret, which is not always a bad thing. The report found those using AI in the shadows are sometimes hidden leaders, “driving bottom-up innovation in some sectors”.

However, it also comes with serious risks.

Worker-led ‘shadow use’ is an important part of adoption to date. A significant portion of employees are using Gen AI tools independently, often without employer oversight, indicating grassroots enthusiasm but also raising governance and risk concerns.

The report recommends harnessing this early adoption and experimentation, but warns:

In the absence of clear governance, shadow use may proliferate. This informal experimentation, while a source of innovation, can also fragment practices that are hard to scale or integrate later. It also increases risks around data security, accountability and compliance, and inconsistent outcomes.

Real-world risks from AI failures

The report calls for national stewardship of Australia’s Gen AI transition through a coordinated national framework, centralised capability, and a whole-of-population boost in digital and AI skills.

This mirrors my own research, showing Australia’s AI legal framework has blind spots, and our systems of knowledge, from law to legal reporting, need a fundamental rethink.

Even in professions where clearer rules have emerged, they have too often come only after serious failures.

In Victoria, a child protection worker entered sensitive details into ChatGPT about a court case concerning sexual offences against a young child. The Victorian information commissioner has banned the state’s child protection staff from using AI tools until November 2026.

Lawyers have also been found to misuse AI, from the United States and United Kingdom to Australia.

Yet another example – involving misleading information created by AI for a Melbourne murder case – was reported just yesterday.

But even for lawyers, the rules are patchy and differ from state to state. (The Federal Court is among those still developing its rules.)

For example, a lawyer in New South Wales is now clearly not allowed to use AI to generate the content of an affidavit, including “altering, embellishing, strengthening, diluting or rephrasing a deponent’s evidence”.

However, no other state or territory has adopted this position as clearly.


Clearer rules at work and as a nation

Right now, using AI at work lies in a governance grey zone. Most organisations are running without clear policies, risk assessments or legal safeguards. Even if everyone’s doing it, the first one caught out will face the consequences.

In my view, national uniform legislation for AI would be preferable. After all, the AI technology we’re using is the same, whether you’re in New South Wales or the Northern Territory – and AI knows no physical borders. But that’s not looking likely yet.

If employers don’t want workers using AI in secret, what can they do? If there are obvious risks, start by giving workers clearer policies and training.

One example is what the legal profession is doing now (in some states) to give clear, written guidance. While it’s not perfect, it’s a step in the right direction.

But it’s still arguably not good enough, especially because the rules aren’t the same nationally.

We need more proactive national AI governance – with clearer policies, training, ethical guidelines, a risk-based approach and compliance monitoring – to clarify the position for both workers and employers.

Without a national AI governance policy, employers are being left to navigate a fragmented and inconsistent regulatory minefield, courting breaches at every turn.

Meanwhile, the very workers who could be at the forefront of our AI transformation may be driven to use AI in secret, fearing they will be judged as lazy cheats.

This article is republished from The Conversation. It was written by: Guzyal Hill, The University of Melbourne.


Guzyal Hill is a practising lawyer, but wrote this article in her role as a researcher working on AI governance and national uniform legislation.