What if your biggest competitive asset is not how fast AI helps you work, but how well you question what it produces?

Business leaders tend to prioritize efficiency and compliance in the workplace. It’s one of the reasons why so many are drawn toward incorporating generative AI technologies into their workflows. A recent survey found 63 per cent of global IT leaders worry their companies will be left behind without AI adoption.

But in the rush to adopt AI, some organizations are overlooking the real impact it can have on workers and company culture.

Most organizational strategies focus on AI’s short-term efficiencies, such as automation, speed and cost saving. What tends to be overlooked are the impacts AI has on cognition, agency and cultural norms. AI is fundamentally restructuring not only what we know, but how we know it.

As AI becomes more integrated, it will continue to influence organizational tone, pace, communication style and decision-making norms. This is why leaders must set deliberate boundaries and consciously shape organizational culture in relation to AI integration.

Once embedded into workflows, AI influences workplace defaults: which sources appear first, what tone a memo takes and where managers set the bar for “good enough.” If people don’t set these defaults, tools like AI will instead.

As researchers who study AI, psychology, human-computer interaction and ethics, we are deeply concerned with the hidden effects and consequences of AI use.

Psychological effects of AI at work

Researchers are beginning to document a number of psychological effects associated with AI use in the workplace. These impacts expose current gaps in epistemic awareness — how we know what we know — and how those gaps can weaken emotional boundaries.

Such shifts can affect how people make decisions, calibrate trust and maintain psychological safety in AI-mediated environments.

One of the most prominent effects is known as “automation bias.” Once AI is integrated into a company’s workflow, its outputs are often internalized as authoritative sources of truth.

Because AI-generated outputs appear fluent and objective, they can be accepted uncritically, creating an inflated sense of confidence and a dangerous illusion of competence.

One recent study found that in 40 per cent of tasks, knowledge workers — those who turn information into decisions or deliverables, like writers, analysts and designers — accepted AI outputs without any scrutiny.

The erosion of self-trust

A second concern is the erosion of self-trust. Continuous engagement with AI-generated content leads workers to second-guess their instincts and over-rely on AI guidance, often without realizing it. Over time, work shifts from generating ideas to merely approving AI-generated ones, diminishing personal judgment, creativity and original authorship.

One study found that users have a tendency to follow AI advice even when it contradicts their own judgment, resulting in a decline in confidence and autonomous decision-making. Other research shows that when AI systems provide affirming feedback — even for incorrect answers — users become more confident in their decisions, which can distort their judgment.

Workers can end up deferring to AI as an authority despite its lack of lived experience, moral reasoning or contextual understanding. Productivity may appear higher in the short term, but the quality of decisions, self-trust and ethical oversight may ultimately suffer.

Emerging evidence also points to neurological effects of over-reliance on AI. One recent study tracked professionals’ brain activity over four months and found that ChatGPT users exhibited 55 per cent less neural connectivity than those working unassisted. They also struggled to remember essays they had co-authored only moments earlier and showed reduced creative engagement.

So what can leaders and managers do about it?

What leaders and managers can do

Resilience has become something of a corporate buzzword, but genuine resilience can help organizations adapt to AI.

Resilient organizations teach employees to effectively collaborate with AI without over-relying on its outputs. This requires systematic training in interpretive and critical skills to build balanced and ethical human-AI collaboration.

Organizations that value critique over passive acceptance will think more critically, adapt knowledge more effectively and build stronger ethical capacity. One way to achieve this is by shifting from a growth-oriented mindset to an adaptive one, which, practically speaking, means workplaces should do the following:

  1. Train people to separate fluency from accuracy and to ask where information comes from rather than passively consuming it. With better epistemic awareness, workers become active interpreters who understand what an AI tool is saying, as well as how and why it’s saying it.

  2. Teach people to monitor their thinking processes and question knowledge sources. A recent study showed professionals with strong metacognitive practices, like planning, self-monitoring and prompt revision, achieved significantly higher creativity when using AI tools, while others saw no benefit. That means metacognition could be the “missing link” for productive LLM use.

  3. Avoid a one-size-fits-all approach and consider levels of automation by task stage. AI tool developers should be encouraged to define clear roles for when the model drafts or analyzes, when the human leads and when verification is mandatory. Consider adding AI use to responsibility and accountability charts.

  4. Create workplace cultures that encourage workers to question AI outputs, track those challenges as quality signals and budget time for verification. Workplaces should publish style norms for AI-assisted writing, set confidence thresholds and evidence requirements by function, and specify who signs off at each risk level.

  5. Hold quarterly “drift reviews” to spot shifts in tone, reliance or bias before they calcify into organizational culture.

Efficiency will not decide the winners

As we are starting to see, the drive for efficiency will not decide which firms are most successful; the ability to interpret and critically assess AI outputs will.

The companies that pair speed with skepticism and protect judgment as a first-class asset will handle volatility better than those that treat AI as an autopilot. Speed may get you to the next decision, but judgment keeps you in business.

Ethical intelligence in organizations requires an ongoing investment in epistemic awareness, interpretive skill, psychological safety and active value-driven design.

Companies capable of balancing technological innovation with critical thinking and deep ethical understanding will be the ones to thrive in the AI era.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Jordan Loewen-Colón, Queen's University, Ontario and Mel Sellick, Arizona State University

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.