Your Biggest AI Innovators Might Be Breaking the Rules

Published: Apr 13, 2026


By AMA Staff

A fascinating new dynamic is taking shape in the world of AI. New data from AMA’s latest AI Survey reveals that as leaders get better at managing AI, employees’ trust in them is soaring. But that trust comes with a twist: employees are also taking matters into their own hands, using unsanctioned AI tools at a rapidly growing rate.

The Trust Dividend Is Paying Off

First, the good news. Efforts to build governance and training are working. According to the AMA survey of 1,365 professionals, employee trust in managers to use AI fairly and transparently has more than doubled in three years, jumping from 39% to an impressive 81%. This shows that when employees see leaders making a real effort to guide AI adoption responsibly, they respond with confidence.

But There's Another Side to the Story

That growing trust has not produced compliance; it has produced empowerment. The survey also found that the use of AI without a centralized approach, or "Shadow AI," has surged from 44% to 65% in the last year alone.

Individual contributors are on the front lines, experimenting with new tools and applying AI in ways they find useful for their immediate workflows. Feeling the pressure to keep pace with technology, they see AI as an opportunity to innovate and are not waiting for official permission to do so.

Why Is This Happening?

Employees want to use AI. They see the benefits and want to push the limits of what’s possible. They are looking for guardrails, but within those guardrails, they want the freedom to explore. This presents leaders with a choice: try to lock everything down, or find a way to harness this powerful, bottom-up innovation.

What to Do Next

Suppressing independent use is a losing battle. The organizations that thrive will be those that channel this energy productively.

  • Actively Enable: Your goal should be to strengthen governance while also creating clear pathways for employees to bring their ideas and use cases forward. An effective governance policy should protect the organization without stifling innovation.
  • Listen to Your Early Adopters: The employees using unsanctioned tools are your scouts, testing the terrain ahead of official policy. They have direct insight into what works. Create channels to listen to them and learn from their experiments.
  • Start a Reverse Mentoring Program: The AMA data points to a powerful opportunity. Leaders have the strategic view, while individual contributors have the deep, hands-on user experience. Pair them up. Let your early AI adopters mentor senior leaders to help close blind spots and ground strategy in reality. Explore mentorship program resources to get started.
  • Provide Clear Guardrails: Employees are not asking for a free-for-all. They want to know the rules of the road. Clear, simple policies about data privacy, security and ethical use give them the confidence to innovate safely.

Frequently Asked Questions (FAQs)

Why is independent or "Shadow AI" use growing if trust in leadership is up?

According to AMA's latest AI Survey, employees feel empowered by leadership's new focus on AI. They trust their managers more, but they are also moving faster than official policy allows, adopting tools they find helpful for their daily work.

Is "Shadow AI" a bad thing?

It carries risks, like data exposure. But it's also a powerful sign of employee-led innovation. The challenge for leaders is to manage the risk while harnessing the innovative spirit.

What is reverse mentoring for AI?

It's a program where employees with deep, practical AI tool experience (often individual contributors) mentor senior leaders. This helps leadership stay current and avoid strategic blind spots.

How much has trust in managers grown regarding AI?

The AMA data shows employee trust in managers to use AI fairly has surged from 39% to 81% over the last three years, correlating with better governance and training.