
By Curtis Vincent
I spend a lot of time with people who do the hard work in the “gray area” of financial recovery services (complex asset recovery, fraud investigations, and challenging debt situations). Their job is to navigate every nuance of each case with empathy, not to follow a rigid script. This was supposed to be a world safe from automation. Yet AI has entered it at a speed employees didn’t plan for and certainly didn’t choose. The systems show up fast. The explanations come later, if at all. And in that vacuum, people are left confused. They hesitate, second-guess themselves, and worry about their positions.
Resistance is the natural result of change racing ahead of understanding. When rollouts fail to give employees enough context and control, people feel unsteady and push back. I know because I’ve seen this pattern play out often. According to Business Insider, New York City public schools banned generative AI in January 2023 but reversed course by May, after teachers demanded guidance rather than prohibition (“New York City’s Public Schools Reverse Their Ban on ChatGPT—Admitting It Had Been ‘Knee-Jerk Fear’”). The episode showed how quickly the technology had outpaced the dialogue around it. Trust broke down and was only repaired when leaders slowed down to teach, set boundaries, and invite questions. It also proved that when leaders answer a trust gap with more software and fewer conversations, the gap only widens.
WHY THE REAL CHALLENGE IS HUMAN
AI adoption isn’t just a technology shift—it’s a human one. Without trust, people work around tools, second-guess outputs, and delay decisions. The real competitive advantage isn’t the algorithm—it’s the trust built alongside it.
Leaders must focus on change management, emotional intelligence, and communication. Employees need to understand why AI is being introduced, what it changes, and where human judgment still matters.
EMOTIONAL INTELLIGENCE MEETS ARTIFICIAL INTELLIGENCE
Emotional intelligence is the ability to notice and manage your emotions, read the room, and choose words and actions that build trust. For AI to work in that environment, it must be designed around those same human skills. This doesn’t mean removing judgment. It means giving people better signals and more time to apply that judgment with care.
You can start by being clear about roles. Let AI handle the sorting and presentation of relevant data while your teams decide, explain, and support. Train managers to coach tone and clarity with emotional awareness in mind. Provide everyone with simple rules for when to pause automation and ask for a human review. Be open about what data the system learns from and where people can question an output without penalty. When these expectations are clear, AI will strengthen empathy instead of dulling it.
I have observed this in practice when we partnered with Elephants Don’t Forget to deploy Clever Nelly—an adaptive AI microlearning platform—across our global workforce. The program replaces sporadic, generic courses with short, daily practice matched to each person’s gaps. It reinforces compliance and builds soft skills such as respectful language, careful listening, and clear next steps. The rhythm is light but steady, so knowledge stays current and people can apply judgment when pressure is high. Over time, teams were better prepared for difficult conversations and handled them with accuracy and care.
TURNING AI INTO A HUMAN LEARNING PARTNER
Employees are actively seeking more training and development in AI. Organizations must meet this demand with relevant, timely learning experiences tied to real work.
AI can support learning by delivering real-time feedback, personalized training, and actionable insights. When used correctly, it builds both skill and confidence without overwhelming employees.
FROM RESISTANCE TO RESILIENT WORKFORCE
People’s first worries about AI center on losing their jobs or their voices. Those fears are legitimate, and the way we teach needs to account for them. But the quickest way to drain energy from a rollout is to keep the people who do the work out of the room where the rules are set. Involving them, and showing them how their ideas shape the plan, is the fastest way to win back their enthusiasm.
Set the table with two habits that hold. Create regular office hours where developers, risk leaders, and frontline teams meet to review a small set of cases. Keep the group small enough that everyone can speak, and rotate voices so knowledge moves across locations. Then publish a short note on what was learned and what will change. People accept trade-offs when they can see the exchange.
I think about Walmart, the global retailer that introduced AI-driven inventory tooling in 2023 (“Decking the Halls with Data: How Walmart’s AI-Powered Inventory System Brightens the Holidays,” Walmart Global Tech). The first version landed hard. Store teams felt managed by a model that didn’t understand local traffic or seasonal quirks. Shrinkage and stockouts told the story on the floor. In response, leadership paused, built a feedback loop with district managers and associates, and changed the parameters that were driving poor decisions. They also set predictable schedules that balanced the tool’s suggestions with the human realities of running a store. The same technology produced steadier outcomes because the people closest to the work shaped how it operated.
The same principle applies in service environments. If you pilot a decision tool for case handling, invite a small group of agents to run real scenarios and narrate what they see. Capture where the guidance helped and where it confused. Adjust the prompts and the thresholds, then try again.
A few tight cycles like this build a system that respects local context instead of flattening it. Adoption follows because the tool feels like it belongs to the team.
Resilience grows through such repetitions. A new hire finds the right phrasing for a delicate moment because real-time guidance offered a useful nudge they could accept or ignore. A seasoned pro spots a subtle policy change during a brief daily check rather than an hour-long lecture. A manager coaches the behaviors that the analytics reveal and gives credit for the judgment that closed the loop. These small wins add up. The climate shifts from guarded to engaged. Performance follows because people feel equipped and respected, which is the ground where initiative takes root.
DESIGNING THE FUTURE OF WORK WITH HUMANITY AT THE CORE
As AI moves to everyday practice, the test is simple: Does the system make work more human, not less? Technology should stay in the service of purpose, and cultures that last will be built on trust and steady ethics, not on features alone.
Innovation holds when people can see how decisions are made and where judgment sits. Explain what the system learns from and how it arrives at a recommendation. Give employees a path to pause automation and ask for a review. Keep a regular check on bias, and fix what you find in daylight. When intent and limits are visible, confidence grows and adoption follows. To determine whether this approach works, widen the scoreboard. Keep tracking output, and add measures that tell you how people are coping. Psychological safety and belonging should rise with the trend lines, not fall. Upskilling should be visible and proactive, not an afterthought. Use these signals to steer pace, coaching, and investment so progress is real for the people doing the job.
There is a leadership habit that binds these pieces: Tell the story of the work as it is, not as you wish it to be. Share where the system fell short and what you changed. Invite a fresh round of feedback and be specific about what will happen next. This rhythm builds credibility. It also keeps the build close to reality, which is where the value is. This is the path forward. Combine data with judgment so that tools lift human strengths. Let progress be measured by results and by the way people are treated. Organizations that hold to that balance will move faster with fewer missteps, because trust reduces friction and learning compounds.
Curtis Vincent is chief human resource officer at Phillips & Cohen Associates Ltd.