There’s a weird split in AI right now:
- Headlines: “AI is transforming customer experience.”
- Actual customers: “This bot is useless, let me talk to a human.”
Fresh data is pretty brutal:
- Qualtrics’ 2026 Consumer Experience Trends report: nearly 1 in 5 people who used AI for customer service say they got no benefit at all. That’s a failure rate almost 4× higher than AI use in other areas.
- Verint’s State of Digital CX 2024 and follow-up coverage: most customers have had a bad chatbot or IVR experience, often because they couldn’t reach a live agent. Inability to escalate is one of the top sources of service frustration.
- KPMG’s 2024-25 Citizen Experience Excellence work: AI should only be deployed where use cases are prioritized by their impact on experience, and where digital channels provide clear signposting and ways to escalate to a human.
- CMSWire’s 2025 service trends: customers still want to speak to humans; over-reliance on AI creates “doom loops” where issues never fully resolve; the best outcomes come when people and tech are used in unison.
- Forbes (Nov 2025): AI customer experience is booming in investment but failing consumers, lagging other AI domains because companies are chasing savings faster than they’re fixing broken journeys and governance.
So yes, AI in CX is exploding.
But a lot of that explosion feels like shrapnel to the customer.
And that’s exactly where the white space is for a human-centric, AI-powered CX playbook.
1. The Stat No One Wants to Own: AI Support Fails 4× More Than Other AI
Let’s sit with that Qualtrics number for a second:
“Nearly one in five consumers who have used AI for customer service saw no benefits from the experience. That’s a failure rate almost four times higher than for AI use in general.”
Why?
Qualtrics’ own commentary (and the CX analysts amplifying it) call out a simple pattern:
- Companies are using AI to cut costs, not solve problems.
- Customers feel the difference immediately.
When AI is deployed that way, you get:
- Bots that don’t understand or resolve real issues
- No memory across channels
- No clean escape hatch to a human
- A vague sense that the company has put a wall of automation between itself and its customers
Result: customers don’t experience AI as “wow, this is helpful.”
They experience it as “great, one more thing between me and a human.”
2. The Over-Automation Trap (And How It Shows Up)
Across Verint, KPMG, CMSWire, and others, the failure modes are surprisingly consistent.
1) No easy path to a human
Verint’s 2024 State of Digital CX and Customer Experience Dive coverage highlight:
- The majority of customers have had at least one bad chatbot or IVR experience in the past year.
- The inability to switch from self-service to a live agent is one of the top reasons.
KPMG’s Citizen Experience report literally calls out as a best practice:
- “Clear signposting in digital channels with ways to escalate and speak to a human when needed.”
Over-automated orgs do the opposite:
- Hide the “talk to a person” option
- Force customers through long IVR trees
- Treat escalation like a failure instead of a design feature
2) AI deployed where journeys are already broken
CMSWire’s “AI isn’t failing customer experience, companies are failing AI” makes the point bluntly:
Poor AI strategy magnifies broken processes and disconnects customers from real value.
If your:
- Policies are confusing
- Knowledge base is outdated
- Handoffs between teams are messy
…then AI will reflect that mess back at customers, faster and at scale.
This is where “doom loops” come from:
- Customer explains the problem
- Bot answers the wrong question
- Customer rephrases
- Bot gives another partial or irrelevant answer
- Customer finally gives up or explodes on the next human they reach
3) AI KPIs are efficiency-only, not outcome-based
When success is defined as:
- “Deflect X% of contacts”
- “Reduce handle time by Y%”
…you’ll hit the number by:
- Holding customers in self-service as long as possible
- Cutting corners on resolution quality
- Discouraging escalations
Qualtrics + CMSWire’s coverage of the 4× failure rate is basically a warning label on that approach.
3. KPMG & Qualtrics: The Better Pattern Is Hiding in Plain Sight
The good news: the same research that calls out the failures also sketches the alternative.
KPMG’s citizen-first AI deployment rules
The 2024-25 Citizen Experience Excellence work recommends AI only when:
- Use cases are prioritized against their likely impact on experience (not just cost).
- Digital channels offer:
- Clear signposting (what this channel can/can’t do)
- Easy escalation to humans when needed
Translated out of public-sector language, that’s:
- Don’t start with “Where can we cut headcount?”
- Start with “Where are we currently wasting customers’ time?”
- Use AI where it:
- Removes friction
- Speeds up simple tasks
- Sets humans up to win on complex ones
Qualtrics’ prescription in one line
Qualtrics’ CX team is very explicit about what works:
- Let AI handle simple, transactional requests where it can reliably help.
- Use AI to prep and coach human agents for more complex problems.
That’s it.
Not “replace humans.”
Restructure work so humans are spending more time where they actually create value.
4. CMSWire & Forbes: Cost-First AI Creates the Moat You Can Take
Two more threads you can lean on in your narrative:
CMSWire: AI alone won’t solve your service challenges
Multiple CMSWire pieces hammer the same theme:
- Automation drives efficiency, but without the human touch, CX falls flat.
- Over-reliance on AI creates loops where issues never fully resolve.
- The best-performing orgs use human-AI hybrid teams, where:
- AI tackles routine tasks and pattern-spotting
- Humans handle nuance, emotion, and accountability
Forbes: AI CX is booming, but failing consumers
Dan Gingiss’s November 2025 Forbes piece summarizes a new global survey this way:
- AI CX investment is booming.
- But AI CX is lagging other AI uses on usefulness and satisfaction.
- Why? Because companies are:
- Deploying AI to save money faster than they fix broken journeys
- Underinvesting in:
- Strategy
- Governance
- Human oversight
The implication for you is huge:
While many companies train their customers to resent their AI, you can train your customers to trust yours.
That trust gap is the moat.
5. The Two Playbooks (And Which One You’re Running)
At this point, you can almost divide companies into two buckets.
Playbook A: Headcount-first automation
- Goal: “Reduce support cost by X% with AI.”
- Design choices:
- Hide human contact paths
- Measure success in deflections and shorter calls
- Deploy AI into broken processes without fixing root causes
- Typical customer outcomes:
- More rage (Verint’s “bad chatbot/IVR” bucket)
- More distrust (KPMG’s global AI trust work shows adoption rising but trust lagging)
- More churn (Verint’s report: 70% of customers are at risk of leaving due to poor CX)
This is the group feeding the “AI customer service fails 4× more” stats.
Playbook B: Human-centric CX with AI under the hood
- Goal: “Increase resolution, satisfaction, and net revenue retention (NRR) while lowering cost to serve.”
- Design choices:
- Start with journey mapping and failure analysis
- Use AI where it clearly helps:
- Self-service on simple stuff
- Insight, prep, and drafting for humans on complex stuff
- Make escalation easy and explicit
- Let CX/Support/Success leaders own the AI service strategy
- Typical customer outcomes:
- Faster paths to resolution on simple tasks
- Better (not worse) human experiences on complex tasks
- Higher trust and stickiness, even as more AI gets introduced
This is where the moat lives.
6. Quick Self-Check: Are You Over-Automating?
Run this against your own Support/CX setup:
- Escalation design
- Healthy: Every AI or IVR flow has a clear, fast path to a human.
- Warning sign: Customers have to fight the bot or the phone tree to reach a person.
- AI success metrics
- Healthy: We track resolution, CSAT, and churn alongside efficiency.
- Warning sign: We mostly celebrate deflection and cost savings.
- Use case selection
- Healthy: We prioritize AI use cases by experience impact (speed, clarity, effort).
- Warning sign: We prioritize the areas with the biggest apparent headcount savings.
- Role design
- Healthy: AI is explicitly there to prepare and support agents/CSMs, not compete with them.
- Warning sign: AI and humans both handle everything, depending on who picks it up.
- Customer feedback
- Healthy: We have fresh data on AI touchpoints and tweak based on customer feedback.
- Warning sign: The only AI feedback we see is complaints bubbling up informally.
If you’re mostly landing on the warning signs, you’re in the “over-automated and under-designed” camp these reports are describing.
The upside: that means there’s a lot of value still on the table.
Where The Scale Crew Fits In
This is the gap The Scale Crew is built to help close.
With our expanded focus across:
- Customer Experience & Ops
- Customer Support
- Customer Success
- AI Readiness & Transformation
…we work with startups, SMBs, and mid-market firms who:
- Are tired of “AI = cheaper support” stories that annoy customers
- Don’t want to fire the very teams that create their moat
- Want to use AI to raise their service game, not just lower their cost line
We come in to help you:
- Map where your current support is creating loyalty vs creating friction
- Decide where AI should actually live in those journeys
- Redesign workflows so:
- Bots handle what they’re good at
- Humans handle what only humans can do
- CX/Support/Success leaders own the AI + human service strategy
We don’t sell “just add AI.”
We help you stop training customers to hate your automation, and start using AI to make your human-centric CX the thing competitors can’t copy.
If You Suspect You’re Over-Automating
We’ll help you see:
- Whether you’re in the 4× failure camp
- How much white space you have for a human-centric, AI-powered CX approach
- What it would take to move from “AI that cuts corners” to “AI that quietly makes your humans, and your customers, much, much happier”

