On paper, AI is everywhere:
- Strategy decks
- Board updates
- “AI-powered” product announcements
But when you ask executives a simple question:
“Is AI making a significant difference to your revenue, margin, or cost base?”
the answer, most of the time, is no.
A long-running research series from MIT Sloan Management Review and BCG found that:
- Only about 10% of organizations report “very significant” financial benefits from their AI initiatives.
- The vast majority see modest, localized gains (some efficiency here, some time savings there) or no clear impact at all.
Multiple follow-on analyses echo the same pattern: a small minority of businesses are converting AI into visible financial outcomes; most are stuck with AI that feels busy but doesn’t move the top or bottom line.
Let’s talk about why.
1. Most AI Wins Are Too Small to Matter
For a lot of companies, AI “wins” look like this:
- “Our sales team saves 15 minutes per proposal.”
- “Our marketers can write copy faster.”
- “Our support team uses AI to draft responses.”
Those are fine outcomes. But they are:
- Hard to measure
- Spread across many people
- Rarely tied to hard business metrics
So even if:
- People feel more productive
- Some tasks go faster
…the CFO still sees:
- No obvious change in:
- Revenue growth trajectory
- Gross margin
- SG&A as a % of revenue
MIT/BCG note that companies over-index on “pockets of productivity” instead of designing for “significant financial impact,” and then wonder why their AI story doesn’t show up in the numbers.
In other words:
Lots of micro wins. Almost no macro impact.
2. AI Is Pointed at the Wrong Part of the Value Chain
The firms in that ~10% aren’t just using more AI; they’re using it where money actually moves.
Most organizations, by contrast, deploy AI into:
- Low-stakes, internal tasks
- One-off “innovation” projects
- Areas that don’t directly touch:
- Customer acquisition
- Pricing and packaging
- Renewals and expansions
- Cost-to-serve at scale
So you get:
- Better slide decks
- Faster emails
- Prettier internal docs
…but not:
- Higher close rates
- Lower churn
- Fewer agents per X tickets
- Faster onboarding that unlocks revenue sooner
The 10% behave differently:
- They pick specific, financial levers:
- “Increase NRR by 3–5 points.”
- “Cut cost-to-serve by 15%.”
- “Reduce average implementation time by 30%.”
- Then aim AI directly at:
- The workflows that drive those numbers
- Not the periphery
If AI never touches the critical paths in your business, it will never show up as a significant benefit.
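To see why specific levers beat vague productivity goals, it helps to put numbers on them before any AI work starts. Here is a minimal back-of-envelope sketch, with all figures hypothetical, translating targets like “increase NRR by 3 points” and “cut cost-to-serve by 15%” into annual dollar impact:

```python
# Hypothetical back-of-envelope model: what the financial levers above
# are worth in dollars, before committing to an AI project.
# All inputs (ARR, NRR, support budget) are illustrative, not real data.

def nrr_lift_value(arr: float, baseline_nrr: float, target_nrr: float) -> float:
    """Incremental ARR over the next year from moving net revenue
    retention (NRR) from baseline to target on a given ARR base."""
    return arr * (target_nrr - baseline_nrr)

def cost_to_serve_savings(annual_support_cost: float, reduction_pct: float) -> float:
    """Annual savings from cutting cost-to-serve by reduction_pct."""
    return annual_support_cost * reduction_pct

# “Increase NRR by 3 points” (105% -> 108%) on a $10M ARR base:
print(round(nrr_lift_value(10_000_000, 1.05, 1.08)))   # ≈ $300k of extra ARR

# “Cut cost-to-serve by 15%” on a $2M annual support budget:
print(round(cost_to_serve_savings(2_000_000, 0.15)))   # ≈ $300k saved per year
```

Either lever, on these illustrative numbers, is a line item a CFO can see; fifteen minutes saved per proposal usually is not.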
3. The Organization Doesn’t Learn With AI. It Just Uses It
MIT/BCG’s research is blunt about what separates the 10%:
The winners aren’t just deploying AI; they are building multiple effective ways for humans and AI to learn together.
In practice, that means:
- Humans don’t just consume AI outputs; they:
- Provide structured feedback
- Refine prompts and workflows
- Change how they make decisions
- AI systems:
- Are tuned based on frontline behavior
- Are evaluated on business outcomes, not just accuracy
- Are refined as part of an ongoing learning loop
Meanwhile, in most organizations:
- AI is treated as a static tool:
- Rolled out once
- Used (or ignored)
- Rarely revisited in terms of workflow or design
- There is:
- Little documentation of what works where
- No explicit “learning loop” between teams and the AI
- No clear owner for continuous improvement
The result:
- Early demo lift decays
- Usage plateaus
- AI quietly becomes background noise, not a compounding asset
The 10% are effectively running human+AI systems, not just “AI tools.” That’s a very different operating model.
4. Finance Isn’t in the Conversation Early Enough
Most AI projects talk to finance at the end:
- “We shipped the AI feature.”
- “Can you help us show ROI?”
By that point, the basic decisions are locked:
- Which workflow AI touches
- What “success” means
- How data is logged (or not)
So finance gets:
- A narrative: “This is making people more efficient”
- A handful of usage stats
- Not much they can confidently plug into models
In firms that report significant financial benefit, the conversation with finance often happens at the start:
- “If we touch this workflow with AI, how would we model value?”
- “What’s the baseline we’d need to measure before/after?”
- “What would count as a meaningful shift in this metric?”
That changes everything:
- Use case selection is filtered through a financial lens, not just technical feasibility.
- Data collection and logging are designed to answer CFO-grade questions.
- AI initiatives live or die on:
- Contribution to margin
- Cost structure
- Revenue movement
Research repeatedly stresses that linking AI initiatives to financial performance, and designing organizational learning around that link, is what separates the small minority from everyone else.
If finance never had a say in what you’re doing with AI, it’s no surprise if they can’t see significant benefit later.
5. Culture Treats AI as Optional, Not How Work Is Done
In most companies:
- AI is framed as:
- “A tool you should try.”
- “An assistant you can use if you want.”
- Managers say:
- “AI is there to help you, but do whatever works for you.”
That sounds nice.
It is terrible for impact.
Because what actually happens is:
- Power users lean in
- Skeptics and busy people ignore it
- No one redesigns meetings, SOPs, or KPIs around AI
In the ~10% that see significant gains, culture looks different:
- Leadership sets expectations like:
- “This is how we now do this category of work.”
- “Here’s the AI-powered path; the old path is the exception.”
- Teams are expected to:
- Experiment
- Share what they learn
- Help refine how AI is used over time
The goal is a “carefully orchestrated symbiosis between human and machine,” not a bolt-on helper.
In that environment:
- AI isn’t optional
- It’s part of role expectations
- It’s part of how teams hit their numbers
And that’s when its impact starts to look significant instead of marginal.
6. Quick Self-Check: Are You in the 10% or the 90%?
Take one AI initiative you’re running (or planning) and score yourself honestly.
A. Is the target outcome financially meaningful?
- “We want to reduce cost-to-serve for Tier 1 support by 20%.”
- “We want to increase NRR in this segment by 3–5 points.”
Feels like a 10% move.
OR
- “We want to make people more productive.”
- “We want to explore GenAI for [broad area].”
Feels like a 90% move.
B. Does finance know the plan?
- Were finance leaders involved in:
- Defining the business case?
- Setting baselines and targets?
- Agreeing on how you’ll measure impact?
If not, expect hand-wavy ROI later.
C. Is this aimed at core workflows or side quests?
- Does the use case clearly touch:
- Revenue, cost, or risk at scale?
Or is it:
- A helpful sidekick in a workflow no one in the C-suite talks about?
D. Are humans and AI actually learning together?
- Do you have:
- Feedback loops from users?
- Regular review of AI performance on business metrics?
- Adjustments to workflow based on those findings?
Or did you:
- Roll it out
- Train once
- Move on?
E. Has the way work is done actually changed?
- Are meetings shorter?
- Are handoffs different?
- Are job expectations updated?
- Are KPIs aligned with AI-powered ways of working?
If the answer is “no” across the board, you most likely have:
- AI added to the stack
- Not AI woven into operations
…which is how you end up with modest or unmeasurable impact instead of joining that small minority.
Where The Scale Crew Fits In
The ~10% number shouldn’t scare you off AI.
It should scare you off doing AI the default way.
At The Scale Crew, we work with US startups, SMBs, and mid-market firms that:
- Are done with AI theater
- Don’t want “busy but marginal” AI
- Need to figure out where AI truly belongs in their business, if at all
We don’t come in saying:
“You must build a custom AI product.”
We start with:
- Do you even need something custom?
- Or can you boost tools you already have?
- Or should you buy something off-the-shelf?
- Which workflows could realistically deliver significant financial benefit?
- And which are distractions?
- What has to change in:
- Ownership
- Metrics
- Work design
- Data and guardrails
…so that if you do invest, you have a shot at being in the group that’s actually seeing real returns, not just accumulating cool stories.
That’s what our AI Readiness & Transformation Program is built around:
deciding if AI belongs, where it belongs, and what it would take for it to matter in your numbers, not just in your marketing.
If You’re Wondering Whether Your AI Effort Is “Significant” or Just “Nice”
We’ll help you think through:
- Whether that idea has a realistic shot at significant financial benefit
- Whether you should build, boost, or buy
- And what’s missing today that’s keeping you in the “interesting, but modest” 90% instead of the small minority that actually sees AI move the business.

