AI ROI | Scale Crew HR LLC

“We Spent on AI. Where’s the ROI?” Why 74% of Leaders Still Don’t See Financial Impact

Most exec teams can now say:

  • “We’ve invested in AI.”
  • “We’ve run pilots.”
  • “We’re using AI in at least one function.”

But when you quietly ask, “What has it done for revenue, margin, or cost?” the room goes uncomfortably quiet.

A recent survey of 1,400+ executives across industries found:

  • 74% say they have yet to see material financial impact from AI.
  • Only 26% feel they’re getting significant value or on a clear path to it.

A lot of people will tell you it’s a tooling problem. It isn’t.

It’s a disconnect between AI activity and business outcomes.

Let’s unpack why so many leaders are still waiting for AI to hit the income statement.

1. “We Deployed AI” Is Not the Same as “We Made Money”

You can check all of these boxes:

  • Signed an enterprise agreement with a GenAI vendor
  • Rolled out an “AI assistant” in your productivity suite
  • Piloted chatbots, copilots, and automations in a few teams

…and still have no material financial impact to point to.

Common patterns:

  • Success is defined as:
    • # of users who “tried” the tool
    • # of prompts run
    • Sentiment from internal surveys (“this is helpful”)
  • What’s missing:
    • A hard KPI tied to:
      • Revenue (conversion, NRR, deal size)
      • Cost (cost-to-serve, time-to-resolution, cycle time)
      • Risk (loss events avoided, error rates)

If you don’t define “material impact” upfront, you end up with:

  • “AI adoption” slides
  • “Innovation” stories
  • Very little P&L movement

Activity does not equal impact.

AI can feel busy and impressive without changing core numbers.

2. AI Is Aimed at the Edges, Not the Core

Look at where many organizations put their first AI bets:

  • Nice-to-have internal assistants
  • Side projects in innovation labs
  • Small experiments in non-critical workflows

What gets ignored:

  • Revenue-critical workflows
    • Lead qualification and routing
    • Renewals and expansions
    • Upsell/cross-sell sequencing
  • Cost-critical workflows
    • Tier-1 and tier-2 support
    • Onboarding, implementation, and training
    • Back-office operations
  • Risk-critical workflows
    • Review, approvals, and quality
    • Compliance checks

If AI is only deployed in low-leverage corners, you shouldn’t expect material financial impact:

  • You’re improving things that:
    • Touch fewer customers
    • Impact fewer dollars
    • Are distant from the P&L

Research on AI adoption keeps coming back to the same point: companies that see financial returns aim AI at core value chains, not just “quick wins” at the margins.

3. The Business Case Is Vibes, Not Math

A lot of AI initiatives started with:

  • “This could save people time.”
  • “This will make us more productive.”
  • “This will improve the experience.”

But when you ask:

  • “How much time, for whom, and what do we do with that time?”
  • “How will that translate into revenue, cost, or risk?”
  • “What’s our hypothesis we can test in 30–90 days?”

…many leaders don’t have precise answers.

Typical issues:

  • No baseline:
    • Current handle time, cycle time, conversion rate, etc., are never measured properly.
  • No target:
    • “Faster” or “better” instead of “reduce X by Y% by when.”
  • No experiment design:
    • No A/B testing, no control groups, no before/after that isolates the AI effect.
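To make the “baseline, target, experiment” point concrete, here is a minimal sketch of the math a finance team would expect behind an AI pilot. All numbers, metric names, and the 15% target are hypothetical, purely for illustration:

```python
# Hypothetical sketch: did an AI pilot hit a hard, pre-committed target?
# All figures below are illustrative, not real benchmarks.

def pct_change(baseline: float, pilot: float) -> float:
    """Relative change from baseline to pilot, in percent (negative = reduction)."""
    return (pilot - baseline) / baseline * 100

# Baseline measured BEFORE the pilot, on a comparable control group
baseline_handle_time_min = 18.0   # avg ticket handle time, control group
pilot_handle_time_min = 14.4      # same metric, AI-assisted group

# The hypothesis stated up front: "reduce handle time by 15% within 90 days"
target_reduction_pct = -15.0

actual = pct_change(baseline_handle_time_min, pilot_handle_time_min)
print(f"Handle time change: {actual:.1f}% (target: {target_reduction_pct:.1f}%)")
print("Target met" if actual <= target_reduction_pct else "Target missed")
```

The point isn’t the arithmetic; it’s that the baseline, the target, and the comparison group all have to exist before the pilot starts, or there is nothing for finance to verify afterward.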

The result:

  • Tools get adopted
  • People feel more productive
  • Finance still can’t confidently say:
    • “AI increased EBIT by X points”
    • “AI cut cost-to-serve by Y%”

Research emphasizes that value from AI is highly uneven and that most organizations lack the capabilities to systematically link AI initiatives to business outcomes, which is exactly how you end up in the 74%.

4. People Changed Tools, Not Work

On paper:

  • AI tools are “live.”
  • Licenses are provisioned.
  • People are “using AI.”

But when you zoom into the day-to-day:

  • Processes are unchanged:
    • Same meetings
    • Same approval layers
    • Same handoffs
  • Roles are unchanged:
    • No clarity on what’s automated, what’s augmented, and what stays human
    • Job descriptions and performance measures still assume old ways of working
  • Incentives are unchanged:
    • Reps, agents, and ops people are still measured on volume, not value
    • No explicit reward for redesigning how they work with AI

So AI becomes:

  • A layer on top of existing workflows
  • Not a reason to simplify, remove, or reorder steps

When tools change but work doesn’t, you get:

  • Incremental speed for individuals
  • No structural improvement in:
    • Headcount required
    • Revenue per head
    • Cost per transaction

McKinsey calls this out in their “rewiring” work: most companies fail to move from individual productivity gains to system-level value, because they avoid touching processes, governance, and roles.

5. Finance and AI Rarely Talk the Same Language

This is one of the quietest, most damaging gaps.

On the AI side, reporting sounds like:

  • “Model accuracy increased from 78% to 86%.”
  • “We’ve got 2,000 weekly active users for our AI assistant.”
  • “The chatbot deflects 40% of tickets.”

On the finance side, what matters is:

  • “Did revenue grow faster than it would have?”
  • “Did we reduce headcount growth or external spend?”
  • “Did we reduce losses or risk events?”

Without a translation layer, you get:

  • AI teams celebrating technical or adoption wins
  • CFOs asking, “So what?”

Multiple surveys point to the same insight: 74% of execs don’t see material financial impact, and this is, in part, a measurement problem:

  • AI benefits are loosely defined
  • AI metrics are not wired into:
    • Monthly performance reviews
    • Planning and budgeting cycles
    • Capital allocation decisions

If the CFO can’t see AI’s contribution in their numbers, it doesn’t count as “material.”

6. Governance, Risk, and Legal Keep AI in the Sandbox

Even when pilots show promise, many AI initiatives hit a wall at:

  • Legal
  • Risk
  • Compliance
  • Security

Common reasons:

  • Unclear data usage and retention policies
  • Weak logging or auditability
  • No clear way to handle:
    • Bias and fairness
    • Escalations
    • Human-in-the-loop overrides

So AI gets:

  • Approved in theory
  • Confined to narrow, low-risk scenarios
  • Blocked from the high-impact workflows that actually move money

From a P&L perspective, you’ve:

  • Proved AI can work
  • Failed to make it deployable at scale

Organizations extracting value from AI typically invest early in governance and risk frameworks that enable, not just restrict, deployments in sensitive areas.

No governance, no deployment.

No deployment, no financial impact.

7. Quick Self-Check: Are You in the 74% (Be Honest)?

Take one of your AI initiatives and run it through this checklist:

1. Hard KPI or vibes?
  • Can you state the single financial metric this initiative is on the hook to move?
    • Revenue: conversion, NRR, upsell, churn?
    • Cost: cost-to-serve, FTE needed, external spend?
    • Risk: losses avoided, error rates?

If not, you’re likely in “vibes” territory.

2. Edge use case or core workflow?
  • Is this AI:
    • Sitting on the margins (internal assistant, nice-to-have feature)?
    • Or touching a high-volume, high-value flow (tickets, deals, onboarding, claims)?

If it doesn’t touch something meaningful, don’t expect meaningful impact.

3. Tool change or work change?
  • Did you:
    • Redesign steps, approvals, handoffs, and roles?
    • Or simply add an AI tool into the existing process?

No change in how work flows means no change in how money flows.

4. Can finance see it?
  • Does your CFO:
    • Have AI outcomes wired into their dashboards?
    • Believe the numbers enough to mention them in planning or investor conversations?

If the answer is “not really,” the impact is probably smaller than you’d like.

5. Are risk and legal in the loop, or in the way?
  • Did you:
    • Bring risk/compliance in early to co-design guardrails?
    • Or wait until the end and get a “maybe, but only in this tiny sandbox”?

Tiny sandbox = tiny upside.

Where The Scale Crew Fits In

The fact that 74% of executives don’t see material impact from AI isn’t a condemnation of AI.

It’s a signal that most orgs are missing the translation between AI activity and financial reality.

At The Scale Crew, we work with US startups, SMBs, and mid-market companies who are:

  • Skeptical of AI theater
  • Under pressure to “show something”
  • Not willing to spend another year on experiments that don’t move the needle

We don’t start by pitching a custom app.

We help you and your leadership team get clear on:

  • Where AI should not be involved, at least not yet
  • Which workflows are most likely to produce measurable financial impact
  • Whether your smartest move is to:
    • Build something custom
    • Boost what you already own
    • Or simply buy something off the shelf
  • What has to change in:
    • Measurement (so finance can see it)
    • Process and roles (so work actually changes)
    • Guardrails (so risk and legal can say “yes” to high-impact use cases)

That’s the core of our AI Readiness & Transformation Program:
helping you move from “we’ve invested in AI” to “we can point to where it shows up in the numbers.”

If You Suspect You’re in the 74%

If you read that 74% stat and thought, “That might be us,” here’s a simple way to start a conversation:

We can help you determine:

  • Whether your planned initiative has a real shot at material financial impact
  • Whether you should build, boost, or buy
  • And where the biggest gaps are, before you pour more budget into efforts that keep you in the 74% instead of the 26%.