AI Wins | Scale Crew HR LLC

Why Business AI Wins Are Always Cross-Functional, Never Just Leadership- or Technology-Driven

Most AI stories are told in two flavors:

  • “We need visionary leadership to drive AI.”
  • “We need the right technical stack to unlock value.”

Both are true.
Both are incomplete.

The companies actually getting business value from AI aren’t just led well or technically sharp. They’ve figured out something more basic:

AI wins are always cross-functional.

Whenever AI “works,” you can trace it back to business + IT + HR + operations + frontlines pulling in the same direction.

Research done in 2025 on the AI value gap shows it clearly:

  • Out of 1,250+ global companies, only 5% are capturing meaningful financial returns from AI (revenue growth, cost reduction, workflow impact).
  • Roughly 60% report little or no value, despite making big AI investments.

Business Insider’s breakdown of that report highlights what sets the winners apart:

  • Co-ownership of AI between business units and IT
  • Strategic workforce planning and upskilling (~50% of employees trained in AI vs ~20% at slower firms)
  • A strong tech and data foundation built on a central platform
  • Employees involved twice as often in reshaping workflows as AI agents and tools are built

Those are all cross-functional moves.

So let’s break down why that matters, and why “leadership-only” or “IT-only” approaches keep failing.

1. When AI Lives Only in IT, It Dies Before It Hits the P&L

Here’s what we see in most organizations where AI is underperforming:

  • AI “belongs” to:
    • IT
    • Data science
    • A central “AI Center of Excellence”
  • Success is measured in:
    • Number of pilots
    • Model benchmarks
    • Internal demos
  • Business teams:
    • Show up for steering meetings
    • Give loose input on requirements
    • Go back to hitting their quarterly targets the old way

What’s missing:

  • A named business owner who lives and dies by a KPI
  • A clear statement of:
    • “This AI workflow exists to move this metric…”
    • “…by this much…”
    • “…for this team/segment.”

The work on “future-built” companies (the small group getting outsized AI returns) notes that they are far more likely to have a “model of co-ownership” between business departments and IT, so each group has autonomy and accountability for AI outcomes.

That looks like:

  • AI initiatives with:
    • A business sponsor (e.g., Head of CX, VP of Sales, COO)
    • A tech sponsor (e.g., CIO, Head of Data)
  • Shared responsibility for:
    • Value delivered
    • Risks managed
    • Trade-offs on scope vs. timeline

If AI sits solely in the tech org:

  • It becomes a tool project, not a P&L bet
  • Business leaders can safely say,
    • “That’s an IT thing, not my problem”
  • IT can safely say,
    • “We delivered the solution; it’s on the business to use it”

And the result is predictable:
Cool prototype, no business impact.

If AI only lives on the IT org chart, don’t expect it to show up on the income statement.

2. If Only a Tiny AI Pod Is Trained, Expect Tiny Impact

Most AI programs brag about having:

  • An “AI Guild”
  • An internal “prompt engineering workshop”
  • A handful of power users

Meanwhile, the broader workforce:

  • Hears “AI is important”
  • Gets one webinar or self-paced course
  • Goes back to doing work exactly the same way

The data backs this up:

  • Recent reports show that in successful AI adopters, about 50% or more of employees are expected to be upskilled in AI, versus around 20% at less mature companies.
  • Microsoft’s 2024 Work Trend Index shows only 39% of workers using AI at work have received AI training from their company, and only 25% of companies plan to offer training this year.

So the pattern is:

  • Leaders say:
    • “AI is a must-have skill.”
  • Companies actually:
    • Train a small minority.
  • Employees:
    • Either fend for themselves…
    • Or don’t adopt.

The winners treat AI as a company-wide capability shift, not a special-ops skill:

  • AI basics for a wide slice of the org:
    • What it can/can’t do
    • Where it’s allowed
    • Where it’s banned
  • Role-specific guidance:
    • Sales: how AI fits into prospecting, proposals, forecasting
    • Support: triage, summarization, QA
    • Ops: routing, anomaly detection, documentation
  • Manager enablement:
    • How to review AI-assisted work
    • How to coach with AI in the loop
    • How to talk about mistakes and experimentation

If only a tiny AI pod is trained, you get:

  • Pockets of brilliance
  • Zero structural change

If half the company is trained in the context of their work, you get:

  • Continuous experimentation
  • A steady flow of real-world improvement ideas
  • AI woven into everyday execution, not just side projects

3. Without a Central AI Spine, Every Team Is Re-Inventing the Wheel

Another big cross-functional fault line: platform vs. chaos.

In slow-moving companies, AI tends to look like this:

  • Marketing picks one AI vendor
  • Support teams another
  • Product teams a third
  • Ops bolts on a random SaaS add-on
  • Every team:
    • Writes its own policies
    • Handles its own logging (if any)
    • Negotiates its own security and legal reviews

Side effects:

  • Governance is a mess:
    • Inconsistent security, privacy, and compliance rules
    • No central view of where AI is live
  • Costs are bloated:
    • Duplicate spend on overlapping tools
    • Infrastructure sprawl
  • Scale is impossible:
    • Nobody can quickly reuse what worked in one team for another

Multiple analyses point to something different in future-built companies: a strong tech architecture and data foundation, often using a mix of pre-built AI technology plus customized options, all governed by enterprise-wide data policies and central oversight.

That doesn’t have to mean “massive platform project.”

At minimum, it looks like:

  • One central place to:
    • Define AI and data policies
    • Manage identity, access, and approvals
    • Log prompts, outputs, and tool calls
  • A set of reusable components:
    • Retrieval/search
    • Summarization
    • Classification and routing
    • Evaluation and monitoring

The cross-functional piece:

  • IT owns the backbone
  • Legal/compliance define boundaries
  • Security enforces the rules
  • Business units plug in use cases

This is the difference between:

  • “20 pilots, all unique, none at scale” and
  • “A growing library of AI-powered workflows running on the same rails”

4. Co-Design with Employees: The Missing Stakeholder

Here’s where “leadership-only” approaches really break down: they leave out the people doing the work.

In many organizations, AI is designed far away from the front lines:

  • Decisions made in:
    • C-suite offsites
    • Architecture councils
    • Vendor workshops
  • By the time employees see it:
    • The UI is baked
    • The workflow is set
    • The announcement is a foregone conclusion

What happens next?

  • People:
    • Click around
    • Hit friction and weird edge cases
    • Quietly revert to old tools/processes
  • Leaders:
    • See low adoption numbers
    • Decide “the tech isn’t ready”
    • Or blame “change resistance”

But the research highlights a different behavior in successful adopters:

  • At companies that are actually getting AI value:
    • About 50% or more of employees are being upskilled in AI
    • These companies “involve the workforce 2x or more as often as others” in reshaping workflows when they build, test, and deploy AI agents.

Co-design is where cross-functionality gets real:

  • HR + line managers:
    • Work out how roles change
    • Clarify expectations and incentives
  • Ops + front-line teams:
    • Map real workflows and pain points
    • Decide where AI slots in and where humans must stay in charge
  • IT + security/compliance:
    • Layer in what’s safe, lawful, and observable

Good co-design looks like:

  • Whiteboarding with the people who actually do the work
  • Running limited pilots where feedback is structured, not ignored
  • Updating prompts, flows, and policies based on what’s learned
  • Making explicit:
    • “Here’s what AI does”
    • “Here’s what you do”
    • “Here’s how we’ll measure success and keep this safe”

AI tools built for employees but not with them almost always fall flat.

5. Why Pure “Leadership” or Pure “Tech” Plays Keep Failing

If we simplify, most struggling AI efforts look like one of these:

Leadership-only
  • Big vision
  • Loud messaging
  • No concrete:
    • Ownership model
    • Platform strategy
    • Co-design practice
  • Result:
    • Great town halls
    • No operational change

Tech-only
  • Strong engineering
  • Lots of proof-of-concepts
  • Minimal:
    • Business sponsorship
    • Workforce prep
    • HR/ops involvement
  • Result:
    • Great demos
    • No sustained adoption

Cross-functional (what actually works)

You see:

  • Business + IT co-owning high-impact workflows
  • HR, L&D, and managers deeply involved in upskilling and role design
  • Legal, risk, and security embedded early in the process
  • Front-line teams helping design and continuously refine the workflows

That’s when AI stops being:

  • “A thing we’re trying over in that team”
  • And becomes:
    • “This is how we do this part of the business now”

McKinsey’s work on digital and AI operating models describes this as “rewiring the business,” stressing that cross-functional collaboration across the C-suite is non-negotiable for AI transformation.

In other words:

The real AI upgrade is organizational, not purely technical and not leadership vision alone.

What This Means If You’re a Startup, SMB, or Mid-Market

The good part: you don’t need a Fortune 100 structure to get this right.
You just need a minimum cross-functional setup.

For each serious AI initiative, ask:

  • Who owns the KPI?
    • Is there a specific leader (e.g., VP RevOps, Head of CS) who will fight for this?
  • Who’s the technical partner?
    • Named person/team from IT/data/engineering with clear accountability.
  • Who represents the people doing the work?
    • Managers and front-liners in the room early, not after launch.
  • Who covers risk and trust?
    • Someone from legal/compliance/security who helps design guardrails, not just block things.
  • Who’s responsible for skills and change?
    • HR/L&D helping plan training, role changes, and incentives.

If any of those chairs are empty, your AI initiative is probably underpowered before it starts, and far more likely to fail.

You don’t need to overcomplicate this. Start small:

  • Pick one workflow that truly matters to the business.
  • Put the right mix of people around it from day one.
  • Decide how you’ll measure success and what “safe enough” looks like.
  • Ship something small and real, then learn and reuse.

That’s cross-functional in practice.

Where The Scale Crew Fits In

At The Scale Crew, we meet a certain type of company over and over:

  • They’re skeptical of AI theater
  • They’re under pressure to “have an AI plan”
  • They’re not sure how to get beyond:
    • “Our CIO is on it”
    • Or “We hired a Head of AI, we’re good”

We don’t start by selling you a custom app.

We help you and your leadership team get clear on:

  • Where AI does not belong yet
  • Which workflows might actually deserve a cross-functional AI effort
  • Whether you really need to:
    • Build something custom
    • Boost tools you already pay for
    • Or simply buy a solution off the shelf

That’s what our AI Readiness & Transformation Program is designed to do:
create the cross-functional conditions where AI has a real shot at working, and do this without you wasting money trying to brute-force it through leadership hype or tech alone.

If you’re looking at AI right now and thinking:

“We’ve got leaders talking about it and IT experimenting with it, but I don’t see a system here…”

We can help you discover whether AI belongs in that workflow at all, and what’s missing in your cross-functional setup before you invest heavily.
