When AI projects stall, the postmortem usually sounds like this:
- “The model wasn’t accurate enough.”
- “The tech is still immature.”
- “We picked the wrong vendor.”
But if you zoom out from any one project and look at the pattern across organizations, a very different picture emerges.
Multiple reports and analyses of AI adoption converge on the same finding. When companies implement AI:
- ~70% of the challenges come from people and process issues
- ~20% come from technology and data integration
- Only ~10% come from the AI algorithms themselves
BCG even codifies this as the 10-20-70 rule:
- 10% – algorithms
- 20% – tech and data
- 70% – people, processes, and ways of working
Yet most organizations behave as if those percentages are reversed.
We want to talk about that mismatch, and how to recognize when your “AI problem” is actually a people-and-process problem wearing a technical costume.
1. The 10-20-70 Rule (and Why It Matters)
AI leaders keep repeating a simple point:
If you want AI value, plan to spend ~70% of your effort changing how people work, not tuning models.
Here’s the breakdown:
- 10% – Algorithms
- Model selection and tuning
- Prompt design, evaluation metrics
- Fine-tuning, RAG strategies, etc.
- 20% – Tech & data
- Integrations with your stack
- APIs, identity, access control
- Data pipelines, quality, governance
- 70% – People & processes
- How work is structured
- Who does what (human vs AI)
- Skills, training, incentives
- Culture, trust, and change management
The problem:
- Most budgets, leadership attention, and vendor conversations are pointed at the 10-20%.
- The 70% gets:
- A training session
- A comms plan
- A “change workstream” in a slide deck
That’s how you end up with technically solid AI and no lasting business impact.
2. Four People Problems That Masquerade as “Tech Issues”
When AI adoption falters, it’s tempting to blame model performance.
But if you look at how work is actually getting done, you’ll often find human and organizational issues underneath. Some common ones:
A. No clarity on jobs and roles
What people actually experience:
- “Is this tool here to help me or replace me?”
- “If I use AI and make a mistake, will that hurt me more than if I stay ‘manual’?”
- “What exactly am I still responsible for?”
Signs this is your real issue:
- Quiet resistance: people stick to old workflows even when a new AI tool exists.
- Shadow usage: employees use their own tools instead of the official ones.
- People say things like:
- “We don’t know what’s expected now.”
- “I’m not sure how this changes my job.”
What gets misdiagnosed:
- “The tool isn’t user-friendly enough.”
- “The model needs to be more accurate before we roll it out.”
B. Underinvestment in skills and confidence
BCG and PMI both emphasize that AI transformation is 70% about people, and that without upskilling and coaching, you’re essentially handing a jet engine to a team that has never flown.
On the ground, that looks like:
- One prompt-engineering workshop
- Maybe a recorded webinar
- No ongoing support
So what happens?
- People use AI for trivial tasks (“rewrite this email”)
- They avoid higher-stakes use (“draft the first version of this proposal”)
- The organization never captures real leverage
Leaders read that as:
- “Our employees aren’t innovative enough.”
- “They’re not taking advantage of the tools.”
When the real issue is:
- You didn’t give them the confidence and guidance to use AI where it matters.
C. Misaligned incentives
If people are measured on:
- Volume (“tickets closed,” “calls handled”)
- Time spent (“hours logged”)
- Or “not making mistakes”
…they will behave accordingly.
So even if AI can:
- Handle low-value work
- Improve quality
- Free up time for higher-impact tasks
Employees will stick with:
- The old way that matches their scorecard, not the new way that theoretically helps the company.
This looks like:
- “We launched an AI copilot; nobody uses it for serious work.”
- “Power users love it, but our core metrics haven’t changed.”
Under the hood, it’s an incentive design problem, not a model problem.
D. Fear and lack of psychological safety
When people believe:
- Mistakes with AI will be punished
- Leadership is “watching” adoption as a performance test
- AI is a one-way ticket to job cuts
…they will use AI defensively:
- Only in low-risk, invisible areas
- Only when they’re sure it won’t backfire
- Only enough to say, “Yes, I tried it”
Our guidance to clients on unlocking AI value stresses that “winning with AI is a sociological challenge as much as a technological one”: the “valuable stuff” (trust, behavior change, culture) becomes the hard constraint.
If experimentation feels dangerous, AI never leaves the shallow end.
3. Process Problems: When Work Itself Isn’t Ready for AI
Even if your people are willing, your workflows might not be.
Common patterns:
A. No standard process to begin with
Trying to “AI-ify” a process that:
- Lives in people’s heads
- Has multiple unofficial versions
- Depends on “how Sarah does it vs. how Jackson does it”
…is a recipe for chaos.
You end up:
- Encoding inconsistent behavior
- Trying to automate exceptions
- Fighting endless edge cases
Leaders call this:
- “The AI isn’t robust enough.”
But the real issue is:
- You don’t have a stable process to automate.
B. Bolting AI on instead of redesigning flow
Many orgs try:
- “Same process, plus an AI button.”
So you get:
- More steps
- More context switching
- More approvals
Instead of:
- Fewer steps
- Clearer decision points
- Better handoffs
Symptoms:
- Work gets slower, not faster.
- People say:
- “This is one more tool I have to feed.”
- “It doesn’t fit how we actually work.”
The model is fine.
The process design is not.
C. No instrumentation or feedback loops
If you don’t have:
- Baselines (how long things took, how accurate they were, what error rates looked like)
- Clear metrics (what success looks like post-AI)
- Feedback channels (from users back to the team)
Then:
- You can’t tell if the process is actually better with AI
- You can’t tell where it’s failing
- You can’t justify scaling, or killing the initiative
Leaders read:
- “We don’t have the data to prove this works.”
Which is really:
- “We didn’t build in the measurement from day one.” (See the sketch below.)
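To make “measurement from day one” concrete, here is a minimal sketch in Python. The metric names and numbers are hypothetical placeholders, not a prescription:

```python
# Minimal sketch: compare baseline process metrics with post-AI metrics.
# Every metric name and number below is a hypothetical placeholder.

baseline = {"avg_handle_minutes": 42.0, "error_rate": 0.08, "weekly_throughput": 310}
post_ai = {"avg_handle_minutes": 35.5, "error_rate": 0.05, "weekly_throughput": 355}

# Direction of "better": -1 means lower is better, +1 means higher is better.
direction = {"avg_handle_minutes": -1, "error_rate": -1, "weekly_throughput": +1}

for metric, before in baseline.items():
    after = post_ai[metric]
    change_pct = (after - before) / before * 100
    status = "improved" if change_pct * direction[metric] > 0 else "worse or flat"
    print(f"{metric}: {before} -> {after} ({change_pct:+.1f}%, {status})")
```

The point isn’t the code; it’s that “better with AI” becomes a checkable claim instead of a feeling.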
4. Why Companies Overspend on the 30% and Underspend on the 70%
If it’s so clear the bottleneck is people and process, why do organizations keep pouring effort into the technical side?
A few understandable reasons:
- Tech is easier to buy than behavior change is to lead.
- You can sign contracts and see dashboards.
- You can’t buy a culture that embraces AI.
- Vendors sell tech, not org design.
- Every demo shows what the tool can do.
- Almost none show what it takes to change a workflow and a team.
- Budgets are structured around tools, not transformation.
- There’s a line item for software and infrastructure.
- There is rarely a line item for:
- Manager training
- Process redesign
- Ongoing coaching and experimentation
- Leaders overestimate their people’s readiness.
- “Our teams are smart; they’ll figure it out.”
- But AI isn’t just “another tool”; it’s a different way of working.
BCG’s own AI pages essentially make this point: the companies that succeed devote far more effort to people and processes than to technology, even though most budgets do the opposite.
5. Quick Self-Check: Are You Treating a 70% Problem Like a 30% Problem?
Take one AI initiative, current or planned, and run these questions against it.
A. Where did most of the budget go?
Rough breakdown:
- Licenses, infrastructure, vendors
- Data engineering and integrations
- People things:
- Training
- Coaching
- Change management
- Process redesign
If >70% of your spend is in the first two buckets, you’re probably overweight on the 30%.
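If you want to make this check mechanical, here is a minimal sketch in Python; the dollar figures are hypothetical, so swap in your own budget lines:

```python
# Minimal sketch: bucket one AI initiative's spend and flag a tech-heavy split.
# The dollar figures are hypothetical placeholders.

spend = {
    "licenses_infra_vendors": 420_000,         # tools, models, hosting
    "data_engineering_integrations": 180_000,  # pipelines, APIs, integrations
    "people_and_process": 90_000,              # training, coaching, change, redesign
}

total = sum(spend.values())
tech_share = (spend["licenses_infra_vendors"]
              + spend["data_engineering_integrations"]) / total

print(f"Tech share of spend: {tech_share:.0%}")
if tech_share > 0.70:
    print("You're probably overweight on the 30%.")
```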
B. Where did leadership spend their time?
- Demos and model reviews?
- Tool/vendor selection?
- Or:
- Process mapping?
- Role and job-impact discussions?
- Training and communication with teams?
If leadership time was mostly in tech conversations, expect tech-shaped outcomes.
C. How specific is your answer to “what changes for whom?”
Can you say, in one sentence:
- “For this role, AI will take over these tasks and change these others”?
Or is it closer to:
- “We think this will make people more productive”?
Vague change expectations lead to vague results.
D. Have frontline teams actually co-designed the new workflow?
- Were they in the room mapping current and future state?
- Did you run small pilots and incorporate their feedback?
- Did any of their suggestions materially change the plan?
If not, you’re likely shipping a tool, not a new way of working.
E. Could you show the CEO & CFO how this changes the P&L?
For this initiative, could you credibly answer:
- “Here is the metric we expect to shift.”
- “Here is the baseline and target.”
- “Here is how we’ll know, within 30–90 days, whether it’s working.”
If you can’t tie it to a financial story, don’t expect it to register as success.
Where The Scale Crew Fits In
The 10-20-70 rule is not just an interesting stat. It’s a design constraint.
If 70% of AI challenges are people and process, then 70% of your planning and effort needs to go there, especially if you’re a startup, SMB, or mid-market company that cannot afford to learn this the expensive way.
At The Scale Crew, we work with teams that:
- Are tired of tech-first AI conversations that never pay off
- Are skeptical of AI theater, but
- Know they need a plan for where AI truly belongs in their business
We don’t lead with “Let’s build you a custom AI app.”
We start with questions like:
- Do you even need something custom right now?
- Or can you get more out of the tools you already own?
- Which workflows are actually ready for AI?
- From a process standpoint
- From a people standpoint
- What would it look like to put most of your effort into the 70% that matters?
- Roles and responsibilities
- Training and adoption
- Guardrails and governance
- Measurement and iteration
That’s the core of our AI Readiness & Transformation Program:
- Help you avoid pouring money into the wrong 30%
- Show you where AI is worth it, and where it isn’t
- Make sure that when you do invest, you’re set up to change work, not just tools
If You Suspect Your AI Problems Are 70% People and Process
If this post sounds uncomfortably familiar, reach out and start a conversation with us.
We’ll help you think through:
- Whether AI should touch that KPI at all right now
- Whether your biggest risks are technical or people/process
- And whether you’re planning to spend your next dollar in the 10–20%… or the 70% where the real leverage (and risk) actually lives.