Why Do 95% of AI Initiatives Fail and How Can You Prevent It? 

August 21, 2025

Executive takeaway

In a recent interview, John Munsell, author of INGRAIN AI - Strategy Through Execution, argued that many companies fixate on building or buying one flagship AI application to juice efficiency. That can help, but he called it the wrong center of gravity. The bigger, faster, and more durable gains come from broadly upskilling the entire organization—from the C-suite to the front desk—so people can use chat-based AI to accelerate the everyday work they already do.

The “single-app fallacy”

Munsell’s core critique: leaders often want a custom, vertical AI app that solves one clear problem (e.g., a specialized workflow or function). It’s attractive because it’s concrete, budgetable, and demo-able. But it constrains ROI to one slice of work and a small user set.

In his words, “most businesses are looking for a single AI application that they can custom build… And they think that will move the needle for them. And it will. But the bigger movement of that needle is by getting everybody to use it incrementally at their desktops.”

Two issues with the one-app approach:

  1. Scope – Even a great point solution only lifts the metrics it touches; it doesn’t address the thousands of micro-tasks that consume most people’s time across the org.
  2. Adoption – If only a subset can use it (permissions, training, relevance), leverage is capped. Meanwhile, the rest of the workforce keeps operating at pre-AI velocity.

The overlooked lever: organization-wide upskilling

The INGRAIN AI approach is to raise the AI fluency floor everywhere. The goal isn’t “agents everywhere” or complex automations by default; it’s desk-level acceleration using chat-based tools to handle drafting, analysis, summarization, reformatting, outlining, checking, and ideation—inside the work people already do. Munsell stresses this is pragmatic, not exotic: “It doesn’t mean building complicated, agentic workflows… It just means, how do you use these chat-based interfaces to solve day-to-day problems?”

The key focal points for AI upskilling programs

When you strip away the hype, successful AI adoption comes down to a few core disciplines—these are the focal points every upskilling program needs if you want AI to move from random use to real, enterprise-wide capability:

1. Audience Breadth

AI fluency isn’t just for analysts or tech leads—it matters everywhere. From the C-suite deciding strategy to the receptionist handling scheduling, everyone hits time-eating tasks and bottlenecks. Broader adoption means compounding gains across the entire organization, not just in a few isolated teams.

2. Start Where People Are

Most employees freeze at the same hurdle: “I don’t know where to start.” The solution is simple, contextual training. Start with show-and-do sessions on tasks they already perform. Stack difficulty later as confidence grows. This keeps momentum high and eliminates the intimidation factor.

3. The AI Strategy Canvas®

Getting to an AI-first culture requires a common AI language. The AI Strategy Canvas creates that alignment by forcing clarity on context, constraints, resources, rules, and requests. It reduces wasted effort, prevents “random acts of AI,” and gives teams a repeatable framework for problem framing. Without it, adoption is scattered. With it, adoption compounds.

4. Scalable Prompt Engineering®

Most wins in generative AI come from getting better at asking. Scalable Prompt Engineering equips employees with reusable, role-specific prompts they can adapt and share. It makes skills portable across teams, tools, and tasks. This turns one-off AI conversations into repeatable workflows that raise the floor for everyone’s effectiveness.

5. Right Tool for the Job

Not every problem calls for AI. Sometimes a pivot table, database filter, or shared doc is faster and cleaner. A key part of upskilling is discernment: helping people see when AI adds value and when it just overcomplicates the job. This builds credibility and trust in the program.

Together, these focal points move a company’s AI experience from one-off experiments to enterprise-wide capability. The Canvas and Scalable Prompts sit at the core because they establish the common language and shareable skills that make the other pillars stick.
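To make that pairing concrete, here is a minimal, hypothetical sketch of what a shareable, role-specific prompt template might look like. The field names simply mirror the five Canvas dimensions named above (context, constraints, resources, rules, requests); the actual AI Strategy Canvas® format is Munsell’s, and every value below is invented for illustration.

```python
# Hypothetical sketch only: a reusable prompt scaffold whose fields mirror
# the five Canvas dimensions named above. The real AI Strategy Canvas(R)
# format is Munsell's; these field names and sample values are invented.

PROMPT_TEMPLATE = """\
Context: {context}
Constraints: {constraints}
Resources: {resources}
Rules: {rules}
Request: {request}"""

# A role-specific instance an operations manager might save and share:
weekly_report_prompt = PROMPT_TEMPLATE.format(
    context="You assist an operations manager at a mid-size services firm.",
    constraints="Under 300 words; plain language; bullet the key risks.",
    resources="Use only the meeting notes pasted below this prompt.",
    rules="Flag anything you are unsure about instead of guessing.",
    request="Draft this week's status report from the notes.",
)
print(weekly_report_prompt)
```

Because the scaffold is just text, teams can keep a shared library of filled-in versions per role, which is what makes the skill portable across tools and tasks.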

Why upskilling outperforms a single app

  1. Surface area of impact – Every team member touches dozens of small tasks per day. If each person is 10–30% faster on those tasks, you compound gains across the whole org, far outpacing a single deep vertical app used by a few specialists (see the back-of-the-envelope comparison after this list).
  2. Speed to value – Chat-based skills are immediately deployable. You don’t need data integrations or long IT projects; you need instruction, scaffolding, and practice reps in the user’s context.
  3. Change resilience – LLM platforms evolve frequently and sometimes unpredictably. People who know how to think and work with AI can adapt quickly when models or UI behaviors shift, instead of stalling until the one app is updated.
  4. Culture & mindset – Upskilling reframes “AI” from a scary black box to a daily assistant. That reduces resistance, crowdsources internal wins, and builds momentum that later justifies deeper automations or a bespoke app, from a position of fluency rather than fear.
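Here is the back-of-the-envelope comparison behind the surface-area point. Every input below is a made-up assumption (headcount, hours, speedups), not data from the interview; only the 10–30% per-task range echoes point 1 above.

```python
# Back-of-the-envelope comparison; all inputs are illustrative assumptions.

HEADCOUNT = 500            # assumed organization size
MICROTASK_HOURS = 15       # assumed hours/week each person spends on small tasks
UPSKILL_SPEEDUP = 0.20     # midpoint of the 10-30% range cited above

APP_USERS = 25             # assumed specialists who actually use the one app
APP_HOURS_AFFECTED = 20    # assumed hours/week the app's workflow covers
APP_SPEEDUP = 0.50         # generous assumption for a strong point solution

upskilling_saved = HEADCOUNT * MICROTASK_HOURS * UPSKILL_SPEEDUP   # 1,500 h/wk
single_app_saved = APP_USERS * APP_HOURS_AFFECTED * APP_SPEEDUP    # 250 h/wk

print(f"Org-wide upskilling: {upskilling_saved:,.0f} hours/week")
print(f"Single vertical app: {single_app_saved:,.0f} hours/week")
```

Even granting the app a generous 50% speedup, the broad-but-shallow gains win on these assumptions because they multiply across the whole headcount.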

What “good” upskilling looks like

  1. Role-based, day-in-the-life training – Teach people to apply AI to their recurring tasks first (drafting, condensing, converting formats, generating checklists, QA, explaining steps), not abstract demos. Tie sessions to concrete deliverables.
  2. Frameworks over parlor tricks – Emphasize durable prompting frameworks and reasoning patterns so users can generalize skills to new tasks, models, and tools. (His curriculum marches people from fundamentals to creating a lightweight, role-specific custom GPT without needing APIs—knowledge docs + instructions—so skills become reusable assets.)
  3. “Where to start” on-ramps – John suggests meet-them-where-they-are programming. An initial, short AI workshop, where everyone gets to see how easy it is to build a custom GPT or LLM project for their specific job, gets people excited about the applicability and possibility of using AI as an assistant. Then, on-demand, LMS-based training coupled with live group coaching allows people to learn and accelerate at their own pace. The perfect training is role-based, inventories a person’s weekly tasks, and helps them apply AI to generate quick wins in their day-to-day activities. This directly answers the most common blocker he hears: “I don’t know where to start.”
  4. Discernment training – Bake in the rule of thumb: sometimes the best solution isn’t AI (use a pivot table, spreadsheet, or a two-step Zap). Teach people to pick the simplest working solution first.
  5. Informed ramping into AI automation – As people start building custom GPTs, Claude Projects, or Gemini Gems, they see new possibilities and better grasp the power of connecting systems. Suddenly, AI isn’t just speeding up tasks; it’s enabling everyone to join in initiatives that align with goals and deliver hundreds of incremental gains. That moves the needle far more than a single app that touches only a few.

A pragmatic rollout sequence

  1. Kickoff Workshop – 1–4 hours illustrating 10–12 everyday tasks that can be managed with AI at the desktop without APIs or integrations. Capture “aha” use cases people mention live.
  2. Cohort training – Short, role-grouped sessions (managers, ICs, ops, frontline) focused on their deliverables. Provide ready-to-use prompt templates and a quick “When not to use AI” checklist.
  3. Coaching hours – Open “office hours” where people bring real work; they leave with an improved output created together. This accelerates habit formation.
  4. Lightweight custom GPTs – For each role/team, package instructions + knowledge snippets into a private helper. No APIs at first; keep it simple and safe.
  5. Scale – Identify champions, collect before/after examples, publish an internal gallery of wins, and set team-level goals (e.g., “Automate 3 weekly tasks per person”).

Metrics that matter

  • Time saved per task (before/after samples)
  • Cycle time for common deliverables (reports, drafts, briefs)
  • Adoption rate (users who log at least 2–3 AI-assisted tasks per day)
  • Quality indicators (fewer revisions, clearer outputs)
  • Pipeline of wins (new use cases added per team per month)
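For teams that want to start tracking these without a BI tool, here is a minimal sketch computing two of them, time saved per task and adoption rate, from simple before/after samples. All data below is invented for illustration.

```python
# Minimal sketch; all sample data below is invented for illustration.

# Before/after minutes per task, sampled from a few volunteers.
samples = {
    "weekly report": (90, 35),
    "client brief": (60, 25),
    "meeting summary": (30, 8),
}

for task, (before, after) in samples.items():
    saved = before - after
    print(f"{task}: {saved} min saved ({saved / before:.0%} faster)")

# Adoption rate: share of users logging at least 2 AI-assisted tasks per day.
daily_ai_tasks = [0, 3, 2, 5, 1, 4, 2]   # one entry per user, invented
adopters = sum(1 for n in daily_ai_tasks if n >= 2)
print(f"Adoption rate: {adopters}/{len(daily_ai_tasks)} = "
      f"{adopters / len(daily_ai_tasks):.0%}")
```

Even hand-collected samples like these are enough to publish before/after wins internally and set the team-level goals described in the rollout sequence.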

Where the single app fits (later)

John isn’t anti-app. He’s saying sequence matters: build AI confidence and expertise first, and get cultural buy-in and alignment. Push for AI fluency rather than mere AI literacy. Then provide a creative forum where people can showcase and discuss their AI successes and suggest bespoke apps and automations that accelerate efficiency even faster.

Focus attention on automations where you have validated, repeatable ROI and trained users ready to exploit them. That order lowers risk, speeds uptake, and ensures the app is multiplied by a skilled workforce rather than bottlenecked by a few experts.

Bottom line

A single AI app can be a win. But if you want the biggest, fastest, most resilient gains, the INGRAIN AI method is best: teach everyone to use AI well in the work they already do, then layer deeper automations and custom apps on top of that foundation. The organization becomes faster everywhere, immediately, and better prepared for whatever you build next.