AIStack Blog • Case Study

Case Study: How to Completely F*ck Up Your AI Project

And What SMEs Must Learn Before Deploying Generative AI

AIStack architecture illustration for governance and RAG operations

There’s a statistic often quoted in AI circles: up to 85% of AI projects fail. The uncomfortable truth is that most of these are not model failures. They are failures of governance, UX, cost modeling, data, and leadership framing.

The “Boiling Pasta” Problem (Industrial Version)

In an LNG purification process, a critical Acid Gas Removal Unit behaves like a giant pot of pasta: more heat means more output, too much heat means boil-over, and boil-over means shutdown, cleanup cost, and lost revenue.

A company tried to replace an expert operator with AI-based prediction. Six months later: total failure.

The 5 Fatal AI Mistakes

1) Trying to Replace a Trained Expert with AI

The operator was adaptive, context-aware, and cheaper. The AI was less reliable, under-trained, and installation-sensitive.

AIStack Rule #1: If your strategy starts with “replace humans,” pause.

  • Augment experts first
  • Reduce cognitive load
  • Improve decision speed

2) Ignoring Cost vs Benefit Math

A single bad prediction cost more than a year of the operator’s salary. The downside of a wrong guess overwhelmed any upside the AI could deliver.

AIStack Rule #2: If wrong-AI-cost >> right-AI-benefit, walk away.
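Rule #2 is just expected-value arithmetic. Here is a minimal sketch of that check; the function name and all the numbers are illustrative assumptions, not figures from the case:

```python
# Illustrative expected-value check for an AI deployment decision.
# All numbers are hypothetical; substitute your own estimates.

def expected_annual_value(p_wrong, cost_per_wrong, events_per_year,
                          benefit_per_right):
    """Expected yearly value of acting on the model's predictions."""
    p_right = 1.0 - p_wrong
    upside = p_right * benefit_per_right * events_per_year
    downside = p_wrong * cost_per_wrong * events_per_year
    return upside - downside

# Even a 5% error rate with a $500k boil-over cost swamps a
# $10k per-event gain across 50 decisions a year:
value = expected_annual_value(p_wrong=0.05, cost_per_wrong=500_000,
                              events_per_year=50, benefit_per_right=10_000)
print(round(value))  # negative: walk away
```

The point of the sketch: when the cost of a wrong call is orders of magnitude larger than the benefit of a right one, even a high-accuracy model can have negative expected value.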

3) No Training Data, But “Let’s Do AI Anyway”

Every industrial installation differed. No stable training corpus. This maps directly to SMEs trying GenAI with scattered docs, no metadata, and no governance.

AIStack Rule #3: No structured data means no production AI.

  • No clean knowledge base = no useful RAG
  • No taxonomy/tagging = poor retrieval and wasted tokens
  • No audit trail = compliance risk
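The checklist above can be enforced mechanically before any document reaches a RAG pipeline. A minimal sketch of such an ingestion gate follows; the field names (`doc_id`, `owner`, `last_reviewed`, `tags`) are assumed for illustration, not a prescribed schema:

```python
# Minimal ingestion gate: refuse documents that lack the metadata a
# RAG pipeline needs for retrieval and audit. Field names are illustrative.

REQUIRED_FIELDS = {"doc_id", "owner", "last_reviewed", "tags"}

def rag_ready(doc: dict) -> list[str]:
    """Return a list of problems; an empty list means the doc can be ingested."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - doc.keys()]
    if not doc.get("tags"):
        problems.append("no taxonomy tags: retrieval will be noisy")
    return problems

doc = {"doc_id": "SOP-114", "owner": "ops", "tags": []}
print(rag_ready(doc))  # flags the missing review date and the empty tag list
```

A gate like this is also the cheapest possible audit trail: every rejection is a logged, explainable governance decision rather than a silent retrieval failure later.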

4) Solving the Wrong Question (Streetlight Effect)

Operators asked: “How high can we push performance without increasing risk?” Teams modeled: “How long until failure?” Technically interesting, commercially misaligned.

AIStack Rule #4: Start from business economics, not model convenience.

5) No User Research

The most valuable signal was visual and never instrumented. In GenAI, this looks like leaders demanding a chatbot without workflow mapping.

AIStack Rule #5: No interviews + no shadowing = blind deployment.

AIStack Generative AI Do’s & Don’ts

Don’t

  • Replace experts blindly
  • Ignore asymmetric risk
  • Deploy without structured data
  • Skip governance, logging, rollback

Do

  • Start with economic modeling
  • Quantify risk exposure
  • Build RAG-ready knowledge systems
  • Align outputs to operational KPIs

Why This Matters for SMEs

Random ChatGPT usage can feel safe at five users, become chaos at fifty, and become a genuine liability at five hundred.

AIStack is not about flashy AI apps. It is about governance architecture, risk modeling, cost visibility, RAG readiness, and operational AI discipline.