What usually happens is simpler: leaders make commitments when uncertainty is highest. Early in any project, estimates can be wrong by a factor of four in either direction, scope is loosely defined, and the real conditions for success haven’t been tested yet.

That’s also when budgets are approved, timelines are locked, and vendors are signed. So when the project doesn’t land where it was promised, it’s not necessarily that the technology failed. It’s that the errors baked into those early assumptions finally surfaced.

The gap between expectation and outcome isn’t failure. It’s uncertainty doing exactly what it always does. The problem is that no one planned for it.

The problem

Most SMB "technology failures" aren't caused by bad software. They happen because leaders buy a system expecting it to simply work, then discover, mid-implementation, that "it works" was always conditional.

During the sales cycle, those conditions are implied but never challenged. During implementation, they become unavoidably clear.

That's when the implementation stops being a project and starts becoming a leadership problem.

The common story

Something is costing too much time, there are too many manual workarounds, reporting is fragile, and the team is stretched. You're already the Accidental Tech Boss; you just don't have the project yet. So you decide to "fix the system."

A vendor is selected. A timeline is agreed. The demo looks amazing. The promise is clear: fewer steps, better visibility, less manual work.

Then implementation begins.

  • Week one is optimism.
  • Week three is configuration.
  • Week six is "edge cases."

And this is where the surprises start surfacing, one at a time, each one a small shock:

"We'll need to clean the data first." You discover your data isn't clean enough to migrate. Records are duplicated, fields mean different things in different departments, and no one owns the master list.

"We'll have to redesign the workflow." The system assumes your processes are consistent enough to standardize. They aren't. What people actually do has drifted from what anyone documented — if it was documented at all.

"That approval step isn't supported." Roles and approvals that felt obvious in conversation turn out to be ambiguous in practice. Who actually signs off? Who handles exceptions? The system needs an answer. The business has been operating without one.

Then comes the demand no one scheduled time for: your team needs to change how they work, and they need to do it while still doing the old work. Internal time multiplies. The vendor asks for decisions. People get pulled into meetings that feel operational but drain strategic capacity.

The system starts asking questions that the business has avoided:

  • Who owns the process?
  • Who approves exceptions?
  • What does "customer" mean in your data?
  • Which version of the truth is correct?

Progress continues, but the promised outcome hasn't materialized. Then the moment arrives.

Someone says, "We're too far in to stop."

At that point, the decision changes from "Is this the right system?" to "How do we finish this without breaking the business?"

A better way to run implementations

You don't need more discipline after the project starts.

You need discipline before commitment.

Before you sign, insist on answering three practical questions:

1. What must be true for this to happen?

Name the assumptions: process consistency, data quality, role clarity, and adoption capacity. If the vendor can't state the conditions clearly, then neither side has thought them through.

2. What work disappears, and what work moves to your team?

Implementation is not free. If work is shifting from the vendor to you, that's a cost. Treat it as one.

3. What is the "stop rule"?

Define what evidence would tell you this isn't working early enough to change course. Without a stop rule, sunk cost becomes the decision-maker.

Once you have the answers, figure out a way to test them before committing. And decide now, not mid-project, what early signal would tell you this isn't working.