Software Project Prediction Is Possible
Software project delivery timelines can be predicted with useful accuracy (within 30% of the actual delivery date) by AI analysis of project data.
The Assumption
Murphy’s entire value proposition is predicting when software projects will deliver. But is this even possible?
The hard truth: “Why is software always late?” has been asked for 50 years, and no tool has solved it. Maybe the problem is unsolvable:
- Software projects are complex adaptive systems
- Requirements change mid-project
- Dependencies are hidden until they bite
- Human factors dominate technical factors
If prediction isn’t possible, Murphy is selling snake oil.
Evidence
Supporting signals:
- Some patterns are predictable (velocity trends, scope creep signals)
- AI can process more project data than any human PM can track
- Monte Carlo simulations provide probabilistic forecasts (see the sketch after this list)
- Early warning is valuable even if not precise
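
Of the supporting signals, Monte Carlo forecasting is concrete enough to sketch. A minimal Python illustration, assuming a hypothetical backlog measured in story points and a handful of observed sprint velocities; the function name and every number are illustrative, not Murphy's actual model:

```python
import random

def simulate_delivery(remaining_points, velocity_samples, n_runs=10_000):
    """Monte Carlo forecast: how many sprints until the backlog is done?"""
    outcomes = []
    for _ in range(n_runs):
        done, sprints = 0, 0
        while done < remaining_points:
            # Resample a historical sprint velocity. This assumes the future
            # resembles the past, which is exactly the assumption under test.
            done += random.choice(velocity_samples)
            sprints += 1
        outcomes.append(sprints)
    return sorted(outcomes)

# Hypothetical inputs: 120 story points left, six observed sprint velocities.
runs = simulate_delivery(120, [18, 22, 15, 25, 20, 12])
p50 = runs[len(runs) // 2]          # median forecast
p85 = runs[int(len(runs) * 0.85)]   # pessimistic (85th percentile) forecast
print(f"50% confidence: {p50} sprints; 85% confidence: {p85} sprints")
```

The spread between the two percentiles is itself a useful output: a wide gap means high delivery uncertainty even when the median looks comfortable.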
Counter-signals:
- 50 years of failed prediction tools
- Fundamental uncertainty in creative work
- Garbage in, garbage out (project data is messy)
- Goodhart’s Law: measured metrics get gamed
What Would Prove This Wrong
- Predictions consistently off by over 50%
- No better accuracy than naive estimates (e.g., “double the estimate”)
- False positives cause alarm fatigue
- Agencies don’t trust the predictions
Impact If Wrong
If prediction isn’t possible:
- Murphy fails regardless of execution
- Pivot to a different value prop (project visibility, not prediction)
- Or abandon Murphy entirely
- Agency expertise becomes less valuable
Testing Plan
Technical validation:
- Build a prediction model on historical project data
- Backtest it against known outcomes
- Measure accuracy: % of predictions within 30% of the actual delivery date, with a validation bar of 70% of predictions in tolerance (scored as sketched below)
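
Scoring the backtest is mechanical once each historical project yields a (predicted, actual) pair. A minimal sketch under that assumption; the helper names and sample data are hypothetical:

```python
def within_tolerance(predicted_days, actual_days, tolerance=0.30):
    """True if the prediction landed within ±30% of the actual duration."""
    return abs(predicted_days - actual_days) <= tolerance * actual_days

def backtest_accuracy(pairs, tolerance=0.30):
    """Fraction of (predicted, actual) pairs that land within tolerance."""
    hits = sum(within_tolerance(p, a, tolerance) for p, a in pairs)
    return hits / len(pairs)

# Hypothetical backtest results: (predicted_days, actual_days) per project.
history = [(90, 110), (45, 44), (60, 95), (120, 130), (30, 70)]
print(f"{backtest_accuracy(history):.0%} of predictions within 30% of actual")
```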
Customer validation:
- Are predictions more useful than gut feel?
- Do early warnings provide actionable lead time?
- Do agencies change behavior based on predictions?
Kill criteria: if predictions aren’t more accurate than “multiply the PM estimate by 1.5”, pivot the value prop. A sketch of that baseline comparison follows.
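
The kill criterion implies a concrete benchmark to code against. A self-contained sketch with purely hypothetical numbers: each record pairs the model’s prediction and the PM’s original estimate with the actual outcome.

```python
def hit(pred, actual, tol=0.30):
    """True if pred is within ±30% of the actual duration."""
    return abs(pred - actual) <= tol * actual

# Hypothetical records: (model_prediction_days, pm_estimate_days, actual_days).
records = [(90, 60, 110), (45, 40, 44), (100, 50, 95),
           (120, 80, 130), (30, 25, 70)]

model_score = sum(hit(m, a) for m, _pm, a in records) / len(records)
naive_score = sum(hit(pm * 1.5, a) for _m, pm, a in records) / len(records)

print(f"model: {model_score:.0%} vs naive PM x 1.5: {naive_score:.0%}")
if model_score <= naive_score:
    print("Kill criterion triggered: pivot the value prop.")
```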
Related
Depends on:
- Agencies Feel Delivery Pain (🟠 ⚪ 45%) — prediction only matters if there’s pain
Affects:
- Murphy — entire product viability