Choosing the Right Delay Analysis Method: Why Credibility Matters More Than Complexity
- Jinoy Viswan
- Nov 8
- 6 min read
By Jinoy Viswan, CEO, Aegis Project Management Consultancy, Dubai
The Language of Time
Every project speaks a language. Schedules are its grammar.
Updates are its punctuation.
Progress is its story.
When delay occurs, that language begins to fracture. Sentences break.
Meaning disappears. Analysis becomes translation.
Delay analysis, at its best, is fluent in both languages: the language of planning and the language of reality.
At its worst, it speaks in numbers no one understands.
The delay analyst’s first task is not to calculate. It is to listen.
Because the method that listens best is the one that tells the truth.

Understanding Method Selection
A delay analysis method is the framework used to measure the relationship between planned work and actual performance. Each method answers a single question: What delayed completion, by how much, and why?
Keith Pickavance described delay analysis as “the engineering of time.” (Delay and Disruption in Construction Contracts, 2017). The phrase reminds us that analysis is not mathematics but interpretation.
The correct method depends on three conditions:
- Purpose: claim, defence, or project recovery.
- Data quality: reliability of baselines, updates, and as-built records.
- Contractual context: how the contract defines delay, notice, and entitlement.
Selecting the wrong method changes not only the schedule but the story of causation itself.
Observational and Modelled Approaches
| Category | Nature | Techniques | Application |
| --- | --- | --- | --- |
| Observational | Reviews actual performance. | Windows Time Slice Analysis (WTSA), As-Planned vs As-Built (APAB) | Use when contemporaneous data exist. |
| Modelled | Simulates hypothetical performance. | Time Impact Analysis (TIA), Collapsed As-Built (CAB) | Use when updates are incomplete or for prospective analysis. |

Roger Gibson noted that the windows technique “provides the most transparent link between progress and delay when updates are reliable.” (Construction Delays, 2014).
Transparency depends on data integrity, not software power.
Prospective and Retrospective Application of Methods
Delay analysis may be prospective or retrospective, depending on when the analysis is undertaken.
Prospective analysis is performed during execution to forecast the likely effect of an event before the works are complete. It relies on current data and supports timely extension-of-time decisions. Time Impact Analysis (TIA) and other model-based simulations are best suited for this purpose because they project delay using existing logic and known durations.
Retrospective analysis is undertaken after the event or after completion. It uses factual records to measure what actually happened rather than to predict. Windows Time Slice (WTSA), As-Planned vs As-Built (APAB), and Collapsed As-Built (CAB) fall in this category.

The SCL Protocol recommends applying a prospective method when reliable current data exist and a retrospective method once the event has passed. The AACE RP 29R-03 expresses the same principle by linking method selection to data availability and timing. The analyst should declare whether each method is applied prospectively or retrospectively and ensure that its logic matches the evidence available at that point in the project timeline.
The Framework for Choosing a Delay Analysis Method
Core Criteria
| Criterion | Meaning | Key Question |
| --- | --- | --- |
| Transparency | Logic must be traceable and reproducible. | Can another analyst replicate it? |
| Causation | Must link events to delay, not only show overlap. | Does it explain why the delay occurred? |
| Reasonableness | Must reflect field reality. | Does it make sense to those who built the project? |
Nicholas Gould observed that “the value of an analysis is measured not by its complexity but by its ability to persuade.” (SCL Paper on Delay and Disruption, 2006). Clarity persuades more than ornamentation.
Schedule Quality Screen
Before selecting any method, validate every programme. Check for open ends, excessive constraints, out-of-sequence progress, negative lags, inconsistent calendars, and progress overrides. Record corrections made.
A method is only as reliable as the schedule from which it reads.
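Part of this quality screen can be automated. The sketch below is a minimal, illustrative example covering two of the anomalies listed above, open ends and negative lags, assuming a simple dictionary model for activities and links; the field names and data model are assumptions, not any particular scheduling tool's format.

```python
# Minimal schedule-quality screen (illustrative; the dictionary field
# names "id", "pred", "succ", "lag" are assumptions for this sketch).

def screen_schedule(activities, links):
    """Flag open ends and negative lags in a simple activity/link model."""
    flags = []
    linked_preds = {l["succ"] for l in links}   # activities that have a predecessor
    linked_succs = {l["pred"] for l in links}   # activities that have a successor
    for act in activities:
        aid = act["id"]
        # Open end: an activity missing a predecessor or successor link.
        if aid not in linked_preds:
            flags.append((aid, "no predecessor (open start)"))
        if aid not in linked_succs:
            flags.append((aid, "no successor (open end)"))
    for l in links:
        # Negative lag: often a sign of forced sequencing that hides slippage.
        if l.get("lag", 0) < 0:
            flags.append((f'{l["pred"]}->{l["succ"]}', "negative lag"))
    return flags

activities = [{"id": "A"}, {"id": "B"}, {"id": "C"}]
links = [{"pred": "A", "succ": "B", "lag": 0},
         {"pred": "B", "succ": "C", "lag": -2}]
print(screen_schedule(activities, links))
```

A real screen would also test constraints, calendars, and out-of-sequence progress; the point is that each check, and each correction, leaves a record.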
Decision Matrix
| Project Condition | Preferred Method | Reason for Choice |
| --- | --- | --- |
| Reliable monthly updates | WTSA | Captures evolving critical paths. |
| Incomplete updates, clear baseline | TIA | Models discrete event impact. |
| Limited data, known start and finish | APAB | Compares overall plan to outcome. |
| Completed project, full as-built logic | CAB | Retrospective reconstruction. |
| Mixed causes or concurrency | Hybrid WTSA + TIA | Enables cross-verification. |
Also consider the audience. Negotiations may accept a concise observational story; technical panels require a reproducible audit trail and declared settings.
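A selection rule like this matrix can be written down once and applied consistently, which is exactly what the later section on hybrid approaches demands. The sketch below is an illustrative encoding of the matrix, not a substitute for professional judgment; the boolean conditions are simplified assumptions.

```python
# Illustrative method-selection rule based on the decision matrix above.
# The boolean inputs are simplified assumptions; real selection also weighs
# contractual context, purpose, and audience.

def select_method(updates_reliable, baseline_clear, as_built_logic,
                  concurrency_suspected):
    """Apply the matrix top-down; the first matching condition wins."""
    if concurrency_suspected and updates_reliable:
        return "Hybrid WTSA + TIA"   # cross-verification of mixed causes
    if updates_reliable:
        return "WTSA"                # evolving critical paths from updates
    if baseline_clear:
        return "TIA"                 # model discrete events on the baseline
    if as_built_logic:
        return "CAB"                 # retrospective reconstruction
    return "APAB"                    # orientation only: plan vs outcome

print(select_method(updates_reliable=True, baseline_clear=True,
                    as_built_logic=False, concurrency_suspected=False))  # → WTSA
```

Recording the rule in this explicit form makes the choice auditable: anyone can re-run it against the same project conditions and reach the same method.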
Method Prerequisites
| Method | Prerequisite Data | Limitations and Cautions |
| --- | --- | --- |
| WTSA | Approved baseline and reliable updates. | Not valid where updates are sporadic or logic repeatedly changed. |
| TIA | Defined fragnets for each event with factual durations. | Avoid cumulative stacking without calibration. |
| APAB | Credible as-built dates. | Suitable for orientation only; not for quantified entitlement. |
| CAB | Robust as-built logic. | Declare all assumptions; use to test hypotheses, not as sole evidence. |
Common Methods in Practice
Windows Time Slice Analysis (WTSA)
Divides the project into time windows and measures critical-path movement. Shows delay as it unfolded. Before declaring concurrency, confirm independent, co-extensive critical paths and verify float absorption.
Aegis Insight: WTSA mirrors project reality only if the mirror was kept clean.
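The core arithmetic of a windows analysis is simple even though the critical-path work behind it is not: the delay attributed to each window is the movement of the forecast completion date between successive data dates. A minimal sketch, with invented dates for illustration:

```python
from datetime import date

# Illustrative WTSA arithmetic (all dates invented): the delay attributed
# to each window is the movement of the forecast completion date between
# successive schedule updates.

def window_slips(updates):
    """Days of completion-date movement per window.

    `updates` is a list of (data date, forecast completion) pairs in order.
    """
    return [(after - before).days
            for (_, before), (_, after) in zip(updates, updates[1:])]

updates = [
    ("2023-01-31", date(2024, 6, 1)),
    ("2023-02-28", date(2024, 6, 15)),
    ("2023-03-31", date(2024, 6, 15)),
    ("2023-04-30", date(2024, 7, 5)),
]
print(window_slips(updates))  # → [14, 0, 20]
```

The analysis itself lies in explaining each non-zero number: which activities drove the critical path in that window, and which events caused them to slip.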
Time Impact Analysis (TIA)
Inserts delay events into a baseline to simulate impact. Build fragnets from contemporaneous documents. Disclose calculation settings and maintain consistent retained-logic or progress-override usage across all runs.
David Barry advised that “a methodology must correspond with the contractual machinery it seeks to explain.” (Extension of Time in Construction Contracts, 2011).
As-Planned vs As-Built (APAB)
Compares planned and actual dates. Illustrates variance but not cause. Do not infer concurrency from overlap alone; escalate to a critical-path test before drawing conclusions.
Collapsed As-Built (CAB)
Removes events from the as-built to test hypothetical completion. Delete only events proven independently critical at that time. Disclose logic settings, calendars, and resource-levelling choices. Use this method to test hypotheses and corroborate contemporaneous findings.
Integrating Contractual and Technical Logic
Methods must conform to the contract’s procedural framework. FIDIC Sub-Clause 8.4 grants time relief for events beyond the contractor’s control. WTSA and TIA, which demonstrate actual impact, best meet this intent. Using a method inconsistent with procedure weakens any claim.
Before running analysis, verify the record base: approved baselines, progress updates, reports, and correspondence. If the record did not exist when the delay occurred, it cannot prove how the delay unfolded.
Balancing Complexity and Clarity
Complex models often confuse more than they convince. Clarity is credibility. Aegis applies forensic proportionality: use only as much technique as the evidence can support.
Aegis Insight: A simple truth well proven is stronger than a complex truth unproven.
Concurrency, Pacing and Hybrid Approaches
Where concurrency is suspected, demonstrate independent, critical, co-extensive delays. Do not label overlaps as concurrency unless both were critical and effective simultaneously.
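The co-extensive test has a mechanical core: only the days on which two independently critical delays actually overlap are even candidates for concurrency. A sketch of that overlap arithmetic, with invented dates; proving that each period was independently critical remains a separate, evidentiary exercise.

```python
from datetime import date

def concurrent_days(d1_start, d1_end, d2_start, d2_end):
    """Co-extensive overlap of two delay periods, in days (0 if none)."""
    overlap_start = max(d1_start, d2_start)
    overlap_end = min(d1_end, d2_end)
    return max(0, (overlap_end - overlap_start).days)

# Two delay periods, each assumed independently critical (to be proven
# separately from the schedule, not from this arithmetic):
employer_delay = (date(2023, 3, 1), date(2023, 3, 20))
contractor_delay = (date(2023, 3, 10), date(2023, 3, 25))
print(concurrent_days(*employer_delay, *contractor_delay))  # → 10
```

A positive overlap is necessary but never sufficient: both delays must also have been critical and effective on those same days before concurrency can be declared.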
When pacing is alleged, show documentary evidence that the slowdown was intentional and mitigation-based.
If using a hybrid, define in advance which events use which method and why. Do not change method mid-analysis to favour outcomes. Apply the selection rule consistently.
Modelling Settings Disclosure
Disclose calculation engine, retained-logic or progress-override mode, calendar configuration, and any resource levelling applied.
Maintain these settings across runs. Undisclosed configuration changes create uncertainty greater than any modelling error.
Audit Trail and Evidence Hierarchy
Maintain a full audit trail: versioned programmes, change logs, event evidence catalogue, and run logs recording settings and calculation dates.
Reproducibility is professional diligence.
Evidence Hierarchy
1. Approved baseline and updates
2. Contemporaneous progress data and meeting minutes
3. Correspondence and RFIs
4. Resource histograms and time sheets
5. Reconstructed logic and expert assumptions
Base conclusions as high on this hierarchy as possible.
Common Errors and Ethical Discipline
Frequent errors include: outdated or unapproved baselines; ignoring float ownership; combining rather than segregating concurrent delays; rebuilding logic retrospectively.
Robert Knowles reminded practitioners that “procedural discipline is the foundation of contractual fairness.” (200 Contractual Problems and their Solutions, 2000).
John Uff QC wrote that “fairness is not imposed by statute but practised through sound administration.” (Construction Law, 2022). That principle applies equally to analysis.
This remains the professional creed.
Tailpiece: The Method Serves the Record
Delay analysis is a science of evidence.
The method must emerge from the record, not from preference.
When it fits the data, the result feels inevitable.
When it does not, it feels forced.
The analyst’s duty is to respect logic, not to invent it.
Integrity of method mirrors integrity of practice.
A good method does not impress with complexity; it convinces through honesty.
Delay Analysis Readiness Checklist
- Baseline and updates collected and quality-checked
- Calendars, constraints, and logic anomalies documented
- Evidence catalogue built by event and window
- Method-selection rule recorded and applied consistently
- Engine settings disclosed and fixed across comparisons
- Concurrency tested for independent, co-extensive criticality
- Audit trail compiled with all inputs and outputs


