"Analysis of Competing Hypotheses" to the rescue in incident response cases
A practical walkthrough of how ACH can reduce bias and improve decision quality during cyber investigations and incident response.
Background on Analysis of Competing Hypotheses (ACH)
Analysis of competing hypotheses, also known as ACH, is a methodology used to avoid or reduce cognitive bias when interpreting information.
The ACH method was developed by Richard J. Heuer Jr., and it remains highly useful for structured analysis today.
The method is described in chapter 8 of Psychology of Intelligence Analysis.
Structured analysis uses structured techniques to mitigate the adverse impact on our analysis of known cognitive limitations and pitfalls. The most distinctive characteristic is that structured techniques externalize and decompose our thinking in a manner that enables it to be reviewed and critiqued piece by piece, or step by step, by other knowledgeable analysts.
The 8 Steps of Analysis of Competing Hypotheses (ACH)
- Identify the possible hypotheses with analysts from different perspectives.
- List significant evidence and arguments for and against each hypothesis.
- Prepare a matrix with hypotheses across the top and evidence down the side, then assess diagnosticity.
- Refine the matrix and remove evidence or arguments without diagnostic value.
- Draw tentative conclusions by trying to disprove hypotheses rather than prove them.
- Test conclusion sensitivity against critical evidence that may be wrong or misleading.
- Report conclusions and relative likelihood for all hypotheses, not only the top one.
- Identify future milestones that may indicate events are shifting course.
Source: Psychology of Intelligence Analysis.
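The steps above can be sketched as a small matrix exercise. The following is a minimal illustration with made-up hypotheses and ratings (`C` consistent, `I` inconsistent, `N` neutral), not an example from Heuer's text:

```python
# Hypothetical ACH matrix: evidence down the side, hypotheses across the top.
hypotheses = ["H1", "H2", "H3"]
evidence = {
    "E1": {"H1": "C", "H2": "C", "H3": "C"},  # consistent with every hypothesis
    "E2": {"H1": "C", "H2": "I", "H3": "N"},
    "E3": {"H1": "N", "H2": "I", "H3": "I"},
}

def is_diagnostic(scores):
    """Evidence that rates every hypothesis the same has no diagnostic value."""
    return len(set(scores.values())) > 1

# Step 4: refine the matrix, dropping evidence without diagnostic value.
diagnostic_evidence = {e: s for e, s in evidence.items() if is_diagnostic(s)}

# Step 5: rank by inconsistencies -- try to disprove hypotheses, not prove them.
def inconsistency_count(h):
    return sum(1 for s in diagnostic_evidence.values() if s[h] == "I")

ranking = sorted(hypotheses, key=inconsistency_count)
print(ranking)  # fewest inconsistencies first
```

Here E1 is consistent with everything, so it is removed in the refinement step; the remaining evidence leaves H1 with the fewest inconsistencies.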
ACH in the Cyber Domain
ACH is widely used in threat intelligence to reduce bias during analysis. Rather than forcing evidence into a single favored theory, ACH compares evidence against all hypotheses at once.
This supports more balanced decision-making and helps surface hypotheses that might otherwise be overlooked.
The same model is useful during incident response and investigative engagements.
Story Line
As a worked example, this walkthrough draws on the DFIR report: BazarCall to Conti Ransomware via Trickbot and Cobalt Strike.
The threat actors used BazarCall to install Trickbot, then executed a Cobalt Strike beacon, moved laterally, and later deployed Conti ransomware across the domain.
Initial access was through phishing with a weaponized spreadsheet attachment. The campaign then escalated through persistence, discovery, and credential access techniques.
What Types of Hypotheses Could Be Used During an Incident or Investigation?
Question: Who is the adversary in investigation "31337 attack"?
- H1 - Targeted attack
- H2 - Commodity attack
- H3 - Insider
- H4 - Strategic partner
- H5 - No attack, but an operational issue
Evidence
Evidence can be marked as supporting or contradicting each hypothesis, and mapped across all hypotheses to reduce analyst bias.
- Evidence 1: Initial access through phishing email with attachment.
- Evidence 2: Infrastructure discovery using native tools plus ADFind and Bloodhound.
- Evidence 3: Use of Cobalt Strike as remote access tooling.
- Evidence 4: Credential access via procdump and NTDSUTIL.
- Evidence 5: Deployment of Conti ransomware on infrastructure.
ACH Scoring - Evidence vs Hypothesis
The simplified scoring model compares evidence against each hypothesis to identify the most plausible outcome.
Most-to-least plausible ordering in this example:
- Targeted attack
- Strategic partner
- Commodity attack
- Insider
- Operational issue
Scoring can also support prioritization of countermeasures and urgency during an incident.
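A minimal sketch of such a simplified scoring model, using the five hypotheses and five pieces of evidence above. The per-cell scores (+1 supports, 0 neutral, -1 contradicts) are illustrative assumptions chosen for this walkthrough, not values taken from the DFIR report:

```python
# Hypothetical ACH scoring: +1 = evidence supports the hypothesis,
# 0 = neutral, -1 = contradicts. All cell values are illustrative.
hypotheses = ["Targeted attack", "Commodity attack", "Insider",
              "Strategic partner", "Operational issue"]

# Rows are E1..E5 from the investigation; columns follow the order above.
matrix = {
    "E1 phishing with attachment":    [1,  1, -1, 1, -1],
    "E2 ADFind/Bloodhound discovery": [1, -1,  0, 1, -1],
    "E3 Cobalt Strike beacon":        [1,  1, -1, 1, -1],
    "E4 procdump/NTDSUTIL creds":     [1,  0,  0, 0, -1],
    "E5 Conti ransomware deployed":   [1,  1, -1, 0, -1],
}

# Sum each hypothesis column and rank from most to least plausible.
totals = {h: sum(row[i] for row in matrix.values())
          for i, h in enumerate(hypotheses)}
ordering = sorted(hypotheses, key=totals.get, reverse=True)
print(ordering)
```

With these illustrative scores, the ranking reproduces the ordering above: targeted attack first, operational issue last. In practice the same totals can feed prioritization of countermeasures and urgency.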