RedSheep Security
Intermediate — Lesson 15 of 12

Threat Hunting Foundations

11 min read

Threat hunting is the proactive, human-driven search for adversary activity that has evaded existing automated defenses. Unlike detection engineering, which builds automated alerts to catch known threats, hunting assumes that an adversary may already be present in the environment and actively seeks evidence of their activity. For CTI analysts, threat hunting represents one of the most direct ways to operationalize intelligence — turning knowledge about adversary behavior into active searches for that behavior in organizational data.

Learning Objectives

  • Define threat hunting and distinguish it from automated detection
  • Understand the three primary hunting methodologies: hypothesis-driven, IOC-driven, and TTP-driven
  • Apply the PEAK framework (Prepare, Execute, Act with Knowledge) to structure a hunt
  • Generate hunting hypotheses from cyber threat intelligence
  • Assess organizational hunting maturity using the Sqrrl Hunting Maturity Model

What Is Threat Hunting?

Threat hunting is a proactive, iterative process conducted by skilled analysts who search through data to find threats that automated tools have missed. Several characteristics define true threat hunting:

  • Proactive: Hunting begins before an alert fires. The hunter initiates the search based on a hypothesis, intelligence, or curiosity — not in response to a detection.
  • Human-driven: While hunters use tools extensively, the process depends on human judgment, creativity, and analytical reasoning. Automated queries alone are not hunting.
  • Iterative: Hunting is a cycle. Findings from one hunt generate new hypotheses for the next. Each hunt improves the organization's understanding of its environment.
  • Assumes compromise: Hunting operates under the assumption that adversaries may already be present. This mindset shifts the question from "Are we compromised?" to "Where is the adversary hiding?"

Key Definition: Threat hunting is the proactive, analyst-driven search for attacker tactics, techniques, and procedures (TTPs) within an environment, operating under the assumption that existing defenses may have been bypassed.

Hunting vs. Detection

Understanding the distinction between hunting and detection is critical because the two disciplines serve different but complementary purposes:

| Aspect | Threat Hunting | Automated Detection |
| --- | --- | --- |
| Trigger | Hypothesis, intelligence, or analyst initiative | Alert fires based on predefined rule or signature |
| Nature | Proactive — searching for unknown threats | Reactive — responding to known patterns |
| Automation | Human-driven analysis using tools | Automated matching against rules/signatures |
| Scope | Broad, exploratory | Narrow, specific to the detection rule |
| Output | New detections, hardened defenses, intelligence | Alerts for triage and investigation |
| Frequency | Periodic campaigns | Continuous, real-time |

The relationship between hunting and detection is cyclical: intelligence drives hunting hypotheses, hunts discover adversary behavior, and confirmed findings become new automated detection rules. This cycle continuously expands the organization's detection coverage.

Hunting Methodologies

There are three primary approaches to threat hunting, each suited to different situations and maturity levels.

Hypothesis-Driven Hunting

Hypothesis-driven hunting is the most analytically rigorous approach. The hunter formulates a testable hypothesis about adversary behavior and then searches for evidence to confirm or refute it.

Process:

  1. Formulate a hypothesis based on threat intelligence, environmental knowledge, or adversary behavior models
  2. Identify the data sources needed to test the hypothesis
  3. Develop search queries or analytical procedures
  4. Execute the hunt, examining the data for evidence
  5. Document findings, whether the hypothesis was confirmed or refuted
  6. If confirmed, develop detections; if refuted, refine the hypothesis or move on

Example hypothesis: "An adversary using Cobalt Strike in our environment would generate periodic HTTPS beaconing to an external IP address with a consistent time interval between connections, which would be visible in our proxy logs as repeated connections to an uncategorized domain at regular intervals."
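A hypothesis like this can be tested with a simple interval-regularity check. The sketch below uses invented proxy-log records and a hypothetical destination domain; it computes the coefficient of variation of the gaps between connections, where values near zero suggest machine-generated beaconing rather than human browsing:

```python
from datetime import datetime, timedelta
from statistics import mean, stdev

# Hypothetical proxy-log records: (timestamp, destination domain).
# A real hunt would parse these from proxy logs; values are illustrative.
events = [
    (datetime(2024, 1, 1, 9, 0) + timedelta(seconds=60 * i), "update-cdn.example")
    for i in range(20)
]

def beacon_score(timestamps):
    """Coefficient of variation of inter-connection gaps.

    Near 0.0 means highly regular intervals, consistent with beaconing.
    Returns None when there are too few connections to judge.
    """
    deltas = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    if len(deltas) < 2:
        return None
    return stdev(deltas) / mean(deltas)

times = [ts for ts, dest in events if dest == "update-cdn.example"]
score = beacon_score(times)
print(f"interval regularity (CV): {score:.3f}")
```

In practice, real traffic includes jitter (many C2 frameworks add it deliberately), so a hunt would tolerate small CV values and combine this signal with others such as destination categorization and byte-size uniformity.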

IOC-Driven Hunting

IOC-driven hunting uses specific indicators of compromise — IP addresses, domain names, file hashes, URLs, email addresses — to search for evidence of known threats in the environment. This is the most straightforward approach but also the most limited.

Process:

  1. Receive IOCs from threat intelligence (reports, feeds, ISAC sharing, government advisories)
  2. Search historical and current data for matches against the IOCs
  3. Investigate any matches for context and scope
  4. Determine whether matches represent true positives
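At their simplest, steps 1 and 2 amount to matching indicator strings against historical log data. A minimal sketch, with invented IOCs and log lines standing in for real intelligence and real telemetry:

```python
# Illustrative IOC set; in a real sweep these would come from threat
# intelligence reports, feeds, or sharing communities.
iocs = {
    "185.220.101.7",                       # hypothetical C2 IP address
    "files-update.example.net",            # hypothetical malicious domain
    "d41d8cd98f00b204e9800998ecf8427e",    # hypothetical file hash
}

# Illustrative firewall/DNS log lines.
log_lines = [
    "2024-01-05 10:02:11 ALLOW 10.0.0.5 -> 185.220.101.7:443",
    "2024-01-05 10:03:40 DNS query files-update.example.net from 10.0.0.9",
    "2024-01-05 10:04:02 ALLOW 10.0.0.5 -> 52.1.2.3:443",
]

# Substring matching is enough for a sketch; production sweeps would use
# field-aware matching to avoid partial-string false positives.
matches = [(line, ioc) for line in log_lines for ioc in iocs if ioc in line]

for line, ioc in matches:
    print(f"HIT {ioc}: {line}")
```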

Limitations: IOCs are the most perishable form of intelligence. Adversaries routinely change infrastructure, recompile malware to alter hashes, and rotate domains. An IOC-only approach will miss adversaries who have changed their indicators since the intelligence was published.

TTP-Driven Hunting

TTP-driven hunting searches for adversary behaviors rather than specific indicators. Because TTPs are more difficult and costly for adversaries to change than indicators, TTP-based hunts have a longer shelf life and are more likely to detect novel variants of known attack patterns.

Process:

  1. Select adversary TTPs relevant to the organization (from MITRE ATT&CK, threat reports, or incident experience)
  2. Understand the data artifacts that each TTP produces
  3. Build behavioral searches that detect the pattern, not the specific tool
  4. Execute searches against relevant data sources
  5. Analyze results, filtering expected behavior to identify anomalies

Example: Instead of searching for a specific Mimikatz hash (IOC-driven), search for the behavior Mimikatz performs — accessing LSASS process memory (TTP-driven). This catches Mimikatz, its variants, and any other tool that performs the same credential dumping technique.
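As a sketch of such a behavioral search, the following filters process-access events for reads of LSASS memory. The event shape loosely mimics Sysmon Event ID 10 (ProcessAccess), but the records and field names here are illustrative, not an exact Sysmon schema:

```python
# Windows access-mask bit for reading another process's memory.
PROCESS_VM_READ = 0x0010

# Illustrative process-access events (field names are simplified).
events = [
    {"source": "C:\\Windows\\System32\\svchost.exe",
     "target": "C:\\Windows\\System32\\lsass.exe", "access": 0x1400},
    {"source": "C:\\Users\\bob\\AppData\\Local\\Temp\\x.exe",
     "target": "C:\\Windows\\System32\\lsass.exe", "access": 0x1010},
]

def reads_lsass_memory(event):
    """True when a process requests read access to LSASS memory."""
    return (
        event["target"].lower().endswith("lsass.exe")
        and bool(event["access"] & PROCESS_VM_READ)
    )

hits = [e for e in events if reads_lsass_memory(e)]
for e in hits:
    print("LSASS read access from", e["source"])
```

Note that the pattern keys on the behavior (a read handle to LSASS) rather than any tool name or hash, so it fires regardless of which credential-dumping utility performs the access; real environments would also require baselining, since some legitimate software (AV, backup agents) touches LSASS.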

The Pyramid of Pain, introduced by David Bianco in 2013, illustrates why TTP-driven hunting is more effective: hash values and IP addresses are trivial for adversaries to change, while TTPs require fundamental changes to their operations.

The PEAK Framework

PEAK (Prepare, Execute, Act with Knowledge) is a threat hunting framework developed by Splunk's SURGe research team. It structures the hunting process into three phases (Prepare, Execute, and Act), with knowledge feeding into and flowing out of every phase, and provides a repeatable methodology that organizations can adopt regardless of their maturity level.

Prepare

The preparation phase establishes the foundation for a successful hunt. Activities include:

  • Define the hunt objective: What are you looking for and why? This should connect to a PIR, a specific threat, or an intelligence-driven hypothesis.
  • Gather relevant intelligence: Review threat reports, ATT&CK technique descriptions, and prior hunt findings that inform the hypothesis.
  • Identify data sources: Determine which logs, telemetry, and data stores are needed and verify they are available and accessible.
  • Develop the hunt plan: Document the hypothesis, data sources, search queries, expected timelines, and success criteria.
  • Assess feasibility: Ensure the necessary data exists with sufficient retention, fidelity, and coverage.

Execute

The execution phase is where the analyst actively searches for adversary activity:

  • Run queries and searches: Execute the planned searches against identified data sources (SIEM, EDR, network tools).
  • Analyze results: Review query results, looking for anomalies, unexpected patterns, or evidence of the hypothesized behavior.
  • Pivot and iterate: When something interesting is found, pivot to related data sources or adjust queries to explore further.
  • Document everything: Record queries run, data examined, findings (including negative findings), and any environmental observations.

Act

The act phase translates hunt findings into organizational improvements:

  • Report findings: Document what was found (or not found) and communicate results to stakeholders.
  • Develop detections: Convert confirmed adversary behaviors into automated detection rules for the SIEM or EDR.
  • Recommend mitigations: If vulnerabilities or misconfigurations were discovered, recommend remediation actions.
  • Update intelligence: Feed findings back into the CTI process — new indicators, confirmed TTPs, or environmental insights.
  • Generate new hypotheses: Use findings to develop hypotheses for future hunts.

Data Sources for Hunting

Effective hunting requires access to rich, detailed data. The most valuable sources include:

Endpoint Data

  • EDR telemetry: Process creation with full command lines, parent-child relationships, file operations, registry modifications, network connections per process
  • Windows Event Logs: Security log (4624/4625 logon events, 4688 process creation), PowerShell script block logging (4104), Sysmon (detailed process, network, file, and registry events)
  • File system artifacts: Prefetch files, shimcache, amcache, recent files

Network Data

  • DNS logs: Query and response logs — essential for detecting C2, DGA domains, and DNS tunneling
  • Proxy/web logs: URL requests, user agents, response codes, bytes transferred
  • Netflow/IPFIX: Network connection metadata (source, destination, ports, bytes, duration)
  • Firewall logs: Allowed and denied connections, especially egress traffic
  • PCAP: Full packet captures for deep analysis (high storage cost, typically used selectively)
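One common behavioral technique against DNS logs is entropy scoring: algorithmically generated (DGA) labels tend to have higher character entropy than human-chosen names. A minimal sketch with invented query names; any real deployment would tune thresholds per environment and whitelist known high-entropy services such as CDNs:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy of a string in bits per character."""
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Illustrative DNS query names (not from any real log).
queries = ["mail.google.com", "xjq7r2kd9vmz3lp8.net", "intranet.corp.local"]

for q in queries:
    label = q.split(".")[0]  # score the leftmost label only
    print(f"{label}: {shannon_entropy(label):.2f} bits")
```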

Authentication Data

  • Active Directory logs: Logon events, privilege changes, group membership modifications
  • VPN logs: Remote access connections, source IPs, session durations
  • Cloud identity logs: Azure AD/Entra ID sign-in logs, conditional access results, MFA events

Email Data

  • Email gateway logs: Sender, recipient, subject, attachment hashes, URLs, delivery verdicts
  • Phishing reports: User-reported suspicious emails

Measuring Hunt Success

Hunting success should be measured across multiple dimensions, not solely by whether threats were found:

  • Findings per hunt: Number of confirmed true positive findings (adversary activity, misconfigurations, policy violations)
  • Detections created: Number of new automated detection rules produced from hunt findings
  • Time to detection improvement: Reduction in dwell time for threats similar to those hunted
  • Coverage expansion: New data sources identified or onboarded as a result of hunting
  • Hypothesis outcomes: Ratio of confirmed vs. refuted hypotheses (a healthy program has both — all confirmed suggests hypotheses are not ambitious enough)
  • Environmental knowledge gained: Understanding of normal baselines, data gaps, and infrastructure behavior

The Sqrrl Hunting Maturity Model

The Hunting Maturity Model (HMM) was developed by Sqrrl (later acquired by Amazon Web Services) to help organizations assess and improve their hunting capabilities. It defines five maturity levels:

| Level | Name | Description |
| --- | --- | --- |
| HMM0 | Initial | No hunting capability. Organization relies entirely on automated detection. |
| HMM1 | Minimal | Organization can search for indicators (IOC-driven hunting) using threat intelligence feeds. Limited to searching for known-bad artifacts. |
| HMM2 | Procedural | Organization follows published hunting procedures and playbooks. Hunts are repeatable but follow prescribed steps rather than original hypotheses. |
| HMM3 | Innovative | Organization creates new hunting hypotheses based on threat intelligence and environmental knowledge. Analysts develop custom queries and analytical approaches. |
| HMM4 | Leading | Organization automates successful hunts, contributes to community knowledge, and continuously develops novel hunting techniques. Hunting findings systematically feed back into detection engineering and intelligence. |

Most organizations operate at HMM1 or HMM2. Reaching HMM3 requires skilled analysts, quality data, and CTI integration. HMM4 is aspirational for most teams and represents a fully mature, feedback-driven hunting program.

The Role of CTI in Driving Hunts

CTI and threat hunting have a symbiotic relationship. CTI drives hunting by providing:

  • Hypothesis fuel: Threat reports identifying adversary TTPs provide the basis for hunting hypotheses
  • Targeting guidance: PIRs identify which threats matter most, helping hunters prioritize
  • IOCs for sweeps: New indicators from intelligence sharing enable immediate IOC searches
  • Contextual enrichment: When a hunter finds something anomalous, CTI helps determine whether it matches known adversary behavior

Hunting drives CTI by producing:

  • Environmental intelligence: Understanding what "normal" looks like in the organization
  • Confirmed TTP observations: Validation that specific adversary techniques are viable in the environment
  • New indicators: Discovery of previously unknown malicious infrastructure or tools
  • Intelligence gaps: Identifying what information would have made the hunt more effective

Key Takeaways

  • Threat hunting is proactive, human-driven, and assumes compromise — it is fundamentally different from automated detection
  • The three primary methodologies are hypothesis-driven (most rigorous), IOC-driven (most accessible), and TTP-driven (most durable)
  • The PEAK framework (Prepare, Execute, Act with Knowledge) provides a structured, repeatable approach to conducting hunts
  • Quality data is the foundation of hunting — endpoint, network, authentication, and DNS logs are essential
  • The Sqrrl Hunting Maturity Model (HMM0-HMM4) provides a roadmap for organizational improvement
  • CTI and hunting are symbiotic: intelligence drives hypotheses, and hunt findings generate new intelligence

Practical Exercise

Conduct a tabletop threat hunt using the PEAK framework:

  1. Prepare: Choose a MITRE ATT&CK technique relevant to your environment (e.g., T1053 — Scheduled Task/Job). Read the ATT&CK page for the technique, including procedure examples and data sources.
  2. Develop a hypothesis: Write a testable hypothesis (e.g., "An adversary using scheduled tasks for persistence would create new scheduled tasks with unusual names or pointing to executables in non-standard directories, which would be visible in Windows Security Event 4698").
  3. Identify data sources: List the specific logs and data you would need. Assess whether you have access to those data sources in your environment.
  4. Write search queries: Draft 2-3 SIEM queries (SPL, KQL, or pseudocode) that would test your hypothesis.
  5. Define success criteria: What would a true positive look like? What would a false positive look like? How would you distinguish them?
  6. Plan the Act phase: If you found adversary activity, what detections would you create? What would you report and to whom?
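As one possible shape for step 4, the sketch below filters hypothetical Event 4698 (scheduled task created) records for task actions pointing outside standard directories. The sample records and field names are invented for illustration; an equivalent SPL or KQL query would apply the same path filter server-side:

```python
# Directories where legitimately scheduled executables usually live.
STANDARD_DIRS = (
    "c:\\windows\\",
    "c:\\program files\\",
    "c:\\program files (x86)\\",
)

# Illustrative Event 4698 records (simplified fields).
events_4698 = [
    {"task_name": "\\Microsoft\\Windows\\Defrag\\ScheduledDefrag",
     "command": "C:\\Windows\\System32\\defrag.exe"},
    {"task_name": "\\Updater",
     "command": "C:\\Users\\Public\\winupd.exe"},
]

def is_suspicious(event):
    """Flag tasks whose executable lives outside standard directories."""
    return not event["command"].lower().startswith(STANDARD_DIRS)

flagged = [e for e in events_4698 if is_suspicious(e)]
for e in flagged:
    print("Review task", e["task_name"], "->", e["command"])
```

For the success criteria in step 5: a true positive would pair an unusual path with other signals (odd task name, creation by a non-admin account, recency); a task created by a legitimate installer in a user directory is the most likely false positive class.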

Further Reading

  • Lee, R. & Lee, D. (SANS). SANS FOR508: Advanced Incident Response, Threat Hunting, and Digital Forensics. (Industry-standard course covering hunting methodology)
  • Bianco, D. (2013). The Pyramid of Pain. Enterprise Detection & Response blog. https://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html
  • Sqrrl. (2016). A Framework for Cyber Threat Hunting. (Introduces the Hunting Maturity Model; archived copies available online)
  • MITRE ATT&CK: https://attack.mitre.org/ (Essential reference for TTP-driven hunting)