RedSheep Security
Foundations — Lesson 5 of 10

Indicators of Compromise (IOCs)

12 min read

Indicators of Compromise (IOCs) are forensic artifacts that suggest a system or network has been breached or is under attack. They are the most tangible output of threat intelligence and the most commonly shared type of threat data. However, IOCs have significant limitations that every CTI analyst must understand. This lesson covers IOC types, the Pyramid of Pain framework, IOC lifecycle, classification models, sourcing, and the pitfalls of over-reliance on indicator-driven defense.

Learning Objectives

  • Identify and describe the common types of IOCs used in threat intelligence
  • Explain the Pyramid of Pain and its implications for defensive strategy
  • Understand IOC lifecycle and the concept of indicator decay
  • Distinguish between atomic, computed, and behavioral indicators
  • Evaluate the strengths and limitations of IOC-driven threat intelligence

What Are Indicators of Compromise?

Definition: An Indicator of Compromise (IOC) is an observable artifact — a piece of forensic data — that, with high confidence, identifies malicious activity on a system or network. IOCs are used to detect, respond to, and share information about threats.

IOCs are the technical building blocks of threat intelligence. They are what analysts extract from malware samples, incident investigations, and threat reports. They are what get ingested into SIEMs, fed to firewalls, and shared through ISACs.

Types of IOCs

IOCs span multiple categories of technical artifacts. Each type has different detection characteristics, durability, and operational utility.

Network-Based IOCs

| IOC Type | Description | Example | Detection Method |
| --- | --- | --- | --- |
| IP Addresses | IPv4 or IPv6 addresses associated with malicious activity | 198.51.100.23 (C2 server) | Firewall logs, DNS logs, proxy logs, netflow |
| Domain Names | Fully qualified domain names used for C2, phishing, or staging | update-service[.]malicious[.]com | DNS logs, proxy logs, passive DNS |
| URLs | Full uniform resource locators including path and parameters | hxxps://example[.]com/payload/stage2.exe | Proxy logs, web gateway logs |
| Email Addresses | Sender addresses used in phishing campaigns | hr-department@spoofed-company[.]com | Email gateway logs, mail server logs |
| SSL/TLS Certificates | Certificate hashes or attributes associated with malicious infrastructure | SHA-1 fingerprint of a self-signed cert on a C2 server | TLS inspection logs, certificate transparency logs |
| JA3/JA3S Hashes | Fingerprints of TLS client/server handshake parameters | a0e9f5d64349fb13191bc781f81f42e1 | Network monitoring tools that compute JA3 hashes |
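Note that published network indicators are usually "defanged", as in the examples above (hxxps, [.]), so they cannot be accidentally clicked or resolved. Before loading them into detection tooling they must be converted back. A minimal refang helper, sketched in Python (the function name and the set of conventions handled are my own choices, not a standard):

```python
import re

def refang(ioc: str) -> str:
    """Convert a defanged IOC back to its live form.

    Handles common report conventions: [.] for dots, hxxp(s) for
    http(s), and bracketed @ / : separators.
    """
    out = ioc.replace("[.]", ".").replace("(.)", ".")
    out = out.replace("[@]", "@").replace("[:]", ":")
    out = re.sub(r"^hxxp", "http", out, flags=re.IGNORECASE)
    return out

print(refang("update-service[.]malicious[.]com"))
# update-service.malicious.com
print(refang("hxxps://example[.]com/payload/stage2.exe"))
# https://example.com/payload/stage2.exe
```

Production IOC pipelines typically also normalize case and strip whitespace; this sketch shows only the defanging conventions used in this lesson's examples.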

Host-Based IOCs

| IOC Type | Description | Example | Detection Method |
| --- | --- | --- | --- |
| File Hashes | Cryptographic hashes of malicious files (MD5, SHA-1, SHA-256) | d41d8cd98f00b204e9800998ecf8427e (MD5) | Endpoint detection, file integrity monitoring |
| File Names/Paths | Names or locations of known-malicious files | C:\Windows\Temp\svchost.exe (masquerading) | Endpoint logs, Sysmon Event ID 11 |
| Registry Keys | Windows registry modifications for persistence or configuration | HKCU\Software\Microsoft\Windows\CurrentVersion\Run\UpdateService | Sysmon Event ID 13, registry auditing |
| Mutexes | Named mutual exclusion objects created by malware to prevent multiple instances | Global\{unique-malware-mutex-name} | Process monitoring, Sysmon |
| Scheduled Tasks/Services | Persistence mechanisms created by adversaries | A scheduled task named SystemHealthCheck running a malicious binary | Windows Event ID 4698, Sysmon Event ID 1 |
| Named Pipes | Inter-process communication channels used by malware or lateral movement tools | \\.\pipe\msagent_## (Cobalt Strike default) | Sysmon Event IDs 17/18 |

File Hash Comparison

Understanding the differences between hash algorithms matters for IOC sharing and detection.

| Algorithm | Length | Example | Notes |
| --- | --- | --- | --- |
| MD5 | 32 hex characters (128-bit) | d41d8cd98f00b204e9800998ecf8427e | Widely used but cryptographically broken; collision attacks are practical. Still common in IOC feeds for compatibility. |
| SHA-1 | 40 hex characters (160-bit) | da39a3ee5e6b4b0d3255bfef95601890afd80709 | Also considered cryptographically weak (SHAttered attack, 2017). Being phased out. |
| SHA-256 | 64 hex characters (256-bit) | e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | Current standard for IOC sharing. No practical collision attacks. Recommended for all new IOC work. |

Best Practice: Always use SHA-256 as the primary hash when sharing IOCs. Include MD5 and SHA-1 for backward compatibility with older systems, but do not rely on them as sole identifiers.
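The example values in the table above are in fact the hashes of empty (zero-byte) input, which you can verify with Python's standard `hashlib` module. The helper function below is a sketch of the best practice stated above (the function name and dict layout are illustrative):

```python
import hashlib

def hash_file_bytes(data: bytes) -> dict:
    """Compute the three common IOC hashes for a file's contents.

    SHA-256 is the primary identifier; MD5 and SHA-1 are included
    only for compatibility with older feeds and tools.
    """
    return {
        "sha256": hashlib.sha256(data).hexdigest(),  # primary identifier
        "sha1": hashlib.sha1(data).hexdigest(),      # compatibility only
        "md5": hashlib.md5(data).hexdigest(),        # compatibility only
    }

# The table's example values are the hashes of zero bytes:
hashes = hash_file_bytes(b"")
print(hashes["md5"])     # d41d8cd98f00b204e9800998ecf8427e
print(hashes["sha256"])  # e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

For real files, read the bytes in chunks and call `.update()` on each hash object rather than loading the whole file into memory.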

The Pyramid of Pain

The Pyramid of Pain was introduced by David Bianco in 2013 and has become one of the most cited frameworks in CTI. It ranks indicator types by how much pain their detection causes the adversary — that is, how difficult it is for the adversary to change that indicator and continue operating.

        /\
       /  \          TTPs (Tactics, Techniques, and Procedures)
      /    \         — TOUGH: Adversary must change their entire approach
     /------\
    /        \       Tools
   /          \      — CHALLENGING: Must find/develop new tooling
  /------------\
 /              \    Network/Host Artifacts
/                \   — ANNOYING: Must reconfigure, but can do so
|----------------|
|   Domain Names |   — SIMPLE: Easy to register new domains
|----------------|
|  IP Addresses  |   — EASY: Trivial to change (new VPS, proxy, VPN)
|----------------|
|  Hash Values   |   — TRIVIAL: Recompile, change one byte, new hash
|________________|

What the Pyramid Teaches Us

| Level | Indicator Type | Adversary Cost to Change | Defensive Value |
| --- | --- | --- | --- |
| Bottom | Hash Values | Trivial — changing a single byte produces a new hash | Very low persistence; useful for immediate incident scoping |
| Bottom | IP Addresses | Easy — rotate to a new VPS, use a VPN or proxy | Low persistence; IP-based blocking is easily evaded |
| Bottom | Domain Names | Simple — new domains are cheap to register (though aged and categorized domains take more effort) | Slightly higher, especially with passive DNS tracking |
| Middle | Network/Host Artifacts | Annoying — must change C2 protocols, user agents, file paths, registry keys | Moderate; forces adversary to modify operational details |
| Middle | Tools | Challenging — must find or develop alternative tools; retool entire operation | High; forces significant operational investment |
| Top | TTPs | Tough — must fundamentally change how they operate | Highest; detection at this level is the most durable |

The lesson of the Pyramid is clear: invest in detecting behaviors (TTPs) rather than relying solely on atomic indicators. IOC-based detection has its place, but it operates at the bottom of the pyramid where adversary evasion is trivial.

IOC Lifecycle and Decay

IOCs are not permanent. Their value degrades over time as adversaries rotate infrastructure, recompile malware, and shift tactics. Understanding IOC lifecycle is essential for maintaining useful detection content.

Decay Rates by Type

| IOC Type | Typical Useful Lifespan | Why It Decays |
| --- | --- | --- |
| IP Addresses | Hours to days | Adversaries rotate IPs frequently; cloud and VPS providers reassign addresses; an IP flagged as malicious today may belong to a legitimate user tomorrow |
| Domains | Days to weeks | Adversaries register new domains; old ones may be sinkholed by defenders or abandoned |
| File Hashes | Days to weeks | Recompilation, packing, or minor code changes produce new hashes |
| URLs | Hours to days | Specific paths on compromised infrastructure change rapidly |
| Registry Keys / Mutexes | Weeks to months | More effort to change, but adversaries update them between campaigns |
| TTPs | Months to years | Fundamental operational changes are costly; many groups use consistent techniques for years |

Managing IOC Decay

  • Set expiration dates — IOCs should have a defined shelf life after which they are reviewed or retired
  • Track first-seen and last-seen dates — An indicator last seen three years ago is likely no longer relevant
  • Monitor for false positives — Stale IP indicators commonly trigger alerts on reassigned, legitimate infrastructure
  • Prioritize enrichment — Before acting on an old IOC, re-enrich it (WHOIS, passive DNS, current VirusTotal results) to confirm it is still relevant
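The expiration-date practice above can be sketched as a simple expiry check. The decay windows below are illustrative values derived from the lifespan table, not authoritative thresholds; tune them to your own feeds:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative shelf lives per IOC type (assumption: tune per feed quality)
DECAY_WINDOWS = {
    "ip": timedelta(days=7),
    "url": timedelta(days=3),
    "domain": timedelta(days=30),
    "hash": timedelta(days=30),
    "registry_key": timedelta(days=90),
}

def is_expired(ioc_type: str, last_seen: datetime,
               now: Optional[datetime] = None) -> bool:
    """True if the indicator is past its decay window and should be
    re-enriched or retired rather than acted on directly."""
    now = now or datetime.now(timezone.utc)
    window = DECAY_WINDOWS.get(ioc_type, timedelta(days=30))
    return now - last_seen > window

last_seen = datetime(2024, 1, 1, tzinfo=timezone.utc)
now = datetime(2024, 1, 20, tzinfo=timezone.utc)
print(is_expired("ip", last_seen, now))      # True: 19 days > 7-day window
print(is_expired("domain", last_seen, now))  # False: 19 days < 30-day window
```

A real platform would combine this with the false-positive monitoring and re-enrichment steps above rather than silently dropping expired indicators.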

Atomic, Computed, and Behavioral Indicators

IOCs can be classified by their complexity, a taxonomy popularized by Lockheed Martin's intelligence-driven defense (Cyber Kill Chain) paper:

Atomic Indicators

Atomic indicators are individual data points that cannot be broken down further. They are self-contained and can be used independently.

  • Examples: IP address, domain name, email address, single file hash
  • Strengths: Easy to share, easy to ingest into automated systems, fast to operationalize
  • Weaknesses: Trivial for adversaries to change; no context without enrichment

Computed Indicators

Computed indicators are derived by applying algorithms or analysis to data. They are more durable than atomic indicators because they capture patterns rather than specific values.

  • Examples: YARA rules (pattern-matching across file contents), Snort/Suricata signatures (network traffic patterns), fuzzy hashes (ssdeep/TLSH for similarity matching), JA3 hashes
  • Strengths: More resilient to minor changes (e.g., a YARA rule can detect malware variants even if the hash changes)
  • Weaknesses: Require more effort to create and validate; false positive risk increases with broader patterns
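The "pattern rather than value" idea behind computed indicators can be illustrated in miniature. The snippet below is not YARA itself, just the same strings-plus-condition concept expressed in Python with invented patterns:

```python
# A YARA-style rule in miniature: byte patterns plus a condition over them.
RULE = {
    "name": "demo_family_variant",
    "strings": [b"MZ", b"evil-c2-beacon"],  # hypothetical patterns
    "condition": "all",                      # all strings must be present
}

def matches(rule: dict, sample: bytes) -> bool:
    """Return True if the sample satisfies the rule's condition."""
    hits = [s in sample for s in rule["strings"]]
    return all(hits) if rule["condition"] == "all" else any(hits)

# Two "variants": different bytes (so different hashes), shared pattern.
variant_a = b"MZ\x90\x00...evil-c2-beacon...payload"
variant_b = b"MZ\x90\x00...benign content..."

print(matches(RULE, variant_a))  # True: both patterns present
print(matches(RULE, variant_b))  # False: beacon string absent
```

The point: each variant has a different file hash, but the computed indicator survives the change, which is exactly why it sits higher on the Pyramid of Pain than an atomic hash.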

Behavioral Indicators

Behavioral indicators describe patterns of activity rather than specific artifacts. They correspond to the top of the Pyramid of Pain (TTPs).

  • Examples: "Process spawned from Microsoft Office application executes PowerShell with encoded command" (T1059.001); "Credential dumping via LSASS memory access" (T1003.001); "Lateral movement via Windows Remote Management" (T1021.006)
  • Strengths: Most durable detection; adversaries must fundamentally change how they operate to evade
  • Weaknesses: Higher false positive rates require careful tuning; more complex to implement; require behavioral telemetry (EDR, Sysmon)
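The first behavioral example above (Office spawning encoded PowerShell, T1059.001) can be expressed as a check over process-creation telemetry. The event field names below are assumptions, loosely modeled on Sysmon Event ID 1 output, and the event itself is fabricated for illustration:

```python
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}

def is_suspicious_office_child(event: dict) -> bool:
    """Flag PowerShell with an encoded command spawned by an Office app.

    Field names (parent_image, image, command_line) are assumed here;
    map them to whatever your EDR or Sysmon pipeline emits.
    """
    parent = event.get("parent_image", "").lower().rsplit("\\", 1)[-1]
    image = event.get("image", "").lower().rsplit("\\", 1)[-1]
    cmdline = event.get("command_line", "").lower()
    return (
        parent in OFFICE_PARENTS
        and image == "powershell.exe"
        and ("-enc" in cmdline or "-encodedcommand" in cmdline)
    )

event = {
    "parent_image": r"C:\Program Files\Microsoft Office\WINWORD.EXE",
    "image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "command_line": "powershell.exe -nop -w hidden -enc SQBFAFgA",
}
print(is_suspicious_office_child(event))  # True
```

Notice that no atomic indicator appears anywhere in this logic: the adversary can rotate every IP, domain, and hash and still trip the detection, which is the durability argument made throughout this lesson.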

IOC Sources

CTI teams consume IOCs from multiple sources. Understanding the strengths and limitations of each source type is essential for quality control.

Common IOC Sources

| Source | Examples | Strengths | Considerations |
| --- | --- | --- | --- |
| Commercial Threat Intelligence | Recorded Future, Mandiant Advantage, CrowdStrike Falcon Intelligence | Curated, enriched, high confidence | Cost; potential for vendor bias toward threats their products detect |
| Open-Source Feeds | AlienVault OTX, Abuse.ch (URLhaus, MalwareBazaar, Feodo Tracker), PhishTank | Free, community-driven, broad coverage | Variable quality; require validation before operational use |
| ISACs/ISAOs | FS-ISAC, H-ISAC, MS-ISAC, IT-ISAC | Sector-specific, peer-validated, community trust | Membership requirements; sharing quality varies by community |
| Government | CISA AIS, FBI FLASH reports, NSA Cybersecurity Advisories | Authoritative, high confidence | May lag behind real-time; sometimes heavily caveated |
| Internal | Incident response findings, malware analysis, log analysis | Highest relevance (directly from your environment) | Requires internal IR and analysis capability |
| Vendor Reports | Mandiant APT reports, CrowdStrike adversary profiles, Unit 42 threat briefs | Detailed context, often include TTPs alongside IOCs | IOCs may be stale by publication time |

Source Evaluation

Not all IOC sources are equally reliable. Analysts should evaluate sources using criteria adapted from intelligence tradecraft:

  • Reliability — Has this source provided accurate information in the past?
  • Credibility — Is the information consistent with other sources?
  • Timeliness — How old is the information? Is it still operationally relevant?
  • Relevance — Does this information relate to threats facing our organization?

The Admiralty Code (also called the NATO System) provides a standardized framework for rating source reliability (A through F) and information credibility (1 through 6), and is used by some CTI teams for formal source evaluation.
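The Admiralty scales themselves are standard (reliability A-F, credibility 1-6); the small helper below that expands a two-character rating is just an illustration of how a team might encode them:

```python
# Standard Admiralty Code / NATO System scales
RELIABILITY = {
    "A": "Completely reliable", "B": "Usually reliable",
    "C": "Fairly reliable", "D": "Not usually reliable",
    "E": "Unreliable", "F": "Reliability cannot be judged",
}
CREDIBILITY = {
    "1": "Confirmed by other sources", "2": "Probably true",
    "3": "Possibly true", "4": "Doubtful",
    "5": "Improbable", "6": "Truth cannot be judged",
}

def describe_rating(rating: str) -> str:
    """Expand a two-character Admiralty rating such as 'B2'."""
    rel, cred = rating[0].upper(), rating[1]
    return f"{rel}{cred}: source is '{RELIABILITY[rel]}', information is '{CREDIBILITY[cred]}'"

print(describe_rating("B2"))
# B2: source is 'Usually reliable', information is 'Probably true'
```

A rating like B2 (usually reliable source, probably true information) might justify automated blocking, while F6 would warrant analyst review before any action.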

Limitations of IOC-Driven Intelligence

While IOCs are valuable, over-reliance on them creates significant blind spots.

The "Known-Bad" Problem

IOC-based detection only finds known threats. It cannot detect novel malware, new infrastructure, or previously unseen adversary activity. Organizations that rely solely on IOC matching are always one step behind the adversary.

Volume vs. Quality

Ingesting millions of indicators without validation leads to:

  • Alert fatigue — SOC analysts are overwhelmed by low-confidence alerts
  • False positives — Stale or poorly sourced indicators trigger on legitimate traffic
  • Operational noise — Valuable alerts are buried in irrelevant ones

Lack of Context

A raw IOC without context — who uses it, what campaign it belongs to, how confident we are, when it was last seen — is of limited value. An IP address alone does not tell an analyst whether it represents a nation-state C2 server or a compromised legitimate host that has since been cleaned.
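One way to internalize this: an IOC should travel as a record carrying its context, not as a bare string. The field names below are illustrative, loosely inspired by STIX indicator objects rather than conforming to them:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """An IOC with the minimum context needed to act on it."""
    value: str                  # the observable itself
    ioc_type: str               # "ip", "domain", "hash", ...
    first_seen: str             # ISO 8601 timestamps
    last_seen: str
    confidence: int             # 0-100
    campaign: str = "unknown"   # attribution context, if any
    source: str = "internal"
    tags: list = field(default_factory=list)

# A fabricated example record using the lesson's sample C2 address
ioc = Indicator(
    value="198.51.100.23",
    ioc_type="ip",
    first_seen="2024-01-05T00:00:00Z",
    last_seen="2024-01-12T00:00:00Z",
    confidence=80,
    campaign="hypothetical-campaign-x",
    source="incident-response",
    tags=["c2"],
)
print(ioc.value, ioc.confidence, ioc.campaign)
```

With fields like `last_seen` and `confidence` attached, the analyst can answer the questions posed above (is it still live? how sure are we?) before blocking or alerting.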

The Path Forward

The most effective approach combines IOC-based detection (for known threats) with behavioral detection (for unknown threats). This means:

  • Use IOCs for immediate blocking and alerting on known-bad infrastructure and malware
  • Use TTP-based detections (Sigma rules, behavioral analytics, hunt queries mapped to MITRE ATT&CK) to catch adversary behavior regardless of the specific indicators used
  • Continuously evaluate IOC feeds for quality and retire indicators that no longer provide value

Key Takeaways

  • IOCs are forensic artifacts (IPs, domains, hashes, URLs, registry keys, etc.) used to detect and share threat information
  • The Pyramid of Pain (Bianco, 2013) ranks indicators by how difficult they are for adversaries to change — hash values are trivial to change, TTPs are hard
  • IOCs have a limited lifespan and must be managed with expiration dates and regular validation
  • Atomic indicators are easy to share but easy to evade; behavioral indicators are harder to implement but far more durable
  • IOC quality depends on source reliability, timeliness, and relevance to your organization
  • IOC-driven detection alone is insufficient — it must be complemented by TTP-based behavioral detection to address novel threats

Practical Exercise

IOC Extraction and Pyramid Mapping

  1. Find a recent threat advisory from CISA (cisa.gov/advisories) or a vendor threat report that includes IOCs.
  2. Extract all IOCs from the report and categorize them:
    • Network indicators (IPs, domains, URLs)
    • Host indicators (hashes, file paths, registry keys)
    • Behavioral indicators (described TTPs)
  3. Map each indicator to its level on the Pyramid of Pain.
  4. For each category, write one sentence describing:
    • How long you would expect this indicator to remain useful
    • How you would operationalize it (block, alert, hunt)
  5. Identify which indicators are atomic, which are computed, and which are behavioral.

This exercise builds practical skills in IOC handling and reinforces the Pyramid of Pain framework.

Further Reading

  • "The Pyramid of Pain" — David Bianco (2013, updated 2014). The original blog post introducing the framework. Available at detect-respond.blogspot.com.
  • MITRE ATT&CK — attack.mitre.org. The framework for cataloging TTPs — the top of the Pyramid of Pain.
  • STIX/TAXII Standards — OASIS Open. The standard formats for sharing structured threat intelligence, including IOCs. Documentation at oasis-open.github.io/cti-documentation.
  • "The Practice of Network Security Monitoring" — Richard Bejtlich (No Starch Press, 2013). Covers the operational use of network-based indicators in detection and monitoring.