RedSheep Security
Advanced — Lesson 23

Attribution Deep Dive

9 min read

Attribution — determining who is responsible for a cyberattack — is one of the most complex and consequential challenges in cyber threat intelligence. Unlike traditional crime scenes where physical evidence often points to a perpetrator, cyber operations involve layers of obfuscation, shared tooling, and deliberate deception. This lesson examines the levels of attribution, the evidence types analysts rely on, the challenges of false flag operations, and the roles of government and private sector in making attribution claims.

Learning Objectives

  • Understand the three levels of attribution: technical, operational, and strategic/political
  • Identify the types of evidence used to support attribution assessments
  • Recognize false flag techniques and study the Olympic Destroyer case
  • Evaluate the roles of government and private sector in public attribution
  • Apply sound judgment about when attribution is appropriate and when it is not

The Three Levels of Attribution

Attribution is not a single binary determination. It operates across a spectrum of depth and consequence.

Technical Attribution

Technical attribution identifies the infrastructure, tools, and methods used in an attack. This is the most accessible layer — it answers questions like "what IP addresses were used," "what malware family was deployed," and "what command-and-control servers were contacted." Technical attribution does not inherently identify who sits behind the keyboard; it identifies the digital artifacts left behind.

Operational Attribution

Operational attribution connects the technical evidence to specific individuals, teams, or organizational units. This layer answers "who conducted this operation." It often requires signals intelligence, human intelligence, or law enforcement investigation to bridge the gap between technical indicators and human operators. The 2014 U.S. Department of Justice indictment of five members of PLA Unit 61398 (linked to APT1) is a landmark example of operational attribution.

Strategic/Political Attribution

Strategic attribution assigns responsibility to a nation-state or its leadership and carries diplomatic, economic, or military consequences. When the United States, United Kingdom, and other Five Eyes nations jointly attributed the NotPetya attack to Russia's GRU in February 2018, that was a strategic attribution with geopolitical implications. This level requires the highest confidence and typically involves intelligence community consensus.

The Spectrum of Attribution Confidence

Key Concept: Attribution is expressed in degrees of confidence, not certainty. The intelligence community uses language like "almost certainly," "likely," "possibly," and "unlikely" to convey analytical confidence.

Confidence Level | Typical Language | Evidence Basis
High | "Almost certainly" / "We assess with high confidence" | Multiple independent evidence types, corroborated by SIGINT/HUMINT
Moderate | "Likely" / "We assess with moderate confidence" | Strong technical evidence, some corroboration, consistent with known actor behavior
Low | "Possibly" / "We cannot rule out" | Limited or single-source evidence, circumstantial alignment

Analysts must resist the pressure to provide definitive attribution when the evidence does not support it. Premature or poorly supported attribution can damage credibility and have unintended consequences.
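One way to keep estimative language consistent across reports is to encode the confidence table above as a simple lookup. This is a minimal sketch; the function name and the exact wording choices are illustrative, not a standard.

```python
# Sketch: map confidence levels to estimative language so written
# assessments use consistent hedging terms.
CONFIDENCE_LANGUAGE = {
    "high": "almost certainly",
    "moderate": "likely",
    "low": "possibly",
}

def attribution_statement(actor: str, confidence: str) -> str:
    """Render an assessment sentence with the appropriate hedging term."""
    term = CONFIDENCE_LANGUAGE.get(confidence)
    if term is None:
        raise ValueError(f"unknown confidence level: {confidence}")
    return f"We assess that {actor} {term} conducted this operation."

print(attribution_statement("Sandworm", "high"))
# We assess that Sandworm almost certainly conducted this operation.
```

Forcing assessments through a shared vocabulary makes it harder to slip from "possibly" to "almost certainly" without the evidence to back the change.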

Evidence Types for Attribution

No single evidence type is sufficient for reliable attribution. Strong assessments rely on convergence across multiple independent lines of evidence.

Infrastructure Analysis

Examining the servers, domains, and IP addresses used in an operation. Reuse of infrastructure across campaigns can link operations to the same actor. However, shared hosting, bulletproof providers, and compromised infrastructure complicate this analysis. Threat actors increasingly use disposable infrastructure, living-off-the-land techniques, and legitimate cloud services.
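Infrastructure reuse checks often reduce to set operations over indicator lists. The sketch below uses invented indicators; in practice, any overlap still has to be weighed against the caveats above (shared hosting and compromised third-party infrastructure produce overlaps that mean nothing).

```python
# Sketch: flag infrastructure reuse between two campaigns by intersecting
# their indicator sets. Indicators are invented for illustration.
campaign_a = {"185.0.2.10", "update-check[.]net", "93.184.216.34"}
campaign_b = {"185.0.2.10", "cdn-sync[.]org", "93.184.216.34"}

shared = campaign_a & campaign_b
overlap_ratio = len(shared) / len(campaign_a | campaign_b)

print(sorted(shared))           # ['185.0.2.10', '93.184.216.34']
print(round(overlap_ratio, 2))  # 0.5
```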

Malware Code Analysis

Code similarities, shared libraries, compiler artifacts, debug paths, and unique implementations can link malware families to the same developer or development team. For example, the overlap between the Duqu, Flame, and Stuxnet codebases helped researchers connect these tools to a broader development program. Unique code, however, is not the same as unique actors — code is shared, stolen, and leaked.
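A crude version of this comparison can be sketched as a similarity score over printable strings extracted from two samples. The string sets here are stand-ins; real analysis uses much richer features (import hashes, fuzzy hashes, code-block comparison), and, as the paragraph notes, shared code does not prove a shared actor.

```python
# Sketch: Jaccard similarity over the printable strings of two samples,
# as a weak code-overlap signal. Strings are easily shared or planted.
def string_overlap(strings_a: set[str], strings_b: set[str]) -> float:
    """Jaccard similarity of the two samples' string sets."""
    union = strings_a | strings_b
    return len(strings_a & strings_b) / len(union) if union else 0.0

sample_a = {"cmd.exe /c", "wevtutil cl system", "\\pipe\\psexesvc"}
sample_b = {"cmd.exe /c", "wevtutil cl system", "NTUSER.DAT"}
print(round(string_overlap(sample_a, sample_b), 2))  # 0.5
```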

Tactics, Techniques, and Procedures (TTPs)

TTPs represent behavioral patterns and are harder to change than tools or infrastructure. An actor's consistent use of specific initial access methods, lateral movement techniques, or data exfiltration patterns can be a strong attribution indicator. MITRE ATT&CK provides a common framework for describing and comparing TTPs across actors.
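With ATT&CK as the shared vocabulary, comparing an incident to a known actor profile becomes a set comparison over technique IDs. The profile below is illustrative, not a real actor's; the technique IDs are genuine ATT&CK identifiers.

```python
# Sketch: compare an incident's observed ATT&CK technique IDs against a
# hypothetical actor profile. Matched techniques support the hypothesis;
# novel ones deserve scrutiny before being added to the profile.
actor_profile = {"T1566.001", "T1059.001", "T1021.002", "T1048.003"}
incident_ttps = {"T1566.001", "T1059.001", "T1021.002", "T1567.002"}

matched = sorted(actor_profile & incident_ttps)
novel = sorted(incident_ttps - actor_profile)
print(matched)  # ['T1021.002', 'T1059.001', 'T1566.001']
print(novel)    # ['T1567.002']
```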

Language and Cultural Artifacts

Metadata in documents, malware strings, keyboard layout preferences, and language in phishing lures can provide clues about the operators' origin. Mandiant's APT1 report (2013) noted Simplified Chinese language settings and Shanghai working hours. However, these artifacts are easily spoofed.
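Surfacing language artifacts can start with something as simple as tallying which Unicode scripts appear in strings pulled from a sample. This sketch uses Unicode character names as a rough script proxy; and, as above, remember these artifacts are trivially planted.

```python
import unicodedata

# Sketch: count Unicode scripts in extracted strings as a weak language
# signal. The script is taken from the first word of each character's
# Unicode name (e.g. 'CYRILLIC SMALL LETTER ES' -> 'CYRILLIC').
def script_counts(strings: list[str]) -> dict[str, int]:
    counts: dict[str, int] = {}
    for s in strings:
        for ch in s:
            if ch.isalpha():
                script = unicodedata.name(ch, "UNKNOWN").split(" ")[0]
                counts[script] = counts.get(script, 0) + 1
    return counts

print(script_counts(["build", "сборка"]))
# {'LATIN': 5, 'CYRILLIC': 6}
```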

Operational Timing

Patterns in when operations occur — working hours aligned with specific time zones, pauses during national holidays, or surges around geopolitical events — can support attribution hypotheses. These patterns are useful in aggregate but not individually conclusive.
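Testing a time-zone hypothesis can be sketched as re-basing event timestamps into a candidate local time and counting how many fall inside a workday. The events and the UTC+3 hypothesis below are invented for illustration.

```python
from datetime import datetime, timezone, timedelta

# Sketch: convert UTC event times into a candidate time zone and check
# how many land in a 09:00-18:00 local workday. A strong fit supports
# (but never proves) the hypothesis.
events_utc = [
    datetime(2018, 2, 5, 6, 30, tzinfo=timezone.utc),
    datetime(2018, 2, 5, 11, 15, tzinfo=timezone.utc),
    datetime(2018, 2, 6, 23, 45, tzinfo=timezone.utc),
]

candidate_tz = timezone(timedelta(hours=3))  # hypothesis: UTC+3
local_hours = [e.astimezone(candidate_tz).hour for e in events_utc]
in_workday = sum(1 for h in local_hours if 9 <= h < 18)

print(local_hours)  # [9, 14, 2]
print(f"{in_workday}/{len(events_utc)} events within 09:00-18:00 local")
```

In aggregate over hundreds of events, this kind of histogram is meaningful; over a handful, it is noise, and operators can shift their hours deliberately.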

Targeting Patterns

Who the actor targets and what data they seek can indicate motivation and sponsorship. An actor consistently targeting defense contractors for intellectual property aligns with state-sponsored espionage. An actor targeting financial institutions for monetary theft aligns with cybercrime or, in North Korea's case, state-sponsored financial operations.

False Flag Operations: The Olympic Destroyer Case

Definition: A false flag operation is a cyberattack deliberately designed to appear as if it was conducted by a different actor, using planted evidence to mislead investigators.

The most thoroughly documented false flag in cyber operations is Olympic Destroyer, which targeted the 2018 Winter Olympics in Pyeongchang, South Korea. The malware disrupted the opening ceremony, taking down Wi-Fi, the official website, and ticketing systems.

Initial analysis revealed multiple misleading indicators:

  • Lazarus Group indicators: Code samples contained artifacts matching North Korea's Lazarus Group, including specific Rich Header values in the PE file that matched known Lazarus tools
  • Chinese APT indicators: Other code sections resembled tools associated with Chinese threat actors
  • Russian false leads: Some infrastructure overlapped with Russian operations

Kaspersky Lab's Global Research and Analysis Team (GReAT) conducted deep analysis and determined that the Rich Header values had been deliberately forged — the actual compilation environment did not match the planted headers. Subsequent investigation by multiple organizations, combined with government intelligence, attributed Olympic Destroyer to Sandworm (GRU Unit 74455), the same Russian military intelligence unit responsible for NotPetya and attacks on Ukraine's power grid.

The Olympic Destroyer case demonstrates why single-indicator attribution is dangerous and why analysts must examine the full body of evidence critically.
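The detection logic behind the forged-header finding can be sketched as a consistency test between metadata fields that should agree in a genuine binary. The field values below are hypothetical stand-ins for data an analyst would extract with a PE parser; the point is the cross-check, not the parsing.

```python
# Sketch: a forged Rich Header may disagree with other compilation
# artifacts in the same binary, such as the optional header's
# MajorLinkerVersion/MinorLinkerVersion. Values here are invented.
def rich_header_consistent(rich_linker: tuple[int, int],
                           optional_header_linker: tuple[int, int]) -> bool:
    """Return True when the Rich Header's claimed linker version matches
    the linker version recorded elsewhere in the PE."""
    return rich_linker == optional_header_linker

# A planted header claims linker 6.0 while the binary records 14.0:
# inconsistent, so treat the Rich Header as suspect evidence.
print(rich_header_consistent((6, 0), (14, 0)))  # False
```

A single failed cross-check does not identify the real actor; it only tells the analyst that one line of evidence has been tampered with and should be discounted.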

Government Attribution

Governments possess unique attribution capabilities through signals intelligence, human intelligence, and law enforcement powers that the private sector cannot replicate.

Indictments and Legal Actions

The U.S. Department of Justice has issued indictments against state-sponsored cyber operators from China (PLA Unit 61398, MSS-affiliated actors), Russia (GRU officers for NotPetya, election interference), Iran (Mabna Institute for academic espionage), and North Korea (Park Jin Hyok for WannaCry and Sony Pictures). These indictments serve as public attribution with legal weight, though the named individuals are rarely extradited.

Coordinated Public Attributions

Five Eyes nations (US, UK, Canada, Australia, New Zealand) and allies have issued joint attribution statements for operations including NotPetya (2018), Microsoft Exchange exploitation by Hafnium/APT40 (2021), and SolarWinds (attributed to Russia's SVR). These coordinated statements carry significant diplomatic weight and signal intelligence community consensus across multiple nations.

Private Sector Attribution

Private sector threat intelligence firms make attribution assessments based on technical evidence, incident response engagements, and telemetry from their customer bases.

Mandiant (now part of Google Cloud) established the model for private sector attribution with the APT1 report in February 2013, which linked a specific Chinese cyber espionage campaign to PLA Unit 61398 in Shanghai. The report included photographic evidence of the unit's building, infrastructure analysis, and operational patterns.

CrowdStrike popularized actor naming conventions (Bear for Russia, Panda for China, Kitten for Iran, Chollima for North Korea) and has attributed operations through incident response and endpoint telemetry.

Private sector attribution faces limitations: companies lack SIGINT/HUMINT access, may have commercial incentives, and their visibility is limited to their customer base. However, private sector firms can publish findings that governments may be unwilling to declassify.

Ethical Considerations and When NOT to Attribute

Attribution carries real-world consequences. Incorrect attribution can:

  • Escalate geopolitical tensions between nations
  • Damage the credibility of the attributing organization
  • Misdirect defensive efforts and threat hunting
  • Create legal liability

When to exercise caution or refrain from attribution:

  • When evidence supports multiple hypotheses equally
  • When a single evidence type is the sole basis for the claim
  • When false flag indicators are present but unresolved
  • When the geopolitical context creates pressure for a particular conclusion
  • When attribution would not change the defensive response (focus on the TTPs instead)

The principle of "attribute only when it adds value" should guide analysts. In many operational contexts, understanding the threat behavior and defending against the TTPs matters more than naming the actor.

Key Takeaways

  • Attribution operates at three levels: technical, operational, and strategic/political, each requiring progressively more evidence and carrying greater consequences
  • Strong attribution requires convergence across multiple independent evidence types — no single indicator is sufficient
  • False flag operations like Olympic Destroyer demonstrate that attackers actively plant misleading attribution evidence
  • Government attribution leverages unique intelligence capabilities; private sector attribution relies on technical evidence and telemetry
  • Analysts must honestly represent their confidence level and resist pressure to attribute beyond what the evidence supports
  • Sometimes the most responsible analytical judgment is to not attribute at all

Practical Exercise

Select a publicly documented APT campaign (suggestions: APT28/Fancy Bear's DNC intrusion, Lazarus Group's Bangladesh Bank heist, or APT10's Cloud Hopper campaign). Using only open-source reporting, create an attribution evidence matrix:

  1. Create a table with columns: Evidence Type | Specific Evidence | Confidence Contribution | Potential for Spoofing
  2. Fill in rows for each evidence category: infrastructure, malware, TTPs, language artifacts, timing, targeting
  3. Assess: Which evidence types provided the strongest basis for attribution? Which could have been planted?
  4. Write a one-paragraph attribution statement using appropriate confidence language
  5. Identify what additional evidence would increase (or decrease) your confidence
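A starter template for the matrix in step 1 can be generated with a short script. Column widths and layout are arbitrary choices; the rows follow the evidence categories from this lesson.

```python
# Sketch: print an empty evidence matrix to fill in from open-source
# reporting on the chosen campaign.
COLUMNS = ["Evidence Type", "Specific Evidence",
           "Confidence Contribution", "Potential for Spoofing"]
ROWS = ["infrastructure", "malware", "TTPs",
        "language artifacts", "timing", "targeting"]

WIDTH = 24
lines = [" | ".join(c.ljust(WIDTH) for c in COLUMNS)]
lines.append("-" * len(lines[0]))
for row in ROWS:
    lines.append(" | ".join([row.ljust(WIDTH)] + [" " * WIDTH] * 3))

print("\n".join(lines))
```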

Further Reading