Detection engineering is the discipline of designing, building, testing, and maintaining the logic that identifies malicious activity in an environment. When driven by cyber threat intelligence, detection engineering transforms raw knowledge of adversary behavior into actionable, automated defense. This lesson covers detection rule formats, the detection lifecycle, the intelligence-driven approach, coverage mapping, and the critical relationship between CTI analysts and detection engineers.
Learning Objectives
- Understand what detection engineering is and how it relates to CTI
- Write and interpret Sigma, YARA, and Snort/Suricata rule syntax at a foundational level
- Apply the intelligence-driven detection workflow: CTI report to deployed detection
- Use the DeTT&CT framework to map detection coverage against MITRE ATT&CK
- Manage false positives and measure detection quality over time
What Is Detection Engineering?
Definition: Detection engineering is the systematic practice of creating, testing, deploying, tuning, and retiring detection logic that identifies threats in security telemetry. It bridges the gap between knowing what threats exist (CTI) and finding those threats in your environment (SOC operations).
Detection engineering has matured from ad-hoc rule writing into a formalized discipline. The detection-as-code movement treats detection rules as software artifacts: version-controlled, peer-reviewed, tested, and deployed through CI/CD pipelines. This approach brings software engineering rigor to security operations.
Key principles of detection-as-code:
- Detection rules are stored in version control (Git)
- Changes go through code review before deployment
- Rules are tested against known-good and known-bad data
- Deployment is automated through pipelines
- Rule performance is measured and tracked
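The testing principle above can be sketched as a unit test. In this illustrative example, a plain Python predicate stands in for compiled detection logic, and the event fields and samples are fabricated for teaching purposes:

```python
# Minimal sketch of detection-as-code testing: run the rule against
# known-bad and known-good samples before deployment. The predicate and
# event schema here are illustrative, not a real SIEM rule.

def detects_lsass_access(event: dict) -> bool:
    """Toy detection: suspicious access rights requested on lsass.exe."""
    return (
        event.get("TargetImage", "").lower().endswith("\\lsass.exe")
        and event.get("GrantedAccess") in {"0x1010", "0x1038"}
    )

# Known-bad sample: the rule must fire (true positive).
malicious = {"TargetImage": r"C:\Windows\System32\lsass.exe",
             "GrantedAccess": "0x1010"}

# Known-good sample: the rule must stay quiet (true negative).
benign = {"TargetImage": r"C:\Windows\System32\svchost.exe",
          "GrantedAccess": "0x1000"}

assert detects_lsass_access(malicious)
assert not detects_lsass_access(benign)
print("rule tests passed")
```

In a real detection-as-code pipeline these assertions would live in a test suite that runs automatically on every rule change.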
Detection Rule Formats
Sigma Rules
Sigma is a generic, open signature format for SIEM systems, created by Florian Roth and Thomas Patzke. Sigma serves as a common language for detection rules that can be converted into the query syntax of specific SIEM platforms (Splunk SPL, Elastic Query DSL, Microsoft Sentinel KQL, and many others).
A Sigma rule is written in YAML and contains:
```yaml
title: Suspicious LSASS Process Access
id: a-unique-uuid-here
status: experimental
description: Detects process access to LSASS memory, indicative of credential dumping
references:
    - https://attack.mitre.org/techniques/T1003/001/
author: Example Author
date: 2024/01/15
tags:
    - attack.credential_access
    - attack.t1003.001
logsource:
    category: process_access
    product: windows
detection:
    selection:
        TargetImage|endswith: '\lsass.exe'
        GrantedAccess|contains:
            - '0x1010'
            - '0x1038'
    filter_main_legitimate:
        SourceImage|endswith:
            - '\wmiprvse.exe'
            - '\taskmgr.exe'
    condition: selection and not filter_main_legitimate
falsepositives:
    - Legitimate security tools
    - System administration tools
level: high
```
Why Sigma matters for CTI analysts: You can express detection logic once in Sigma and convert it to any supported SIEM format using tools like sigma-cli or pySigma. The SigmaHQ repository on GitHub contains thousands of community-maintained rules.
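To make the `selection`/`filter` semantics concrete, the example rule's condition can be re-expressed as plain Python. This is a teaching sketch of the matching logic only; pySigma does not execute rules this way, it converts them into backend query syntax:

```python
# Sigma matching semantics in plain Python: an event triggers the rule
# when the selection matches AND the legitimate-access filter does not.
# Field names mirror the example Sigma rule above.

def matches_selection(event: dict) -> bool:
    return (event.get("TargetImage", "").endswith("\\lsass.exe")
            and any(v in event.get("GrantedAccess", "")
                    for v in ("0x1010", "0x1038")))

def matches_filter(event: dict) -> bool:
    return any(event.get("SourceImage", "").endswith(p)
               for p in ("\\wmiprvse.exe", "\\taskmgr.exe"))

def rule_fires(event: dict) -> bool:
    # condition: selection and not filter_main_legitimate
    return matches_selection(event) and not matches_filter(event)

event = {"SourceImage": "C:\\evil\\dumper.exe",
         "TargetImage": "C:\\Windows\\System32\\lsass.exe",
         "GrantedAccess": "0x1010"}
print(rule_fires(event))  # True: selection matches, filter does not
```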
YARA Rules
YARA (created by Victor Alvarez at VirusTotal) is a pattern-matching tool designed to identify and classify malware samples. YARA rules operate on files and memory, scanning for byte patterns, strings, and conditions.
```yara
rule Suspicious_PowerShell_Download {
    meta:
        description = "Detects PowerShell download cradle patterns"
        author = "Example Author"
        reference = "https://attack.mitre.org/techniques/T1059/001/"
    strings:
        $s1 = "DownloadString" ascii nocase
        $s2 = "DownloadFile" ascii nocase
        $s3 = "Invoke-WebRequest" ascii nocase
        $s4 = "Net.WebClient" ascii nocase
        $s5 = "Start-BitsTransfer" ascii nocase
    condition:
        2 of them
}
```
YARA is used for: malware classification during triage, threat hunting across file shares and endpoints, retroactive scanning of historical samples, and memory forensics analysis.
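The rule's `2 of them` condition can be emulated in plain Python to show how the classification works: at least two of the listed patterns must appear (case-insensitively, matching the `nocase` modifier) anywhere in the scanned bytes. This is an illustration of the logic, not the YARA engine:

```python
# Emulation of the YARA condition "2 of them": count how many of the
# rule's string patterns appear in the data and require at least two.

PATTERNS = ["DownloadString", "DownloadFile", "Invoke-WebRequest",
            "Net.WebClient", "Start-BitsTransfer"]

def suspicious_powershell_download(data: bytes) -> bool:
    text = data.lower()  # case-insensitive, like the "nocase" modifier
    hits = sum(p.lower().encode() in text for p in PATTERNS)
    return hits >= 2     # condition: 2 of them

sample = b"$c = New-Object Net.WebClient; $c.DownloadString('http://x/a')"
print(suspicious_powershell_download(sample))  # True: two patterns hit
```

Requiring multiple patterns rather than any single one is what keeps this rule from firing on every script that merely mentions a download-related keyword.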
Snort and Suricata Rules
Snort (created by Martin Roesch, now maintained by Cisco) and Suricata (developed by the Open Information Security Foundation, OISF) are network intrusion detection/prevention systems. They inspect network traffic using signature-based rules.
```
alert http $HOME_NET any -> $EXTERNAL_NET any (
    msg:"ET MALWARE Cobalt Strike Beacon C2 Activity";
    flow:established,to_server;
    content:"GET"; http_method;
    content:"/visit.js"; http_uri;
    content:"Cookie:"; http_header;
    pcre:"/Cookie:\s*[A-Za-z0-9+\/]{40,}={0,2}/H";
    sid:2030000; rev:1;
    classtype:trojan-activity;
    reference:url,attack.mitre.org/software/S0154/;
)
```
Suricata extends Snort's capabilities with multi-threading, protocol-aware logging (EVE JSON), and additional keywords for TLS, DNS, and HTTP inspection. Both tools use a largely compatible rule syntax.
| Rule Format | Domain | Primary Use Case |
|---|---|---|
| Sigma | SIEM log analysis | Log-based behavioral detection |
| YARA | File and memory scanning | Malware identification and classification |
| Snort/Suricata | Network traffic | Network intrusion detection |
The Detection Rule Lifecycle
Detection rules are not static. They follow a lifecycle:
1. Creation
Triggered by a CTI report, incident finding, threat hunt result, or vulnerability disclosure. The rule is written to detect the specific behavior identified.
2. Testing and Validation
The rule is tested against:
- True positive data: Known-malicious samples or attack simulations to confirm the rule fires correctly
- True negative data: Normal environment telemetry to assess false positive rate
- Edge cases: Variations of the behavior that should (or should not) trigger the rule
3. Deployment
The rule is deployed to production detection systems, initially in a logging-only or low-priority mode to observe real-world performance.
4. Tuning
Based on production performance, the rule is refined. Exclusions are added for legitimate activity that triggers false positives. Detection logic is tightened or broadened based on observed coverage.
5. Review and Retirement
Rules are periodically reviewed. Rules that no longer detect relevant threats (e.g., the targeted infrastructure is dismantled, the malware family is extinct) are retired. Rules that generate only noise with no true positives are candidates for removal.
Intelligence-Driven Detection
The intelligence-driven approach connects CTI analysis directly to detection engineering through a structured workflow:
CTI Report → TTP Extraction → Data Source Identification → Detection Logic → Testing → Deployment
Step-by-Step Workflow
- Consume the CTI report: Read the full report on a threat actor campaign, vulnerability exploitation, or new malware family
- Extract TTPs: Map the adversary's actions to MITRE ATT&CK techniques and sub-techniques (as covered in Lesson 24)
- Identify data sources: For each technique, determine what telemetry is required (process creation logs, network flow data, file modification events, etc.)
- Verify collection: Confirm your environment actually collects and retains the necessary telemetry
- Write detection logic: Create Sigma rules, YARA rules, or SIEM queries that target the specific procedures described in the report
- Test: Validate against simulated attacks or historical data
- Deploy and monitor: Push to production and track performance
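Steps 3 and 4 of the workflow (identify data sources, verify collection) can be sketched as a simple gap check. The technique IDs below are real ATT&CK IDs; the data source names and the set of collected sources are illustrative assumptions about a hypothetical environment:

```python
# Map each extracted technique to the telemetry it needs, then flag
# techniques whose required data sources the environment does not collect.

required = {
    "T1003.001": ["process_access"],                        # LSASS dumping
    "T1059.001": ["process_creation", "script_block_logging"],
    "T1071.001": ["network_flow", "http_proxy_logs"],
}

collected = {"process_creation", "network_flow"}  # what the SIEM ingests

gaps = {tid: [s for s in sources if s not in collected]
        for tid, sources in required.items()}

for tid, missing in gaps.items():
    status = "OK" if not missing else f"GAP: missing {missing}"
    print(f"{tid}: {status}")
```

A check like this, run before any rule is written, prevents the common failure mode of deploying detection logic against telemetry that was never being collected.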
Key Principle: The best detection rules target procedures (the specific implementation) rather than just techniques (the general category). A rule detecting "any PowerShell execution" is noisy; a rule detecting "encoded PowerShell launched by a Word macro process with a specific command structure" is precise.
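The noisy-versus-precise contrast can be shown with two predicates over a process-creation event. The field names are roughly Sysmon-style but illustrative, and both events are fabricated:

```python
# Technique-level vs procedure-level detection as plain Python predicates.

def broad_rule(event: dict) -> bool:
    # "Any PowerShell execution" -- fires constantly in most environments.
    return "powershell" in event.get("Image", "").lower()

def precise_rule(event: dict) -> bool:
    # Encoded PowerShell spawned by a Word process: procedure-level.
    img = event.get("Image", "").lower()
    parent = event.get("ParentImage", "").lower()
    cmd = event.get("CommandLine", "").lower()
    return ("powershell" in img
            and parent.endswith("\\winword.exe")
            and ("-enc" in cmd or "-encodedcommand" in cmd))

admin_activity = {
    "Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "ParentImage": r"C:\Windows\explorer.exe",
    "CommandLine": "powershell Get-Service"}
macro_attack = {
    "Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "ParentImage": r"C:\Program Files\Microsoft Office\WINWORD.EXE",
    "CommandLine": "powershell -enc SQBFAFgA..."}

print(broad_rule(admin_activity), precise_rule(admin_activity))  # True False
print(broad_rule(macro_attack), precise_rule(macro_attack))      # True True
```

The broad rule fires on both events; the precise rule fires only on the malicious one, because it encodes the procedure (parent process plus command-line structure) rather than the bare technique.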
DeTT&CT: Mapping Detection Coverage
DeTT&CT (Detect Tactics, Techniques & Combat Threats) is an open-source framework created by Marcus Bakker and Ruben Bouman that helps security teams map their detection coverage, data source quality, and visibility against MITRE ATT&CK.
DeTT&CT enables teams to:
- Assess data source quality: Rate how well each data source is collected, processed, and available for detection
- Map detection coverage: Document which ATT&CK techniques have detection rules, at what quality level
- Identify gaps: Visualize where detection coverage is missing for techniques relevant to priority threat actors
- Prioritize investments: Focus detection engineering effort on the highest-impact gaps
The framework uses YAML configuration files and generates ATT&CK Navigator layers, providing a visual representation of your detection posture.
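As a rough illustration of the output side of this process, a minimal ATT&CK Navigator layer can be built with the standard library alone. The technique IDs are real; the coverage scores are invented examples, and a production layer would also carry Navigator version metadata:

```python
# Sketch of a DeTT&CT-style output: a minimal ATT&CK Navigator layer
# scoring detection coverage per technique. Scores here are examples.
import json

coverage = {"T1003.001": 3, "T1059.001": 2, "T1071.001": 0}  # 0-5 scale

layer = {
    "name": "Detection coverage (example)",
    "domain": "enterprise-attack",
    "description": "Illustrative coverage scores, DeTT&CT-style",
    "techniques": [
        {"techniqueID": tid, "score": score}
        for tid, score in coverage.items()
    ],
}

print(json.dumps(layer, indent=2))
```

Loading a layer like this into ATT&CK Navigator colors the matrix by score, making coverage gaps visible at a glance.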
False Positive Management
False positives are the persistent challenge of detection engineering. Every false positive consumes analyst time and erodes trust in detection systems.
Strategies for managing false positives:
- Baseline the environment: Understand what is normal before writing rules. A rule alerting on `certutil.exe` downloading files will generate noise if your IT team uses certutil for legitimate certificate operations.
- Use allowlists surgically: Exclude specific known-good processes, users, or paths; never broadly suppress alerts
- Layer detection logic: Combine multiple conditions rather than relying on a single indicator
- Track false positive rates: Measure the ratio of false positives to true positives per rule and prioritize tuning for the noisiest rules
- Document tuning decisions: Record why each exclusion was added so future analysts understand the rationale
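The "track false positive rates" strategy above can be sketched as a small report over alert dispositions. The alert records and rule names below are fabricated examples:

```python
# Per-rule false-positive tracking: compute each rule's FP share from
# analyst dispositions and rank the noisiest rules for tuning first.
from collections import Counter

alerts = [
    {"rule": "lsass_access", "disposition": "true_positive"},
    {"rule": "lsass_access", "disposition": "false_positive"},
    {"rule": "certutil_download", "disposition": "false_positive"},
    {"rule": "certutil_download", "disposition": "false_positive"},
    {"rule": "certutil_download", "disposition": "false_positive"},
]

counts = Counter((a["rule"], a["disposition"]) for a in alerts)
fp_share = {}
for rule in {a["rule"] for a in alerts}:
    fp = counts[(rule, "false_positive")]
    tp = counts[(rule, "true_positive")]
    fp_share[rule] = fp / (fp + tp)

for rule, share in sorted(fp_share.items(), key=lambda kv: -kv[1]):
    print(f"{rule}: {share:.0%} of alerts were false positives")
```

A report like this turns "tune the noisiest rules" from a vague intention into a concrete, ordered work queue.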
Detection Quality Metrics
| Metric | Description |
|---|---|
| True Positive Rate | Percentage of real threats correctly detected |
| False Positive Rate | Percentage of alerts that are not actual threats |
| Mean Time to Detect (MTTD) | Average time from adversary action to alert firing |
| Coverage Percentage | Proportion of relevant ATT&CK techniques with deployed detections |
| Rule Health | Percentage of rules that have fired at least once in 90 days (rules that never fire may be misconfigured or irrelevant) |
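The Rule Health metric from the table is straightforward to compute from alert history. The dates and rule names below are fabricated:

```python
# Rule Health: share of deployed rules that fired at least once in the
# last 90 days. Rules with no recent firings may be misconfigured,
# targeting retired telemetry, or simply irrelevant.
from datetime import date, timedelta

today = date(2024, 6, 1)
last_fired = {                  # rule -> most recent alert date (or None)
    "lsass_access": date(2024, 5, 20),
    "certutil_download": date(2024, 1, 2),
    "old_ie_exploit": None,
}

cutoff = today - timedelta(days=90)
healthy = [r for r, d in last_fired.items() if d is not None and d >= cutoff]
health_pct = 100 * len(healthy) / len(last_fired)
print(f"Rule health: {health_pct:.0f}% "
      f"({len(healthy)}/{len(last_fired)} fired in 90 days)")
```

Rules that fall outside the healthy set become candidates for the review-and-retirement stage of the lifecycle described earlier.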
CTI and Detection Engineering: The Relationship
CTI analysts and detection engineers have a symbiotic relationship:
- CTI provides: Threat actor profiles, TTP analysis, procedure-level details, priority intelligence requirements, and context for what matters
- Detection engineering provides: Feedback on what is detectable, data source limitations, false positive challenges, and detection gap assessments
- Together they produce: Threat-informed, prioritized, and validated detection coverage
Organizations that separate these functions into silos lose the feedback loop. The most effective teams have CTI analysts who understand detection logic and detection engineers who consume threat intelligence.
Key Takeaways
- Detection engineering is a disciplined practice with a full lifecycle: creation, testing, tuning, and retirement
- Sigma provides a SIEM-agnostic detection rule format; YARA handles file/memory scanning; Snort/Suricata cover network traffic
- The intelligence-driven approach creates a direct pipeline from CTI reporting to deployed detection rules
- DeTT&CT enables systematic mapping of detection coverage against ATT&CK
- False positive management and detection metrics are essential for maintaining trust and effectiveness
- CTI and detection engineering are most effective when tightly integrated, not siloed
Practical Exercise
Using a recent public threat report (suggestions: any CISA Advisory, Mandiant blog post, or Recorded Future Insikt Group report):
- Extract three TTPs from the report at the procedure level
- Write one Sigma rule targeting one of the procedures (use the SigmaHQ rule format)
- Write one YARA rule targeting a file-based indicator from the report (a string pattern, byte sequence, or file characteristic)
- Map the data sources required for each detection — identify what logs you would need and what Event IDs or data fields are critical
- Assess gaps: If you were deploying these rules in an environment with only Windows Security Event Logs and basic network flow data (no Sysmon, no EDR), which detections would work and which would fail?
Further Reading
- Sigma Project. SigmaHQ Rule Repository. Available at: https://github.com/SigmaHQ/sigma
- Alvarez, Victor. YARA Documentation. Available at: https://yara.readthedocs.io/
- Bakker, Marcus and Bouman, Ruben. DeTT&CT Framework. Available at: https://github.com/rabobank-cdc/DeTTECT
- Roesch, Martin (1999). "Snort — Lightweight Intrusion Detection for Networks." Proceedings of LISA '99, USENIX.