How Defenders Actually Use Shadowserver's Data Feeds

Shadowserver's name comes up constantly in threat intelligence conversations, but coverage almost always stops at what the organization is rather than what practitioners do with it. The data it produces is genuinely different from commercial threat feeds, not because it is more comprehensive, but because it is structurally unique: collected independently by a global scanning and sensor network, enriched with sinkhole telemetry, and delivered at no cost to any network owner who asks. This article covers what that data actually looks like, how defenders ingest it, and where it has proved operationally decisive.


Every day, Shadowserver's infrastructure performs 42 complete scans of the entire IPv4 internet, processes data from one of the largest distributed sinkhole networks ever operated, and delivers over 90 distinct report types to network owners, CERTs, and national security agencies across the globe. As of end-2024, the organization had grown to more than 9,000 subscribing organizations, including 201 National CSIRTs covering 175 countries and territories — up from 132 National CSIRTs as recently as 2021 — covering everything from open DNS resolvers sitting on a university network to industrial control system interfaces exposed directly to the internet. The organization also marked its 20th anniversary in 2024, a milestone that coincides with its largest operational footprint yet.

None of that is secret. What rarely gets explained is the mechanics: how a SOC analyst actually receives and acts on this data, what the field structure of a Shadowserver CSV looks like, what a sinkhole beacon tells you that a firewall log cannot, and where defenders have operationally validated this intelligence against real intrusions.

The Report Architecture: What Shadowserver Actually Sends You

Shadowserver operates a free reporting service that any network owner can subscribe to at shadowserver.org. Registration requires verifying ownership of an ASN, IP prefix, or domain. Once verified, daily reports arrive via email as compressed CSV attachments, or they can be pulled programmatically through the Shadowserver API.

What Registration Actually Involves

The registration page asks you to provide an email address, the ASN or IP prefix you are claiming ownership of, and the name of your organization. Shadowserver then sends a verification email to the WHOIS-listed abuse or technical contact for that network. If your organization's WHOIS records are accurate and current, the process takes under an hour. If they are not — a common situation for organizations that inherited address space, changed domain registrars, or have outdated abuse contacts — it can stall indefinitely.

Two scenarios trip up practitioners more than any others. First, organizations on colocation or shared hosting environments often do not own the ASN their IPs appear under. In those cases, Shadowserver reports flow to the colocation provider, not the tenant. You will not receive findings about your own servers unless your hosting provider has subscribed and passes relevant findings downstream, which many do not. The correct path for colo tenants is to contact Shadowserver directly to discuss a prefix-level arrangement. Second, organizations operating cloud infrastructure on AWS, Azure, or GCP address space face a structurally different problem, covered in the limitations section below.

Registration Path

Register at shadowserver.org/what-we-do/network-reporting/get-reports/. Before starting, verify your WHOIS abuse contact is reachable. If your organization manages multiple ASNs or inherited address space from a merger, register each prefix separately — Shadowserver scopes reports to the exact prefix you claim.

The reports are categorized into several families. Understanding those families is the first step to using them effectively:

Report Family                    | What It Contains                                                                        | Typical Source
Vulnerable Services              | Exposed services with known CVEs, outdated versions, or dangerous default configs       | Active scanning (IPv4 + IPv6)
Compromised Devices              | IPs observed beaconing to sinkholes or C2 infrastructure                                | Sinkhole telemetry, passive DNS
Open Resolvers / Amplifiers      | DNS, NTP, SSDP, memcached, and other services that can be weaponized for DDoS amplification | Active scanning
ICS / SCADA Exposure             | Modbus, BACnet, Siemens S7, DNP3, and other OT protocols reachable over the public internet | Active scanning on protocol-specific ports
Honeypot Events                  | Attack telemetry logged by Shadowserver's global honeypot network                       | Passive honeypot infrastructure
Spam / Phishing Infrastructure   | IPs and domains observed in active spam campaigns or linked to phishing kits            | Spam traps, passive DNS, partner feeds
Analyst Note
IPv6 coverage is real but significantly narrower than IPv4

Shadowserver scans IPv6 in addition to IPv4, and the schema documentation confirms IPv6 support across several report families. In practice, IPv6 coverage is materially thinner than IPv4. The addressable IPv6 space is astronomically larger, making exhaustive scanning computationally impractical. Shadowserver prioritizes prefixes that have been announced in BGP and narrows further based on prior scan history and partner-contributed intelligence.

The operational implication is one defenders frequently miss: if your organization is dual-stack and has services exposed on IPv6 addresses, absence from a Shadowserver scan report does not mean those services are clean. It may mean they have not been reached. For IPv6 attack surface, pair Shadowserver data with your own scheduled scans using tools like zmap6 or authenticated assessments from your vulnerability scanner. Do not treat a missing finding as a clean bill of health.

Each CSV row represents a single observed event tied to a specific IP address. The core fields present across nearly all report types are timestamp, ip, port, protocol, asn, as_name, geo, region, city, and naics (industry sector code). Vulnerability-specific reports append fields like version, tag (the CVE or misconfiguration label), and severity.
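Because the delivery format is plain CSV with documented field names, ingestion needs nothing beyond the standard library. A minimal parsing sketch, using an illustrative two-row sample built from the core fields listed above:

```python
import csv
import io

# Illustrative sample using the core fields described above; values are
# documentation-style placeholders, not real observations.
SAMPLE = """timestamp,ip,port,protocol,asn,as_name,geo,region,city,naics
2026-02-14 03:12:44,198.51.100.47,53,udp,64512,EXAMPLE-AS,US,CALIFORNIA,SAN JOSE,518210
2026-02-14 03:15:02,198.51.100.89,123,udp,64512,EXAMPLE-AS,US,CALIFORNIA,SAN JOSE,611310
"""

def load_report(text: str) -> list[dict]:
    """Parse a Shadowserver CSV body into a list of row dicts keyed by header."""
    return list(csv.DictReader(io.StringIO(text)))

rows = load_report(SAMPLE)
print(rows[0]["ip"], rows[0]["naics"])  # 198.51.100.47 518210
```

In practice the same loader works on the decompressed email attachment or the API response body, since both carry the same column structure.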

Analyst Note
Why NAICS codes change your triage priority

The naics field (North American Industry Classification System) is underused by most practitioners who focus only on the IP and CVE. Its value is triage acceleration: NAICS 518210 (data processing and hosting) carries different urgency than NAICS 611 (educational services). A sinkhole beacon from a host in critical infrastructure sectors — utilities including energy and water (22), transportation (48), information and communications (51), healthcare (62) — warrants immediate escalation regardless of how generic the malware family tag is.

Practically, you can build SIEM alert severity tiers directly off naics prefixes. Any sinkhole row where naics starts with 22, 48, 51, or 62 should auto-escalate. This is a one-line rule that most teams never write.
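That one-line rule can be written out directly. A minimal sketch (the sector prefixes are the ones named above; the function name is ours):

```python
# Critical-infrastructure NAICS prefixes named above: utilities (22),
# transportation (48), information (51), healthcare (62).
CRITICAL_NAICS_PREFIXES = ("22", "48", "51", "62")

def auto_escalate(naics: str) -> bool:
    """The one-line triage rule: escalate any sinkhole row whose NAICS
    code starts with a critical-infrastructure sector prefix."""
    return str(naics).startswith(CRITICAL_NAICS_PREFIXES)

assert auto_escalate("518210")      # data processing / hosting (51)
assert auto_escalate("221118")      # electric power generation (22)
assert not auto_escalate("611310")  # colleges and universities (61)
```

The same predicate translates directly into a SIEM field-match rule; the Python form is just the easiest place to test it.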

Verified Data Point

As of early 2025, Shadowserver publishes schema documentation for every report type at shadowserver.org/what-we-do/network-reporting/report-types/. Each page lists every field name, its data type, a description, and an example value. This makes ingestion scripting straightforward and removes the guesswork that complicates integration with undocumented commercial feeds.

The API: Automating Feed Ingestion

Shadowserver's publicly documented API (api.shadowserver.org) accepts HMAC-SHA256-authenticated POST requests. The API covers three primary functions: querying reports for a given date and network scope, fetching summary statistics across report types, and querying the Shadowserver IP reputation database in real time.

A minimal Python ingestion loop looks something like this:

import datetime
import hashlib
import hmac
import json

import requests

API_KEY  = "your_api_key"
SECRET   = "your_secret"
BASE_URL = "https://api.shadowserver.org/net/reports"

def fetch_report(report_type: str, date: str, cidr: str) -> list:
    payload = {
        "apikey": API_KEY,
        "report": report_type,
        "date":   date,
        "query":  cidr,
    }
    # Serialize once with compact separators, sign those exact bytes, and
    # send those same bytes. Letting requests re-serialize the dict (via
    # json=) would reintroduce whitespace and invalidate the signature.
    body = json.dumps(payload, separators=(",", ":"))
    digest = hmac.new(SECRET.encode(), body.encode(), hashlib.sha256).hexdigest()
    r = requests.post(
        BASE_URL,
        data=body,
        headers={"Content-Type": "application/json", "HMAC2": digest},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()

# Example: pull yesterday's scan_exchange report for your /24
yesterday = (datetime.date.today() - datetime.timedelta(days=1)).isoformat()
rows = fetch_report("scan_exchange", yesterday, "203.0.113.0/24")
for row in rows:
    print(row["ip"], row["tag"], row.get("version", ""))
Analyst Note
How fresh is "daily"? Understanding the scan-to-delivery lag

Reports labeled with a given date reflect scans completed during that UTC day, but delivery to subscribers typically occurs in the early hours of the following day. For most operational purposes this is immaterial. For incident response, it matters: if a host was compromised on Tuesday afternoon and you are reading Wednesday's report, the scan that produced that finding may have run before the compromise. The host appears clean in the report because it was clean at scan time.

The practical consequence is that Shadowserver reports are a trailing indicator for individual incidents and a leading indicator for population-level exposure. Use them for baseline management and pattern detection, not for real-time incident confirmation. The API's reputation endpoint, which reflects a rolling 30-day window rather than a single-day snapshot, is better suited to ad-hoc triage during live incidents.

That gives you a list of dictionaries, each representing one exposed endpoint. From there the data can be pushed to a SIEM, written to a database, or used to trigger automated ticketing workflows. Teams with mature pipelines typically enrich each row with their own CMDB data before routing it to the appropriate asset owner.
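A minimal sketch of that enrichment step, assuming a hypothetical in-memory CMDB keyed by IP (the field names and lookup structure are illustrative; real pipelines would query an actual CMDB or asset database):

```python
# Hypothetical CMDB lookup keyed by public IP; field names are illustrative.
CMDB = {
    "203.0.113.10": {"owner": "mail-team", "env": "prod"},
    "203.0.113.22": {"owner": "web-team",  "env": "staging"},
}

def enrich(rows: list[dict]) -> list[dict]:
    """Attach asset-owner context to each Shadowserver row before routing
    it to a ticket queue. Unknown IPs are flagged rather than dropped."""
    for row in rows:
        asset = CMDB.get(row.get("ip"), {})
        row["owner"] = asset.get("owner", "unassigned")
        row["env"] = asset.get("env", "unknown")
    return rows

tickets = enrich([{"ip": "203.0.113.10", "tag": "cve-2021-26855"},
                  {"ip": "203.0.113.99", "tag": "open-dns"}])
```

The "unassigned" bucket is worth keeping visible: a Shadowserver finding for an IP your CMDB does not recognize is itself a signal, usually of shadow IT or stale inventory.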

Analyst Note
The one HMAC gotcha that breaks most first integrations

The HMAC-SHA256 signature must be computed over the JSON-serialized body with compact separators — no spaces after colons or commas. Many Python developers use the default json.dumps(payload) which inserts spaces, producing a different hash than what the server expects. The result is a silent authentication failure that returns an ambiguous error response rather than a clear 401.

The fix is json.dumps(payload, separators=(",", ":")) as shown above. If your requests are returning empty results or auth errors and you cannot identify why, this is the first thing to check.
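The mismatch is easy to demonstrate without touching the API at all; the two serializations differ by a few whitespace bytes, which is enough to change the digest entirely:

```python
import hashlib
import hmac
import json

SECRET = b"example-secret"
payload = {"apikey": "k", "report": "scan_exchange"}

compact = json.dumps(payload, separators=(",", ":"))
default = json.dumps(payload)  # default separators insert spaces

h1 = hmac.new(SECRET, compact.encode(), hashlib.sha256).hexdigest()
h2 = hmac.new(SECRET, default.encode(), hashlib.sha256).hexdigest()

print(compact)   # {"apikey":"k","report":"scan_exchange"}
print(default)   # {"apikey": "k", "report": "scan_exchange"}
print(h1 == h2)  # False: a few whitespace bytes change the whole digest
```

If you are debugging an integration, comparing these two strings byte-for-byte is faster than staring at the API response.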

Sinkhole Data: The Signal Most Defenders Ignore

Scanning-based reports are well understood. The sinkhole data is where things get interesting — and where many organizations are sitting on intelligence they do not know how to read.

A sinkhole is a server that has been configured to receive traffic originally destined for malware command-and-control infrastructure. When law enforcement seizes a botnet's domain or when security researchers register an expiring C2 domain before attackers can renew it, DNS for that domain is redirected to a sinkhole IP. Any infected device that tries to check in with its controller instead reaches the sinkhole, which logs the connection attempt without issuing commands.

Two different questions, two different data sources

What a scan report tells you:
- This host was reachable on this port at this timestamp
- The service version matches a known vulnerable build
- The configuration presents a known misconfiguration or default credential risk
- The device type and industry sector of the owner
CONCLUSION: This host is exposed and potentially vulnerable. It has not necessarily been compromised.

What a scan report cannot tell you:
- Whether the host has already been exploited
- Whether lateral movement has occurred from this host
- Whether the vulnerability was present before the scan cycle
- Whether a patch applied after the scan has closed the window

What a sinkhole report tells you:
- This host is actively infected and beaconing to known C2 infrastructure
- The malware family fingerprint as identified by the sinkhole
- Precise timestamp of the last beacon attempt
- Bot identifier or system fingerprint the malware included in the beacon
CONCLUSION: This host is compromised. Treat as an active incident, not a remediation ticket.

What a sinkhole report cannot tell you:
- How the initial compromise occurred
- Whether the malware is currently receiving active commands
- What data, if any, has been exfiltrated
- Infections from families not yet sinkholed by any researcher

Shadowserver operates one of the largest sinkhole networks in existence. According to the organization's own published figures, the network handles hundreds of millions of connection events per day across thousands of sinkholes. That scale means Shadowserver sees infected devices that no endpoint agent and no firewall log will ever surface — including devices that have no security software installed at all, such as compromised routers, NAS devices, printers, and IP cameras.

Kijewski has described Shadowserver's approach as delivering "free early warning, threat/vulnerability intelligence feeds and victim notification services to CSIRTs and network defenders worldwide" — sharing around one billion cyber events daily at no cost, with the explicit goal of reaching organizations that cannot afford commercial alternatives.

— Piotr Kijewski, CEO of The Shadowserver Foundation, Help Net Security interview, December 5, 2024

Shadowserver's sinkhole reports appear in the subscriber feed under report type families like botnet_drone, device_id, and malware-family-specific labels such as mirai, qakbot, or emotet (when those families are active). Each row includes the timestamp of the beacon, the source IP, source port, the sinkhole IP and port that received the connection, the malware family tag, and any additional metadata the beacon itself carried, such as a bot identifier or system fingerprint.

Reading a Sinkhole Row

A real-world sinkhole CSV row, simplified, looks like this:

timestamp,           ip,            port, protocol, tag,    asn,   geo, naics,       infection
2026-02-14 03:12:44, 198.51.100.47, 4921, tcp,      mirai,  64512, US,  518210,      mirai.gen

The infection field carries the botnet family identifier as fingerprinted by the sinkhole. The naics field — North American Industry Classification System — tells you the sector the affected IP belongs to. In the example above, NAICS 518210 maps to data processing, hosting, and related services. That context changes how urgently a CERT prioritizes the notification.

For defenders receiving this data about their own network, each row represents a device that is actively attempting to communicate with botnet infrastructure. The device may have been compromised days or months ago. The infection may be dormant, waiting for a command that will never come because the C2 is sinkholed. Or the device may be one of thousands in a maintained botnet that rotates to backup C2 domains regularly. Either way, the device is compromised and needs to be investigated.
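The scan-versus-sinkhole distinction above translates directly into routing logic. A minimal sketch, using report-family names mentioned in this article (the function and the mapping are ours, not part of Shadowserver's schema):

```python
# Report families this article identifies as sinkhole-derived. A real
# deployment would maintain this set from the published schema pages.
SINKHOLE_FAMILIES = {"botnet_drone", "device_id", "mirai", "qakbot", "emotet"}

def route_finding(report_type: str) -> str:
    """Sinkhole rows open incidents (the host is compromised); scan rows
    open remediation tickets (the host is exposed, not necessarily owned)."""
    return "incident" if report_type in SINKHOLE_FAMILIES else "remediation_ticket"

print(route_finding("botnet_drone"))   # incident
print(route_finding("scan_exchange"))  # remediation_ticket
```

The value of making this explicit in code is that it removes the most common triage failure: a compromised-device row sitting in a patching queue with a 30-day SLA.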

Integrating Shadowserver Feeds Into Security Operations

Organizations at varying maturity levels use Shadowserver data differently. Three integration patterns cover most deployments, each with different demands on complexity, coverage, and scalability.

Pattern 01: Email-to-Ticket
Pattern 02: SIEM + Asset Context
Pattern 03: National CERT Bulk
"We receive Shadowserver's daily reports for our entire national address space and use them as the backbone of our proactive outreach to operators. It is the only source that gives us consistent, comparable coverage across every sector."

— National CERT representative, in discussions on CERT tooling practices, as cited in ENISA threat landscape reporting on CSIRT capacity

Real-World Operational Use Cases

Several documented incidents illustrate where Shadowserver data provided the critical lead that internal monitoring missed.

The Microsoft Exchange ProxyLogon Disclosure (March 2021)

March 2, 2021
Microsoft discloses CVE-2021-26855 (ProxyLogon) and three related Exchange RCEs
Four zero-days disclosed simultaneously. Exploitation had already begun before the patch was public. HAFNIUM and at least five other threat actors were scanning for vulnerable Exchange servers within hours of the advisory.
March 2–14, 2021
Shadowserver begins scanning; vulnerable Exchange special reports go live
Shadowserver initiated scanning for exposed and unpatched Exchange deployments within hours of the disclosure. By March 9, the first mass exploitation Special Report was published. A March 14 scan performed in partnership with KryptosLogic identified 59,218 potentially vulnerable Exchange servers across 211 countries — giving network defenders external confirmation independent of internal patch tooling. Multiple follow-on special reports tracked webshell deployments on already-compromised hosts.
March 5–10, 2021
CISA and national CERTs use Shadowserver data for targeted outreach
Organizations appearing in the scan_exchange report received notifications from their national CERTs within days, not weeks. The detection-to-notification cycle, measured in days, is a concrete example of what free, independent scanning data enables at scale.
March 10+, 2021
Patch rate accelerates; scan_exchange finding count drops measurably
Teams using Shadowserver could use the daily feed as a live patch progress metric: if their Exchange server disappeared from the report, the patch took hold externally. If it persisted, the patch failed or had not been applied to all instances.
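That patch-progress metric is a set difference between consecutive daily report deliveries. A minimal sketch:

```python
def remediation_progress(yesterday_ips: set[str], today_ips: set[str]) -> dict:
    """Diff two consecutive daily scan-report IP sets. Hosts that disappear
    were remediated as seen from outside; hosts that persist still need work;
    new hosts are fresh exposure."""
    return {
        "remediated": yesterday_ips - today_ips,
        "persisting": yesterday_ips & today_ips,
        "new":        today_ips - yesterday_ips,
    }

progress = remediation_progress({"198.51.100.1", "198.51.100.2"},
                                {"198.51.100.2", "198.51.100.3"})
```

One caveat from the limitations discussed later: because scan frequency varies by report type, a single day's absence is weaker evidence than absence across several consecutive deliveries.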

Volt Typhoon and SOHO Router Compromise (2023–2024)

The sustained campaign attributed to China's Volt Typhoon, documented extensively in advisories from CISA, NSA, and the FBI beginning in May 2023, involved the systematic compromise of small office and home office (SOHO) networking devices — Cisco RV-series routers, Netgear ProSAFE products, and Fortinet FortiGate appliances — to build a covert relay network. Many of these devices were already flagged in Shadowserver's vulnerable service reports for months before the advisory dropped, carrying tags for outdated firmware, exposed management interfaces, and in some cases CVEs that had been published over a year prior.

Analyst Note
The Volt Typhoon finding changes how you think about Shadowserver data

The critical insight from the Volt Typhoon case is one of framing: a Cisco RV320 in your scan_http_vulnerable or ICS exposure feed with an outdated firmware version did not need to be attributed to a nation-state actor to justify immediate remediation. The exposure was the problem. The adversary context came later.

This is how Shadowserver data should be used: act on the exposure, not the attribution. Organizations that waited for threat intelligence teams to assess whether Volt Typhoon was targeting their sector missed a remediation window that had been sitting open in their Shadowserver reports for months. The data told them; they were not reading it.

Teams that had been reviewing their reports weekly would have had a Cisco RV-series device flagged for outdated firmware — an actionable finding entirely divorced from geopolitics — long before any government advisory framed it as a national security issue.

Mirai Variant Tracking

Shadowserver sinkholes have been used in coordinated takedowns and tracking operations for Mirai variants since the original botnet's appearance in 2016. For ISPs, the sinkhole data provides a continuously updated list of infected devices within their subscriber base. A mid-sized ISP receiving daily Shadowserver sinkhole reports can send automated infected-host notifications to residential customers by cross-referencing the IP list against its DHCP lease records. Some ISPs in the Netherlands and Germany have operated exactly this workflow for several years, using Shadowserver data as the trigger for their abuse desk processes.
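A sketch of that abuse-desk trigger, assuming a hypothetical in-memory lease table (a production version would query the DHCP server's lease database and also match lease timing against the beacon timestamp):

```python
# Hypothetical DHCP lease snapshot: public IP -> subscriber account.
LEASES = {"203.0.113.50": "acct-1001", "203.0.113.51": "acct-1002"}

def notify_targets(sinkhole_rows: list[dict]) -> list[tuple[str, str, str]]:
    """Map each sinkhole beacon to a subscriber account for abuse-desk
    notification; IPs with no current lease are skipped for manual review."""
    out = []
    for row in sinkhole_rows:
        acct = LEASES.get(row["ip"])
        if acct:
            out.append((acct, row["ip"], row["infection"]))
    return out

hits = notify_targets([{"ip": "203.0.113.50", "infection": "mirai.gen"},
                       {"ip": "203.0.113.99", "infection": "mirai.gen"}])
```

The unmatched-IP case matters in practice: a beaconing IP with no active lease usually means the report timestamp predates a lease churn, which is exactly the reallocation pitfall discussed in the next section.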

When the Compromised Device Has No Owner: Unmanaged and OT Assets

A sinkhole report naming a printer, an IP camera, or a building automation controller presents a response problem that most incident response playbooks were not written for. The device has no EDR agent, no local log storage worth examining, and often no mechanism for remote remediation short of a factory reset. For organizations receiving these findings, the operational question is not "how do we investigate this?" but "how do we contain and replace it while keeping the function it serves online?"

The structured path for unmanaged and OT assets identified in Shadowserver sinkhole data runs roughly as follows. First, confirm the IP is currently assigned to the device in question and has not been reallocated — DHCP assignment logs are the fastest path. Second, isolate the device at the network layer before attempting any hands-on intervention: VLAN change or ACL block at the access switch. Third, treat the device as persistently compromised regardless of whether a reboot clears the visible symptom; many IoT malware families survive reboots by writing to flash storage. The device should be reimaged to factory firmware from a verified clean source, or replaced if firmware update tooling is unavailable.

Analyst Note
Shadowserver's ICS exposure data surfaces a different category of risk

The ICS/SCADA report family — covering Modbus, BACnet, DNP3, Siemens S7, and similar protocols — identifies industrial systems that are directly reachable from the internet over their native protocol. These are not cases where a device is behind a firewall that has been misconfigured; in many cases the device has a public IP and is genuinely listening on the OT protocol port with no authentication required.

For organizations in manufacturing, utilities, or facilities management, Shadowserver's ICS reports are some of the highest-value findings the service generates, and some of the most ignored. The gap is organizational: ICS reports arrive in a security inbox while the engineers who manage the physical systems are in a completely different department. Building the workflow that connects a Shadowserver ICS finding to an OT team member — not a SOC analyst — is a process problem, not a technical one. It requires a mapping from ICS-relevant NAICS codes and port signatures to an OT-specific escalation path.

Operation Endgame and Law Enforcement Collaboration

In May 2024, Shadowserver participated in Europol's Operation Endgame — the largest-ever coordinated action against malware loader infrastructure — which disrupted IcedID, SystemBC, Pikabot, Smokeloader, and Bumblebee simultaneously. Shadowserver's sinkhole and victim notification infrastructure was used to alert compromised organizations in the aftermath, and the Foundation subsequently released historical infection datasets as one-off Special Reports so network defenders could investigate prior compromises.

In November 2025, a follow-on disruption action under Operation Endgame targeted the Rhadamanthys information stealer. Law enforcement acquired the threat actor's databases covering the period March 14 through November 11, 2025 — records of over 86 million stolen data items from more than 525,000 infections across 226 countries. Shadowserver released a subset of this dataset as a Special Report, providing a direct, structured path for defenders to identify whether any of their IPs appeared in the infection records.

Operational Note: Post-Exploitation Framework Tracking

Since 2024, Shadowserver has been tracking 40+ post-exploitation frameworks — commercial red team tools such as Cobalt Strike, Brute Ratel, and their cracked or stolen variants — through its scan infrastructure. Daily statistics on detected instances are available on the public Dashboard, while per-IP data is shared directly with National CSIRTs through the Post-Exploitation Framework Report. For SOC teams, this is one of the most operationally immediate signals the service produces: an active C2 framework fingerprint on your network is not a vulnerability finding — it is an active intrusion indicator.

Report Severity Levels: A 2023 Change That Matters for Triage

In October 2023, Shadowserver introduced formal severity levels across all report types and individual events. Each report type is now assigned a severity — CRITICAL, HIGH, MEDIUM, LOW, or INFO — and each row within a report carries an event-level severity as well. This change has significant operational implications for teams that previously treated all Shadowserver findings with equal priority: you can now filter the entire daily delivery to surface only CRITICAL events, or build SIEM ingestion rules that triage automatically based on severity before any analyst touches the data. A sinkhole beacon from a Mirai-infected printer ranks CRITICAL by default; an informational device identification finding for a cloud host ranks INFO. The severity schema is documented in Shadowserver's release notes and reflected in the CSV field structure.
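A minimal severity-floor filter built on that schema (the severity labels are Shadowserver's; the numeric ordering table is ours):

```python
# Shadowserver's five severity labels, ranked for comparison.
SEVERITY_ORDER = {"INFO": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def at_least(rows: list[dict], floor: str) -> list[dict]:
    """Filter a daily delivery down to events at or above a severity floor.
    Rows missing a severity field default to INFO rather than being dropped."""
    threshold = SEVERITY_ORDER[floor]
    return [r for r in rows
            if SEVERITY_ORDER.get(r.get("severity", "INFO"), 0) >= threshold]

rows = [{"ip": "a", "severity": "CRITICAL"},
        {"ip": "b", "severity": "INFO"},
        {"ip": "c", "severity": "HIGH"}]
urgent = at_least(rows, "HIGH")
```

Running this before enrichment or ticketing means analysts see the compromised-host and actively-exploited rows first, with the INFO tail handled asynchronously.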

Who Funds Shadowserver, and What Happens to Your Data

This question comes up in every enterprise security team that seriously evaluates Shadowserver as an operational dependency, and it is one that coverage of the organization rarely answers: who is behind this, who pays for it, and what do they do with the network information you share when you register?

The Shadowserver Foundation is a registered nonprofit organization founded in 2004. Its funding comes from a documented and publicly disclosed mix of sources. The primary institutional backer as of 2024 is the United Kingdom's Foreign, Commonwealth and Development Office (UK FCDO), which has provided multi-year support covering capacity building across Africa, the Indo-Pacific, Central and Eastern Europe, the Gulf region, and ASEAN countries. Craig Newmark Philanthropies has provided substantial financial support, including a previously disclosed $500,000 donation, and Craig Newmark is described by the organization as its second-largest funding source. The APNIC Foundation provided additional funding in 2024. The organization's Shadowserver Alliance, a growing consortium of like-minded organizations including Mastercard, Avast, Trend Micro, and Akamai, contributes through partnerships, voluntary invoicing, and funded projects. Shadowserver discloses its funding composition on its website and in annual reviews.

The funding picture has changed meaningfully since 2020, when the withdrawal of prior industry support briefly threatened the organization's operations. The resolution of that funding crisis — and the subsequent growth in government and civil society backing — has placed Shadowserver on a more diversified footing. Notably, in 2024 Shadowserver was recognized by vulnerability intelligence firm VulnCheck as the "Earliest Reporter of Exploitation in the Wild," reflecting the organization's operational centrality to global vulnerability tracking.

On data handling: the information Shadowserver collects is network-level telemetry observed from the public internet. It does not involve accessing your internal systems. When you register a prefix, you are not granting Shadowserver any access to your network; you are claiming ownership of data that Shadowserver is already collecting about your publicly visible address space. The registration simply determines who receives the report about that data.

Analyst Note
The dependency risk is operational, not privacy-related

The legitimate concern about Shadowserver is not data privacy but operational dependency. If your security program has embedded Shadowserver data as a primary detection source and the organization loses funding again, that detection capability disappears. The 2020 funding crisis resolved quickly, but it demonstrated that a free service with concentrated funding sources carries availability risk that a commercial feed does not.

The mitigation is not to avoid Shadowserver, but to build it into your detection architecture as a layer rather than a foundation. Pair it with at least one other external data source — Shodan or Censys for attack surface visibility, your own scheduled scan results — so a gap in Shadowserver coverage does not leave a blind spot. Treat the API as a supplementary signal, not the only one.

Limitations and What Shadowserver Cannot Do

Understanding where Shadowserver data falls short is as important as knowing where it excels.

Hard limit
Point-in-time, not continuous

Scan findings reflect the state of a host at the moment the scanner reached it. The host may have been patched an hour later or compromised an hour before. Without internal change management correlation, you cannot tell which. Shadowserver gives you a timestamp, not a live feed.

Hard limit
Sinkhole coverage is bounded by known families

If a threat actor uses a new C2 framework or fast-flux domain not yet registered by a researcher, that traffic will not appear in Shadowserver's sinkhole reports. Coverage is strongest for established botnet families with predictable DGAs or static C2 domains that have been publicly documented and seized.

Soft limit
Uneven scan frequency across ports

Shadowserver does not cover the entire IPv4 address space with equal frequency across all ports. Some campaigns run daily; others weekly or monthly. Absence from a Shadowserver report does not mean a service is not exposed — it may simply not have been reached in the current cycle. The scanning scope per report type is documented in the schema pages.

Hard limit
Cloud and dynamic IPs require different handling

Public cloud IP ranges belong to AWS, Azure, or GCP, not to your organization. Shadowserver findings for those IPs flow to the cloud provider's abuse contacts, not to you. If your application running on an EC2 instance is flagged in a scan report, you will not receive that notification through the standard reporting path. Tagging findings against your cloud inventory requires a separate workflow: pull Shadowserver data for the relevant provider ASN and cross-reference against your own cloud asset inventory to find your instances. Some organizations use cloud CSPM tools alongside Shadowserver for this reason.
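A sketch of that cross-reference workflow, assuming you have already pulled the provider-ASN rows via the API and maintain an inventory of your own instances' public IPs (both inputs here are illustrative):

```python
# Hypothetical inventory of your own cloud instances' public IPs, e.g.
# exported from your cloud provider's API or a CSPM tool.
MY_CLOUD_IPS = {"52.0.0.10", "52.0.0.11"}

def my_cloud_findings(provider_rows: list[dict]) -> list[dict]:
    """From a provider-ASN-wide Shadowserver pull, keep only the rows that
    match instances in your own cloud asset inventory."""
    return [r for r in provider_rows if r["ip"] in MY_CLOUD_IPS]

findings = my_cloud_findings([{"ip": "52.0.0.10",  "tag": "open-ssh"},
                              {"ip": "52.0.0.200", "tag": "open-ssh"}])
```

Because cloud public IPs are ephemeral, the inventory snapshot should be taken as close to the report date as possible; matching a day-old report against today's elastic IPs produces both false hits and misses.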

Soft limit
Findings require validation before action

Shadowserver scan data can produce false positives, particularly for version-based vulnerability tags where the service banner reports an outdated version string that does not reflect the actual patch state. Some vendors backport security fixes without updating version strings; a host may appear vulnerable in a Shadowserver report because its banner says "OpenSSH 7.4" while running a distribution-patched build that contains all relevant CVE fixes. Before escalating a scan finding to a remediation ticket, confirm the actual patch state through your vulnerability scanner or direct inspection. Sinkhole findings require far less skepticism — an active beacon to a sinkhole is not a false positive in the same sense — but the source IP should always be confirmed against DHCP records before incident response is initiated.

Scope limit
Perimeter-only visibility

Shadowserver data is outward-facing by design. It tells you what is visible from the internet. It says nothing about lateral movement, internal misconfigurations, user behavior, or anything that does not generate externally observable network activity. It is a perimeter intelligence tool, not a full-spectrum detection platform.

Complementary Tools

Shadowserver data is most effective when paired with Shodan or Censys for broader attack surface visibility, your internal vulnerability scanner for authenticated assessment, and threat intelligence platforms (MISP, OpenCTI) for adversary context. No single source covers the full picture.

Key Takeaways

  1. Register your network prefixes: The Shadowserver reporting service is free and requires only proof of network ownership. Any organization that has not yet registered its ASN or IP prefixes is receiving none of this data and has no external baseline for its internet-facing exposure.
  2. Sinkhole reports require different handling than scan reports: A host appearing in a vulnerability scan report is exposed. A host appearing in a sinkhole report is actively infected. Prioritize sinkhole findings for immediate incident response triage regardless of asset criticality scores.
  3. Use severity levels to triage automatically: Since October 2023, every Shadowserver report type and every event row carries a severity field. Build your ingestion pipeline to filter on CRITICAL events first — you will surface compromised hosts and actively exploited services without requiring an analyst to read every CSV row.
  4. Automate ingestion and enrichment: Daily email review of raw CSVs does not scale. Even a simple script that cross-references Shadowserver IPs against your CMDB and creates tickets will dramatically increase the operational value of the data.
  5. Repeat findings signal process failures: If the same IP appears in the same Shadowserver report type across multiple weeks or months, the problem is not detection — it is remediation. That pattern is worth surfacing explicitly in SIEM dashboards as a metric for patch and configuration management effectiveness.
  6. Use the API for real-time IP reputation checks: The Shadowserver API supports ad-hoc queries against its reputation database. Security orchestration playbooks can query Shadowserver during alert triage to check whether a flagged IP has appeared in any Shadowserver report within the past 30 days, adding context that accelerates analyst decisions without requiring a full feed integration.
  7. Monitor Special Reports during major incidents: When law enforcement disrupts a botnet or a critical vulnerability is disclosed, Shadowserver publishes one-off Special Reports within days. Subscribing to the mailing list and monitoring the Shadowserver news feed means your team gets structured, per-IP data on your address space before mainstream coverage reaches most practitioners.
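Takeaway 3 is straightforward to implement. One sketch of a severity-first sort over report rows, using the documented Shadowserver severity levels (critical, high, medium, low, info) and illustrative sample data:

```python
import csv
import io

# Lower number = higher priority; unknown values sink to the bottom.
SEVERITY_ORDER = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3, "INFO": 4}

def critical_first(report_csv_text):
    """Order event rows so CRITICAL findings surface first.
    Every Shadowserver report row has carried a 'severity' field
    since October 2023."""
    rows = list(csv.DictReader(io.StringIO(report_csv_text)))
    return sorted(rows, key=lambda r: SEVERITY_ORDER.get(
        r.get("severity", "").upper(), 99))

report = ("ip,severity,tag\n"
          "198.51.100.5,medium,open-dns\n"
          "198.51.100.9,critical,sinkhole-beacon\n")
ordered = critical_first(report)
# The sinkhole beacon now leads the analyst queue.
```

In a real pipeline, the sorted output would feed the ticketing or SIEM integration from takeaway 4 rather than a list in memory.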

Shadowserver's data has been quietly underpinning national-level cyber defense for over two decades. For most individual organizations, the barrier to using it is not cost or complexity — both are minimal — but awareness. The data exists, it is updated daily, and it will tell you things about your network's internet-facing posture that no internal tool can replicate. Getting the most out of it requires treating it as a live operational feed rather than a periodic reference, building the integrations that surface its findings in the same place where analysts already work, and understanding that absence of findings is a property of scanning cycles, not a guarantee of a clean network.

Sources & References
  1. The Shadowserver Foundation. "Network Reporting — Report Types." shadowserver.org. Accessed March 2026.
  2. The Shadowserver Foundation. "Shadowserver API Documentation." api.shadowserver.org. Accessed March 2026.
  3. The Shadowserver Foundation. "What Types of Reports Do You Offer?" FAQ. shadowserver.org. Confirms 90+ report types as of 2025.
  4. The Shadowserver Foundation. "Shadowserver Dashboard." shadowserver.org/statistics. Source for 42 full IPv4 internet scans per day statistic.
  5. The Shadowserver Foundation. "Shadowserver 2024: Highlights of the Year in Review." February 17, 2025. shadowserver.org. Source for 9,000+ subscribers, 201 National CSIRTs, 175 countries, 20th anniversary, Craig Newmark Philanthropies and UK FCDO funding, Operation Endgame participation, and post-exploitation framework tracking.
  6. The Shadowserver Foundation. "Shadowserver Special Reports — Exchange Scanning #4." March 2021. shadowserver.org. Source for 59,218 vulnerable Exchange servers identified on March 14, 2021.
  7. The Shadowserver Foundation. "Introducing Report Severity Levels." October 2023. shadowserver.org.
  8. Kijewski, Piotr. "How the Shadowserver Foundation Helps Network Defenders with Free Intelligence Feeds." Help Net Security interview, December 5, 2024. helpnetsecurity.com. Source for operational context, funding structure, and threat landscape commentary.
  9. APNIC Foundation. "Support for Shadowserver 2024." October 2024. apnic.foundation. Confirms APNIC Foundation as a 2024 funder; infrastructure spanning 80 countries.
  10. The Shadowserver Foundation. "Rhadamanthys Historical Bot Infections Special Report." November–December 2025. shadowserver.org. Source for Operation Endgame Season 3 Rhadamanthys dataset: 86M+ stolen data items, 525,000+ infections (March 14–November 11, 2025), 226 countries.
  11. ENISA. "ENISA Threat Landscape 2023." European Union Agency for Cybersecurity, October 2023. enisa.europa.eu.
  12. CISA, NSA, FBI. "People's Republic of China State-Sponsored Cyber Actor Living off the Land to Evade Detection." Joint Advisory AA23-144A, May 24, 2023. cisa.gov.
  13. Microsoft Security Response Center. "Multiple Security Updates Released for Exchange Server." MSRC Blog, March 2, 2021. msrc.microsoft.com.