Microsoft Sentinel in 2026 and How to Control Ingestion Costs

Microsoft Sentinel has come a long way since its GA in 2019. The detection library is deep, the automation through Logic Apps works, and the integration with the broader Defender stack covers identity, endpoint, and cloud posture under one console. We covered the architecture in our original Sentinel piece from 2019, and most of that still holds.
But in every Sentinel conversation we have with enterprise clients today, the first question is not about capability. It is about cost.
Sentinel bills on data ingestion volume. In a mid-size Azure environment with Entra ID, a few hundred VMs, network security groups, and a handful of SaaS integrations, an unmanaged Sentinel workspace can easily reach 50-100 GB/day. At the default pay-as-you-go rate of roughly EUR 2.50 per GB for Analytics Logs (approximate West Europe pricing, check current rates for your region), that is EUR 4,500 to EUR 9,000 per month before anyone writes a detection rule. We have seen organisations hit EUR 15,000/month within weeks of “just connecting everything” because nobody designed an ingestion strategy.
The good news: Microsoft has shipped several mechanisms over the past few years that give you real control. The bad news: most of them require upfront design decisions that too many teams skip.
The Log Dumpster Antipattern
The most common mistake we see in Sentinel deployments is what we call the log dumpster. Someone enables every available data connector, all logs flow into the workspace at the Analytics tier, and costs spiral. Three months later, a cost alert fires and the security team scrambles to figure out what to cut.
The core problem is that nobody asked two questions before enabling each connector: “What detection rules will use this data?” and “How often will an analyst query it?”
In a typical enterprise workspace, 80% of ingested data is never referenced by a scheduled analytics rule or a manual investigation. It sits in the workspace, burning budget, because someone thought “more data is always better” without considering the cost per GB.
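You can test that claim against your own workspace with a Usage query that ranks tables by billable volume. A sketch, using the standard Log Analytics Usage table (Quantity is reported in MB):

// Sketch: billable ingestion per table over the last 30 days
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType
| sort by IngestedGB desc

Cross-reference the top tables against your enabled analytics rules: any high-volume table no rule touches is a candidate for filtering, tier change, or removal.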
An ingestion strategy starts with use cases, not connectors.
What to Onboard First
If you are standing up Sentinel or cleaning up an existing deployment, here is a practical starting sequence for many Azure-first enterprises. Microsoft does not prescribe a universal first five for all customers, so adapt this to your environment.
Entra ID sign-in and audit logs come first. Identity is the perimeter in cloud environments. Failed sign-ins, risky sign-ins, conditional access changes, role assignments, application consent grants. The volume is moderate (typically 1-5 GB/day for a mid-size tenant) and almost every useful detection rule references these tables.
Azure Activity Logs are next. Subscription-level control plane events: who created, modified, or deleted resources. Low volume, high value. Activity Logs also feed into several built-in Sentinel analytics rules out of the box.
Then Defender for Cloud alerts. If you run Defender plans (Defender for Servers, Defender for Containers, etc.), forward the alerts to Sentinel. Alert volume is small, and these are pre-correlated signals that save your SOC from writing detections Microsoft already ships.
Add NSG flow logs for network visibility. Flow data tells you who is talking to whom. The volume can be significant, but NSG flow logs are a strong candidate for the Basic Logs tier (more on that below).
Finally, pick one key application. Your most business-critical or most exposed application. Ingest its Application Insights or custom logs into Sentinel. Build detection rules specific to it. Get good at one before onboarding twenty.
Everything else should go through an explicit cost/value review before connecting.
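To illustrate the kind of detection the identity sources above enable immediately, here is a hunting-style sketch against SigninLogs (thresholds are illustrative and need tuning per tenant; ResultType "0" indicates success):

// Sketch: accounts with repeated failed sign-ins over the last 24 hours
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"
| summarize FailedAttempts = count(), DistinctIPs = dcount(IPAddress) by UserPrincipalName
| where FailedAttempts > 20
| sort by FailedAttempts desc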
Azure docs: Free data sources for Sentinel · Data connectors reference
Data Collection Rules Changed Everything
Before 2022, getting data into a Log Analytics workspace meant installing the legacy Log Analytics agent (MMA) or configuring diagnostic settings that shipped raw, unfiltered logs. You paid for every byte that landed in the workspace regardless of whether you needed it.
Data Collection Rules (DCRs) replaced that model. A DCR sits between the data source and the workspace and lets you filter, transform, and route data before it counts against your ingestion. The Azure Monitor Agent (AMA) uses DCRs natively, and workspace-transformation DCRs apply to logs arriving via diagnostic settings or API.
One important caveat: DCRs are a major cost lever for AMA-based and supported data flows, but many important Sentinel sources have connector-specific ingestion behavior and less flexible pre-ingestion filtering. SaaS connectors, Microsoft security product integrations, and some managed data connectors ingest data with limited or no DCR-level filtering. For those sources, cost control happens through source selection and table strategy rather than pre-ingestion transformation.
Practical example: Windows Security Events. The full SecurityEvent table can generate 10-20 GB/day per hundred servers. A DCR that filters to only the event IDs your analytics rules actually reference (4624, 4625, 4648, 4672, 4688, 4720, 4732, and a handful of others) can cut that volume by 60-80%.
// DCR transformation query example: keep only security-relevant Event IDs
source
| where EventID in (4624, 4625, 4648, 4672, 4688, 4720, 4732, 4740, 4776)
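Before writing that filter, measure which event IDs actually dominate your volume. A sketch using the _BilledSize system column (available on Analytics tables):

// Sketch: rank SecurityEvent volume by event ID over the last 7 days
SecurityEvent
| where TimeGenerated > ago(7d)
| summarize Events = count(), BilledMB = round(sum(_BilledSize) / (1024.0 * 1024.0), 1) by EventID
| sort by BilledMB desc

The output tells you whether the DCR allowlist above covers the expensive IDs or whether a noisy event (often 4662 or 5156) deserves its own exclusion.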
DCRs are also how you route different data to different destinations. High-value security logs go to your Sentinel workspace at the Analytics tier. Verbose operational logs go to a separate workspace or to the Basic Logs tier. Raw syslog from network appliances gets filtered down to only the severity levels and facility codes that matter.
If you deployed Sentinel before DCRs existed and never revisited your data collection, you are almost certainly overpaying.
Azure docs: Data Collection Rules overview · Workspace transformation DCRs
Basic Logs vs Analytics Logs
Microsoft offers two ingestion tiers, and choosing the right one per table is one of the most effective cost levers.
Analytics Logs cost approximately EUR 2.50/GB (rough estimate, pay-as-you-go, West Europe; pricing varies by region and changes over time). Full KQL query capability. 90-day default interactive retention. Your scheduled analytics rules, hunting queries, and workbooks all run against Analytics tables. Use this tier for data that drives detections.
Basic Logs cost approximately EUR 0.50/GB (rough estimate, check current pricing for your region). You get 30-day interactive retention with limited KQL (no joins, no aggregations outside a few specific operators, 8-minute query timeout). Scheduled analytics rules cannot query Basic Log tables. Some Sentinel experiences and workbook features also behave differently with Basic Log tables. Use this tier for selected high-volume, low-detection-value data that you need for investigation and incident response, not as a blanket downgrade strategy.
Good candidates for Basic Logs: NSG flow logs, verbose firewall/proxy logs, raw syslog from network appliances, container stdout logs, storage account access logs. These tables can represent 40-60% of total ingestion volume in many environments, so moving them to Basic Logs cuts the effective per-GB cost substantially.
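Investigation against a Basic Logs table still works for the common case of simple filter-and-project queries. A sketch against ContainerLogV2 (a typical Basic candidate; remember: no joins, a restricted operator set, and a per-GB-scanned search charge):

// Sketch: incident-response lookup against a Basic Logs table
ContainerLogV2
| where TimeGenerated > ago(7d)
| where LogMessage has "connection refused"
| project TimeGenerated, PodNamespace, PodName, LogMessage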
The decision is per table. You configure it in the workspace table settings, and you can change it. Start a new table as Basic, and if you later find you need scheduled rules against it, promote it to Analytics.
Azure docs: Configure Basic Logs · Log data plans
Retention: Stop Overpaying for Defaults
Sentinel and Azure Monitor retention economics are often confused because Sentinel sits on top of a Log Analytics workspace and the pricing layers overlap. The key distinction: ingestion charges, table plan (Analytics vs Basic), interactive retention, and long-term archive retention are all separate billing dimensions. Sentinel workspaces default to 90 days of interactive retention for Analytics tables. Beyond that, you can configure archive retention up to 12 years. Archived data costs a fraction of interactive storage but requires a search job or restore operation to query.
Most teams leave the 90-day interactive default and never configure archive policies. The result: either they keep data longer than needed at the expensive interactive tier, or they lose data they might need for compliance or long-running investigations.
Design retention per table based on two questions: “How far back do analysts regularly query this data?” (that is your interactive retention) and “How long must we keep it for regulatory or forensic purposes?” (that is your archive window). For many enterprises operating under NIS2 or DORA, audit logs may need 2-5 years of archive retention, while verbose network flow data might only need 90 days total.
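When archived data is needed, you submit a search job (via the portal or the Log Analytics search-jobs API) whose body is an ordinary filter query. A sketch, with illustrative dates and account name:

// Sketch: the kind of query submitted as a search job against archived data
SigninLogs
| where TimeGenerated between (datetime(2024-01-01) .. datetime(2024-03-31))
| where UserPrincipalName == "first.last@example.com"

Results land in a new table in the workspace, where normal interactive KQL applies.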
Content Hub: Start With 10 Rules, Not 100
Sentinel’s content hub ships hundreds of analytics rule templates, workbooks, playbooks, and hunting queries. The temptation to enable everything is strong. Resist it.
Enabling 200 analytics rules on day one creates an alert storm. Your SOC will see hundreds of incidents per day, most of them low-confidence or irrelevant to your environment. Analysts burn out triaging noise, and genuine threats get buried. We have watched teams disable Sentinel incidents entirely because the volume was unmanageable, which is worse than having no SIEM at all.
Start with 10-15 high-confidence rules that match your onboarded data sources. For the five data sources we recommended above, that typically means: impossible travel detection, sign-in from unfamiliar locations, mass download or deletion events in Entra ID, privilege escalation (new Global Admin, new role assignment), suspicious Azure resource deployment patterns, and Defender alert correlation rules.
Run those for two weeks. Tune the thresholds to your environment. Understand the false positive patterns. Then add the next batch of 10. After three months you will have 30-40 well-tuned rules that generate actionable incidents instead of 200 rules that generate noise.
Commitment Tiers
If your workspace consistently ingests 100 GB/day or more, commitment tiers save 30-50% compared to pay-as-you-go. The tiers start at 100 GB/day and scale to 5,000+ GB/day with increasing discounts.
At the 100 GB/day tier, the effective rate drops from roughly EUR 2.50/GB to around EUR 1.70/GB (West Europe pricing, check current rates). At 500 GB/day, the discount deepens further.
Two things to watch: commitment tiers have a 31-day minimum. If your ingestion is volatile (say, 120 GB/day on weekdays but 30 GB/day on weekends), model the effective daily average before committing. Also, commitment tiers apply to the entire workspace, not per table. Basic Logs are billed separately and do not count toward the commitment tier.
Use the Sentinel cost workbook to model your actual ingestion patterns before picking a tier.
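Alongside the workbook, a quick Usage query gives you the daily distribution, which is what matters for volatile ingestion:

// Sketch: daily billable ingestion over 90 days, to sanity-check a commitment tier
Usage
| where TimeGenerated > ago(90d)
| where IsBillable == true
| summarize DailyGB = sum(Quantity) / 1024 by bin(TimeGenerated, 1d)
| summarize AvgDailyGB = avg(DailyGB), P90DailyGB = percentile(DailyGB, 90), MaxDailyGB = max(DailyGB)

If the average sits comfortably above a tier boundary, commit; if only the P90 does, pay-as-you-go may still win.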
Workspace Architecture
A single Sentinel workspace with table-level RBAC is often the simplest and most cost-efficient starting point, unless compliance, geography, or operating model constraints justify multiple workspaces. Multi-workspace deployments add operational complexity (cross-workspace queries are slower, analytics rules cannot natively span workspaces, and incident management becomes fragmented) without a proportional benefit unless you have a hard requirement.
Legitimate reasons for multiple workspaces: data residency regulations that require logs to stay in a specific Azure region, legal separation between entities, MSSP scenarios where tenant isolation is required, M&A situations where integration timelines matter, business-unit autonomy requirements, or extreme cost allocation needs where different units must have completely separate billing.
If none of those apply, start with one workspace per tenant, use resource-context RBAC to scope analyst access, and route data tiers through DCRs and table configuration.
Sentinel Cost Review Checklist
If you have a running Sentinel workspace and suspect you are overpaying, walk through these items:
- List every enabled data connector and the daily GB it ingests. The Sentinel cost workbook or a Usage table KQL query gives you this breakdown
- For each connector, identify which analytics rules reference its tables. If no rules reference a table, ask whether it belongs in the workspace at all
- Move high-volume, low-signal tables to Basic Logs (NSG flows, raw firewall logs, container stdout, storage access logs)
- Review Data Collection Rules. Are you filtering at collection time, or ingesting raw and ignoring 80% of it?
- Check retention settings per table. Set interactive retention to what analysts actually query, and configure archive retention for compliance needs
- Count your active analytics rules and their incident volume. If more than half generate incidents nobody investigates, disable them and focus on high-confidence detections
- If daily ingestion exceeds 100 GB, model the savings from a commitment tier using the cost workbook
- Confirm workspace architecture. If you run multiple workspaces without a regulatory or tenancy reason, consolidate
Run this checklist quarterly. Ingestion patterns change as workloads scale, new connectors get enabled, and analytics rules evolve. A workspace that was cost-efficient six months ago may not be today.
Related: Azure Sentinel: What Cloud-Native SIEM Means (foundational architecture) · Your Board Is Asking About NIS2 (regulatory drivers for retention) · Why Your Azure Bill Is High Even When Right-Sized (log ingestion as a cost centre)
Need help with your Azure security posture?
We help enterprises design and tune Azure security controls: WAF policies, Sentinel ingestion, Defender for Cloud, identity governance, and NIS2/DORA readiness.