
Shared vs Separate Azure Hubs for Regulated Workloads Under NIS2 and DORA

By GenioCT | 10 min read
Azure · Architecture · Security · Enterprise


Separate production and non-production hubs simplify risk containment, resilience testing, and audit evidence for NIS2 and DORA environments.

Every Azure landing zone deployment eventually reaches this fork: does production traffic share a hub with non-production, or do you run separate hubs? The answer matters less in unregulated environments where cost optimisation wins. It matters a great deal when NIS2 and DORA are in scope.

We see this question regularly during landing zone audits. The architecture team usually has an opinion. The security team has a different one. The finance team wants to know why they would pay for two firewalls when one works fine. The compliance team is not sure what the regulations actually require. Everyone is technically correct from their own vantage point, and the decision stalls.

Both topologies are defensible. A shared hub is cheaper, simpler to operate, and reduces the number of firewall instances, gateways, and route tables you manage. A separate hub per environment costs more but draws sharper isolation boundaries between production and everything else. The question is which trade-off aligns better with your regulatory obligations.

For environments subject to NIS2 or DORA, we recommend separate hubs as the starting default. The regulations do not mandate a specific Azure topology, but separation simplifies risk containment, resilience testing, and audit evidence. Here is the reasoning.

What NIS2 Actually Cares About

NIS2 does not prescribe Azure network architecture. It prescribes outcomes: “appropriate and proportionate technical, operational and organisational measures” to manage risks to network and information systems (Article 21). The required measures include risk analysis, incident handling, business continuity, supply chain security, and “security in network and information systems acquisition, development and maintenance.” These obligations define what organisations must achieve, not how.

The word “proportionate” does real work here. Regulators care about whether your controls match your risk profile and whether you can demonstrate that they do. A shared hub that connects production and non-production workloads through the same Azure Firewall instance, the same route tables, and the same management plane increases the blast radius of several failure modes: a misconfigured firewall rule that exposes production traffic to a dev spoke, a routing table change in non-prod that inadvertently affects prod paths, an identity compromise on the shared hub subscription that grants access to both environments.

None of these scenarios are theoretical. We have seen each one in production environments.

Hub separation is one design decision inside that broader risk-management framework. NIS2 Article 21 also covers incident handling, business continuity, supply chain security, cryptography, and access control. The hub question matters, but it is not the whole compliance story.

The argument for separation is not that NIS2 requires separate hubs. It is that regulators care less about the exact topology and more about whether a failure in one environment can cascade into another. Separate hubs make that cascade harder. Proving that a shared hub prevents cascade requires significantly more documentation, testing, and continuous monitoring.

In Belgium, NIS2 implementation and supervision sit within a national context shaped by the Centre for Cybersecurity Belgium (CCB). For Belgian entities, the practical question is not whether Brussels prescribes an Azure hub count, but whether your design is proportionate, resilient, and demonstrable under supervision. The “appropriate and proportionate” duty translates directly into architecture decisions like this one.

What DORA Changes

DORA goes further than NIS2 on environment separation. The RTS on ICT risk management framework (CDR 2024/1774, Article 8) requires “the separation of ICT production environments from the development, testing, and other non-production environments” and that “development and testing” are conducted “in environments which are separated from the production environment.” The separation must “consider all of the components of the environment, including accounts, data or connections.” Testing in production is permitted only in exceptional circumstances, with explicit approval, clear justification, and time limitations. Article 25 of DORA itself emphasises resilience testing of ICT systems under realistic conditions.

A shared hub introduces a single shared network dependency between production and non-production. That dependency weakens several claims that DORA auditors will want to validate. Can you perform resilience testing on non-production systems without any risk of affecting production network paths? Can you simulate a hub failure in non-production without risking production connectivity? Can you demonstrate that the blast radius of a non-production incident is fully contained?

With a shared hub, the honest answer to each question is “yes, but only if our firewall policies, route isolation, and RBAC are configured perfectly and stay that way over time.” With separate hubs, the answer is “yes, because these are separate hub environments with independent control paths.” The second answer is easier to evidence to internal control, risk, and supervisory stakeholders and harder to challenge over time.

Again, DORA does not require separate hubs by name. But separate hubs make the environment separation and resilience testing requirements considerably easier to prove. For financial services organisations, that difference between “technically compliant if everything works perfectly” and “structurally compliant by design” matters.

When Shared Hubs Still Work

Separate hubs are our recommended default, not a universal requirement. Shared hubs remain viable in specific circumstances, and pretending otherwise would be dishonest.

A shared hub can satisfy regulatory obligations if you implement all of the following:

  • strict route isolation, so non-production spokes genuinely cannot route to production spokes
  • separate firewall policy rule collection groups per environment, with independent change approval
  • separate subscriptions for workloads, with strong RBAC segregation on the hub subscription itself
  • independent monitoring and alerting per environment
  • documented evidence that these controls are tested regularly
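One way to make the "non-production cannot route to production" claim testable rather than aspirational is to treat exported route tables as data and check them mechanically. The sketch below models that check in plain Python; the spoke names, prefixes, and address plan are hypothetical, and a real audit would pull effective routes via the Azure SDK or CLI rather than hard-code them.

```python
# Sketch: verify that no non-production spoke carries an explicit route into
# production address space. Route tables are modelled as plain data here;
# all names and prefixes are illustrative assumptions.
from ipaddress import ip_network

PROD_PREFIXES = [ip_network("10.10.0.0/16")]   # assumed production address space
NONPROD_ROUTES = {
    "spoke-dev":  [ip_network("10.20.1.0/24"), ip_network("0.0.0.0/0")],
    "spoke-test": [ip_network("10.20.2.0/24")],
}

def routes_into_prod(routes):
    """Return routes overlapping production space. The default route (/0) is
    excluded: it points at the non-prod firewall, which should drop the traffic."""
    return [r for r in routes
            if r.prefixlen > 0 and any(r.overlaps(p) for p in PROD_PREFIXES)]

violations = {spoke: bad for spoke, routes in NONPROD_ROUTES.items()
              if (bad := routes_into_prod(routes))}
print(violations)  # an empty dict means no explicit non-prod -> prod routes
```

Run quarterly against exported route tables, a check like this turns "documented evidence that these controls are tested regularly" into an artifact you can hand an auditor.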

That list sounds manageable on paper. In practice, the operational overhead is significant. Each audit cycle requires you to prove that these controls still hold. Firewall rules accumulate, as we described in our landing zone audit findings. RBAC assignments drift. Route table changes happen under pressure during incidents. The continuous burden of proving that a shared hub maintains proper isolation often exceeds the infrastructure cost savings.

Shared hubs also work for small regulated footprints where non-production workloads genuinely cannot reach production-regulated spokes and the network is simple enough to verify by inspection. They can also serve as a transitional state during migration, with a planned split into separate hubs once the initial move to Azure is complete.

The honest question to ask: will your team maintain this level of isolation discipline for years, across personnel changes, under incident pressure, and through every audit cycle?

The Pattern We Recommend

The pattern we recommend for NIS2 and DORA environments separates the network control plane by environment.

Production hub: dedicated firewall instance (Azure Firewall Premium for IDPS and TLS inspection on regulated traffic), dedicated route tables, dedicated ExpressRoute or VPN gateway if on-premises connectivity is required, dedicated hub subscription with its own RBAC boundary. Production spokes peer only to the production hub. The management plane for this hub is accessible only to production network administrators.

Non-production hub: separate firewall instance (Standard tier is usually sufficient), separate route tables, separate or shared gateway depending on on-premises connectivity needs for dev/test, separate hub subscription. Dev, test, staging, and sandbox spokes peer here. Experimentation happens in this boundary. Firewall policy changes go through a lighter approval process because the blast radius is limited to non-production.
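The peering rule this pattern implies (every spoke peers only with its own environment's hub, and never across the boundary) can be expressed as a one-line invariant over the peering list. A minimal sketch, with illustrative VNet names and a naming convention assumed purely for the example:

```python
# Sketch of the two-hub peering model as data. The invariant: no VNet peering
# crosses the prod / non-prod boundary. Resource names and the suffix-based
# naming convention are illustrative assumptions, not a real deployment.
PEERINGS = [
    ("hub-prod",    "spoke-payments-prod"),
    ("hub-prod",    "spoke-core-banking-prod"),
    ("hub-nonprod", "spoke-payments-dev"),
    ("hub-nonprod", "spoke-payments-test"),
]

def environment(vnet: str) -> str:
    # Assumed convention: production resources carry a "-prod" suffix.
    return "prod" if vnet.endswith("-prod") else "nonprod"

cross_env = [(a, b) for a, b in PEERINGS if environment(a) != environment(b)]
print(cross_env)  # an empty list means no peering crosses the boundary
```

The same invariant, evaluated against the actual peering list exported from Azure, is the kind of one-line evidence that makes the separate-hub story easy to defend in an audit.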

The benefits compound. Blast radius shrinks to one environment per hub. Architecture diagrams for auditors show clear environment separation without needing footnotes about logical isolation within a shared control plane. Resilience testing on non-production spokes cannot affect production network paths regardless of misconfiguration. Firewall policies evolve independently, so a new rule needed for a test workload does not require production change control. The management plane separation means a compromised non-prod admin account does not grant access to production networking.

Field observation: In regulated Azure environments, the hardest compliance discussions are rarely about encryption or identity. They are about whether production and non-production are truly isolated.

What to Separate Besides the Hub

The hub is the most visible separation point, but it is only one layer. True environment separation involves three layers: network isolation (separate hubs or routing domains), control plane isolation (subscriptions, RBAC, policy scopes), and operational isolation (deployment pipelines, admin access paths, approval chains). Hub separation addresses the first layer but works best when the other two are aligned. Organisations that separate hubs but share everything else end up with a compliance story that falls apart under scrutiny.

Subscriptions should be separated by environment. Production workloads in production subscriptions, non-production in their own. This is landing zone baseline guidance regardless of regulatory requirements.

Firewall policies need separate policy objects, not just separate rule collection groups within a shared policy. A shared policy means a change to the policy structure affects both environments. Separate policies provide independent lifecycle management.

RBAC and administrative access paths should ensure that non-production administrators cannot modify production resources. PIM-eligible roles scoped to specific subscriptions, not broad assignments at the management group level that span both environments.
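That scoping rule is also checkable if you export role assignments and management group membership: any non-reader assignment at a scope containing both prod and non-prod subscriptions is a finding. A sketch under assumed names (a real audit would enumerate assignments through Azure RBAC, not hard-coded data):

```python
# Sketch: flag role assignments whose scope is a management group spanning
# both production and non-production subscriptions. All identifiers are
# hypothetical; only the checking logic is the point.
MG_MEMBERS = {
    "/mg/platform": {"sub-prod-network", "sub-nonprod-network"},  # spans both
    "/mg/prod":     {"sub-prod-network"},
}
ASSIGNMENTS = [
    {"principal": "netops-prod", "role": "Network Contributor", "scope": "/mg/prod"},
    {"principal": "netops-all",  "role": "Network Contributor", "scope": "/mg/platform"},
    {"principal": "dev-lead",    "role": "Reader", "scope": "/subscriptions/sub-nonprod-network"},
]

def spans_both(scope: str) -> bool:
    subs = MG_MEMBERS.get(scope, set())
    return (any(s.startswith("sub-prod") for s in subs)
            and any(s.startswith("sub-nonprod") for s in subs))

flagged = [a["principal"] for a in ASSIGNMENTS
           if a["role"] != "Reader" and spans_both(a["scope"])]
print(flagged)  # -> ['netops-all']
```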

CI/CD approval chains should require different approvers for production deployments. A pipeline that can deploy to both prod and non-prod with the same approval gate is a separation gap that auditors will flag.
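The gate property is easy to encode as an invariant: the set of identities that can approve non-production deployments must be disjoint from the production approvers. A minimal sketch with hypothetical stage and approver names:

```python
# Sketch: a pipeline's stages and their approval gates as data. The rule
# encoded: production needs its own approver set, disjoint from non-prod.
# Stage and group names are illustrative assumptions.
STAGES = {
    "deploy-nonprod": {"approvers": {"dev-lead"}},
    "deploy-prod":    {"approvers": {"change-board", "platform-lead"}},
}

def separation_gap(stages) -> bool:
    """True if the same identity can approve both environments."""
    shared = stages["deploy-nonprod"]["approvers"] & stages["deploy-prod"]["approvers"]
    return bool(shared)

print(separation_gap(STAGES))  # -> False: no shared approval gate
```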

Logging workspaces may need separation for regulated workloads where non-production staff should not have access to production telemetry. A single Log Analytics workspace with workspace-level RBAC can work, but separate workspaces per environment remove the risk of accidental exposure.

Admin access paths deserve their own separation. Bastion hosts, jump boxes, PAW devices, and VPN/ExpressRoute management connectivity should not be shared across environments. A shared admin access path reintroduces the coupling that hub separation was meant to remove. If an administrator can reach both prod and non-prod networking from the same session, the management plane is not truly separated.

Route tables and peering rules should be environment-specific. No peering between production and non-production spokes. No route that allows traffic to cross the boundary.

One nuance: some services will still be shared centrally. Identity (Entra ID), source control, package repositories, and CI/CD foundations often serve both environments. That does not automatically invalidate the separation story, but these shared services must be designed so they do not collapse the prod/non-prod boundary. Shared identity is fine. Shared identity with the same admin accounts managing both prod and non-prod infrastructure is not.

The compliance story that holds up is the combination of all these layers, not just the hub separation on its own. As we covered in board-level NIS2 and DORA obligations, regulators look at the full picture.

The Real Trade-Off

Separate hubs cost more. As an order of magnitude, two Azure Firewall Premium instances run roughly EUR 2,600 per month instead of EUR 1,300 for one (actual costs vary by region, SKU, and traffic profile). Duplicate gateways add another EUR 200-400 per month depending on SKU. Additional routing, monitoring, and management overhead requires platform team time.

Against that cost, consider what the alternative demands. A shared hub under regulatory scrutiny requires continuous proof that isolation controls work: quarterly firewall rule reviews documented for auditors, RBAC attestation reports showing no cross-environment access, route table audits confirming no prod/non-prod path leakage, and incident response plans that account for shared infrastructure failure modes.

The cost of proving shared-hub isolation to auditors, cycle after cycle, often exceeds the infrastructure cost of running two hubs. The platform team time spent preparing evidence, responding to audit queries about shared infrastructure, and maintaining the discipline of perfect logical isolation is real and recurring. Two hubs eliminate most of those conversations.
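The trade-off can be put into rough numbers. The sketch below combines the order-of-magnitude infrastructure figures above with assumed audit-labour inputs; the Standard-tier firewall price, hours per quarter, and hourly rate are illustrative assumptions, not benchmarks, so treat the result as a worked example rather than a forecast.

```python
# Back-of-envelope monthly comparison: shared hub + evidence burden vs two
# hubs. Infrastructure figures are the order-of-magnitude numbers from the
# article; the labour inputs are illustrative assumptions.
FIREWALL_PREMIUM = 1_300   # EUR/month, one Premium instance (article figure)
FIREWALL_STANDARD = 900    # EUR/month, assumed non-prod Standard tier
EXTRA_GATEWAY = 300        # EUR/month, midpoint of the 200-400 range

shared_hub_infra = FIREWALL_PREMIUM
separate_hub_infra = FIREWALL_PREMIUM + FIREWALL_STANDARD + EXTRA_GATEWAY

# Assumed evidence burden for a shared hub: rule reviews, RBAC attestations,
# and route audits each quarter, at a blended platform-engineer rate.
AUDIT_HOURS_PER_QUARTER = 40   # assumption
HOURLY_RATE = 120              # EUR/hour, assumption
shared_hub_evidence = AUDIT_HOURS_PER_QUARTER * HOURLY_RATE / 3  # per month

monthly_delta = separate_hub_infra - (shared_hub_infra + shared_hub_evidence)
# Negative on these inputs: separate hubs come out cheaper once the
# recurring evidence labour is counted.
print(round(monthly_delta))  # -> -400
```

Vary the labour inputs to match your own audit cycle; the point is that the comparison flips well within plausible ranges, not at any extreme.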

For NIS2 and DORA environments, separate production and non-production hubs are usually the safest default. The regulations do not mandate separate hubs, but separation simplifies risk containment, resilience testing, and audit evidence. The incremental infrastructure cost buys you a compliance posture that is structurally sound rather than dependent on continuous operational perfection.

Typical engagement: For regulated Azure environments we review the network topology, environment separation model, firewall policy structure, identity boundaries, and compliance evidence readiness. The output is an architecture recommendation with a regulatory alignment assessment. Typical scope is 2-3 weeks.

Related:

  • Azure Landing Zones in 2026 and What Actually Matters Now (day-2 operational challenges)
  • What an Azure Landing Zone Audit Actually Finds (common findings across enterprise environments)
  • Your Board Is Asking About NIS2 (board-level obligations)
  • Azure Firewall: When Cloud-Native Network Security Finally Makes Sense (hub firewall architecture)

Looking for Azure architecture guidance?

We design and build Azure foundations that scale - landing zones, networking, identity, and governance tailored to your organisation.

Start with a Platform Health Check - results within 1 week.
Talk to an Azure architect

Start with a Platform Health Check

Not sure where to begin? A quick architecture review gives you a clear picture. No obligation.

  • Risk scorecard across identity, network, governance, and security
  • Top 10 issues ranked by impact and effort
  • 30-60-90 day roadmap with quick wins