Azure DNS Private Resolver: The End of Custom DNS VMs in Your Hub
In this article
- The Hybrid DNS Problem
- Why the Forwarder VMs Were a Liability
- What DNS Private Resolver Does
- Inbound Endpoints: On-Premises Resolving Azure
- Outbound Endpoints: Azure Resolving On-Premises
- Architecture Patterns
- The Migration: Replacing Your DNS VMs
- The DNS Resolution Chain with Private Endpoints
- Cost
- Time to Retire the VMs

Every enterprise hub-spoke architecture in Azure has DNS forwarder VMs sitting in the hub. Two Windows Server VMs running the DNS role. Or a pair of Linux boxes running BIND or CoreDNS. They forward queries between on-premises Active Directory DNS and Azure Private DNS zones. They are always there, and they are always a liability.
Microsoft recently announced general availability of Azure DNS Private Resolver, a fully managed service that replaces those VMs. After migrating several hybrid environments to it, we can say this: if you have DNS forwarder VMs in your hub, start planning their retirement.
The Hybrid DNS Problem
The root issue is that on-premises and Azure use different DNS systems, and they need to talk to each other.
On-premises, you have Active Directory DNS. Your domain controllers serve corp.contoso.com and all the internal zones your organisation has accumulated over the years.
In Azure, you have Azure DNS - the built-in resolver at 168.63.129.16 that every VM uses by default. When you deploy Private Endpoints, the A records land in Azure Private DNS zones like privatelink.blob.core.windows.net. Azure DNS resolves those zones natively - as long as the Private DNS zone is linked to the VNet.
The problem: on-premises DNS doesn’t know about Azure Private DNS zones. Azure DNS doesn’t know about corp.contoso.com. Neither side can resolve the other’s records.
The traditional fix: deploy DNS forwarder VMs in the hub VNet. Configure them as conditional forwarders - queries for corp.contoso.com go to on-premises domain controllers over ExpressRoute or VPN, and queries for Azure Private DNS zones go to 168.63.129.16. Then set every VNet’s DNS server setting to point at these forwarder VMs instead of the Azure default.
It works. But you’re running infrastructure that does nothing except forward DNS packets, and if it goes down, every VM in every spoke loses name resolution.
Azure docs: Private DNS zone overview · Private endpoint DNS configuration
Why the Forwarder VMs Were a Liability
We’ve seen the same failure modes across every client running this pattern:
Single point of failure. Even with two VMs behind a load balancer, DNS failover isn’t instant. We’ve seen 30-60 second resolution gaps during failover - long enough for health probes to fail and applications to throw errors.
Patching risk. Every resource in every spoke is one reboot away from losing name resolution. You stagger the patching, you test the failover, and you still hold your breath.
No auto-scaling. DNS query volume scales with your Azure footprint. Nobody sizes DNS VMs for peak load. They get sized once and forgotten.
Monitoring gaps. Most teams monitor whether the VM is running. Few monitor whether DNS resolution is actually working. The VM can be healthy while the DNS service is hung or the conditional forwarder target is unreachable.
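The kind of check most teams skip is easy to script: probe actual resolution through the forwarder rather than VM health. A minimal sketch - the forwarder IP and record name are placeholders, substitute your own:

```shell
# Illustrative resolution probe against a DNS forwarder VM. The forwarder IP
# (10.0.4.10) and the record name are placeholders - substitute your own.
# dig exits non-zero when the server does not answer at all, so this catches
# a hung DNS service on an otherwise "healthy" VM.
if dig +time=2 +tries=1 @10.0.4.10 dc1.corp.contoso.com A >/dev/null; then
  echo "forwarder answering queries"
else
  echo "forwarder NOT answering" >&2
  exit 1
fi
```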
What DNS Private Resolver Does
DNS Private Resolver is a managed service that you deploy into a VNet. It has two types of endpoints:
Inbound endpoints receive DNS queries from outside Azure - typically from on-premises networks. You get a private IP address in a dedicated subnet, and on-premises DNS servers forward queries to that IP. The resolver handles resolution against Azure Private DNS zones linked to the VNet.
Outbound endpoints send DNS queries from Azure to external resolvers - typically on-premises DNS servers. You attach DNS forwarding rulesets to the outbound endpoint, defining which domains get forwarded where.
The combination replaces both directions of the conditional forwarding chain that the forwarder VMs provided.
On-premises DNS                        Azure DNS Private Resolver
┌──────────────────┐                   ┌──────────────────────────┐
│ corp.contoso.com │◄── outbound ──────│ Outbound endpoint        │
│ (AD DNS)         │    forwarding     │ Ruleset:                 │
│                  │                   │  corp.contoso.com → DC   │
│                  │                   │                          │
│ Conditional fwd: │── inbound ───────►│ Inbound endpoint         │
│ privatelink.* →  │    queries        │ Resolves via Azure DNS   │
│ resolver IP      │                   │  + Private DNS zones     │
└──────────────────┘                   └──────────────────────────┘
Azure docs: DNS Private Resolver overview · Inbound endpoints
Inbound Endpoints: On-Premises Resolving Azure
When on-premises clients need to resolve Azure Private DNS zones - which they absolutely do once you’re using Private Endpoints - the inbound endpoint is the target.
You create an inbound endpoint in a dedicated /28 subnet (minimum size). The resolver gets a private IP in that subnet, say 10.0.5.4. On your on-premises DNS servers, you configure conditional forwarders:
privatelink.blob.core.windows.net → 10.0.5.4
privatelink.database.windows.net → 10.0.5.4
privatelink.vaultcore.azure.net → 10.0.5.4
privatelink.azurewebsites.net → 10.0.5.4
When an on-premises client queries mystorageaccount.blob.core.windows.net, the on-premises DNS server follows the CNAME to mystorageaccount.privatelink.blob.core.windows.net, hits the conditional forwarder, sends the query to 10.0.5.4, and the resolver returns the private IP from the linked Private DNS zone. The exact same flow the forwarder VMs handled - except now it’s a managed service with built-in HA across availability zones.
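Provisioning the resolver and inbound endpoint with the Azure CLI looks roughly like this. It's a sketch: the resource names and address ranges are illustrative, and the commands come from the `az dns-resolver` extension, so check `--help` for the current flags:

```shell
# Hypothetical names throughout (rg-hub-network, vnet-hub, dnspr-hub);
# requires the dns-resolver CLI extension: az extension add --name dns-resolver
RG=rg-hub-network
VNET=vnet-hub
LOC=westeurope

# The endpoint subnet must be dedicated (/28 minimum) and delegated
# to Microsoft.Network/dnsResolvers
az network vnet subnet create \
  --resource-group $RG --vnet-name $VNET --name snet-dns-inbound \
  --address-prefixes 10.0.5.0/28 \
  --delegations Microsoft.Network/dnsResolvers

VNET_ID=$(az network vnet show -g $RG -n $VNET --query id -o tsv)
SUBNET_ID=$(az network vnet subnet show -g $RG --vnet-name $VNET \
  -n snet-dns-inbound --query id -o tsv)

az dns-resolver create \
  --name dnspr-hub --resource-group $RG --location $LOC --id $VNET_ID

# Dynamic allocation hands out a free IP from the subnet (e.g. 10.0.5.4)
az dns-resolver inbound-endpoint create \
  --dns-resolver-name dnspr-hub --resource-group $RG --location $LOC \
  --name inbound-ep \
  --ip-configurations '[{"id":"'"$SUBNET_ID"'","private-ip-allocation-method":"Dynamic"}]'
```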
Outbound Endpoints: Azure Resolving On-Premises
Outbound endpoints handle the other direction. Azure VMs that need to resolve on-premises domains get their queries forwarded to on-premises DNS servers.
You create an outbound endpoint in another dedicated /28 subnet. Then you create a DNS forwarding ruleset and attach it to the outbound endpoint. The ruleset contains rules like:
Domain: corp.contoso.com
Target DNS servers: 10.100.1.10, 10.100.1.11 (on-premises DCs)
Domain: legacy.internal
Target DNS servers: 10.100.2.5
The ruleset can be linked to multiple VNets - one ruleset can serve your entire hub-spoke topology. When a VM in a linked VNet queries server1.corp.contoso.com, the resolver forwards it through the outbound endpoint to the on-premises domain controllers. Everything else falls through to Azure DNS as normal.
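The outbound side follows the same pattern. A sketch with illustrative names, assuming a resolver named dnspr-hub already exists and the `az dns-resolver` extension is installed:

```shell
# Hypothetical names; assumes an existing resolver called dnspr-hub
RG=rg-hub-network
LOC=westeurope

az network vnet subnet create \
  --resource-group $RG --vnet-name vnet-hub --name snet-dns-outbound \
  --address-prefixes 10.0.6.0/28 \
  --delegations Microsoft.Network/dnsResolvers

SUBNET_ID=$(az network vnet subnet show -g $RG --vnet-name vnet-hub \
  -n snet-dns-outbound --query id -o tsv)

az dns-resolver outbound-endpoint create \
  --dns-resolver-name dnspr-hub --resource-group $RG --location $LOC \
  --name outbound-ep --id $SUBNET_ID

OUT_ID=$(az dns-resolver outbound-endpoint show \
  --dns-resolver-name dnspr-hub -g $RG -n outbound-ep --query id -o tsv)

az dns-resolver forwarding-ruleset create \
  --name ruleset-onprem --resource-group $RG --location $LOC \
  --outbound-endpoints '[{"id":"'"$OUT_ID"'"}]'

# Note the trailing dot - the rule takes a fully qualified domain name
az dns-resolver forwarding-rule create \
  --ruleset-name ruleset-onprem --resource-group $RG --name corp-contoso \
  --domain-name "corp.contoso.com." --forwarding-rule-state Enabled \
  --target-dns-servers '[{"ip-address":"10.100.1.10","port":53},{"ip-address":"10.100.1.11","port":53}]'

# One link per VNet that should use the ruleset
az dns-resolver vnet-link create \
  --ruleset-name ruleset-onprem --resource-group $RG --name link-spoke-app \
  --id $(az network vnet show -g rg-spokes -n vnet-spoke-app --query id -o tsv)
```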
Azure docs: Outbound endpoints and rulesets · DNS forwarding rulesets
Architecture Patterns
Single-region hub-spoke. Deploy one resolver in the hub VNet with one inbound and one outbound endpoint. Link the forwarding ruleset to all spoke VNets. On-premises conditional forwarders point at the inbound endpoint IP. This covers 90% of deployments.
Multi-region. Deploy a resolver in each regional hub. Each gets its own inbound endpoint IP. On-premises DNS servers need conditional forwarders to both regional IPs, with the local region preferred. Private DNS zones must be linked to all VNets in all regions.
Multi-hub with Azure Virtual WAN. The resolver deploys into a spoke VNet connected to the vWAN hub, not into the vWAN hub itself (you can’t deploy resources directly into vWAN hubs). Route DNS traffic to the resolver spoke through the vWAN routing tables.
The Migration: Replacing Your DNS VMs
The migration path isn’t complicated, but it needs to be sequenced carefully. DNS failures are total failures - nothing works when name resolution breaks.
- Deploy the resolver alongside your existing forwarder VMs. Create both endpoints and configure the forwarding ruleset with your on-premises domains
- Test from a single spoke. Change one non-production spoke’s VNet DNS settings to Azure default (168.63.129.16) and link the forwarding ruleset to that VNet. Validate on-premises resolution works
- Update on-premises conditional forwarders. Add the resolver’s inbound endpoint IP as a secondary target alongside your existing forwarder VM IPs
- Migrate spokes incrementally. Move spoke VNets one at a time - update VNet DNS settings and link the forwarding ruleset
- Cut over on-premises. Update conditional forwarders to use only the resolver’s inbound IP
- Decommission the VMs. Wait a week. Confirm everything is stable. Shut them down
The critical detail: when VNets use Azure default DNS (168.63.129.16), Azure DNS resolves linked Private DNS zones natively. The outbound endpoint is only for forwarding to external DNS servers like on-premises DCs.
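Per spoke, the migration boils down to two CLI calls. A sketch with illustrative names - removing the custom DNS servers is what reverts a VNet to the Azure default:

```shell
# Hypothetical names; migrating one spoke VNet
SPOKE_RG=rg-spokes
SPOKE=vnet-spoke-app

# Removing the custom DNS servers reverts the VNet to Azure-provided DNS
# (168.63.129.16). Running VMs pick the change up on DHCP lease renewal
# or reboot, so plan the timing.
az network vnet update \
  --resource-group $SPOKE_RG --name $SPOKE \
  --remove dhcpOptions.dnsServers

# Link the forwarding ruleset so on-premises domains still resolve
az dns-resolver vnet-link create \
  --ruleset-name ruleset-onprem --resource-group rg-hub-network \
  --name link-$SPOKE \
  --id $(az network vnet show -g $SPOKE_RG -n $SPOKE --query id -o tsv)
```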
The DNS Resolution Chain with Private Endpoints
After migration, the full resolution chain for a Private Endpoint looks like this:
From Azure VMs:
VM queries mystorageaccount.blob.core.windows.net
→ Azure DNS (168.63.129.16)
→ CNAME to mystorageaccount.privatelink.blob.core.windows.net
→ Private DNS zone (linked to VNet) returns 10.1.2.5
→ VM connects to private IP
From on-premises:
Client queries mystorageaccount.blob.core.windows.net
→ On-premises DNS
→ CNAME to mystorageaccount.privatelink.blob.core.windows.net
→ Conditional forwarder sends to DNS Private Resolver inbound IP (10.0.5.4)
→ Resolver queries Azure DNS + linked Private DNS zone
→ Returns 10.1.2.5 to on-premises client
→ Client connects via ExpressRoute/VPN to private IP
The resolver sits in the exact same position the forwarder VMs did. The difference is you don’t manage the infrastructure underneath it.
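Both directions are quick to verify with `dig` once the resolver is in place. The IPs and names below are the illustrative ones from this article:

```shell
# From on-premises: query the inbound endpoint directly. A Private
# Endpoint-backed name should come back with a private IP (10.x),
# not the public frontend.
dig +short @10.0.5.4 mystorageaccount.blob.core.windows.net

# From an Azure VM in a migrated spoke (default DNS): an on-premises
# name should resolve via the outbound endpoint ruleset.
dig +short server1.corp.contoso.com
```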
Cost
The resolver isn’t free, and the pricing caught some teams off guard. Each inbound or outbound endpoint costs roughly $180/month. A typical deployment with one inbound and one outbound endpoint runs about $360/month. DNS queries add a small per-query cost on top, but it’s negligible for most workloads.
Compare that to two DNS forwarder VMs for HA - Standard_B2ms at around $60/month each, plus OS licensing, plus monitoring, plus the operational cost of patching and troubleshooting. The resolver is more expensive in pure compute terms, but cheaper when you account for the operational burden you’re eliminating. For multi-region deployments, multiply by the number of regions.
Azure docs: DNS Private Resolver pricing
Time to Retire the VMs
If you’re running a hybrid Azure environment with Private Endpoints and on-premises Active Directory DNS, this is a no-brainer upgrade. The forwarder VMs were always a workaround for a gap in Azure’s DNS capabilities. That gap is now closed.
The VMs were a liability. Every patching cycle, every failover event, every time someone changed a conditional forwarder on one VM but not the other - those were incidents waiting to happen. A managed service with built-in zone redundancy and no OS to maintain eliminates an entire class of operational risk.
Don’t overthink the migration. Deploy the resolver alongside your existing VMs, test one spoke, roll forward, and decommission. The whole process takes a week if you’re cautious, a day if you’re not. The forwarder VMs served their purpose. It’s time to let them go.