The DNS Problems That Break Your Private Link Connectivity
In this article
- Private DNS Zones Not Linked to the Right VNets
- Split-Brain DNS Between On-Premises and Azure
- Hub-Spoke DNS Forwarding Gaps
- Custom DNS Servers That Don’t Forward to 168.63.129.16
- Stale DNS Records After Resource Recreation
- Conditional Forwarder Misconfiguration
- Terraform and IaC Gotchas
- A Diagnostic Checklist
- Getting DNS Right From the Start
Private Link itself is not complicated. You create a private endpoint, it gets an IP in your subnet, and the PaaS service is reachable over the VNet. We covered the fundamentals in our original Private Link article.
The part that breaks things is DNS. Every private endpoint needs a DNS record that maps the service’s public FQDN to the private IP. If that resolution doesn’t work from every location that needs it (spokes, on-premises, other subscriptions), the client falls back to the public IP or fails entirely.
If you are troubleshooting a private endpoint that “should work but doesn’t,” the answer is almost certainly somewhere in this list.
Private DNS Zones Not Linked to the Right VNets
Azure Private DNS zones only resolve for VNets that have a virtual network link to the zone. If your private endpoint’s A record lives in privatelink.blob.core.windows.net and that zone is linked to the hub VNet but not to Spoke B, VMs in Spoke B will resolve the public IP instead.
Linking the zone to every VNet that needs to resolve it is mechanically simple. The reason it gets missed is organisational: the team deploying private endpoints is not the team managing DNS zones. Platform teams own the zones in the connectivity subscription; application teams deploy endpoints in spoke subscriptions. Without an automated process to link zones during spoke provisioning, links get forgotten.
Spoke provisioning should include linking all relevant Private DNS zones. Microsoft maintains the full list of private DNS zone names per service, and at last count it had over 100 entries.
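In Terraform, a zone link is a single resource, which is what makes automating it during spoke provisioning cheap. A sketch using the azurerm provider; the resource group, VNet reference, and link name are illustrative:

```hcl
# Link the blob Private DNS zone to a spoke VNet so workloads there
# resolve private endpoint A records. Names and IDs are illustrative.
resource "azurerm_private_dns_zone_virtual_network_link" "spoke_b_blob" {
  name                  = "link-spoke-b"
  resource_group_name   = "rg-connectivity-dns"  # platform-owned RG holding the zone
  private_dns_zone_name = "privatelink.blob.core.windows.net"
  virtual_network_id    = azurerm_virtual_network.spoke_b.id
  registration_enabled  = false  # auto-registration is never wanted on privatelink zones
}
```

In practice this resource is usually wrapped in a for_each over the full list of zones the platform consumes, so every new spoke gets every link.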
Split-Brain DNS Between On-Premises and Azure
On-premises clients need to resolve privatelink.* FQDNs to private IPs. Their DNS servers don’t know about Azure Private DNS zones unless you configure conditional forwarding.
A VM in Azure resolves storageaccount.blob.core.windows.net to 10.1.2.3 (the private endpoint IP). A server on-premises resolves the same FQDN to the public IP. Both sides think they are talking to the same service. Only the Azure VM uses the private path.
This fails silently while public access is enabled. The on-premises server connects over the internet and everything works, just not privately. The moment you disable public access, on-premises connectivity breaks.
On-premises DNS needs a conditional forwarder for every privatelink.* zone you use, pointing to your Azure DNS Private Resolver inbound endpoint or your hub forwarder VMs. Miss even one zone and that service type stops resolving privately from on-premises.
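The Azure-side target for those on-premises forwarders can be sketched in Terraform. A minimal sketch, assuming a hub VNet with a subnet delegated to Microsoft.Network/dnsResolvers; all names are illustrative:

```hcl
# Azure DNS Private Resolver in the hub VNet. Its inbound endpoint IP is
# what on-premises conditional forwarders point at.
resource "azurerm_private_dns_resolver" "hub" {
  name                = "dnspr-hub"
  resource_group_name = "rg-connectivity-dns"
  location            = "westeurope"
  virtual_network_id  = azurerm_virtual_network.hub.id
}

resource "azurerm_private_dns_resolver_inbound_endpoint" "hub" {
  name                    = "inbound"
  private_dns_resolver_id = azurerm_private_dns_resolver.hub.id
  location                = "westeurope"

  ip_configurations {
    private_ip_allocation_method = "Dynamic"
    subnet_id                    = azurerm_subnet.dns_inbound.id  # delegated subnet
  }
}
```

Queries arriving at the inbound endpoint resolve through Azure DNS with the hub's zone links, which is exactly the behaviour on-premises needs.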
Hub-Spoke DNS Forwarding Gaps
In a standard hub-spoke architecture, spoke VNets use the hub as their DNS server (Azure Firewall DNS proxy, DNS Private Resolver, or legacy forwarder VMs). Private DNS zones are linked to the hub VNet. Resolution from spokes works because queries go to the hub, which resolves through Azure DNS with the zone links.
The zone link matters for the VNet where DNS resolution actually happens, not where the private endpoint sits. If the hub is the intermediary and the zone is linked to the hub, cross-spoke resolution works fine.
But if someone changes a spoke’s DNS settings to use 168.63.129.16 directly (a change we often see made during troubleshooting and never reverted), that spoke loses Private DNS zone resolution because the zone isn’t linked to the spoke VNet.
A related gap: Azure Firewall’s DNS proxy. If the firewall’s upstream DNS is a custom server that doesn’t forward privatelink.* zones to 168.63.129.16, you get public IPs back.
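The safe firewall configuration is easy to express in Terraform. A sketch of the relevant policy fragment; the policy name and resource group are illustrative:

```hcl
# Azure Firewall DNS proxy: spokes use the firewall's private IP as their
# DNS server, and the firewall forwards upstream on their behalf.
resource "azurerm_firewall_policy" "hub" {
  name                = "fwpol-hub"
  resource_group_name = "rg-connectivity"
  location            = "westeurope"

  dns {
    proxy_enabled = true
    # Leaving `servers` unset keeps Azure DNS (168.63.129.16) as the
    # upstream. Pointing it at a custom server that does not forward
    # privatelink.* zones back to Azure DNS reintroduces the
    # public-IP problem described above.
  }
}
```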
Custom DNS Servers That Don’t Forward to 168.63.129.16
Organisations running Active Directory DNS servers on Azure VMs for domain-joined workloads hit this one. The VNet DNS points to these AD servers. They handle corp.contoso.com perfectly, but when a workload queries storageaccount.blob.core.windows.net, the AD server resolves through root hints or external forwarders, bypassing Azure DNS entirely.
The CNAME chain for a private endpoint:
storageaccount.blob.core.windows.net
→ storageaccount.privatelink.blob.core.windows.net
→ 10.1.2.3 (from Azure Private DNS zone)
The first CNAME is in public DNS. Any resolver sees it. The second resolution (the A record in the privatelink.* zone) only works through Azure DNS at 168.63.129.16, from a VNet with the zone linked. If the AD server follows the CNAME and queries the internet instead, it gets NXDOMAIN or the public IP.
Configure conditional forwarders on the AD DNS servers for each privatelink.* zone, pointing to 168.63.129.16. The AD servers continue to handle their own zones; everything else gets handed back to Azure DNS.
Stale DNS Records After Resource Recreation
Delete a storage account and recreate it with the same name, and the old A record in the Private DNS zone may persist pointing to a dead IP. Or the new private endpoint creates a second A record, and DNS round-robins between the stale and current IPs.
The symptom is intermittent connectivity that looks like a networking issue. Teams spend hours on NSGs, route tables, and firewall rules before someone checks the DNS records.
This bites hardest in IaC environments with destroy-and-recreate patterns. Make sure the private endpoint and its DNS zone group are destroyed before the parent resource and recreated after it, and check for orphaned A records as part of resource decommissioning.
Conditional Forwarder Misconfiguration
The zone you need to forward is the privatelink.* variant, not the base zone. We see this regularly: someone creates a conditional forwarder for blob.core.windows.net instead of privatelink.blob.core.windows.net, breaking resolution for all of Azure Blob Storage. Or they forward core.windows.net, which is even broader.
What works is one conditional forwarder per privatelink.* zone, each forwarding to your Azure DNS resolver and scoped exactly to that zone, nothing broader.
Watch out for wildcard forwarders too. Windows DNS Server doesn’t support wildcards in conditional forwarder zone names the way you might expect. The zone name must match exactly.
A timing issue that catches teams: when you add a new PaaS service type (say, Azure SQL for the first time), you need privatelink.database.windows.net in your forwarder list. The Azure side works immediately through zone linking. On-premises breaks until someone adds the forwarder.
Terraform and IaC Gotchas
IaC adds its own failure modes.
The azurerm_private_endpoint resource supports a private_dns_zone_group block that automatically manages DNS records. Omitting it and managing records separately creates a race condition: the endpoint deploys, but the DNS record may not exist when the first health check runs.
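A minimal sketch of an endpoint with the zone group managed inline, so the A record is created atomically with the endpoint; the storage account, zone, and subnet references are assumptions:

```hcl
# Private endpoint with an inline DNS zone group: azurerm creates and
# deletes the A record in the privatelink zone alongside the endpoint,
# avoiding the separately-managed-record race condition.
resource "azurerm_private_endpoint" "blob" {
  name                = "pe-storage-blob"
  location            = azurerm_resource_group.app.location
  resource_group_name = azurerm_resource_group.app.name
  subnet_id           = azurerm_subnet.endpoints.id

  private_service_connection {
    name                           = "psc-storage-blob"
    private_connection_resource_id = azurerm_storage_account.app.id
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }

  private_dns_zone_group {
    name                 = "default"
    private_dns_zone_ids = [azurerm_private_dns_zone.blob.id]  # illustrative reference
  }
}
```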
If Private DNS zones live in the same Terraform state as application resources, destroying the app destroys the zone and breaks every other private endpoint using it. Zones belong in platform infrastructure state, not application state.
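One way to enforce that separation is to read the platform-owned zone with a data source instead of declaring it; destroying the application state then leaves the zone untouched. A sketch, assuming the platform team publishes the zone's resource group and an aliased provider for the connectivity subscription:

```hcl
# Read the zone from platform infrastructure rather than owning it here.
data "azurerm_private_dns_zone" "blob" {
  provider            = azurerm.connectivity  # aliased provider, illustrative
  name                = "privatelink.blob.core.windows.net"
  resource_group_name = "rg-connectivity-dns"
}

# Application code then references data.azurerm_private_dns_zone.blob.id
# in its private_dns_zone_group, with no zone in application state.
```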
A common confusion: auto-registration creates A records for VMs in linked VNets, but has nothing to do with private endpoints. Enabling it on privatelink.* zones just creates spurious VM records.
Watch the link cap too. A Private DNS zone supports up to 1,000 VNet links, and environments with hundreds of spokes do hit that ceiling.
A Diagnostic Checklist
When private endpoint connectivity fails, run through this sequence:
1. From the client, resolve the FQDN: nslookup storageaccount.blob.core.windows.net. Does it return a private IP or a public IP?
2. Follow the CNAME chain. Is there a privatelink.* CNAME in the response?
3. If the CNAME exists but resolves to a public IP, the Private DNS zone is either not linked, or the DNS server is not forwarding to Azure DNS.
4. Check which DNS server the client is using. Is it Azure DNS, a custom DNS server, or Azure Firewall’s DNS proxy?
5. If it is a custom DNS server, check its conditional forwarders for the relevant privatelink.* zone.
6. Check the Private DNS zone in the Azure portal. Does the A record exist? Does it have the correct IP?
7. Check the zone’s virtual network links. Is the relevant VNet linked?
Most problems resolve at step 3 or step 5.
Getting DNS Right From the Start
At scale, the architecture looks the same in every healthy environment we audit. Private DNS zones live in the connectivity subscription, linked to every spoke VNet through an automated process (Azure Policy, a subscription vending pipeline, or a Terraform module that runs on every new spoke). Azure DNS Private Resolver sits in front as the forwarding layer for both Azure-side custom resolvers and on-premises servers. On-premises conditional forwarders point at the resolver’s inbound endpoint, one per privatelink.* zone you actually consume. Zone links and forwarder lists get audited quarterly, because both drift the moment someone deploys a new PaaS service type without updating the platform side.
For the broader architecture context, see our articles on Private Link fundamentals and Azure Landing Zones in 2026, which covers the platform infrastructure patterns that make this manageable.
DNS is the part of Private Link nobody puts on the architecture diagram, and the part that decides whether the architecture diagram is accurate.