When Azure Functions Can Replace Entra Application Proxy and When They Cannot

Microsoft Entra Application Proxy is still the right answer for many internal web apps, especially when you need broad protocol compatibility or built-in integration with legacy authentication flows. It supports KCD for Kerberos backends, IWA-adjacent publishing patterns, and native header-based SSO out of the box. PingAccess remains an adjacent option for organisations already invested in it, but it is no longer required for header SSO. Application Proxy remains the native, Microsoft-supported way to publish on-premises apps with Entra ID pre-authentication.
But for a narrower set of modern HTTP apps and APIs, there is another option: a cloud-native reverse proxy built with Azure Functions, App Service authentication, and private network connectivity. You avoid the connector VMs, the Windows Server patching, and the 3 AM pages when a connector service goes down.
This pattern is not a universal replacement for Entra Application Proxy. It is a connector-free option for internal web apps and APIs that already fit an HTTP request/response model and can rely on Entra-based authentication at the edge.
Which Option Fits Which Case
Before diving into implementation, here is the decision table. If your scenario sits in the left column, the serverless pattern may save you from running connector infrastructure. If it sits in the right column, Application Proxy is the safer bet.
| Requirement | Function-based proxy | Entra Application Proxy |
|---|---|---|
| Entra pre-auth for HTTP app or API | Good fit | Good fit |
| No connector VM infrastructure | Strong fit | Requires connector |
| Modern API gateway-style pattern | Strong fit | Possible, less natural |
| Header-based SSO to legacy apps | Good fit | Also good |
| KCD / Kerberos estate | Complex workaround (see below) | Native support |
| NTLM-heavy applications | Not supported | Better fit |
| Broad multi-app publishing (dozens of apps) | More custom work per app | Built-in wildcard support |
| WebSocket or long-lived connections | Limited (230 s hard HTTP ceiling) | Better fit |
| High-throughput streaming | Not ideal | Better with persistent connections |
The sweet spot for the serverless pattern is organisations that run API-first backends, have moved away from NTLM, and want their infrastructure to be PaaS or serverless end to end.
The Problem This Pattern Solves
You need Entra ID pre-authentication. Your backend sits on a private network. You do not want to manage connector infrastructure.
Entra Application Proxy solves this with a private network connector on Windows Server. At least one connector VM, ideally two for high availability. That is patching, monitoring, capacity planning, and a Windows licence for each. For organisations running fully serverless or PaaS-heavy workloads, maintaining VMs just for proxying feels disproportionate.
The alternative: put an Azure Function between your users and your backend. The Function handles authentication through Easy Auth and forwards requests over a private VNet connection. The backend never needs a public endpoint. The user never sees anything except your public domain.
How the Pattern Works
Four layers, each with a clear responsibility.
```
Internet
   │
   ▼
Application Gateway (WAF v2)
   │  ← TLS termination, WAF rules, host-header preservation
   ▼
Azure Function App (Easy Auth / Entra ID)
   │  ← Authentication, identity extraction, proxy logic
   │  ← VNet integrated (outbound)
   ▼
Private Network (VNet / peered VNet / ExpressRoute)
   │
   ▼
Backend Application (IIS, App Service, API)
```
Application Gateway sits at the edge. It handles TLS termination, WAF protection, and routes traffic to the Function App backend pool. The critical configuration is host-header preservation (covered below).
The Function App runs with Easy Auth configured against your Entra ID tenant. Every request must pass Entra authentication before your code executes. App Service authentication exposes the authenticated identity through headers like X-MS-CLIENT-PRINCIPAL and related fields. The Function reads these, prepares the downstream request, and forwards it over the VNet.
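The forwarding step relies on an HttpClient that resolves the backend over the VNet. As a sketch, assuming the isolated worker model and a placeholder internal hostname, the named client might be registered like this:

```csharp
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        // Named client used by the proxy function; the base address is the
        // backend's private hostname, resolvable only through the VNet.
        services.AddHttpClient("backend", client =>
        {
            client.BaseAddress = new Uri("https://app-backend.internal.contoso.com");
        });
    })
    .Build();

host.Run();
```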
VNet integration on the Function App gives it outbound connectivity into your private network. TCP and UDP are supported, which matters for DNS resolution and, in the Kerberos scenario, KDC communication.
The backend receives requests from the Function over the private network with identity information attached.

Two Proxy Modes
Header-Bridging Mode
The simpler pattern. The Function authenticates the user via Easy Auth, reads the claims from X-MS-CLIENT-PRINCIPAL (base64-encoded JSON containing the full claim set), picks out what the backend actually needs, and injects its own trusted headers before forwarding. Two rules matter: decode the principal properly rather than relying on the side headers for anything structured, and strip any inbound X-Authenticated-* headers on the request so a caller can’t inject spoofed values before you add yours.
```csharp
// Strip any inbound X-Authenticated-* headers so a caller cannot inject
// spoofed values before we add our own.
foreach (var name in req.Headers.Select(h => h.Key).ToList())
{
    if (name.StartsWith("X-Authenticated-", StringComparison.OrdinalIgnoreCase))
        req.Headers.Remove(name);
}

// Decode the full claim set from Easy Auth (base64-encoded JSON).
// ClientPrincipal and its Claims collection are plain DTOs mirroring the JSON shape.
var principalB64 = req.Headers.GetValues("X-MS-CLIENT-PRINCIPAL").FirstOrDefault();
var json = Encoding.UTF8.GetString(Convert.FromBase64String(principalB64));
var principal = JsonSerializer.Deserialize<ClientPrincipal>(json);

var upn = principal.Claims.FirstOrDefault(c => c.Type == "preferred_username")?.Value;
var oid = principal.Claims.FirstOrDefault(c => c.Type == "oid")?.Value;
var groups = principal.Claims.Where(c => c.Type == "groups").Select(c => c.Value);

// Forward with our own trusted headers; the "backend" named client is
// configured with the backend's private base address.
using var client = _httpClientFactory.CreateClient("backend");
var downstream = new HttpRequestMessage(new HttpMethod(req.Method), req.Url.PathAndQuery);
downstream.Headers.Add("X-Authenticated-User", upn);
downstream.Headers.Add("X-Authenticated-Oid", oid);
downstream.Headers.Add("X-Authenticated-Groups", string.Join(",", groups));
var backendResponse = await client.SendAsync(downstream);
```
The X-MS-CLIENT-PRINCIPAL-NAME and X-MS-CLIENT-PRINCIPAL-ID side headers are convenience fields for UPN and object ID; the -IDP header is the identity provider name, not a group list. For anything beyond a quick display name, decode the full principal.
The backend trusts these headers because it only accepts traffic from the Function App over the private network. No other source can reach it, so the headers cannot be spoofed, as long as the Function explicitly strips any inbound X-Authenticated-* first. This works well for legacy apps that already support header-based authentication, which many IIS apps do through custom HTTP modules.
Token-Broker Mode (On-Behalf-Of)
For backends protected by Entra ID, the Function uses the On-Behalf-Of (OBO) flow to exchange the user’s token for a downstream token scoped to the backend API:
```csharp
// Exchange the user's inbound token for a token scoped to the backend API.
var oboResult = await confidentialClient
    .AcquireTokenOnBehalfOf(
        new[] { "api://backend-app/.default" },
        new UserAssertion(userToken))
    .ExecuteAsync();

downstream.Headers.Authorization =
    new AuthenticationHeaderValue("Bearer", oboResult.AccessToken);
```
The OBO flow preserves the user’s identity chain. The backend sees a token that identifies the original user, not the Function App’s identity. Audit logs on the backend still show who made the request.
OBO requires the Function App’s app registration to have the backend API permission configured with an admin consent grant for the OBO scope. Miss this, and you get cryptic AADSTS errors that don’t clearly point to the consent gap.
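With the Azure CLI, and hypothetical application and scope IDs, granting the permission and the admin consent up front looks roughly like this:

```
# Hypothetical IDs: FUNC_APP_ID is the proxy's app registration,
# BACKEND_APP_ID / SCOPE_ID identify the backend API and its exposed scope.
az ad app permission add --id $FUNC_APP_ID \
  --api $BACKEND_APP_ID --api-permissions $SCOPE_ID=Scope

# Without this admin consent, OBO fails with AADSTS65001-style consent errors.
az ad app permission admin-consent --id $FUNC_APP_ID
```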
Host-Header Preservation
The configuration detail below causes the most debugging time when missed.
Application Gateway, by default, rewrites the Host header to the backend pool member’s hostname. If your backend pool member is myfunction.azurewebsites.net, that is what the Function App sees. But Easy Auth’s redirect URIs and cookie domains are configured for your public domain (app.contoso.com).
When the host header doesn’t match, the Entra redirect after authentication goes to the wrong URL. Cookies get set for the wrong domain. The authentication loop never completes.
The cleaner fix, and the one Microsoft’s App Service + Application Gateway guidance pushes toward, is to bind the same public custom domain to the Function App itself and preserve the original host header end to end. That way the Function App, Easy Auth, and Application Gateway all agree on a single hostname, and you avoid brittle static overrides in HTTP settings. With that in place:
```bicep
resource httpSettings '...' = {
  properties: {
    pickHostNameFromBackendAddress: false
    // Leave hostName unset so the original Host header is preserved,
    // or set it explicitly to the same custom domain bound to the Function App.
  }
}
```
A static hostName override can get you through a single-host lab, but it tends to fight platform routing as soon as you introduce multiple hosts, slot swaps, or blue/green deployments. App Service authentication also supports the X-Original-Host header for forwarded scenarios. Test the full authentication loop through the Application Gateway early. Don’t leave it until integration testing.
Locking Down the Network
The proxy pattern only works if the backend is unreachable from anywhere except the Function App.
Deploy a private endpoint for the Function App and disable public access. Application Gateway connects through the Function App’s private IP. VNet integration gives the Function outbound connectivity to the backend. If the backend is an App Service, use access restrictions to allow only the Function’s VNet integration subnet. If it’s an IIS VM, use NSG rules. The principle is the same: only the proxy can reach the backend.
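For an App Service backend, the access restriction can be expressed in Bicep. The resource names and the parent reference below are placeholders; note that once any Allow rule exists, all other traffic is implicitly denied:

```bicep
resource backendAccess 'Microsoft.Web/sites/config@2022-09-01' = {
  parent: backendApp          // hypothetical backend App Service resource
  name: 'web'
  properties: {
    ipSecurityRestrictions: [
      {
        name: 'allow-proxy-subnet'
        action: 'Allow'
        priority: 100
        // The Function App's VNet integration subnet
        vnetSubnetResourceId: proxyIntegrationSubnet.id
      }
      // With at least one Allow rule present, everything else is denied.
    ]
  }
}
```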
Practical Issues
Health probes return 302 or 401. Application Gateway probes are unauthenticated. With Easy Auth enabled, the probe gets redirected to the Entra login page. Configure a custom probe that hits a path excluded from authentication:
```json
{
  "globalValidation": {
    "unauthenticatedClientAction": "RedirectToLoginPage",
    "excludedPaths": ["/api/health"]
  }
}
```
Cookie and redirect rewriting. If the backend sets cookies with a domain or path that differs from the public-facing URL, the Function needs to rewrite Set-Cookie and Location headers. Straightforward string replacement, but it must be handled explicitly.
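A minimal sketch of that rewriting, using placeholder hostnames; in the real proxy these helpers would run over the Location and Set-Cookie headers of the backend's response before it is returned to the browser:

```csharp
using System;

// Hypothetical hostnames: the backend issues redirects and cookies for its
// private name, which must be rewritten to the public-facing domain.
const string backendHost = "internal.contoso.local";
const string publicHost = "app.contoso.com";

// Location header: swap the host portion of absolute redirect URLs.
static string RewriteLocation(string value, string from, string to) =>
    value.Replace($"://{from}", $"://{to}", StringComparison.OrdinalIgnoreCase);

// Set-Cookie header: fix the Domain attribute so the browser accepts it.
static string RewriteSetCookie(string value, string from, string to) =>
    value.Replace($"Domain={from}", $"Domain={to}", StringComparison.OrdinalIgnoreCase);

Console.WriteLine(RewriteLocation("https://internal.contoso.local/account/login", backendHost, publicHost));
// → https://app.contoso.com/account/login
Console.WriteLine(RewriteSetCookie("session=abc; Path=/; Domain=internal.contoso.local; HttpOnly", backendHost, publicHost));
// → session=abc; Path=/; Domain=app.contoso.com; HttpOnly
```

Relative Location values and host-only cookies need no rewriting, which is why the replacements anchor on the explicit host.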
Horizontal scaling and token caches. Functions scale out to multiple instances. For OBO, MSAL’s distributed token cache (backed by Redis or SQL) prevents duplicate token requests. For the Kerberos pattern, ticket caches need the same treatment.
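Assuming Microsoft.Identity.Web.TokenCache and a Redis connection string (a placeholder here), wiring the distributed cache is a few lines of service registration; this is a fragment, not a full Program.cs:

```csharp
// Back MSAL's token cache with Redis so all Function instances share it.
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = redisConnectionString; // e.g. resolved from Key Vault
});

// Microsoft.Identity.Web.TokenCache: plugs the registered IDistributedCache
// into MSAL's confidential client token cache.
services.AddDistributedTokenCaches();
```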
Key Vault reference refresh. Key Vault references in Function App settings refresh on a roughly 24-hour cycle. If you need faster rotation, read from Key Vault directly using the SDK with managed identity and implement your own cache with a shorter TTL.
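A sketch of such a cache; the fetch function below stands in for a real SecretClient call with managed identity, and the TTL is illustrative:

```csharp
using System;
using System.Collections.Generic;

var ttl = TimeSpan.FromMinutes(5);   // illustrative TTL, far shorter than the platform's ~24 h
var entries = new Dictionary<string, (string Value, DateTimeOffset FetchedAt)>();
var fetchCount = 0;

// Stand-in for a real SecretClient.GetSecret call using managed identity.
string FetchFromKeyVault(string name)
{
    fetchCount++;
    return $"secret-{name}-v{fetchCount}";
}

// Serve the cached value while it is younger than the TTL, else re-fetch.
string GetSecret(string name, DateTimeOffset now)
{
    if (entries.TryGetValue(name, out var e) && now - e.FetchedAt < ttl)
        return e.Value;
    var value = FetchFromKeyVault(name);
    entries[name] = (value, now);
    return value;
}

var t0 = DateTimeOffset.UtcNow;
Console.WriteLine(GetSecret("db-password", t0));               // fetched: secret-db-password-v1
Console.WriteLine(GetSecret("db-password", t0.AddMinutes(1))); // cached:  secret-db-password-v1
Console.WriteLine(GetSecret("db-password", t0.AddMinutes(6))); // expired: secret-db-password-v2
```

A real proxy would use a ConcurrentDictionary (or a per-entry lock) since Function invocations run concurrently on the same instance.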
Cold starts. For a reverse proxy that users hit interactively, consumption plan cold starts are too slow. Premium plan (EP1+) with always-ready instances eliminates this. Flex Consumption with one always-ready instance is a cost-effective middle ground.
HTTP timeout ceiling. Azure Functions enforces a hard 230-second maximum on synchronous HTTP responses regardless of the plan-level timeout setting. Any backend request that routinely runs longer than that is not a fit for this pattern; it needs an async handoff, a durable function, or a non-Functions proxy.
VNet integration subnet sizing. Flex Consumption and Elastic Premium plans both consume IP addresses from the VNet integration subnet as they scale out. Microsoft recommends sizing the subnet for peak instance count plus headroom; an undersized or heavily shared subnet can produce latency spikes and intermittent timeouts on a path that otherwise looks healthy. For a proxy sitting between users and their backend, this is the kind of problem that is easy to miss in a lab and obvious in production.
When Kerberos Backends Are Involved

For Kerberos-dependent backends (IIS with Windows Authentication, SPNEGO/Negotiate), things become much more complex. Entra Application Proxy remains the cleaner and more Microsoft-native option for this scenario, and for most teams it is the right answer.
The rest of this section describes an experimental lab pattern, not a supported build path. Microsoft’s resource-based KCD documentation for Entra Domain Services is framed around domain-joined VMs and computer accounts in a custom OU, administered from a joined management VM. What follows extrapolates from that surface to substitute a Function App plus a service-account keytab for the joined machine, and it is not something Microsoft explicitly documents. Treat it as a pattern to validate in a throwaway environment, not as a production blueprint.
The approach simulates a domain join using a service account in Entra Domain Services with a keytab stored in Key Vault.
The setup: create a service account in Entra Domain Services in a custom OU, register an SPN on the account, export a keytab, store it in Key Vault, and configure resource-based KCD on the backend server. At runtime, the Function retrieves the keytab via managed identity, performs S4U2Self and S4U2Proxy to obtain a Kerberos service ticket, and sends the SPNEGO token to the backend.
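On the domain-joined management VM, the account preparation might look like the following. Every name is a placeholder, and this is lab-only extrapolation, not a documented procedure:

```
# Register the backend's SPN on the service account
setspn -S HTTP/app-backend.contoso.com CONTOSO\svc-kproxy

# Export a keytab for the service account (prompts for the password)
ktpass /princ HTTP/app-backend.contoso.com@CONTOSO.COM `
  /mapuser CONTOSO\svc-kproxy /crypto AES256-SHA1 `
  /ptype KRB5_NT_PRINCIPAL /out svc-kproxy.keytab /pass *

# Resource-based KCD: let the service account delegate to the backend host
Set-ADComputer APP-BACKEND -PrincipalsAllowedToDelegateToAccount (Get-ADUser svc-kproxy)

# Store the keytab where the Function's managed identity can read it
az keyvault secret set --vault-name kv-proxy --name svc-kproxy-keytab `
  --file svc-kproxy.keytab --encoding base64
```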
This is what we informally call the “fake domain join” because the Function behaves as if it were a domain-joined machine from the backend’s perspective, without any actual machine account. In proof-of-concept environments it can be made to work, but it is also the most complex part of this architecture, the part most likely to break during Entra Domain Services platform updates, and the part that deserves the most honest testing before anyone relies on it. If your only reason for this architecture is Kerberos, the connector-VM route is almost certainly the better engineering choice.
For most organisations, the Kerberos workaround is a special case, and not a recommended one. The core value of this pattern is the header-bridging and OBO scenarios, where the operational savings over connector VMs are clearest and the implementation is well-trodden.
When to Use This Pattern
Use it when all of these are true:
- You need Entra ID pre-authentication for an internal application or API
- The backend uses header-based auth or modern OAuth (Kerberos is out of scope for production use)
- You do not want to manage connector VMs
- Your backend is reachable from an Azure VNet
- The application traffic is request/response, not long-lived connections
Skip it when any of these apply:
- The backend requires NTLM authentication
- You need WebSocket passthrough or responses that may exceed the 230-second HTTP ceiling
- You are publishing dozens of diverse apps (Application Proxy’s multi-app model is simpler at scale)
- The backend depends on Kerberos delegation (Application Proxy's native KCD support is still the Microsoft-native answer)
For teams already running Application Gateway for other purposes, adding a Function App proxy layer is a small incremental cost. The Function itself is lightweight. The real investment is in the network plumbing and the authentication configuration, both of which you would need regardless of which approach you choose.
Microsoft docs: Entra Application Proxy overview · App Service authentication · VNet integration · On-Behalf-Of flow · Entra Domain Services KCD