Saturday, February 28, 2026

Connecting Azure to MongoDB Atlas via Private Endpoints

In this post, let's see how we can set up Azure Private Endpoints to connect to a MongoDB Atlas cluster.

When we connect to a MongoDB Atlas cluster, we typically use a connection string like this:

mongodb+srv://<CLUSTER_NAME>.bzmphh.mongodb.net

This goes over the public internet. If we are on Azure and want traffic to stay private and routed through Azure's backbone network, we need a Private Endpoint.

Setting up a Private Endpoint between Azure and Atlas involves both sides:
  • Atlas
    • A Private Link Service that our Azure VNet can connect to
  • Azure
    • A Private Endpoint in our VNet's subnet that gets a private IP
    • An Azure Private DNS Zone so our apps resolve the Atlas hostname to the private IP instead of the public one

Let's walk through how to achieve this step by step using PowerShell. Note that we need the Azure CLI (az) and the Atlas CLI (atlas) for this.

$atlasProjectId = "<ATLAS_PROJECT_ID>"
$atlasClusterName = "<ATLAS_CLUSTER_NAME>"
$atlasRegion = "<ATLAS_REGION>"

$subscription = "<AZURE_SUBSCRIPTION_ID>"
$resourceGroup = "<RESOURCE_GROUP>"
$location = "<REGION>"
$vnetName = "<VNET>"
$subnetName = "<SNET_FOR_PRIVATE_ENDPOINT>"

$peName = "<PE_NAME>"
$peNicName = "<PE_NAME>_nic"
First, we need to create the Private Link Service in Atlas. This is the resource that Azure will connect to.
$peService = atlas privateEndpoints azure create `
    --projectId $atlasProjectId `
    --region $atlasRegion -o json | ConvertFrom-Json

$endpointServiceId = $peService.id
$privateLinkServiceResourceId = $peService.privateLinkServiceResourceId
Now we create the Private Endpoint in our Azure VNet. 
az account set --subscription $subscription

$pe = az network private-endpoint create `
    --resource-group $resourceGroup `
    --location $location `
    --name $peName `
    --nic-name $peNicName `
    --vnet-name $vnetName `
    --subnet $subnetName `
    --private-connection-resource-id $privateLinkServiceResourceId `
    --connection-name "$peName-connection" `
    --manual-request true | ConvertFrom-Json

$peResourceId = $pe.id
Note that the "--manual-request true" flag is required because Atlas needs to accept the connection on its side.

The Private Endpoint creates a NIC in our subnet. We need its private IP for our next step.
$pePrivateIp = az network nic show `
    --subscription $subscription `
    --resource-group $resourceGroup `
    --name $peNicName `
    --query "ipConfigurations[0].privateIPAddress" -o tsv
Now we need to ask Atlas to accept the connection.
atlas privateEndpoints azure interfaces create $endpointServiceId `
    --privateEndpointId $peResourceId `
    --privateEndpointIpAddress $pePrivateIp `
    --projectId $atlasProjectId
Now we need a Private DNS Zone so that apps inside the VNet resolve Atlas hostnames to our private IP. We can derive the DNS Zone Name from the cluster's connection string.
$cluster = atlas clusters describe $atlasClusterName `
    --projectId $atlasProjectId -o json | ConvertFrom-Json

$srvHost = $cluster.connectionStrings.standardSrv -replace "mongodb\+srv://", ""
$dnsZoneName = $srvHost.Substring($srvHost.IndexOf('.') + 1)
# Result: bzmphh.mongodb.net

az network private-dns zone create `
    --resource-group $resourceGroup `
    --name $dnsZoneName
Note that we use a specific subdomain (bzmphh.mongodb.net) rather than mongodb.net. This avoids hijacking DNS resolution for all MongoDB Atlas clusters; only our cluster's traffic goes through the private endpoint.
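The zone-name derivation is just string manipulation, so here is the same logic as a minimal Python sketch (the cluster hostname is a made-up example):

```python
def dns_zone_from_srv(srv_connection_string: str) -> str:
    """Derive the Private DNS Zone name from an Atlas SRV connection string:
    strip the mongodb+srv:// scheme and drop the cluster-specific first label,
    keeping only the shared subdomain (e.g. bzmphh.mongodb.net)."""
    host = srv_connection_string.removeprefix("mongodb+srv://")
    return host.split(".", 1)[1]

print(dns_zone_from_srv("mongodb+srv://cluster0.bzmphh.mongodb.net"))
# → bzmphh.mongodb.net
```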

The DNS zone needs to be linked to our VNet so resources inside it can resolve the private records.
$vnetResourceId = az network vnet show `
    --subscription $subscription `
    --resource-group $resourceGroup `
    --name $vnetName `
    --query "id" -o tsv

az network private-dns link vnet create `
    --resource-group $resourceGroup `
    --zone-name $dnsZoneName `
    --name $vnetName `
    --virtual-network $vnetResourceId `
    --registration-enabled false
After registering the PE, Atlas generates private endpoint-specific connection strings. We can find those using the following:
$connectionStrings = atlas clusters connectionStrings describe $atlasClusterName `
    --projectId $atlasProjectId `
    -o json | ConvertFrom-Json
This gives us everything we need: the PE-specific hostnames, ports, and replica set info.

Now we parse the Atlas connection strings and create the DNS records in our Private DNS Zone.
$peConnStr = $connectionStrings.privateEndpoint[0]

$srvHostFull = $peConnStr.srvConnectionString -replace "mongodb\+srv://", ""
$srvPrefix = $srvHostFull.Split('.')[0]

$connPart = ($peConnStr.connectionString -replace "mongodb://", "").Split('?')[0].TrimEnd('/')
$hostPortEntries = $connPart.Split(',')
$aRecordHostFull = $hostPortEntries[0].Split(':')[0]
$aRecordName = $aRecordHostFull -replace "\.$dnsZoneName$", ""
$ports = $hostPortEntries | ForEach-Object { $_.Split(':')[1] }

$queryParams = ($peConnStr.connectionString -split '\?')[1]
$txtValue = ($queryParams -split '&' | Where-Object {
    $_ -match "authSource|replicaSet"
}) -join '&'
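If you want to sanity-check the parsing logic outside PowerShell, here is the same splitting as a small Python sketch. The connection string below is hypothetical, shaped like the PE-specific strings Atlas returns:

```python
def parse_atlas_connection_string(conn_str: str):
    """Split an Atlas mongodb:// connection string into the pieces needed for
    the private DNS records: the A-record host, the SRV ports, and the
    TXT-record options (authSource / replicaSet)."""
    body = conn_str.removeprefix("mongodb://")
    hosts_part, _, query = body.partition("?")
    host_port_entries = hosts_part.rstrip("/").split(",")
    a_record_host = host_port_entries[0].split(":")[0]
    ports = [entry.split(":")[1] for entry in host_port_entries]
    txt_value = "&".join(
        p for p in query.split("&") if p.startswith(("authSource", "replicaSet"))
    )
    return a_record_host, ports, txt_value

# Hypothetical PE connection string:
host, ports, txt = parse_atlas_connection_string(
    "mongodb://cluster0-pl-0-0.bzmphh.mongodb.net:1024,"
    "cluster0-pl-0-0.bzmphh.mongodb.net:1025,"
    "cluster0-pl-0-0.bzmphh.mongodb.net:1026"
    "/?authSource=admin&replicaSet=atlas-abc123-shard-0"
)
```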
Now create the actual records:
# A Record
az network private-dns record-set a add-record `
    --resource-group $resourceGroup `
    --zone-name $dnsZoneName `
    --record-set-name $aRecordName `
    --ipv4-address $pePrivateIp

# SRV Records
foreach ($port in $ports) {
    az network private-dns record-set srv add-record `
        --resource-group $resourceGroup `
        --zone-name $dnsZoneName `
        --record-set-name "_mongodb._tcp.$srvPrefix" `
        --target $aRecordHostFull `
        --priority 0 --weight 0 --port $port
}

# TXT Record
az network private-dns record-set txt add-record `
    --resource-group $resourceGroup `
    --zone-name $dnsZoneName `
    --record-set-name $srvPrefix `
    --value "`"$txtValue`""
Once done, we can see something like this in our Private DNS Zone.
Private DNS Zone: Recordsets

Why Not Just an A Record?

If the private endpoint gives us a single private IP, why do we need three types of DNS records?

When our app uses a connection string like:
mongodb+srv://<CLUSTER_NAME>-pl-0.bzmphh.mongodb.net
The MongoDB driver doesn't just do a simple hostname lookup. It performs three DNS queries:
  • SRV Record
    • This tells the driver which hosts and ports to connect to. For private endpoints, the ports start at 1024 (not the standard 27017). The SRV record returns:
<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1024
<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1025
<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1026
Without SRV records, the driver wouldn't know which port to use and would default to 27017, which won't work over Private Link.
  • TXT Record
    • This provides the replica set name and auth database:
authSource=admin&replicaSet=atlas-5x58u7-shard-0
Without this, the driver wouldn't know which replica set to join or where to authenticate.
  • A Record
    • This resolves the hostname to the Private IP address of our Private Endpoint. This is what actually routes traffic through Azure Private Link instead of the public internet.
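To make the three lookups concrete, here is a small Python sketch that simulates what an SRV-aware driver does with the answers. The record values are illustrative only; a real driver gets them from DNS queries, not a dictionary:

```python
# Simulated DNS answers for a hypothetical cluster.
SRV = [("cluster0-pl-0-0.bzmphh.mongodb.net", 1024),
       ("cluster0-pl-0-0.bzmphh.mongodb.net", 1025),
       ("cluster0-pl-0-0.bzmphh.mongodb.net", 1026)]
TXT = "authSource=admin&replicaSet=atlas-abc123-shard-0"
A_RECORDS = {"cluster0-pl-0-0.bzmphh.mongodb.net": "10.0.1.4"}  # PE private IP

def resolve_seedlist():
    """Combine the three record types the way the driver does."""
    seeds = [f"{host}:{port}" for host, port in SRV]       # SRV: hosts + ports
    options = dict(p.split("=") for p in TXT.split("&"))   # TXT: default options
    ips = {host: A_RECORDS[host] for host, _ in SRV}       # A: private IP routing
    return seeds, options, ips

seeds, options, ips = resolve_seedlist()
```

Drop any one of the three and the driver is missing either its ports, its replica set and auth options, or the private IP itself.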
What If You Skip SRV and TXT?

You could technically use a "mongodb://" connection string instead of "mongodb+srv://" and hardcode everything:
mongodb://<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1024,<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1025,<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1026/?authSource=admin&replicaSet=atlas-5x58u7-shard-0
But that means your application configuration now contains infrastructure details: ports, replica set names, and host entries. If anything changes on Atlas's side, you'd need to update and redeploy your app. With "mongodb+srv://", your app connection string is just "mongodb+srv://<CLUSTER_NAME>-pl-0.bzmphh.mongodb.net", clean and stable. If something changes, you update the DNS records (infrastructure), not the app config.

So all three are required for "mongodb+srv://" to work over Private Link.

Once everything is set up, you can run the following and verify inside the VNet:
# Check DNS resolution
nslookup <CLUSTER_NAME>-pl-0.bzmphh.mongodb.net

# Test connection
mongosh 'mongodb+srv://<CLUSTER_NAME>-pl-0.bzmphh.mongodb.net' \
    --username dbadmin --password 'yourpassword'
The A record should resolve to your private IP, and mongosh should connect without going over the public internet.

Note: The domain (bzmphh.mongodb.net), -pl-x suffix, and port numbers shown here are examples. Update them to match the values for your own Atlas cluster and Private Endpoint setup.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, February 25, 2026

Connecting to Azure Managed Redis Instance from Redis Insight

Redis Insight is one of the must-have tools if you are working with Redis. It provides a nice GUI to browse keys, run commands, and monitor your Redis instance.

A few years ago, I blogged about Connecting to Azure Cache for Redis Instance from RedisInsight.

Azure Cache for Redis is scheduled for retirement on September 30, 2028. Microsoft recommends migrating existing Azure Cache for Redis instances to Azure Managed Redis well before the retirement date to ensure continued support and access to the latest features and improvements.

In this post, let's see how to connect to Azure Managed Redis instance from Redis Insight.

First, you need to configure your Azure Managed Redis instance to allow the connection. If you are connecting from your local machine, navigate to your Redis instance in the Azure Portal, go to Networking, and make sure Public Access is enabled.

Under Authentication -> Access Keys, make sure Access Keys Authentication is enabled and copy the Access Key. It's going to be the password for the Redis Insight connection.
Connection settings: General
Enable Use TLS.
Connection settings: Security
Now test the connection.
Success
Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, February 4, 2026

Sending messages to Azure Service Bus from Azure API Management

With Azure API Management (APIM), you can now send/publish messages directly to Azure Service Bus. Note: at the time of writing this post, this feature is still in preview. I am very excited about this feature, because now we can have a simpler architecture: we don't need separate backend code or a service just to send/publish messages.

This is done via the send-service-bus-message policy.

Let's have a look.

I have a Service Bus namespace and a user-assigned Managed Identity. The Managed Identity has the Azure Service Bus Data Sender role assigned on the namespace. In the Service Bus namespace, I have a topic: sbt-apim

In my APIM, I have an API with an operation configured with the following policy.
<!-- Add policies as children to the <inbound>, <outbound>, <backend>, and <on-error> elements -->
<policies>
  <!-- Throttle, authorize, validate, cache, or transform the requests -->
  <inbound>
    <send-service-bus-message 
      topic-name="sbt-apim" 
      namespace="<ServiceBusNamespaceName>.servicebus.windows.net" 
      client-id="<ManagedIdentityClientId>">
      <message-properties>
        <message-property name="TenantId">@(context.Request.Headers.GetValueOrDefault("x-tenantId",""))</message-property>
      </message-properties>
      <payload>
        @(context.Request.Body.As<string>(preserveContent: true))
      </payload>
    </send-service-bus-message>
    <return-response>
      <set-status code="202" reason="Accepted" />
      <set-header name="Content-Type" exists-action="override">
        <value>application/json</value>
      </set-header>
      <set-body>{ "status": "Message Accepted" }</set-body>
    </return-response>
  </inbound>
  <!-- Control if and how the requests are forwarded to services  -->
  <backend>
    <base />
  </backend>
  <!-- Customize the responses -->
  <outbound>
    <base />
  </outbound>
  <!-- Handle exceptions and customize error responses  -->
  <on-error>
    <base />
  </on-error>
</policies>
Here I am using the send-service-bus-message policy in the inbound section to publish messages to a Service Bus topic. The client-id refers to the user-assigned Managed Identity. You can also add custom message properties; here I am passing the value of the x-tenantId header as a custom property for demo purposes. Since we don't need a backend service, I am using return-response to immediately return a 202 Accepted response.

Let's test this.
curl 'https://<APIM_BASE_URL>/messaging/publish' `
--header 'x-tenantId: cfecb449-b568-4d0f-b24f-ea9762fb2bf6' `
--header 'Content-Type: application/json' `
--data '{
    "firstName": "John",
    "lastName": "Doe"
}'
Test cURL
And I can see a message published under the topic.
Published Message
Hope this helps.


Happy Coding.

Regards,
Jaliya

Monday, February 2, 2026

.NET Isolated Azure Functions: Dependency Injection in Durable Entities

In this post, let's have a look at how we can use Dependency Injection (DI) with Azure Durable Entities in .NET 10 Isolated Durable Functions.

Say we have the following service.
public interface ISomeService
{
    Task DoSomething();
}

public class SomeService : ISomeService
{
    public async Task DoSomething()
    {
        await Task.Delay(1000);
    }
}
And in Program.cs, this service is registered in the DI container.

Using TaskEntity<T> with Constructor Injection


The TaskEntity<T> base class from Microsoft.Azure.Functions.Worker.Extensions.DurableTask provides a clean way to implement entities with DI support and is the recommended approach.

We can have a State and Entity like the following.
public class HelloHistoryState
{
    [JsonPropertyName("messages")]
    public List<string> Messages { get; set; } = [];
}

public class HelloHistoryEntity : TaskEntity<HelloHistoryState>
{
    private readonly ISomeService _someService;

    public HelloHistoryEntity(ISomeService someService)
    {
        _someService = someService;
    }

    public async Task Add(string message)
    {
        await _someService.DoSomething();

        State.Messages.Add(message);
    }

    // Async for future proofing
    public async Task Reset()
    {
        State = new HelloHistoryState();
    }

    // Async for future proofing
    public Task<HelloHistoryState> GetState() => Task.FromResult(State);

    [Function(nameof(HelloHistoryEntity))]
    public Task DispatchAsync([EntityTrigger] TaskEntityDispatcher dispatcher)
    {
        return dispatcher.DispatchAsync(this);
    }
}
In this approach:
  • The entity class inherits from TaskEntity<T> where T is the state type
  • Constructor injection works as expected
  • The State property provides access to the entity's state
  • The DispatchAsync method uses dispatcher.DispatchAsync(this) to dispatch to the entity instance
And we can call the entity as follows.
[Function(nameof(RunOrchestrator))]
public static async Task<HelloHistoryState> RunOrchestrator(
    [OrchestrationTrigger] TaskOrchestrationContext context)
{
    EntityInstanceId entityId = new(nameof(HelloHistoryEntity), "helloHistory");

    await context.Entities.CallEntityAsync(entityId,
        nameof(HelloHistoryEntity.Reset));

    string result = await context.CallActivityAsync<string>(nameof(SayHello), "Tokyo");
    await context.Entities.CallEntityAsync(entityId,
        nameof(HelloHistoryEntity.Add),
        result);

    result = await context.CallActivityAsync<string>(nameof(SayHello), "Seattle");
    await context.Entities.CallEntityAsync(entityId,
        nameof(HelloHistoryEntity.Add),
        result);

    result = await context.CallActivityAsync<string>(nameof(SayHello), "London");
    await context.Entities.CallEntityAsync(entityId,
        nameof(HelloHistoryEntity.Add),
        result);

    HelloHistoryState state =
        await context.Entities.CallEntityAsync<HelloHistoryState>(entityId, nameof(HelloHistoryEntity.GetState));

    return state;
}

[Function(nameof(SayHello))]
public static string SayHello([ActivityTrigger] string name,
    FunctionContext executionContext)
{
    return $"Hello {name}!";
}

Using TaskEntity<T> with ActivatorUtilities


Now let's look at another approach where we need to pass additional parameters that are not registered in the DI container, for example, the CancellationToken from FunctionContext.

We can use ActivatorUtilities.CreateInstance<T>() to create the entity instance. It will resolve services from DI and use any manual arguments for the rest.
public class HelloHistoryEntity : TaskEntity<HelloHistoryState>
{
    private readonly ISomeService _someService;
    private readonly CancellationToken _cancellationToken;

    // Constructor: ISomeService from DI, CancellationToken as manual parameter
    public HelloHistoryEntity(ISomeService someService, CancellationToken cancellationToken)
    {
        _someService = someService;
        _cancellationToken = cancellationToken;
    }

    public async Task Add(string message)
    {
        _cancellationToken.ThrowIfCancellationRequested();

        await _someService.DoSomething();

        State.Messages.Add(message);
    }

    // Async for future proofing
    public async Task Reset()
    {
        State = new HelloHistoryState();
    }

    // Async for future proofing
    public Task<HelloHistoryState> GetState() => Task.FromResult(State);

    [Function(nameof(HelloHistoryEntity))]
    public static Task Run([EntityTrigger] TaskEntityDispatcher dispatcher, FunctionContext functionContext)
    {
        // Create entity using ActivatorUtilities
        // Resolving services from DI using FunctionContext and also can pass any manual args
        HelloHistoryEntity entity = ActivatorUtilities.CreateInstance<HelloHistoryEntity>(
            functionContext.InstanceServices,
            functionContext.CancellationToken);

        return dispatcher.DispatchAsync(entity);
    }
}
Note that the function method is now static. When the function method is non-static (like in the previous approach), the runtime needs to create an instance of the class to call the method, so it uses DI to resolve constructor dependencies. With a static method, no instance is needed, so we manually create it using ActivatorUtilities.
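The idea behind ActivatorUtilities.CreateInstance<T>() can be illustrated with a rough Python analogue (an analogy only, not how the .NET implementation works internally): fill constructor parameters from a container when their type is registered, and consume manual arguments, in order, for the rest.

```python
import inspect

def create_instance(container: dict, cls, *manual_args):
    """Rough analogue of ActivatorUtilities.CreateInstance<T>(): resolve
    constructor parameters from the container by annotated type, and use
    the manual arguments, in order, for anything not registered."""
    manual = list(manual_args)
    kwargs = {}
    for name, param in inspect.signature(cls.__init__).parameters.items():
        if name == "self":
            continue
        if param.annotation in container:
            kwargs[name] = container[param.annotation]  # resolved from "DI"
        else:
            kwargs[name] = manual.pop(0)                # manual argument
    return cls(**kwargs)

class SomeService: ...

class Entity:
    def __init__(self, service: SomeService, token: str):
        self.service = service
        self.token = token

svc = SomeService()
entity = create_instance({SomeService: svc}, Entity, "cancel-token")
```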

Hope this helps.

Happy Coding.

Regards,
Jaliya