Wednesday, April 8, 2026

Getting Started with Microsoft Agent Framework 1.0.0 in .NET

Microsoft Agent Framework 1.0.0 was released a few days ago with support for both .NET and Python. Agent Framework is the direct successor to both Semantic Kernel and AutoGen and includes many features such as persistence, monitoring, and human-in-the-loop support.

In this post, let's see how we can create a simple agent with multiple tools in .NET.

I have created a Console App and the first step is adding the new Microsoft.Agents.AI.Foundry package.
dotnet add package Microsoft.Agents.AI.Foundry --version 1.0.0
An AI Agent uses an LLM to run tools in a loop to achieve its goal. So first, let's define some tools that our agent can use. In Agent Framework, tools are plain C# methods decorated with [Description] attributes. The framework automatically generates the tool schema for the LLM from these attributes.
[Description("Returns weather data for a given city, including temperature (in Celsius) and description.")]
static WeatherResult GetWeather(
    [Description("The city to get the weather for.")] string city)
{
    Console.WriteLine($"[Tool] Getting weather for '{city}'.");

    return new WeatherResult(18, "Rainy");
}

[Description("Returns a list of leisure activities for a given city and date, each with a name and location.")]
static List<LeisureActivity> GetActivities(
    [Description("The city to get activities for.")] string city,
    [Description("The date to get activities for in format YYYY-MM-DD.")] string date)
{
    Console.WriteLine($"[Tool] Getting activities for '{city}' on '{date}'.");

    return
    [
        new("Hiking", city),
        new("Beach", city),
        new("Museum", city)
    ];
}

[Description("Gets the current date from the system and returns as a string in format YYYY-MM-DD.")]
static string GetCurrentDate()
{
    Console.WriteLine("[Tool] Getting current date.");

    return DateTime.Now.ToString("yyyy-MM-dd");
}

record WeatherResult(int Temperature, string Description);

record LeisureActivity(string Name, string Location);
Note: these are mock implementations returning hardcoded data; in a real application, these tools would call actual APIs. Here we are using typed records (WeatherResult, LeisureActivity) instead of raw JSON strings. The framework serializes these to JSON automatically. The [Description] attributes on the methods should describe the return shape, because the framework only generates input parameter schemas; the LLM won't see the return type otherwise. (.NET [Feature]: Consider including return type schema in generated tool definitions)
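Since tool results are serialized to JSON before being sent back to the model, it's easy to see what a WeatherResult looks like on the wire with a quick System.Text.Json sketch (this is illustrative; the framework may configure its serializer options differently):

```csharp
using System;
using System.Text.Json;

// Roughly what the LLM receives as the GetWeather tool result
// (sketch; the framework's actual serializer options may differ).
var result = new WeatherResult(18, "Rainy");
Console.WriteLine(JsonSerializer.Serialize(result));
// {"Temperature":18,"Description":"Rainy"}

record WeatherResult(int Temperature, string Description);
```

This is why the method-level [Description] matters: the serialized property names are all the model has to interpret the result.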

Now let's create an agent that uses these tools.
using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
using System.ComponentModel;

string endpoint = "https://<FOUNDRY_RESOURCE>.services.ai.azure.com/";
string deploymentName = "<DEPLOYMENT_NAME>";

// For local development, using AzureCliCredential
var credential = new AzureCliCredential();

AIAgent agent = new AIProjectClient(new Uri(endpoint), credential)
    .AsAIAgent(
        model: deploymentName,
        name: "weekend-planner",
        instructions: """
            You help users plan their weekends and choose the best activities for the given weather.
            If an activity would be unpleasant in the given weather, don't suggest it.
            Include the date of the weekend in the response.
            """,
        tools: [
            AIFunctionFactory.Create(GetWeather),
            AIFunctionFactory.Create(GetActivities),
            AIFunctionFactory.Create(GetCurrentDate)
        ]);

string userInput = "What should I do this weekend in Auckland?";
Console.WriteLine(await agent.RunAsync(userInput));
The key things to note here:
  • AsAIAgent() is an extension method on AIProjectClient provided by the Microsoft.Agents.AI.Foundry package.
  • AIFunctionFactory.Create() inspects the method's [Description] attributes and parameter types to automatically generate the tool schema that gets sent to the LLM. The LLM then decides which tools to call and with what arguments.
  • RunAsync() handles the entire tool calling loop for you. It sends the prompt, processes tool call requests from the LLM, invokes the matching local functions, sends results back, and returns the final response.
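Conceptually, the loop RunAsync automates can be sketched as plain C#. This is a toy illustration only, not real Agent Framework code; the stubbed "model turns" stand in for the LLM deciding whether to call a tool or answer:

```csharp
using System;
using System.Collections.Generic;

// Toy sketch (NOT real Agent Framework code) of the tool-calling loop:
// the model either requests a tool call or produces a final answer.
var tools = new Dictionary<string, Func<string, string>>
{
    ["GetWeather"] = city => "{\"Temperature\":18,\"Description\":\"Rainy\"}"
};

// Stubbed model turns: first a tool call request, then a final answer.
var modelTurns = new Queue<(string? Tool, string? Arg, string? Answer)>();
modelTurns.Enqueue(("GetWeather", "Auckland", null));
modelTurns.Enqueue((null, null, "It's rainy in Auckland; pick the museum."));

string? finalAnswer = null;
while (finalAnswer is null)
{
    (string? tool, string? arg, string? answer) = modelTurns.Dequeue();
    if (tool is not null)
    {
        // Invoke the matching local function; in the real loop the result
        // goes back to the model as a tool message.
        string toolResult = tools[tool](arg!);
        Console.WriteLine($"Tool '{tool}' returned: {toolResult}");
    }
    else
    {
        finalAnswer = answer;
    }
}

Console.WriteLine(finalAnswer);
```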
Now when I run this, I can see something like this.
Output
Isn't it nice?

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, April 7, 2026

Azure Content Understanding: Custom Usage Tracking with APIM

I had a requirement to track LLM and Content Understanding token usage within a multi-tenant application for downstream customer billing, rather than relying solely on Application Insights.

I thought of using the AI gateway in Azure API Management in front of Azure OpenAI / Foundry endpoints.

Specifically:
  • Expose AI endpoints via APIM (e.g., Language Model APIs / Foundry)
  • Use policies such as llm-emit-token-metric (but this seems tightly coupled to Application Insights).
  • Worst case: custom policies to intercept responses, capture token usage metadata (prompt, completion, total tokens) and emit usage events to Event Hub from APIM via log-to-eventhub
    • Then process these events via a worker to persist usage records to our billing datastore.
I believed this is a common requirement, but couldn't find a better solution, so I proceeded with custom policies.

I thought of giving it a try with Content Understanding first, as it felt a bit challenging.

I didn't go down the AI gateway in Azure API Management path; instead, I just added a REST API to APIM.
Create an HTTP API
Then I added two operations: POST /{*path} and GET /{*path}.
{
    "openapi": "3.0.1",
    "info": {
        "title": "Test Foundry API",
        "description": "",
        "version": "1.0"
    },
    "servers": [{
        "url": "https://<SOME_APIM>.com/test-foundry-api"
    }],
    "paths": {
        "/{*path}": {
            "post": {
                "summary": "POST",
                "operationId": "post",
                "parameters": [{
                    "name": "*path",
                    "in": "path",
                    "required": true,
                    "schema": {
                        "type": ""
                    }
                }],
                "responses": {
                    "200": {
                        "description": ""
                    }
                }
            },
            "get": {
                "summary": "GET",
                "operationId": "get",
                "parameters": [{
                    "name": "*path",
                    "in": "path",
                    "required": true,
                    "schema": {
                        "type": ""
                    }
                }],
                "responses": {
                    "200": {
                        "description": ""
                    }
                }
            }
        }
    },
    "components": {
        "securitySchemes": {
            "apiKeyHeader": {
                "type": "apiKey",
                "name": "Ocp-Apim-Subscription-Key",
                "in": "header"
            },
            "apiKeyQuery": {
                "type": "apiKey",
                "name": "subscription-key",
                "in": "query"
            }
        }
    },
    "security": [{
        "apiKeyHeader": []
    }, {
        "apiKeyQuery": []
    }]
}
Now the most important part. Added the following All Operations policy. Here instead of sending messages to Event Hub, I am sending to Service Bus using send-service-bus-message (Sending messages to Azure Service Bus from Azure API Management) for testing purposes.
<policies>
    <inbound>
        <base />
        <set-variable name="tenantId" value="@(context.Request.Headers.GetValueOrDefault("x-tenant-id", "unknown"))" />
        <set-backend-service base-url="https://<SOME_FOUNDRY>.services.ai.azure.com" />
    </inbound>
    <backend>
        <forward-request buffer-request-body="true" />
    </backend>
    <outbound>
        <base />
        <set-header name="Operation-Location" exists-action="override">
            <value>@{
                var location = context.Response.Headers.GetValueOrDefault("Operation-Location", "");
                if (string.IsNullOrEmpty(location)) 
                {
                    return location;
                }
                
                var uri = new Uri(location);
                var req = context.Request.OriginalUrl;
                return req.Scheme + "://" + req.Host + "/" + context.Api.Path + uri.PathAndQuery;
            }</value>
        </set-header>
        <choose>
            <when condition="@(context.Response.StatusCode >= 200 && context.Response.StatusCode < 300)">
                <set-variable name="body" value="@(context.Response.Body.As<string>(preserveContent: true))" />
                <choose>
                    <when condition="@{
                        var text = (string)context.Variables["body"];
                        if (string.IsNullOrEmpty(text) || !text.TrimStart().StartsWith("{"))
                        {
                            return false;
                        }

                        var json = Newtonsoft.Json.Linq.JObject.Parse(text);
                        var statusToken = json["status"];
                        var status = statusToken == null ? string.Empty : ((string)statusToken).ToLowerInvariant();

                        return status == "succeeded" || status == "completed" || status == "failed";
                    }">
                        <send-service-bus-message 
                          topic-name="sbt-test-usage-tracking" 
                          namespace="<SOME_SERVICEBUS_NAMESPACE>.servicebus.windows.net" 
                          client-id="<SOME_MANAGED_IDENTITY_CLIENT_ID>">
                            <payload>@{
                                var json = Newtonsoft.Json.Linq.JObject.Parse((string)context.Variables["body"]);
                                var operationIdToken = json["id"];
                                var analyzerIdToken = json["result"]?["analyzerId"];
                                var statusToken = json["status"];

                                return new Newtonsoft.Json.Linq.JObject(
                                    new Newtonsoft.Json.Linq.JProperty("tenantId", (string)context.Variables["tenantId"]),
                                    new Newtonsoft.Json.Linq.JProperty("eventType", "cu-analysis-completed"),
                                    new Newtonsoft.Json.Linq.JProperty("requestId", context.RequestId.ToString()),
                                    new Newtonsoft.Json.Linq.JProperty("operationId", operationIdToken == null ? string.Empty : (string)operationIdToken),
                                    new Newtonsoft.Json.Linq.JProperty("analyzerId", analyzerIdToken == null ? string.Empty : (string)analyzerIdToken),
                                    new Newtonsoft.Json.Linq.JProperty("status", statusToken == null ? string.Empty : (string)statusToken),
                                    new Newtonsoft.Json.Linq.JProperty("usage", json["usage"] ?? new Newtonsoft.Json.Linq.JObject()),
                                    new Newtonsoft.Json.Linq.JProperty("timestamp", DateTime.UtcNow.ToString("o"))
                                ).ToString();
                            }</payload>
                        </send-service-bus-message>
                    </when>
                </choose>
            </when>
        </choose>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
Important points:
  • Forward Request: buffer-request-body="true": Needed for binary PDF forwarding
  • Header Operation-Location Rewrite: Routes SDK polling back through APIM so the outbound policy fires
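The header rewrite the policy expression performs can be sketched as plain C#. All URLs and values below are hypothetical placeholders, not real endpoints:

```csharp
using System;

// Sketch of the Operation-Location rewrite from the policy above: the backend
// returns its own URL, and we swap in the APIM gateway's scheme, host, and
// API path so SDK polling routes back through APIM.
var backendLocation = new Uri(
    "https://myfoundry.services.ai.azure.com/contentunderstanding/analyzerResults/abc123?api-version=2025-11-01");

string gatewayScheme = "https";          // context.Request.OriginalUrl.Scheme in the policy
string gatewayHost = "myapim.com";       // context.Request.OriginalUrl.Host
string apiPath = "test-foundry-api";     // context.Api.Path

string rewritten = gatewayScheme + "://" + gatewayHost + "/" + apiPath + backendLocation.PathAndQuery;
Console.WriteLine(rewritten);
// https://myapim.com/test-foundry-api/contentunderstanding/analyzerResults/abc123?api-version=2025-11-01
```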
Then I used the Azure Content Understanding .NET Client (Azure Content Understanding Client Library for .NET) to trigger an analysis and polled for the result.
// NOTE: Endpoint is now APIM    
string endpoint = "https://<SOME_APIM>.com/test-foundry-api";

ContentUnderstandingClientOptions contentUnderstandingClientOptions = new();
contentUnderstandingClientOptions.AddPolicy(new TenantHeaderPolicy("<SOME_TENANT_ID>"), HttpPipelinePosition.PerCall);

ContentUnderstandingClient contentUnderstandingClient =
    new(new Uri(endpoint), new DefaultAzureCredential(), contentUnderstandingClientOptions);

// TODO: Trigger analysis and poll
// REFER: https://jaliyaudagedara.blogspot.com/2026/03/azure-content-understanding-client.html

sealed class TenantHeaderPolicy(string tenantId) : HttpPipelineSynchronousPolicy
{
    public override void OnSendingRequest(HttpMessage message)
    {
        Console.WriteLine($"Calling: {message.Request.Method} {message.Request.Uri}");
        message.Request.Headers.SetValue("x-tenant-id", tenantId);
        message.Request.Headers.SetValue("Ocp-Apim-Trace", "true");
    }
}
Looked promising.
Service Bus Message
I still don't know whether there is a better option. But this seems to be doing what's required.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, March 24, 2026

Azure Content Understanding Client Library for .NET

In this post, let's have a look at the new Azure Content Understanding Client Library for .NET.

Until now, when working with Azure Content Understanding, we had to use the Azure Content Understanding REST API directly. That meant building requests ourselves, handling responses manually, and taking care of the long-running operation flow on our own.

Now there is a proper .NET client library, and that makes the experience much nicer. It follows the familiar Azure SDK for .NET patterns such as Response<T> and Operation<T>, so if you have worked with other Azure SDKs before, this will feel very natural.

As of today, this is a brand-new package and its APIs will likely evolve over time. But even at this stage, the SDK already gives a much cleaner developer experience compared to calling the REST endpoints directly.

For the rest of this post, let's go through a simple example.

The first step is installing the Azure.AI.ContentUnderstanding NuGet package.

dotnet add package Azure.AI.ContentUnderstanding
Then I have the following simple example.
using Azure;
using Azure.AI.ContentUnderstanding;
using Azure.Identity;
using System.Net.Mime;

string endpoint = "<Endpoint>";
string analyzerId = "<AnalyzerId>";
string filePath = @"<sample-file>.pdf";

BinaryData fileData = BinaryData.FromBytes(await File.ReadAllBytesAsync(filePath));
string contentType = Path.GetExtension(filePath).ToLowerInvariant() switch
{
    ".pdf" => MediaTypeNames.Application.Pdf,
    _ => throw new NotSupportedException($"File type {Path.GetExtension(filePath)} is not supported.")
};

ContentUnderstandingClient contentUnderstandingClient = new(new Uri(endpoint), new DefaultAzureCredential());

Operation<AnalysisResult> operation = await contentUnderstandingClient.AnalyzeBinaryAsync(
    WaitUntil.Started,
    analyzerId,
    fileData,
    contentType: contentType);

Console.WriteLine($"Operation Id: {operation.Id}, Started.");

while (!operation.HasCompleted)
{
    Console.WriteLine($"Operation Id: {operation.Id}, Running.");
    await Task.Delay(TimeSpan.FromSeconds(3));
    await operation.UpdateStatusAsync();
}

Console.WriteLine($"Operation Id: {operation.Id}, Completed.");

AnalysisResult analysisResult = operation.Value;
foreach (AnalysisContent? item in analysisResult.Contents)
{
    foreach (KeyValuePair<string, ContentField> field in item.Fields)
    {
        Console.WriteLine($"Field: {field.Key}: Value: {field.Value.Value}");
        Console.WriteLine();
    }
}

Console.WriteLine("Done");

Hope this helps.

Read more:
  • Azure Content Understanding client library for .NET

Happy Coding.

Regards,
Jaliya

Friday, March 20, 2026

EF Core 11.0: Create and Apply Migrations in a Single Command

When working with migrations in Entity Framework Core, what we usually do is first create the migration and then apply it to the database.

It has always been a two-step process, and that can be a bit annoying when you are continuously developing.

dotnet ef

dotnet ef migrations add Initial
dotnet ef database update
Package Manager Console/PowerShell
Add-Migration Initial
Update-Database

But with EF Core 11.0, we can create and apply the migration in a single step.

dotnet ef  

dotnet ef database update Initial --add
Note the new --add argument.

Package Manager Console/PowerShell
Update-Database -Migration Initial -Add
This will create a migration named Initial and apply it to the database in one go. The migration files will still be created and saved, so you can push them along with your code.

Since EF Core 11.0 is still in preview, if you are using the global EF tool, make sure it is updated to the latest version.
dotnet ef --version
dotnet tool update --global dotnet-ef
Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, March 17, 2026

EF Core 11.0: Complex Types and JSON Columns on Entity Types with TPT/TPC Inheritance

In this post, let's have a look at a nice improvement in EF Core 11 around complex types, JSON columns, and inheritance mapping.

If you tried to use a complex type as a JSON column on an entity hierarchy that uses TPT (Table-per-Type) or TPC (Table-per-Concrete-Type) in EF Core 10, you would have noticed that it was not working as expected. With EF Core 11, that limitation is now gone.

Let's see how this works.

Consider the following entity model and DbContext.

public abstract class Person
{
    public int Id { get; set; }

    public required string Name { get; init; }

    public required Address Address { get; set; }
}

public class Student : Person
{
    public required string School { get; set; }
}

public class Employee : Person
{
    public required string Employer { get; set; }
}

[ComplexType]
public class Address
{
    public required string AddressLine1 { get; set; }

    public required string City { get; set; }

    public required string State { get; set; }
}

public class MyDbContext : DbContext
{
    public DbSet<Student> Students { get; set; }

    public DbSet<Employee> Employees { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Note: Database compatibility level 170 (Microsoft SQL Server 2025)
        optionsBuilder
            .UseSqlServer(@"<Connection_String>");
    }

    override protected void OnModelCreating(ModelBuilder modelBuilder)
    {
        // TODO: Configure
    }
}

TPT (Table-per-Type)

EF Core 10

Let's first see TPT with EF Core 10.

override protected void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Person>()
        .UseTptMappingStrategy();
}

This would generate 3 tables: Person, Students and Employees.

EF 10 TPT: Person, Students and Employees
Which is good, working as expected.
 
Now if we try using Complex Types:

override protected void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Person>()
        .UseTptMappingStrategy()
        .ComplexProperty(a => a.Address, b => b.ToJson());
}

This would be an error.

EF Core 11

With EF Core 11.0, the above would again create 3 tables: Person, Students and Employees.

EF 11 TPT: Person, Students and Employees: Identical to EF Core 10.0
EF 11 TPT: Person, Students and Employees with JSON Columns
Note: here the JSON column is created only in the parent table.

TPC (Table-per-Concrete-Type)

EF Core 10

Now let's see TPC with EF Core 10.

override protected void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Person>()
        .UseTpcMappingStrategy();
}

This would generate only 2 tables, one per concrete type: Students and Employees.

EF 10 TPC: Students and Employees
Note: Here the Address column is missing.

Now if we try using Complex Types:

override protected void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Person>()
        .UseTpcMappingStrategy()
        .ComplexProperty(a => a.Address, b => b.ToJson());
}

This again would be an error.

EF Core 11

With EF Core 11.0, the above would again create 2 tables: Students and Employees.

EF 11 TPC: Students and Employees
EF 11 TPC: Students and Employees with JSON Columns
Here on TPC, everything seems to be working as expected.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, March 3, 2026

C# 15: Collection Expression Arguments

In this post, let's have a look at a new C# 15 feature: Collection Expression Arguments, which is now available with .NET 11 Preview 1.0.

With C# 12, we got Collection Expressions which gave us a nice unified syntax to initialize collections using [...]. But one thing we couldn't do was pass arguments to the underlying collection's constructor. For example, if you wanted to set a capacity for a List<T> or pass in a comparer for a HashSet<T>, you had to fall back to the traditional way.
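For reference, the traditional fallback looks like this: plain constructor calls with collection initializers, abandoning the collection expression syntax entirely.

```csharp
using System;
using System.Collections.Generic;

// The pre-C# 15 way: constructor arguments force you off collection expressions.
var names = new List<string>(capacity: 6) { "one", "two", "three" };
var set = new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "Hello", "HELLO", "hello" };

Console.WriteLine(names.Capacity); // 6
Console.WriteLine(set.Count);      // 1
```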

Now with C# 15, we can use with(...) as the first element in a collection expression to pass arguments to the collection's constructor.

Let's have a look at some examples.

Note: C# 15 is supported on .NET 11. You will need the latest Visual Studio 2026 or the .NET 11 SDK. And make sure to set the LangVersion to preview.

Consider the below code.

string[] items = ["one", "two", "three"];

// Pass capacity
List<string> names = [with(capacity: items.Length * 2), .. items];

names.AddRange(["four", "five", "six", "seven"]);

foreach (string name in names)
{
    Console.WriteLine(name);
}
Here we are creating a List<string> using a collection expression and passing in a capacity hint to the constructor using with(capacity: items.Length * 2). The capacity here would be 6.

The output would be,
one
two
three
four
five
six
seven
Now you might notice, we are adding 7 items in total which exceeds the capacity of 6 we specified. And that's perfectly fine. Capacity is not a max size limit, it's a performance hint. It pre-allocates the internal array to that size upfront, so the list doesn't need to repeatedly resize as you add items. If you exceed it, the list simply grows automatically.
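A quick way to see that capacity is a hint rather than a limit (shown here with the plain constructor so it compiles on current C# as well):

```csharp
using System;
using System.Collections.Generic;

// Capacity is pre-allocation, not a cap: exceeding it just triggers a resize.
var list = new List<int>(capacity: 6);
Console.WriteLine(list.Capacity); // 6

for (int i = 0; i < 7; i++)
{
    list.Add(i);
}

Console.WriteLine(list.Count);        // 7
Console.WriteLine(list.Capacity > 6); // True: the list grew automatically
```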

Specifying a capacity like this is useful when you have a rough idea of how many items you'll have; without it, the list starts small and has to resize repeatedly as it grows.

Now let's have a look at another interesting example.
// Pass comparer
HashSet<string> set = [with(StringComparer.OrdinalIgnoreCase), "Hello", "HELLO", "hello"];

foreach (string name in set)
{ 
    Console.WriteLine(name); 
}
Here the output would be,
Hello
The set contains only one element because all three strings are equal when compared with OrdinalIgnoreCase. Previously you couldn't do this with collection expressions; you had to use the constructor directly. Now it's all nice and clean in a single expression.

Love it.

Happy Coding.

Regards,
Jaliya

Saturday, February 28, 2026

Connecting Azure to MongoDB Atlas via Private Endpoints

In this post, let's see how we can set up Azure Private Endpoints to connect to a MongoDB Atlas cluster.

When we connect to a MongoDB Atlas cluster, we typically use a connection string like this:

mongodb+srv://<CLUSTER_NAME>.bzmphh.mongodb.net

This goes over the public internet. If we are on Azure and want traffic to stay private and routed through Azure's backbone network, we need a Private Endpoint.

Setting up a Private Endpoint between Azure and Atlas involves both sides:
  • Atlas
    • A Private Link Service that our Azure VNet can connect to
  • Azure
    • A Private Endpoint in our VNet's subnet that gets a private IP
    • An Azure Private DNS Zone so our apps resolve the Atlas hostname to the private IP instead of the public one

Let's walk through how to achieve this step by step using PowerShell. Please note that we need the Azure CLI (az) and the Atlas CLI (atlas) for this.

$atlasProjectId = "<ATLAS_PROJECT_ID>"
$atlasClusterName = "<ATLAS_CLUSTER_NAME>"
$atlasRegion = "<ATLAS_REGION>"

$subscription = "<AZURE_SUBSCRIPTION_ID>"
$resourceGroup = "<RESOURCE_GROUP>"
$location = "<REGION>"
$vnetName = "<VNET>"
$subnetName = "<SNET_FOR_PRIVATE_ENDPOINT>"

$peName = "<PE_NAME>"
$peNicName = "<PE_NAME>_nic"
First, we need to create the Private Link Service in Atlas. This is the resource that Azure will connect to.
atlas privateEndpoints azure create `
    --projectId $atlasProjectId `
    --region $atlasRegion `
    --output json | ConvertFrom-Json

# Wait for some time before running below
$peServices = atlas privateEndpoints azure list `
    --projectId $atlasProjectId | ConvertFrom-Json

$peService = $peServices[0]
$endpointServiceId = $peService.id
$privateLinkServiceResourceId = $peService.privateLinkServiceResourceId
Now we create the Private Endpoint in our Azure VNet. 
az account set --subscription $subscription

$pe = az network private-endpoint create `
    --resource-group $resourceGroup `
    --location $location `
    --name $peName `
    --nic-name $peNicName `
    --vnet-name $vnetName `
    --subnet $subnetName `
    --private-connection-resource-id $privateLinkServiceResourceId `
    --connection-name "$peName-connection" `
    --manual-request true | ConvertFrom-Json

$peResourceId = $pe.id
Note the "--manual-request true" flag is required because Atlas needs to accept the connection on their side.

The Private Endpoint creates a NIC in our subnet. We need its private IP for our next step.
$pePrivateIp = az network nic show `
    --subscription $subscription `
    --resource-group $resourceGroup `
    --name $peNicName `
    --query "ipConfigurations[0].privateIPAddress" -o tsv
Now we need to ask Atlas to accept the connection.
atlas privateEndpoints azure interfaces create $endpointServiceId `
    --privateEndpointId $peResourceId `
    --privateEndpointIpAddress $pePrivateIp `
    --projectId $atlasProjectId
Now we need a Private DNS Zone so that apps inside the VNet resolve Atlas hostnames to our private IP. We can derive the DNS Zone Name from the cluster's connection string.
$cluster = atlas clusters describe $atlasClusterName `
    --projectId $atlasProjectId -o json | ConvertFrom-Json

$srvHost = $cluster.connectionStrings.standardSrv -replace "mongodb\+srv://", ""
$dnsZoneName = $srvHost.Substring($srvHost.IndexOf('.') + 1)
# Result: bzmphh.mongodb.net

az network private-dns zone create `
    --resource-group $resourceGroup `
    --name $dnsZoneName
Note that we use a specific subdomain (bzmphh.mongodb.net) rather than mongodb.net. This avoids hijacking DNS resolution for all MongoDB Atlas clusters; only our cluster's traffic goes through the private endpoint.

The DNS zone needs to be linked to our VNet so resources inside it can resolve the private records.
$vnetResourceId = az network vnet show `
    --subscription $subscription `
    --resource-group $resourceGroup `
    --name $vnetName `
    --query "id" -o tsv

az network private-dns link vnet create `
    --resource-group $resourceGroup `
    --zone-name $dnsZoneName `
    --name $vnetName `
    --virtual-network $vnetResourceId `
    --registration-enabled false
After registering the PE, Atlas generates private endpoint-specific connection strings. We can find those using the following:
$connectionStrings = atlas clusters connectionStrings describe $atlasClusterName `
    --projectId $atlasProjectId `
    -o json | ConvertFrom-Json
atlas clusters connectionStrings describe
This gives us everything we need: the PE-specific hostnames, ports, and replica set info.

Now we parse the Atlas connection strings and create the DNS records in our Private DNS Zone.
$peConnStr = $connectionStrings.privateEndpoint[0]

$srvHostFull = $peConnStr.srvConnectionString -replace "mongodb\+srv://", ""
$srvPrefix = $srvHostFull.Split('.')[0]

$connPart = ($peConnStr.connectionString -replace "mongodb://", "").Split('?')[0].TrimEnd('/')
$hostPortEntries = $connPart.Split(',')
$aRecordHostFull = $hostPortEntries[0].Split(':')[0]
$aRecordName = $aRecordHostFull -replace "\.$dnsZoneName$", ""
$ports = $hostPortEntries | ForEach-Object { $_.Split(':')[1] }

$queryParams = ($peConnStr.connectionString -split '\?')[1]
$txtValue = ($queryParams -split '&' | Where-Object {
    $_ -match "authSource|replicaSet"
}) -join '&'
Now create the actual records:
# A Record
az network private-dns record-set a add-record `
    --resource-group $resourceGroup `
    --zone-name $dnsZoneName `
    --record-set-name $aRecordName `
    --ipv4-address $pePrivateIp

# SRV Records
foreach ($port in $ports) {
    az network private-dns record-set srv add-record `
        --resource-group $resourceGroup `
        --zone-name $dnsZoneName `
        --record-set-name "_mongodb._tcp.$srvPrefix" `
        --target $aRecordHostFull `
        --priority 0 --weight 0 --port $port
}

# TXT Record
az network private-dns record-set txt add-record `
    --resource-group $resourceGroup `
    --zone-name $dnsZoneName `
    --record-set-name $srvPrefix `
    --value "`"$txtValue`""
Once done, I can see something like this in our Private DNS Zone.
Private DNS Zone: Recordsets

Why Not Just an A Record?

If the private endpoint gives us a single private IP, why do we need three types of DNS records?

When our app uses a connection string like:
mongodb+srv://<CLUSTER_NAME>-pl-0.bzmphh.mongodb.net
The MongoDB driver doesn't just do a simple hostname lookup. It performs three DNS queries:
  • SRV Record
    • This tells the driver which hosts and ports to connect to. For private endpoints, the port is 1024 (not the standard 27017). The SRV record returns:
<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1024
<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1025
<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1026
Without SRV records, the driver wouldn't know which port to use and would default to 27017, which won't work over Private Link.
  • TXT Record
    • This provides the replica set name and auth database:
authSource=admin&replicaSet=atlas-5x58u7-shard-0
Without this, the driver wouldn't know which replica set to join or where to authenticate.
  • A Record
    • This resolves the hostname to the Private IP address of our Private Endpoint. This is what actually routes traffic through Azure Private Link instead of the public internet.
What If You Skip SRV and TXT?

You could technically use a "mongodb://" connection string instead of "mongodb+srv://" and hardcode everything:
mongodb://<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1024,<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1025,<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1026/?authSource=admin&replicaSet=atlas-5x58u7-shard-0
But that means your application configuration now contains infrastructure details: ports, replica set names, and host entries. If anything changes on Atlas's side, you'd need to update and redeploy your app. With "mongodb+srv://", your app connection string is just "mongodb+srv://<CLUSTER_NAME>-pl-0.bzmphh.mongodb.net": clean and stable. If something changes, you update the DNS records (infrastructure), not the app config.

So all three are required for "mongodb+srv://" to work over Private Link.

Once everything is set up, you can run the following and verify inside the VNet:
# Check DNS resolution
nslookup <CLUSTER_NAME>-pl-0.bzmphh.mongodb.net

# Test connection
mongosh 'mongodb+srv://<CLUSTER_NAME>-pl-0.bzmphh.mongodb.net' \
    --username dbadmin --password 'yourpassword'
The A record should resolve to your private IP, and mongosh should connect without going over the public internet.

Note: The domain (bzmphh.mongodb.net), -pl-x suffix, and port numbers shown here are examples. Update them to match the values for your own Atlas cluster and Private Endpoint setup.

Hope this helps.

Happy Coding.

Regards,
Jaliya