Tuesday, April 14, 2026

Microsoft Agent Framework's Agent Middleware with .NET

In this post, let's have a look at Microsoft Agent Framework's Agent Middleware with .NET. In my previous post, we looked at getting started with Microsoft Agent Framework in .NET by building an agent with tools. In this post, let's see how we can use middleware to intercept and inspect agent runs, function calls, and chat client calls.
 
Microsoft Agent Framework supports three types of middleware.
  • Agent Run Middleware: Allows interception of all agent runs, so that input and output can be inspected and/or modified as needed.
  • Function Calling Middleware: Allows interception of all function calls executed by the agent, so that input and output can be inspected and modified as needed.
  • Chat Client Middleware: Allows interception of calls to an IChatClient implementation, where an agent is using IChatClient for inference calls, for example, when using ChatClientAgent.
Let's see how we can set up each of these middleware types.

The middleware is registered using the builder pattern. Agent Run Middleware and Function Calling Middleware are applied to the AIAgent via AsBuilder(), while Chat Client Middleware is applied to the IChatClient instance via the clientFactory parameter.
AIAgent agent = new AIProjectClient(new Uri(endpoint), credential)
    .AsAIAgent(
        model: deploymentName,
        name: "weekend-planner",
        instructions: """
            You help users plan their weekends and choose the best activities for the given weather.
            If an activity would be unpleasant in weather, don't suggest it.
            Include date of the weekend in response.
            """,
        tools: [
            AIFunctionFactory.Create(GetWeather),
            AIFunctionFactory.Create(GetActivities),
            AIFunctionFactory.Create(GetCurrentDate)
        ],
        clientFactory: (chatClient) => chatClient
            .AsBuilder()
            // Chat Client Middleware
            .Use(getResponseFunc: CustomChatClientMiddleware, getStreamingResponseFunc: CustomChatClientStreamlingMiddleware)
            .Build())
    .AsBuilder()
    // Agent Run Middleware
    .Use(runFunc: CustomAgentRunMiddleware, runStreamingFunc: CustomAgentRunStreamingMiddleware)
    // Function Calling Middleware
    .Use(CustomFunctionCallingMiddleware)
    .Build();
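Conceptually, each Use() call wraps the current handler in another delegate, so middleware forms a nested pipeline. Here is a toy, self-contained sketch of that nesting using plain string delegates (not MAF's actual types; the real builder's ordering semantics may differ):

```csharp
using System;
using System.Collections.Generic;

// Innermost handler stands in for the actual agent run.
Func<string, string> handler = input => $"run({input})";

// Each entry stands in for one Use(...) registration.
var registrations = new List<Func<Func<string, string>, Func<string, string>>>
{
    next => input => $"runMiddleware({next(input)})",      // registered first
    next => input => $"functionMiddleware({next(input)})", // registered second
};

// Wrap in reverse registration order so the first Use(...) runs outermost.
for (int i = registrations.Count - 1; i >= 0; i--)
{
    handler = registrations[i](handler);
}

Console.WriteLine(handler("hello"));
// runMiddleware(functionMiddleware(run(hello)))
```

Each middleware sees the input on the way in and the inner result on the way out, which is exactly what the logging middleware below relies on.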


Agent Run Middleware


The Agent Run Middleware intercepts all agent runs. Here we are logging the message count before and after the agent run. The middleware calls agent.RunAsync() to invoke the next middleware in the chain.
async Task<AgentResponse> CustomAgentRunMiddleware(IEnumerable<ChatMessage> messages,
    AgentSession? session,
    AgentRunOptions? options,
    AIAgent agent,
    CancellationToken cancellationToken)
{
    Console.WriteLine($"[AgentRun] Message Count: '{messages.Count()}'.");

    AgentResponse response = await agent.RunAsync(messages, session, options, cancellationToken)
        .ConfigureAwait(false);

    Console.WriteLine($"[AgentRun] Response Message Count: '{response.Messages.Count}'.");

    return response;
}
It also supports a streaming variant using IAsyncEnumerable<AgentResponseUpdate>.
async IAsyncEnumerable<AgentResponseUpdate> CustomAgentRunStreamingMiddleware(IEnumerable<ChatMessage> messages,
    AgentSession? session,
    AgentRunOptions? options,
    AIAgent agent,
    [EnumeratorCancellation] CancellationToken cancellationToken)
{
    Console.WriteLine($"[AgentRunStreaming] Message Count: '{messages.Count()}'.");

    List<AgentResponseUpdate> updates = [];
    await foreach (AgentResponseUpdate update in agent.RunStreamingAsync(messages, session, options, cancellationToken))
    {
        updates.Add(update);
        yield return update;
    }

    Console.WriteLine($"[AgentRunStreaming] Response Message Count: '{updates.ToAgentResponse().Messages.Count}'.");
}

Function Calling Middleware


The Function Calling Middleware intercepts all function calls executed by the agent. Here we are logging the function name being called and the result returned. The middleware uses the next delegate to invoke the next middleware in the chain.
async ValueTask<object?> CustomFunctionCallingMiddleware(
    AIAgent agent,
    FunctionInvocationContext context,
    Func<FunctionInvocationContext, CancellationToken, ValueTask<object?>> next,
    CancellationToken cancellationToken)
{
    Console.WriteLine($"      [FunctionCall] Calling: '{context!.Function.Name}'.");

    object? result = await next(context, cancellationToken);

    Console.WriteLine($"      [FunctionCall] Result: '{context!.Function.Name}' = '<OMITTED>'.");

    return result;
}
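Since the middleware decides when (or whether) next runs, it can also short-circuit a call and substitute its own result. A minimal sketch using the same signature as above (blocking GetWeather here is purely illustrative):

```csharp
async ValueTask<object?> ShortCircuitFunctionCallingMiddleware(
    AIAgent agent,
    FunctionInvocationContext context,
    Func<FunctionInvocationContext, CancellationToken, ValueTask<object?>> next,
    CancellationToken cancellationToken)
{
    // Illustrative guard: return a canned result without invoking the tool.
    if (context.Function.Name == nameof(GetWeather))
    {
        return "Weather data is currently unavailable.";
    }

    return await next(context, cancellationToken);
}
```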

Chat Client Middleware


The Chat Client Middleware intercepts calls to IChatClient. This is useful when you want to inspect the raw messages being sent to and received from the underlying LLM. Here we are logging all messages with their content types, distinguishing between TextContent, FunctionCallContent, and FunctionResultContent.
async Task<ChatResponse> CustomChatClientMiddleware(IEnumerable<ChatMessage> messages,
    ChatOptions? options,
    IChatClient client,
    CancellationToken token)
{
    LogChatClientMessages("Messages", messages);

    ChatResponse response = await client.GetResponseAsync(messages, options, token)
        .ConfigureAwait(false);

    LogChatClientMessages("Response", response.Messages);

    return response;
}
And the streaming variant.
async IAsyncEnumerable<ChatResponseUpdate> CustomChatClientStreamlingMiddleware(IEnumerable<ChatMessage> messages,
    ChatOptions? options,
    IChatClient client,
    [EnumeratorCancellation] CancellationToken token)
{
    LogChatClientMessages("Messages", messages);

    List<ChatResponseUpdate> updates = [];
    await foreach (ChatResponseUpdate update in client.GetStreamingResponseAsync(messages, options, token))
    {
        updates.Add(update);
        yield return update;
    }

    LogChatClientMessages("Response", updates.ToChatResponse().Messages);
}
The helper method LogChatClientMessages logs the messages with their content types.
void LogChatClientMessages(string label, IEnumerable<ChatMessage> messages)
{
    Console.WriteLine();
    Console.WriteLine($"   [ChatClient] {label}:");
    foreach (ChatMessage message in messages)
    {
        foreach (AIContent content in message.Contents)
        {
            string detail = content switch
            {
                TextContent text => text.Text,
                FunctionCallContent fc => $"Call: {fc.Name}({string.Join(", ", fc.Arguments?.Select(a => $"{a.Key}={a.Value}") ?? [])})",
                FunctionResultContent fr => $"Result: {fr.CallId} = '<OMITTED>'",
                _ => content.GetType().Name
            };
            Console.WriteLine($"      [{message.Role}] {detail}");
        }
    }
}
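Chat client middleware can also adjust the request before forwarding it, not just observe it. A sketch that pins the sampling temperature on every inference call (assuming ChatOptions.Clone() and Temperature from Microsoft.Extensions.AI; the value 0.2 is arbitrary):

```csharp
async Task<ChatResponse> TemperaturePinningChatClientMiddleware(IEnumerable<ChatMessage> messages,
    ChatOptions? options,
    IChatClient client,
    CancellationToken token)
{
    // Clone (or create) the options so the caller's instance isn't mutated.
    ChatOptions effectiveOptions = options?.Clone() ?? new ChatOptions();
    effectiveOptions.Temperature = 0.2f;

    return await client.GetResponseAsync(messages, effectiveOptions, token)
        .ConfigureAwait(false);
}
```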
When I run this, I can see something like below:
Agent Middleware Output
Complete source code:
   https://github.com/jaliyaudagedara/maf-samples/blob/main/dotnet/samples/02-agent-middleware/Program.cs

Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, April 8, 2026

Getting Started with Microsoft Agent Framework 1.0.0 in .NET

Microsoft Agent Framework 1.0.0 was released a few days ago with support for both .NET and Python. Agent Framework is the direct successor to both Semantic Kernel and AutoGen and includes many features such as persistence, monitoring, human-in-the-loop support, etc.

In this post, let's see how we can create a simple agent with multiple tools in .NET.

I have created a Console App and the first step is adding the new Microsoft.Agents.AI.Foundry package.
dotnet add package Microsoft.Agents.AI.Foundry --version 1.0.0
An AI Agent uses an LLM to run tools in a loop to achieve its goal. So first, let's define some tools that our agent can use. In Agent Framework, tools are plain C# methods decorated with [Description] attributes. The framework automatically generates the tool schema for the LLM from the attributes.
[Description("Returns weather data for a given city, including temperature (in Celsius) and description.")]
static WeatherResult GetWeather(
    [Description("The city to get the weather for.")] string city)
{
    Console.WriteLine($"[Tool] Getting weather for '{city}'.");

    return new WeatherResult(18, "Rainy");
}

[Description("Returns a list of leisure activities for a given city and date, each with a name and location.")]
static List<LeisureActivity> GetActivities(
    [Description("The city to get activities for.")] string city,
    [Description("The date to get activities for in format YYYY-MM-DD.")] string date)
{
    Console.WriteLine($"[Tool] Getting activities for '{city}' on '{date}'.");

    return
    [
        new("Hiking", city),
        new("Beach", city),
        new("Museum", city)
    ];
}

[Description("Gets the current date from the system and returns as a string in format YYYY-MM-DD.")]
static string GetCurrentDate()
{
    Console.WriteLine("[Tool] Getting current date.");

    return DateTime.Now.ToString("yyyy-MM-dd");
}

record WeatherResult(int Temperature, string Description);

record LeisureActivity(string Name, string Location);
Note: these are mock implementations returning hardcoded data; in a real application, these tools would call actual APIs. Here we are using typed records (WeatherResult, LeisureActivity) instead of raw JSON strings. The framework serializes these to JSON automatically. The [Description] attributes on the methods should describe the return shape, since the framework only generates input parameter schemas; the LLM won't see the return type otherwise (.NET [Feature]: Consider including return type schema in generated tool definitions)
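To get a feel for what a tool result looks like on the wire, here is a self-contained look at serializing one of these records with System.Text.Json default options (the framework may apply its own serializer settings, such as camelCase naming):

```csharp
using System;
using System.Text.Json;

var weather = new WeatherResult(18, "Rainy");

// Default options keep the record's PascalCase property names.
string json = JsonSerializer.Serialize(weather);
Console.WriteLine(json); // {"Temperature":18,"Description":"Rainy"}

record WeatherResult(int Temperature, string Description);
```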

Now let's create an agent that uses these tools.
using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
using System.ComponentModel;

string endpoint = "https://<FOUNDRY_RESOURCE>.services.ai.azure.com/";
string deploymentName = "<DEPLOYMENT_NAME>";

// For local development, using AzureCliCredential
var credential = new AzureCliCredential();

AIAgent agent = new AIProjectClient(new Uri(endpoint), credential)
    .AsAIAgent(
        model: deploymentName,
        name: "weekend-planner",
        instructions: """
            You help users plan their weekends and choose the best activities for the given weather.
            If an activity would be unpleasant in weather, don't suggest it.
            Include date of the weekend in response.
            """,
        tools: [
            AIFunctionFactory.Create(GetWeather),
            AIFunctionFactory.Create(GetActivities),
            AIFunctionFactory.Create(GetCurrentDate)
        ]);

string userInput = "What should I do this weekend in Auckland?";
Console.WriteLine(await agent.RunAsync(userInput));
The key things to note here:
  • AsAIAgent() is an extension method on AIProjectClient provided by the Microsoft.Agents.AI.Foundry package.
  • AIFunctionFactory.Create() inspects the method's [Description] attributes and parameter types to automatically generate the tool schema that gets sent to the LLM. The LLM then decides which tools to call and with what arguments.
  • RunAsync() handles the entire tool calling loop for you. It sends the prompt, processes tool call requests from the LLM, invokes the matching local functions, sends results back, and returns the final response.
Now when I run this, I can see something like this.
Output
Isn't it nice?

Tuesday, April 7, 2026

Azure Content Understanding: Custom Usage Tracking with APIM

I had a requirement to track LLM and Content Understanding token usage within a multi-tenant application for downstream customer billing, rather than relying solely on Application Insights.

Thought of using AI gateway in Azure API Management in front of Azure OpenAI / Foundry endpoints.

Specifically:
  • Expose AI endpoints via APIM (e.g., Language Model APIs / Foundry)
  • Use policies such as llm-emit-token-metric (but this seems tightly coupled to App Insights).
  • Worst case: custom policies to intercept responses, capture token usage metadata (prompt, completion, total tokens) and emit usage events to Event Hub from APIM via log-to-eventhub
    • Then process these events via a worker to persist usage records to our billing datastore.
Believed this is a common requirement, but couldn't find a better solution. Proceeded with custom policies.

Thought of giving it a try with Content Understanding first as it felt a bit challenging.

Didn't go through the AI gateway in Azure API Management path, instead just added a REST API to APIM.
Create an HTTP API
Then added 2 Operations POST: /* and GET: /*.
{
    "openapi": "3.0.1",
    "info": {
        "title": "Test Foundry API",
        "description": "",
        "version": "1.0"
    },
    "servers": [{
        "url": "https://<SOME_APIM>.com/test-foundry-api"
    }],
    "paths": {
        "/{*path}": {
            "post": {
                "summary": "POST",
                "operationId": "post",
                "parameters": [{
                    "name": "*path",
                    "in": "path",
                    "required": true,
                    "schema": {
                        "type": ""
                    }
                }],
                "responses": {
                    "200": {
                        "description": ""
                    }
                }
            },
            "get": {
                "summary": "GET",
                "operationId": "get",
                "parameters": [{
                    "name": "*path",
                    "in": "path",
                    "required": true,
                    "schema": {
                        "type": ""
                    }
                }],
                "responses": {
                    "200": {
                        "description": ""
                    }
                }
            }
        }
    },
    "components": {
        "securitySchemes": {
            "apiKeyHeader": {
                "type": "apiKey",
                "name": "Ocp-Apim-Subscription-Key",
                "in": "header"
            },
            "apiKeyQuery": {
                "type": "apiKey",
                "name": "subscription-key",
                "in": "query"
            }
        }
    },
    "security": [{
        "apiKeyHeader": []
    }, {
        "apiKeyQuery": []
    }]
}
Now the most important part. Added the following All Operations policy. Here instead of sending messages to Event Hub, I am sending to Service Bus using send-service-bus-message (Sending messages to Azure Service Bus from Azure API Management) for testing purposes.
<policies>
    <inbound>
        <base />
        <set-variable name="tenantId" value="@(context.Request.Headers.GetValueOrDefault("x-tenant-id", "unknown"))" />
        <set-backend-service base-url="https://<SOME_FOUNDRY>.services.ai.azure.com" />
    </inbound>
    <backend>
        <forward-request buffer-request-body="true" />
    </backend>
    <outbound>
        <base />
        <set-header name="Operation-Location" exists-action="override">
            <value>@{
                var location = context.Response.Headers.GetValueOrDefault("Operation-Location", "");
                if (string.IsNullOrEmpty(location)) 
                {
                    return location;
                }
                
                var uri = new Uri(location);
                var req = context.Request.OriginalUrl;
                return req.Scheme + "://" + req.Host + "/" + context.Api.Path + uri.PathAndQuery;
            }</value>
        </set-header>
        <choose>
            <when condition="@(context.Response.StatusCode >= 200 && context.Response.StatusCode < 300)">
                <set-variable name="body" value="@(context.Response.Body.As<string>(preserveContent: true))" />
                <choose>
                    <when condition="@{
                        var text = (string)context.Variables["body"];
                        if (string.IsNullOrEmpty(text) || !text.TrimStart().StartsWith("{"))
                        {
                            return false;
                        }

                        var json = Newtonsoft.Json.Linq.JObject.Parse(text);
                        var statusToken = json["status"];
                        var status = statusToken == null ? string.Empty : ((string)statusToken).ToLowerInvariant();

                        return status == "succeeded" || status == "completed" || status == "failed";
                    }">
                        <send-service-bus-message 
                          topic-name="sbt-test-usage-tracking" 
                          namespace="<SOME_SERVICEBUS_NAMESPACE>.servicebus.windows.net" 
                          client-id="<SOME_MANAGED_IDENTITY_CLIENT_ID>">
                            <payload>@{
                                var json = Newtonsoft.Json.Linq.JObject.Parse((string)context.Variables["body"]);
                                var operationIdToken = json["id"];
                                var analyzerIdToken = json["result"]?["analyzerId"];
                                var statusToken = json["status"];

                                return new Newtonsoft.Json.Linq.JObject(
                                    new Newtonsoft.Json.Linq.JProperty("tenantId", (string)context.Variables["tenantId"]),
                                    new Newtonsoft.Json.Linq.JProperty("eventType", "cu-analysis-completed"),
                                    new Newtonsoft.Json.Linq.JProperty("requestId", context.RequestId.ToString()),
                                    new Newtonsoft.Json.Linq.JProperty("operationId", operationIdToken == null ? string.Empty : (string)operationIdToken),
                                    new Newtonsoft.Json.Linq.JProperty("analyzerId", analyzerIdToken == null ? string.Empty : (string)analyzerIdToken),
                                    new Newtonsoft.Json.Linq.JProperty("status", statusToken == null ? string.Empty : (string)statusToken),
                                    new Newtonsoft.Json.Linq.JProperty("usage", json["usage"] ?? new Newtonsoft.Json.Linq.JObject()),
                                    new Newtonsoft.Json.Linq.JProperty("timestamp", DateTime.UtcNow.ToString("o"))
                                ).ToString();
                            }</payload>
                        </send-service-bus-message>
                    </when>
                </choose>
            </when>
        </choose>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
Important points:
  • Forward Request: buffer-request-body="true": Needed for binary PDF forwarding
  • Header Operation-Location Rewrite: Routes SDK polling back through APIM so the outbound policy fires
Then used the Azure Content Understanding .NET Client (Azure Content Understanding Client Library for .NET) to trigger an analysis and polled for the result.
// NOTE: Endpoint is now APIM    
string endpoint = "https://<SOME_APIM>.com/test-foundry-api";

ContentUnderstandingClientOptions contentUnderstandingClientOptions = new();
contentUnderstandingClientOptions.AddPolicy(new TenantHeaderPolicy("<SOME_TENANT_ID>"), HttpPipelinePosition.PerCall);

ContentUnderstandingClient contentUnderstandingClient =
    new(new Uri(endpoint), new DefaultAzureCredential(), contentUnderstandingClientOptions);

// TODO: Trigger analysis and poll
// REFER: https://jaliyaudagedara.blogspot.com/2026/03/azure-content-understanding-client.html

sealed class TenantHeaderPolicy(string tenantId) : HttpPipelineSynchronousPolicy
{
    public override void OnSendingRequest(HttpMessage message)
    {
        Console.WriteLine($"Calling: {message.Request.Method} {message.Request.Uri}");
        message.Request.Headers.SetValue("x-tenant-id", tenantId);
        message.Request.Headers.SetValue("Ocp-Apim-Trace", "true");
    }
}
Looked promising.
Service Bus Message
I still don't know whether there is a better option. But this seems to be doing what's required.
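On the consuming side, the worker draining the Service Bus topic only needs a type matching the JSON the policy emits. A self-contained sketch (the UsageEvent record and sample values are mine, mirroring the properties built in the policy above):

```csharp
using System;
using System.Text.Json;

// Sample payload shaped like the one the APIM policy sends.
string payload = """
    {
      "tenantId": "tenant-001",
      "eventType": "cu-analysis-completed",
      "requestId": "00000000-0000-0000-0000-000000000000",
      "operationId": "op-123",
      "analyzerId": "my-analyzer",
      "status": "Succeeded",
      "usage": {},
      "timestamp": "2026-04-07T00:00:00.0000000Z"
    }
    """;

UsageEvent usageEvent = JsonSerializer.Deserialize<UsageEvent>(
    payload,
    new JsonSerializerOptions { PropertyNameCaseInsensitive = true })!;

// A real worker would persist this to the billing datastore.
Console.WriteLine($"{usageEvent.TenantId}: {usageEvent.Status}");

record UsageEvent(
    string TenantId,
    string EventType,
    string RequestId,
    string OperationId,
    string AnalyzerId,
    string Status,
    JsonElement Usage,
    DateTimeOffset Timestamp);
```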

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, March 24, 2026

Azure Content Understanding Client Library for .NET

In this post, let's have a look at the new Azure Content Understanding Client Library for .NET.

Until now, when working with Azure Content Understanding, we had to use the Azure Content Understanding REST API directly. That meant building requests ourselves, handling responses manually, and taking care of the long-running operation flow on our own.

Now there is a proper .NET client library, and that makes the experience much nicer. It follows the familiar Azure SDK for .NET patterns such as Response<T> and Operation<T>, so if you have worked with other Azure SDKs before, this will feel very natural.

As of today, this is the latest package and APIs will likely evolve over time. But even at this stage, the SDK already gives a much cleaner developer experience compared to calling the REST endpoints directly.

For the rest of this post, let's go through a simple example.

First step is installing the Azure.AI.ContentUnderstanding NuGet package.

dotnet add package Azure.AI.ContentUnderstanding
Then I have the following simple example.
using Azure;
using Azure.AI.ContentUnderstanding;
using Azure.Identity;
using System.Net.Mime;

string endpoint = "<Endpoint>";
string analyzerId = "<AnalyzerId>";
string filePath = @"<sample-file>.pdf";

BinaryData fileData = BinaryData.FromBytes(await File.ReadAllBytesAsync(filePath));
string contentType = Path.GetExtension(filePath).ToLowerInvariant() switch
{
    ".pdf" => MediaTypeNames.Application.Pdf,
    _ => throw new NotSupportedException($"File type {Path.GetExtension(filePath)} is not supported.")
};

ContentUnderstandingClient contentUnderstandingClient = new(new Uri(endpoint), new DefaultAzureCredential());

Operation<AnalysisResult> operation = await contentUnderstandingClient.AnalyzeBinaryAsync(
    WaitUntil.Started,
    analyzerId,
    fileData,
    contentType: contentType);

Console.WriteLine($"Operation Id: {operation.Id}, Started.");

while (!operation.HasCompleted)
{
    Console.WriteLine($"Operation Id: {operation.Id}, Running.");
    await Task.Delay(TimeSpan.FromSeconds(3));
    await operation.UpdateStatusAsync();
}

Console.WriteLine($"Operation Id: {operation.Id}, Completed.");

AnalysisResult analysisResult = operation.Value;
foreach (AnalysisContent? item in analysisResult.Contents)
{
    foreach (KeyValuePair<string, ContentField> field in item.Fields)
    {
        Console.WriteLine($"Field: {field.Key}: Value: {field.Value.Value}");
        Console.WriteLine();
    }
}
Console.WriteLine("Done");

Hope this helps.

Read more:
   Azure Content Understanding client library for .NET

Happy Coding.

Regards,
Jaliya

Friday, March 20, 2026

EF Core 11.0: Create and Apply Migrations in a Single Command

When working with migrations in Entity Framework Core, what we usually do is first create the migration and then apply it to the database.

It has always been a two-step process, and that can be a bit annoying when you are continuously developing.

dotnet ef

dotnet ef migrations add Initial
dotnet ef database update
Package Manager Console/PowerShell
Add-Migration Initial
Update-Database

But with EF Core 11.0, we can create and apply the migration in a single step.

dotnet ef  

dotnet ef database update Initial --add
Note the new --add argument.

Package Manager Console/PowerShell
Update-Database -Migration Initial -Add
This will create a migration named Initial and apply it to the database in one go. The migration files will still be created and saved, so you can push them along with your code.

Since EF Core 11.0 is still in preview, if you are using the global EF tool, make sure it is updated to the latest version.
dotnet ef --version
dotnet tool update --global dotnet-ef
Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, March 17, 2026

EF Core 11.0: Complex Types and JSON Columns on Entity Types with TPT/TPC Inheritance

In this post, let's have a look at a nice improvement in EF Core 11 around complex types, JSON columns, and inheritance mapping.

If you tried to use a complex type as a JSON column on an entity hierarchy that uses TPT (Table-per-Type) or TPC (Table-per-Concrete-Type) in EF Core 10, you would have noticed that it was not working as expected. With EF Core 11, that limitation is now gone.

Let's see how this works.

Consider the following model and DbContext.

public abstract class Person
{
    public int Id { get; set; }

    public required string Name { get; init; }

    public required Address Address { get; set; }
}

public class Student : Person
{
    public required string School { get; set; }
}

public class Employee : Person
{
    public required string Employer { get; set; }
}

[ComplexType]
public class Address
{
    public required string AddressLine1 { get; set; }

    public required string City { get; set; }

    public required string State { get; set; }
}

public class MyDbContext : DbContext
{
    public DbSet<Student> Students { get; set; }

    public DbSet<Employee> Employees { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Note: Database compatibility level 170 (Microsoft SQL Server 2025)
        optionsBuilder
            .UseSqlServer(@"<Connection_String>");
    }

    override protected void OnModelCreating(ModelBuilder modelBuilder)
    {
        // TODO: Configure
    }
}

TPT (Table-per-Type)

EF Core 10

Let's first see TPT with EF Core 10.

override protected void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Person>()
        .UseTptMappingStrategy();
}

This would generate 3 tables: Person, Students and Employees.

EF 10 TPT: Person, Students and Employees
Which is good, working as expected.
 
Now if we try using Complex Types:

override protected void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Person>()
        .UseTptMappingStrategy()
        .ComplexProperty(a => a.Address, b => b.ToJson());
}

This would be an error.

EF Core 11

With EF Core 11.0, the above would again create 3 tables: Person, Students and Employees.

EF 11 TPT: Person, Students and Employees: Identical to EF Core 10.0
EF 11 TPT: Person, Students and Employees with JSON Columns
Note: here the JSON column is created only in the parent table.

TPC (Table-per-Concrete-Type)

EF Core 10

Now let's see TPC with EF Core 10.

override protected void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Person>()
        .UseTpcMappingStrategy();
}

This would generate only 2 tables, one per concrete type: Students and Employees.

EF 10 TPC: Students and Employees
Note: Here the Address column is missing.

Now if we try using Complex Types:

override protected void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Person>()
        .UseTpcMappingStrategy()
        .ComplexProperty(a => a.Address, b => b.ToJson());
}

This again would be an error.

EF Core 11

With EF Core 11.0, the above would again create 2 tables: Students and Employees.

EF 11 TPC: Students and Employees
EF 11 TPC: Students and Employees with JSON Columns
Here on TPC, everything seems to be working as expected.
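With the mapping in place, the JSON-backed complex property participates in LINQ queries like any other property. A sketch against the model above (sample data is mine; not run here):

```csharp
using var context = new MyDbContext();

context.Students.Add(new Student
{
    Name = "Alice",
    School = "Sample High School",
    Address = new Address
    {
        AddressLine1 = "1 Queen Street",
        City = "Auckland",
        State = "Auckland"
    }
});
context.SaveChanges();

// EF Core translates the predicate into a query over the JSON column.
List<Student> aucklandStudents = context.Students
    .Where(s => s.Address.City == "Auckland")
    .ToList();
```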

Hope this helps.

Happy Coding.

Regards,
Jaliya