Tuesday, February 20, 2024

.NET 8.0 Isolated Azure Functions: Binding Expressions that use Azure App Configuration

In this post let's see how we can use binding expressions in a .NET 8.0 Isolated Azure Function and how to resolve the binding expression values from Azure App Configuration (AAC).

Binding expressions look something like the following. Let's take a simple Service Bus trigger function.
[Function(nameof(ServiceBusTrigger))]
public static void ServiceBusTrigger(
    [ServiceBusTrigger("%Messaging:Topic%", "%Messaging:Subscription%")] ServiceBusReceivedMessage serviceBusReceivedMessage)
{
    // TODO: Process the received message
}
Here %Messaging:Topic% and %Messaging:Subscription% are binding expressions, and their values don't have to be compile-time constants.
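For local development, these expressions would typically resolve from local.settings.json; a minimal sketch (the topic and subscription names here are hypothetical):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "Messaging:Topic": "orders-topic",
    "Messaging:Subscription": "orders-subscription"
  }
}
```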

In In-Process Azure Functions, it's pretty straightforward: you can just add Azure App Configuration as another configuration provider in the Startup, and it will work.

But in Isolated functions, at least as of today (20th February 2024), you can't do that (Support expression resolution from configuration sources registered by the worker #1253). While it's a bit disappointing (after Isolated functions have been available for a couple of years), you can use the following workaround.

Let's say I have the following values in my Azure App Configuration.
Azure App Configuration
Azure App Configuration Values
I can use the following notation to access AAC values.
@Microsoft.AppConfiguration(Endpoint=https://aac-temp-001.azconfig.io; Key=<key>)
// if you want to choose a particular Label
@Microsoft.AppConfiguration(Endpoint=https://aac-temp-001.azconfig.io; Key=<key>; Label=<label>)
So I can update the Function App settings in Azure as follows. Make sure the identity of the Function App (system-assigned or user-assigned managed identity) can read the configuration from AAC.
Function App Configuration
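For example, the two settings backing the binding expressions above could be App Configuration references like this (shown in the app settings "Advanced edit" JSON; note that App Service setting names can't contain `:`, so `__` is used as the section separator — the key names here are assumptions):

```json
[
  {
    "name": "Messaging__Topic",
    "value": "@Microsoft.AppConfiguration(Endpoint=https://aac-temp-001.azconfig.io; Key=Messaging:Topic)",
    "slotSetting": false
  },
  {
    "name": "Messaging__Subscription",
    "value": "@Microsoft.AppConfiguration(Endpoint=https://aac-temp-001.azconfig.io; Key=Messaging:Subscription)",
    "slotSetting": false
  }
]
```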
More read:
   Use App Configuration references for App Service and Azure Functions (preview)

Friday, February 9, 2024

Azure DevOps Self-hosted Agent: NETSDK1045: The current .NET SDK does not support targeting .NET 8.0

Recently I faced this issue in one of our self-hosted agents in Azure DevOps when a pipeline was trying to build a .NET 8.0 application.
C:\vsts-agent\_work\_tool\dotnet\sdk\5.0.405\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.TargetFrameworkInference.targets(141,5): 

error NETSDK1045: The current .NET SDK does not support targeting .NET 8.0.  
Either target .NET 5.0 or lower, or use a version of the .NET SDK that supports .NET 8.0.  [C:\vsts-agent\_work\67\s\xxxxx.csproj]
The error was happening in a NuGetCommand@2 task while doing a restore. I replaced that with a DotNetCoreCLI@2. Then that step succeeded but eventually failed again in a VSBuild@1 task (that was using vsVersion: '17.0' which is the latest) for the same reason. 

This was strange because the pipeline was specifically requesting .NET 8.0.
- task: UseDotNet@2
  displayName: Use .NET
  inputs:
    packageType: 'sdk'
    version: '8.0.x'
The pipeline had no reason to use .NET SDK 5.0.405, and I had no idea where this specific version was coming from.

Then I started digging, and after scratching my head for a couple of hours, noticed the following in the agent worker logs (usually inside C:\vsts-agent\_diag). To my surprise, the pipeline was getting executed with the following.
{
  ...
  "variables": {
    "DOTNET_MSBUILD_SDK_RESOLVER_SDKS_DIR": {
      "value": "C:\\vsts-agent\\_work\\_tool\\dotnet\\sdk\\5.0.405\\Sdks"
    },
    "DOTNET_MSBUILD_SDK_RESOLVER_SDKS_VER": {
      "value": "5.0.405"
    },
    ...
  }
  ...
}
DOTNET_MSBUILD_SDK_RESOLVER_* are .NET environment variables that are used to force the resolved SDK tasks and targets to come from a given base directory and report a given version to MSBuild.
  • DOTNET_MSBUILD_SDK_RESOLVER_SDKS_DIR: Overrides the .NET SDK directory.
  • DOTNET_MSBUILD_SDK_RESOLVER_SDKS_VER: Overrides the .NET SDK version.
  • DOTNET_MSBUILD_SDK_RESOLVER_CLI_DIR: Overrides the dotnet.exe directory path.
And that kind of answered where .NET SDK 5.0.405 was coming from, but the question remained why. I submitted an issue: #19520: Self hosted agent uses incorrect DOTNET_MSBUILD_SDK_RESOLVER_SDKS_*.

To get past the issue, I had to override these variables. To test the concept, I overrode them by passing the .NET 8.0 counterpart values to the pipeline execution.
Passing variables to the pipeline execution
and that finally worked. But we can't be manually overriding these for each run, so I overrode them in the YAML as follows.
variables:
  - name: DOTNET_MSBUILD_SDK_RESOLVER_SDKS_DIR
    value: 'C:\vsts-agent\_work\_tool\dotnet\sdk\8.0.101\Sdks'
  - name: DOTNET_MSBUILD_SDK_RESOLVER_SDKS_VER
    value: '8.0.101'
  ...
Now the pipeline builds and publishes .NET 8 apps successfully, but I still have no idea why the older SDK was being forced.

Hopefully, we will find the answer in the issue above soon.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Saturday, February 3, 2024

Azure AD B2C: Validating Output Claim from a Non-Self-Asserted Technical Profile

I had a requirement where I wanted to do an additional validation on a boolean claim value in an AAD B2C user journey. If the boolean claim value is true, I wanted to move forward in the user journey. If the value is false, I wanted to short circuit the user journey and return an error. 

I couldn't use Validation Technical Profiles, because the output claim I am validating on was in a non-self-asserted technical profile (the claim was retrieved by calling an external REST endpoint), and Validation Technical Profiles don't support non-self-asserted technical profiles.

In such cases, we can add an additional OrchestrationStep with a Precondition that asserts the claim and, when the assertion fails, navigates the user to a self-asserted technical profile that displays the error.

So how do we do that? 

1. Define a ClaimType for a self-asserted technical profile.

<BuildingBlocks>
  <ClaimsSchema>
    ...
    <ClaimType Id="errorMessage">
      <DisplayName>Please contact support.</DisplayName>
      <DataType>string</DataType>
      <UserInputType>Paragraph</UserInputType>
    </ClaimType>
  </ClaimsSchema>
  ...
</BuildingBlocks>

2. Define a ClaimsTransformation.

<BuildingBlocks>
  ...
  <ClaimsTransformations> ...
    <ClaimsTransformation Id="CreateApplicationUserNotActiveErrorMessage" TransformationMethod="CreateStringClaim">
      <InputParameters>
        <InputParameter Id="value" DataType="string" Value="Application user is not active." />
      </InputParameters>
      <OutputClaims>
        <OutputClaim ClaimTypeReferenceId="errorMessage" TransformationClaimType="createdClaim" />
      </OutputClaims>
    </ClaimsTransformation>
  </ClaimsTransformations>
</BuildingBlocks>

3. Define a self-asserted TechnicalProfile. Use the above ClaimsTransformation as an InputClaimsTransformation. Reference the ClaimType created in the first step.

<ClaimsProviders>
  <ClaimsProvider>
    <DisplayName>...</DisplayName>
    <TechnicalProfiles> ...
      <TechnicalProfile Id="SelfAsserted-ApplicationUserNotActiveError">
        <DisplayName>Error message</DisplayName>
        <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/>
        <Metadata>
          <Item Key="ContentDefinitionReferenceId">api.selfasserted</Item>
          <Item Key="setting.showContinueButton">false</Item>
          <Item Key="setting.showCancelButton">true</Item>
        </Metadata>
        <InputClaimsTransformations>
          <InputClaimsTransformation ReferenceId="CreateApplicationUserNotActiveErrorMessage" />
        </InputClaimsTransformations>
        <InputClaims>
          <InputClaim ClaimTypeReferenceId="errorMessage"/>
        </InputClaims>
        <OutputClaims>
          <OutputClaim ClaimTypeReferenceId="errorMessage"/>
        </OutputClaims>
      </TechnicalProfile>
    </TechnicalProfiles>
  </ClaimsProvider>
</ClaimsProviders>

4. Introduce an additional OrchestrationStep with a Precondition before the last OrchestrationStep. If the condition is not satisfied, use the created self-asserted TechnicalProfile.

<UserJourneys>
  ...
  <UserJourney Id="...">
    <OrchestrationSteps>
      ...
      <OrchestrationStep Order="9" Type="ClaimsExchange">
        <Preconditions>
          <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
            <Value>isActive</Value> <!-- this claim is forwarded from a previous step -->
            <Value>True</Value>
            <Action>SkipThisOrchestrationStep</Action>
          </Precondition>
        </Preconditions>
        <ClaimsExchanges>
          <ClaimsExchange Id="SelfAssertedApplicationUserNotActiveError" TechnicalProfileReferenceId="SelfAsserted-ApplicationUserNotActiveError" />
        </ClaimsExchanges>
      </OrchestrationStep>
      ...
      <OrchestrationStep Order="11" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
    </OrchestrationSteps>
  </UserJourney>
  ...
</UserJourneys>

And this is what happens when the isActive claim is false. When it's true, the above OrchestrationStep gets skipped and the user journey continues.
Self-Asserted Technical Profile
Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, February 1, 2024

.NET 8.0 Isolated Azure Durable Functions: Preserve Stack Order When Passing Between Orchestrators, Activities etc

In this post let's see how we can preserve Stack<T> order when it's getting passed between Orchestrators/Activities in a .NET Isolated Azure Durable Function. 

In Durable Functions in the .NET isolated worker, the default serialization behavior has changed from Newtonsoft.Json to System.Text.Json.

I have already written a post about preserving Stack order in In-Process Azure Durable Functions here. I am using the same code example, but converted to the isolated worker. So I am not going to write down the entire example code to describe the issue here; you can have a look at the previous post.

You can see in the below screenshot that the order of Stack<T> is not preserved with the default serializer options.

Incorrect Result
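The underlying behavior can be reproduced outside Durable Functions with plain System.Text.Json; this small sketch shows a Stack<T> reversing on a serialize/deserialize round trip:

```csharp
using System.Text.Json;

Stack<int> stack = new();
stack.Push(1);
stack.Push(2);
stack.Push(3); // 3 is now at the top

// Serialization enumerates the stack top-down: [3,2,1]
string json = JsonSerializer.Serialize(stack);

// Deserialization pushes elements back in array order,
// so the previous top (3) ends up at the bottom
Stack<int> roundTripped = JsonSerializer.Deserialize<Stack<int>>(json)!;

Console.WriteLine(string.Join(", ", roundTripped)); // 1, 2, 3 - reversed
```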
With Isolated Durable Functions, we can easily configure the JsonSerializerOptions. We need to add a custom JsonConverter that correctly serializes and deserializes a Stack<T>. There is already a JsonConverterFactoryForStackOfT implementation shared by the .NET team that we can use in our Isolated Durable Function as follows.
using DurableFunctions.Isolated.StackSerialization.Converters;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using System.Text.Json;

IHost host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        services.Configure<JsonSerializerOptions>(options =>
        {
            // Add custom converter to serialize and deserialize a Stack<T>
            options.Converters.Add(new JsonConverterFactoryForStackOfT());
        });
    })
    .Build();

host.Run();
And now once the Serializer options are configured, we can see Stack<T> is getting serialized/deserialized correctly.
Correct Result

Hope this helps.

Happy Coding.

Regards,
Jaliya

Monday, January 22, 2024

Monitoring Azure Durable Functions using Durable Functions Monitor

In this post let's have a look at a cool project that you might want to use if you are working on Azure Durable Functions. 

The project is DurableFunctionsMonitor; it provides a UI for monitoring, managing, and debugging orchestration instances in an Azure Durable Function App.
Durable Functions Monitor (DFM)
Durable Functions Monitor (DFM): Orchestration Sequence Diagram
I have been wanting to use this for several months now but only got to use it recently, and I like it.

This is basically an Azure Function App. The most important thing for Durable Functions Monitor (DFM) is that it needs to know your Durable Function App's storage details, so it can pull the details and display them. So far I have only used it with Azure Storage, but it seems to also support Netherite and Microsoft SQL Server.

DFM can run in the following ways:
  • Injected mode: Install a NuGet package and expose the UI through your existing .NET Azure Functions project
  • Standalone: Since this is an Azure Function App, you can run it in the same ways as a typical Azure Function
    • Create a separate Azure Function App and install a NuGet package
    • Docker Container using the image scaletone/durablefunctionsmonitor
    • Deploy to your Kubernetes cluster
  • VS Code Extension
We can also customize things like Authentication for UI, its endpoints, etc.

Do give it a try:
   DurableFunctionsMonitor

Happy Coding.

Regards,
Jaliya

Wednesday, January 17, 2024

ASP.NET Core 8.0: Output Caching using Redis

With ASP.NET Core 8.0, you can now use Redis as your cache provider when caching responses. The default is an in-memory cache.

In this post, let's see how you can use Redis for output caching.

First, we need to install the Microsoft.AspNetCore.OutputCaching.StackExchangeRedis NuGet package.

And then call AddStackExchangeRedisOutputCache to add Redis output caching services to the service collection.

// Add Redis output caching
builder.Services.AddStackExchangeRedisOutputCache(options =>
{
    options.Configuration =
        builder.Configuration.GetConnectionString("Redis");
});
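The "Redis" connection string is read from configuration; in appsettings.json that could look like this (the host name below is a placeholder for your Redis instance):

```json
{
  "ConnectionStrings": {
    "Redis": "localhost:6379"
  }
}
```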

And that's pretty much it.

The complete code looks like below.

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);

// Add Redis output caching services
builder.Services.AddStackExchangeRedisOutputCache(options =>
{
    options.Configuration =
        builder.Configuration.GetConnectionString("Redis");
});
// Add output caching services
builder.Services.AddOutputCache(options =>
{
    options.AddBasePolicy(builder => builder.Cache());
});

WebApplication app = builder.Build();

app.UseHttpsRedirection();

// Add output caching middleware to the request pipeline
app.UseOutputCache();

var summaries = new[]
{
    "Freezing", "Bracing", "Chilly", "Mild", "Balmy", "Scorching"
};

app
    .MapGet("/weatherforecast", () =>
    {
        WeatherForecast[] forecast = Enumerable.Range(1, 5)
            .Select(index => new WeatherForecast
            (
                DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
                Random.Shared.Next(-20, 55),
                summaries[Random.Shared.Next(summaries.Length)]
            ))
            .ToArray();
        return forecast;
    })
    .CacheOutput(); // enable output caching for this endpoint

app.Run();

internal record WeatherForecast(DateOnly Date, int TemperatureC, string? Summary)
{
    public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
}

And you can see responses are getting cached.

Redis
More read:
   Output caching middleware in ASP.NET Core

Hope this helps.

Happy Coding.

Regards,
Jaliya

Sunday, January 14, 2024

HttpClient: Dynamically Add HttpMessageHandlers

In this post, let's have a look at how we can create an HttpClient with HttpMessageHandlers that get created at run time.

I was hoping IHttpClientFactory would support that, but at least I couldn't find a way to get IHttpClientFactory to work with dynamic HttpMessageHandlers.

But luckily we have the static HttpClientFactory class, and there we have HttpClientFactory.Create(HttpMessageHandler, DelegatingHandler[]).

The basic logic is that we supply the innermost handler and the other handlers, and internally they get wired up as a pipeline, something like below.

public static HttpMessageHandler CreatePipeline(HttpMessageHandler innerHandler, IEnumerable<DelegatingHandler> handlers)
{
    HttpMessageHandler httpMessageHandler = innerHandler;
    foreach (DelegatingHandler item in handlers.Reverse())
    {
        item.InnerHandler = httpMessageHandler;
        httpMessageHandler = item;
    }

    return httpMessageHandler;
}

And then it creates an HttpClient with the chained HttpMessageHandler as follows.

new HttpClient(CreatePipeline(innerHandler, handlers));
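Putting it to use, here is a rough sketch; the logging-style header handler and the way the handler list is built are my own illustration, not from the API docs:

```csharp
using System.Net.Http;

// Build the handler list at run time, e.g. based on configuration
List<DelegatingHandler> handlers =
[
    new HeaderHandler("X-Correlation-Id", Guid.NewGuid().ToString())
];

// HttpClientFactory here is the static class shipped in the
// Microsoft.AspNet.WebApi.Client NuGet package (System.Net.Http.Formatting)
HttpClient client = HttpClientFactory.Create(new HttpClientHandler(), handlers.ToArray());

// A simple handler that stamps a header on every outgoing request
public class HeaderHandler(string name, string value) : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        request.Headers.Add(name, value);
        return base.SendAsync(request, cancellationToken);
    }
}
```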

Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, January 4, 2024

Reading Configuration using Different Options Patterns in ASP.NET Core

A few days ago I saw an interesting question where someone was asking about the preferred way to read some configuration, something like below.
{
  "SomeSetting": {
    "Config": "Value"
  }
}
Would you use, 

1. IConfiguration
app.MapGet("/config", (IConfiguration configuration) =>
{
    return configuration.GetValue<string>("SomeSetting:Config");
});
2. Options Pattern
builder.Services.Configure<MyOptions>(builder.Configuration.GetSection("SomeSetting"));

app.MapGet("/config", (IOptions<MyOptions> optionsAccessor) =>
{
    return optionsAccessor.Value.Config;
});

public record MyOptions
{
    public string Config { get; set; }
}
And that kind of prompted me to write this post.

As we all know, the more magic strings we maintain, the more trouble we're asking for. But personally, if it's something that's hardly going to change, still warrants being maintained as configuration, and doesn't use more than a couple of magic strings, I'd go for IConfiguration.

That said, there are a lot of benefits to using the Options Pattern. It gives strongly typed access and reduces the use of magic strings. We can also do Options Validation using Data Annotations, and we can even ask for early validation so that at application start we will know if something is missing or incorrectly configured, rather than finding out later when the app is running.
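For example, validation with Data Annotations plus early (startup) validation can be wired up like this; a sketch reusing the hypothetical MyOptions/SomeSetting from above:

```csharp
using System.ComponentModel.DataAnnotations;

builder.Services.AddOptions<MyOptions>()
    .Bind(builder.Configuration.GetSection("SomeSetting"))
    .ValidateDataAnnotations() // enforce [Required] etc.
    .ValidateOnStart();        // fail fast at startup instead of on first resolve

public record MyOptions
{
    [Required]
    public string Config { get; set; }
}
```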

On top of those, there are different Options Interfaces that can be really handy.

IOptions<TOptions>
  • It's a Singleton and CAN be injected into any service lifetime.
  • Supports Named Options.
  • Does not support reloadable changes. For new configuration values to be reflected, the app DOES need to be restarted.
  • Usage:
builder.Services.Configure<MyOptions>(builder.Configuration.GetSection("SomeSetting"));
..

app.MapGet("/config", (IOptions<MyOptions> optionsAccessor) =>
{
    return optionsAccessor.Value.Config;
});
IOptionsSnapshot<TOptions>
  • It's Scoped and CAN NOT be injected into a Singleton service.
  • Supports Named Options.
  • Supports reloadable changes. For new configuration values to be reflected, the app DOES NOT need to be restarted. It's recomputed on every request and cached for the lifetime of the request.
  • Usage:
builder.Services.Configure<MyOptions>(builder.Configuration.GetSection("SomeSetting"));
..

app.MapGet("/config", (IOptionsSnapshot<MyOptions> snapshotOptionsAccessor) =>
{
    return snapshotOptionsAccessor.Value.Config;
});
IOptionsMonitor<TOptions>
  • It's a Singleton and CAN be injected into any service lifetime.
  • Supports Named Options.
  • Supports reloadable changes. For new configuration values to be reflected, the app DOES NOT need to be restarted; CurrentValue always returns the latest value, recomputed when the underlying configuration changes.
  • Supports change notifications. That is, if we want, we can register a listener to be called whenever a named TOptions changes.
  • Usage:
builder.Services.Configure<MyOptions>(builder.Configuration.GetSection("SomeSetting"));
...

app.MapGet("/config", (IOptionsMonitor<MyOptions> optionsDelegate) =>
{
    return optionsDelegate.CurrentValue.Config;
});
Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, December 20, 2023

.NET 8.0: [LogProperties] Attribute

There are a lot of improvements to Logging in .NET 8.0, and in this post, let's have a look at the new LogProperties attribute.

Now we can use the LogProperties attribute in log methods attributed with the LoggerMessage attribute (introduced with .NET 6.0). It's available through the Microsoft.Extensions.Telemetry.Abstractions NuGet package.

Consider the below sample console application code.

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

using var channel = new InMemoryChannel();

try
{
// Setup Application Insights
    IServiceCollection services = new ServiceCollection();
    services.Configure<TelemetryConfiguration>(config => config.TelemetryChannel = channel);
    services.AddLogging(builder =>
    {
        builder.AddApplicationInsights(
            configureTelemetryConfiguration: (config) =>
            {
                config.ConnectionString = "<ConnectionString>";
            },
            configureApplicationInsightsLoggerOptions: (options) =>
            {
            }
        );
    });

    IServiceProvider serviceProvider = services.BuildServiceProvider();
    ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>();
    User user = new("John", "Doe");
    Console.WriteLine($"Hello {user.FirstName}!");

    // Structured logging with [LogProperties]
    logger.SaidHello(user.FirstName, user);
}
finally
{
    // Explicitly call Flush() followed by Delay, as required in console apps.
    // This ensures that even if the application terminates, telemetry is sent to the back end.

    channel.Flush();

    await Task.Delay(TimeSpan.FromMilliseconds(1000));
}

public record User(string FirstName, string LastName);

public static partial class LoggerExtensions
{
    [LoggerMessage(EventId = 1, Level = LogLevel.Information, Message = "Saying hello to {firstName}.")]
    public static partial void SaidHello(this ILogger logger, string firstName, [LogProperties] User user);
}

Here you can see the usage of [LogProperties] inside the LoggerExtensions.SaidHello method.

And this would get logged in Application Insights as follows (or, for that matter, in any configured telemetry collector):
Structured Logging with LogProperties

More read:
   High-performance logging in .NET
   Watch: Improving your application telemetry using .NET 8 and Open Telemetry | .NET Conf 2023

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, December 12, 2023

LINQ: let Clause

In this post, let's see what the let clause is in LINQ query-syntax queries. I think it's an overlooked feature in LINQ.

Let's consider the following query.
IQueryable<string> wfhEmployees = from e in context.Employees
                                  where context.Departments
                                      .Where(d => d.IsWfhAllowed)
                                      .Select(d => d.Id)
                                      .Contains(e.DepartmentId)
                                  select e.FirstName + " " + e.LastName;
Here some might find it hard to understand the where condition immediately. We can use the let clause to make it more readable.
IQueryable<string> wfhEmployees = from e in context.Employees
                                  let wfhDepartments = context.Departments
                                      .Where(d => d.IsWfhAllowed)
                                      .Select(d => d.Id)
                                      .ToList()
                                  where wfhDepartments
                                      .Contains(e.DepartmentId)
                                  let fullName = e.FirstName + " " + e.LastName
                                  select fullName;
Here I have used let to store the result of a sub-expression and use it in subsequent clauses. This is really handy when you have complex queries; we can break them into multiple sub-expressions.
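For what it's worth, a let in query syntax compiles down to a Select that carries the intermediate value forward alongside the range variable, roughly like this (a hand-written method-syntax equivalent, not actual compiler output):

```csharp
IQueryable<string> wfhEmployees = context.Employees
    .Select(e => new
    {
        Employee = e,
        // the 'let' value travels alongside the range variable
        FullName = e.FirstName + " " + e.LastName
    })
    .Where(x => context.Departments
        .Where(d => d.IsWfhAllowed)
        .Select(d => d.Id)
        .Contains(x.Employee.DepartmentId))
    .Select(x => x.FullName);
```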

Hope this helps!

Happy Coding.

Regards,
Jaliya