Thursday, December 2, 2021

.NET 6: DateOnly and TimeOnly Types

In this post, let's have a look at two new types that were added to the .NET Base Class Library in .NET 6: DateOnly and TimeOnly. Both live in the System namespace, just like the other date and time types.

Let's consider the following DateTime.

DateTime datetime = new(2021, 12, 15);
Console.WriteLine(datetime);                 // 15/12/2021 12:00:00 am

Since it's a DateTime, it has both a Date component and a Time component, which makes perfect sense. Now let's say you want to extract the Date component and the Time component separately. That's where the new DateOnly and TimeOnly types come into the picture.

DateOnly dateonly = DateOnly.FromDateTime(datetime);
Console.WriteLine(dateonly);                 // 15/12/2021
 
TimeOnly timeonly = TimeOnly.FromDateTime(datetime);
Console.WriteLine(timeonly);                 // 12:00 am

You can construct DateOnly and TimeOnly just as you construct a DateTime, and you have all the different constructor overloads to create them the way you want. For example,

DateOnly dateonly = new(2021, 12, 15);
Console.WriteLine(dateonly);                 // 15/12/2021
 
TimeOnly timeonly = new(13, 30);
Console.WriteLine(timeonly);                 // 1:30 pm
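By the way, DateOnly and TimeOnly can also be parsed from strings, following the same patterns as DateTime. Here is a quick sketch (the input strings are just made-up samples):

```csharp
using System;
using System.Globalization;

// Parse a DateOnly using an exact format
DateOnly dateonly = DateOnly.ParseExact("2021-12-15", "yyyy-MM-dd", CultureInfo.InvariantCulture);
Console.WriteLine(dateonly.DayOfWeek);       // Wednesday

// TryParse follows the familiar DateTime pattern
if (TimeOnly.TryParse("13:30", CultureInfo.InvariantCulture, DateTimeStyles.None, out TimeOnly timeonly))
{
    Console.WriteLine(timeonly.Hour);        // 13
}
```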

DateOnly and TimeOnly types have their own Properties and Methods that you can use. For example,

// Some of DateOnly Properties
Console.WriteLine(dateonly.Day);             // 15
Console.WriteLine(dateonly.DayOfWeek);       // Wednesday
Console.WriteLine(dateonly.Month);           // 12
Console.WriteLine(dateonly.Year);            // 2021
 
// Some of TimeOnly Properties
Console.WriteLine(timeonly.Hour);            // 13
Console.WriteLine(timeonly.Minute);          // 30
Console.WriteLine(timeonly.Second);          // 0
 
// Some of DateOnly Methods
Console.WriteLine(dateonly.AddDays(30));     // 14/01/2022
Console.WriteLine(dateonly.AddYears(1));     // 15/12/2022
 
// Some of TimeOnly Methods
Console.WriteLine(timeonly.AddHours(4));     // 5:30 pm
Console.WriteLine(timeonly.AddMinutes(270)); // 6:00 pm
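And when you want to go the other way, you can combine a DateOnly and a TimeOnly back into a DateTime using ToDateTime (a minimal sketch; the outputs in the comments assume the same culture formatting as above):

```csharp
using System;

DateOnly dateonly = new(2021, 12, 15);
TimeOnly timeonly = new(13, 30);

// Combine the two back into a single DateTime
DateTime datetime = dateonly.ToDateTime(timeonly);
Console.WriteLine(datetime);                 // 15/12/2021 1:30:00 pm

// Or pick just the date, with the default (midnight) time
DateTime midnight = dateonly.ToDateTime(TimeOnly.MinValue);
Console.WriteLine(midnight);                 // 15/12/2021 12:00:00 am
```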

These two new types are going to be super useful, especially DateOnly. There are a lot of scenarios where we don't care about the Time component, so it's going to be very handy.
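For example, comparing dates without worrying about the Time component, or checking a time-of-day range (even one crossing midnight), becomes straightforward. A small sketch with made-up values:

```csharp
using System;

DateOnly dueDate = new(2021, 12, 15);
DateOnly today = DateOnly.FromDateTime(DateTime.Now);

// No Time component to trip over when comparing dates
bool isOverdue = today > dueDate;
Console.WriteLine(isOverdue ? "Overdue" : "On time");

// TimeOnly also understands time-of-day ranges, including ones that cross midnight
TimeOnly start = new(22, 0);
TimeOnly end = new(6, 0);
Console.WriteLine(new TimeOnly(23, 30).IsBetween(start, end));   // True
Console.WriteLine(new TimeOnly(12, 0).IsBetween(start, end));    // False
```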

Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, November 18, 2021

Public Preview: Azure Container Apps

In this post, let's have a look at one of the newest additions to Azure, specifically to its container options: Azure Container Apps. Azure Container Apps is a fully managed serverless environment for running containers, with a particular focus on microservices. It was announced earlier this month at Microsoft Ignite, and as of today, it's in Public Preview.

Azure Container Apps is backed by Kubernetes behind the scenes and lets us, the developers, focus more on the business logic rather than on managing the infrastructure. It's completely serverless and can dynamically scale based on HTTP traffic, event-driven processing (message brokers), CPU or memory load, or any KEDA scalers.

First, let's create a Container App and see what it looks like. In the Azure portal, search for Container App.
Container App
Click on Create.
Create Container App
We need to give the Container App a name and associate it with a Container App Environment. A Container App Environment maintains a boundary around a group of container apps: Container Apps in the same environment are deployed in the same virtual network and write logs to the same Log Analytics workspace.
Create Container App Environment
We need to select a Region; currently, the only available regions are Canada Central and North Europe.
Create Container App Environment
Then we need to associate a Log Analytics workspace.

Once that is done, we can proceed with configuring the Container.
Container App Settings
I am just proceeding with the default quickstart image; if you uncheck that, you can select your own container and change the resource specs.

And that's it. You can proceed with the creation and once created, you should be able to get its URL and have your first look at Container Apps.
First Container App
If you want more control over things, you can install the Azure CLI containerapp extension.
az extension add `
  --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.0-py2.py3-none-any.whl `
  --yes 
I wanted to test with a custom docker image, and I can easily do something like below.
az containerapp update `
    --name capp-web-blazor-demo `
    --resource-group rg-container-apps-demo `
    --image <something>.azurecr.io/containers-apps-demo/web/blazor:latest `
    --registry-login-server <something>.azurecr.io `
    --registry-username <username> `
    --registry-password <password> `
    --debug
Custom Container
Hope this helps to get yourself started on exploring Azure Container Apps.


Happy Coding.

Regards,
Jaliya

Monday, November 15, 2021

Introducing Azure Functions OpenAPI Extension

In this post, let's have a look at a nice feature that's now available with Azure Functions: being able to expose HTTP trigger functions via an OpenAPI specification endpoint.

Let's go with an example. I already have an In-Process .NET Azure Function created with a single HTTP Trigger function. 

The first step to enable the OpenAPI specification is to install the following NuGet package.

dotnet add package Microsoft.Azure.WebJobs.Extensions.OpenApi -v 1.0.0
Once that is done, if we run the Function App locally, we should see something interesting.
Additional Endpoints
Suddenly we have 4 new endpoints. If I navigate to the SwaggerUI endpoint, I can see this.
OpenAPI Document
That looks good. Now we just need to decorate the function to enrich the OpenAPI specification.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.OpenApi.Core.Attributes;
using Microsoft.Azure.WebJobs.Extensions.OpenApi.Core.Enums;
using Microsoft.Extensions.Logging;
using Microsoft.OpenApi.Models;
using System.Net;
 
namespace FunctionApp1;
 
public static class Function1
{
    [FunctionName("Function1")]
    [OpenApiOperation(operationId: "Run", tags: new[] { "GETTERS" })]
    [OpenApiSecurity("function_key", SecuritySchemeType.ApiKey, Name = "code", In = OpenApiSecurityLocationType.Query)]
    [OpenApiParameter(name: "name", 
        In = ParameterLocation.Query, 
        Required = true, 
        Type = typeof(string), 
        Description = "The **Name** parameter")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, 
        contentType: "text/plain", 
        bodyType: typeof(string), 
        Description = "The OK response")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        string name = req.Query["name"];
 
        string responseMessage = $"Hello, {name}. This HTTP triggered function executed successfully.";
 
        return new OkObjectResult(responseMessage);
    }
}
And now if I run the function app, I should be seeing my HTTP Trigger function.
OpenAPI Document
That looks nice. I can test my function from the Swagger UI itself.
OpenAPI Document
That's quite nice, isn't it? With Visual Studio 2022, you can get yourself a head start by using this template.
.NET 6 (In-Process) - HTTP trigger with OpenAPI
That was enabling OpenAPI specification for In-Process .NET Functions. 

If you want to enable the OpenAPI specification for Isolated (Out-of-process) .NET Azure Functions, you need to install the following NuGet package.
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.OpenApi -v 1.0.0
Once that's done, you need to update the Program.cs as follows.

Program.cs
using Microsoft.Azure.Functions.Worker.Extensions.OpenApi.Extensions;
using Microsoft.Extensions.Hosting;
 
namespace FunctionApp3;
 
public class Program
{
    public static void Main()
    {
        IHost host = new HostBuilder()
            .ConfigureFunctionsWorkerDefaults(worker => worker.UseNewtonsoftJson())
            .ConfigureOpenApi()
            .Build();
 
        host.Run();
    }
}
And then decorate the Functions the same way as in In-Process model.
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Azure.WebJobs.Extensions.OpenApi.Core.Attributes;
using Microsoft.Azure.WebJobs.Extensions.OpenApi.Core.Enums;
using Microsoft.Extensions.Logging;
using Microsoft.OpenApi.Models;
using System.Collections.Specialized;
using System.Net;
using System.Web;
 
namespace FunctionApp3;
 
public class Function1
{
    private readonly ILogger _logger;
 
    public Function1(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<Function1>();
    }
 
    [Function("Function1-Isolated")]
    [OpenApiOperation(operationId: "Run", tags: new[] { "GETTERS" })]
    [OpenApiSecurity("function_key", SecuritySchemeType.ApiKey, Name = "code", In = OpenApiSecurityLocationType.Query)]
    [OpenApiParameter(name: "name",
        In = ParameterLocation.Query,
        Required = true,
        Type = typeof(string),
        Description = "The **Name** parameter")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK,
        contentType: "text/plain",
        bodyType: typeof(string),
        Description = "The OK response")]
    public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
    {
        NameValueCollection query = HttpUtility.ParseQueryString(req.Url.Query);
        string name = query["name"];
 
        HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
        response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
 
        string responseMessage = $"Hello, {name}. This HTTP triggered function executed successfully.";
 
        response.WriteString(responseMessage);
 
        return response;
    }
}
That's about it. Now if we run this, it should work the same way as in the In-Process model.
OpenAPI Document
Hope this helps.

Happy Coding.

Regards,
Jaliya

Sunday, November 14, 2021

.NET 6 and Azure Functions

Last week during .NET Conf 2021, a thrilling announcement was made related to Azure Functions: the General Availability (GA) of Azure Functions Runtime 4.0. With that, we can now target .NET 6 for .NET Azure Functions written following both the In-Process and Isolated (Out-of-process) models.

If you are wondering what the In-Process and Isolated (Out-of-process) models in .NET Azure Functions are, let me give you a quick overview.

Isolated (Out-of-process) is a relatively new approach for writing .NET Azure Functions, but it has long been the way Azure Functions are written in other programming languages (Python, JavaScript, Java, etc.). It got introduced back in 2020 with .NET 5. Up to that time, we could only create .NET Functions following the In-Process model. With the In-Process model, our function apps were class libraries and had a tight coupling to the Functions host. Because of that, .NET functions were required to be on the same version of .NET as the Azure Functions Runtime, and that's why we couldn't target .NET 5 for In-Process Azure Functions before.

To get out of this constraint, Microsoft introduced a new programming model, Isolated (Out-of-process), where function apps are console applications (not class libraries) running in their own host (hence the name). So now we can target .NET versions for our functions that aren't supported by the Functions runtime. There are feature and functionality differences between the In-Process and Isolated (Out-of-process) models. For example, Durable Functions are not supported with the Isolated (Out-of-process) model (as of today; support is expected in 2022). But it follows the same programming model as ASP.NET Core, so we have more control over things such as being able to write our own middleware, which isn't possible with the In-Process model.

So now we know what the In-Process and Isolated (Out-of-process) models in .NET Azure Functions are. What is the roadmap for .NET Azure Functions going to look like?

Azure Functions .NET Roadmap
As shown above, the Isolated (Out-of-process) model is going to be the way forward. As of today, Azure Functions Runtime 4.0 is the recommended runtime version for functions in all languages. If you have .NET Azure Functions written following the In-Process model that use features which aren't supported in the Isolated (Out-of-process) model (Durable Functions, etc.), you can upgrade your functions to Azure Functions Runtime 4.0 and target .NET 6 while still following the same programming model. And if all the features you use are supported in the Isolated (Out-of-process) model, you should move to the Isolated (Out-of-process) programming model.

With Visual Studio 2022, you now have all these nice options to create your .NET Function App; do give it a try.

.NET Function Apps
More read,
   Announcement: Azure Functions 4.0 and .NET 6 support are now generally available
   Guide for running C# Azure Functions in an isolated process
   Azure Functions runtime versions overview

Happy Coding.

Regards,
Jaliya

Tuesday, November 9, 2021

Visual Studio 2022 and CI/CD Using GitHub Actions Workflows

Finally, Visual Studio 2022 (the first-ever 64-bit version of Visual Studio) has been released with great features such as Hot Reload for .NET and C++ (Web, WPF, Windows Forms, you name it), cross-platform testing, etc.

In this post, let's see the seamless integration Visual Studio 2022 provides with GitHub Actions.

I have a repo hosted on GitHub, and I wanted to enable CI/CD to deploy to an Azure App Service. I can do that with very few steps, even without leaving Visual Studio. Let's see how.

First I am right-clicking on the project and selecting Publish.
Publish
Then it's just a matter of following the steps in the wizard. I am including all the steps, so we have something close to a complete guide.

I am deploying to Azure App Service. I am selecting Azure and Clicking on Next.
Azure
I already have a Linux Azure App Service which I am going to use. So I am selecting Azure App Service (Linux) and clicking on Next.
Azure App Service (Linux)
I am selecting my target App Service and clicking on Next.
Select the App Service
I am not going to be using any API Management, so I am skipping this step and going to Next.
API Management - Skipping
Now comes the interesting step. Visual Studio identifies that my repo is on GitHub and gives me an option to add GitHub Actions.
CI/CD using GitHub Actions workflows (generated yml file)
So I am going ahead with selecting CI/CD using GitHub Actions workflows (generated yml file) and that's mostly it. 
Workflow File Created and Ready to Push
Visual Studio has already created the workflow yml file, with the publishing profile saved as an Actions secret. Now I just need to commit and push the changes to the remote.
CI/CD
And that is super easy. 

I have this image from back in 2012, it's so true.
Do more of what you love
If you haven't started using Visual Studio 2022, do it now.

You can find the sample code here,

Happy Coding.

Regards,
Jaliya

Tuesday, November 2, 2021

Azure Service Bus Topic Filters

In this post, let's have a look at a nice feature that comes with Azure Service Bus Topics/Subscriptions: Topic Filters.

Usually, when we publish a message under a Topic, all its subscribers receive a copy of the message. But sometimes, we want to route specific messages only to a specific subscription, based on the message. For example, let's say we have a Topic and a Subscription, and the Subscription is session-enabled. Once sessions are enabled on a queue/subscription, throughput can decrease. And for some messages, we might not really need Sessions, because we don't care about the order of processing. In that case, it makes sense to use a different subscription without sessions and route the messages that don't require sessions there. Topic Filters can be quite handy here.

Now let's see the things in action.

First, let's go ahead and create a Topic.
using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;
 
var connectionString = "<ServiceBus_ConnectionString>";
 
var TopicName = "sbt-test-topic";
var SimpleSubscriptionName = "sbs-simple-subscription";
var SessionEnabledSubscriptionName = "sbs-session-enabled-subscription";
 
var serviceBusClient = new ServiceBusClient(connectionString);
var serviceBusAdministrationClient = new ServiceBusAdministrationClient(connectionString);
 
if (!await serviceBusAdministrationClient.TopicExistsAsync(TopicName))
{
    await serviceBusAdministrationClient.CreateTopicAsync(TopicName);
}
And now I am going to create two subscriptions, one with Sessions enabled and one without.
if (!await serviceBusAdministrationClient.SubscriptionExistsAsync(TopicName, SimpleSubscriptionName))
{
    await serviceBusAdministrationClient.CreateSubscriptionAsync(
        new CreateSubscriptionOptions(TopicName, SimpleSubscriptionName),
        new CreateRuleOptions
        {
            Name = "RequiresSessionRule",
            Filter = new SqlRuleFilter("RequiresSession IS NULL OR RequiresSession = false")
        });
}
 
if (!await serviceBusAdministrationClient.SubscriptionExistsAsync(TopicName, SessionEnabledSubscriptionName))
{
    await serviceBusAdministrationClient.CreateSubscriptionAsync(
        new CreateSubscriptionOptions(TopicName, SessionEnabledSubscriptionName)
        {
            RequiresSession = true
        },
        new CreateRuleOptions
        {
            Name = "RequiresSessionRule",
            Filter = new SqlRuleFilter("RequiresSession = true")
        });
}
The important part is the CreateRuleOptions parameter. Here, for the subscriptions, I have added Filters of type SqlRuleFilter. There are another two types of filters, Boolean Filters and Correlation Filters (we aren't going to use them in this post). Now you might wonder where this RequiresSession property is coming from. One of the most important things to note with Topic Filters is that "Filters can evaluate only message properties. Filters can't evaluate the message body". So we need to add the RequiresSession property to the ServiceBusMessage that we are publishing (not to the body).
List<ServiceBusMessage> CreateMessages()
{
    return new()
    {
        CreateMessage(new("Message 1", false)),
        CreateMessage(new("Message 2", false)),
        CreateMessage(new("Message 3")),
        CreateMessage(new("Message 4")),
        CreateMessage(new("Session Required Message 1-1", true, "Session1")),
        CreateMessage(new("Session Required Message 1-2", true, "Session1")),
    };
}
 
ServiceBusMessage CreateMessage(MyMessage message)
{
    ServiceBusMessage serviceBusMessage = new()
    {
        Subject = message.ToString(),
        SessionId = message.SessionId
    };
    serviceBusMessage.ApplicationProperties.Add("Subject", message.Subject);
 
    if (message.RequiresSession.HasValue)
    {
        serviceBusMessage.ApplicationProperties.Add("RequiresSession", message.RequiresSession);
    }
 
    return serviceBusMessage;
}
 
record MyMessage(string Subject, bool? RequiresSession = null, string SessionId = "");
So here, I have created some test messages. Note that I have added the RequiresSession property into the ServiceBusMessages' ApplicationProperties.
 
And now, let's publish these messages out.
List<ServiceBusMessage> messages = CreateMessages();
ServiceBusSender serviceBusSender = serviceBusClient.CreateSender(TopicName);
await serviceBusSender.SendMessagesAsync(messages);
And I can see my messages got routed to the correct subscription.
Subscriptions
We can create/edit filters for subscriptions using Azure Portal as well.
Add Filter using Azure Portal
And if you are using a tool like Service Bus Explorer, you can even query the messages using a Filter.
Filter
Filtered Message

Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, October 21, 2021

Sending Per User Notifications using Azure Functions SignalR Service Output Binding

In this post, let's have a look at how to send per-user notifications using the Azure Functions SignalR Service output binding.

I already have an Azure SignalR Service and an Azure Function set up. My client app is an Angular application.

In my Azure Function, I have the Negotiate function as follows.

public static class NegotiateFunction
{
    [FunctionName(FunctionNames.Negotiate)]
    public static SignalRConnectionInfo Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest request,
        [SignalRConnectionInfo(
            HubName = "%SignalR_HubName%",
            UserId = "{headers.x-application-user-id}",
            ConnectionStringSetting = "SignalR_ConnectionString")] SignalRConnectionInfo connectionInfo)
    {
        return connectionInfo;
    }
}

Note I am setting the UserId to a custom header (x-application-user-id) I am expecting the client app to send.

Alternatively, you can use imperative binding instead of declarative binding (which is what's done above). This is helpful when you have clients that aren't sending the header you are using for the UserId. With declarative binding, if the header x-application-user-id isn't set, the Negotiate function will throw an error.

Imperative binding can be done as follows.

public class NegotiateFunction
{
    private readonly IConfiguration _configuration;
 
    public NegotiateFunction(IConfiguration configuration)
    {
        _configuration = configuration;
    }
 
    [FunctionName(FunctionNames.Negotiate)]
    public async Task<SignalRConnectionInfo> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest request, 
        IBinder binder)
    {
        string userId = request.Headers["x-application-user-id"];
 
        SignalRConnectionInfoAttribute attribute = new SignalRConnectionInfoAttribute
        {
            HubName = _configuration.GetValue<string>("SignalR_HubName"),
            UserId = userId,
            ConnectionStringSetting = "SignalR_ConnectionString"
        };
        SignalRConnectionInfo connectionInfo = await binder.BindAsync<SignalRConnectionInfo>(attribute);
 
        return connectionInfo;
    }
}

And on my client app, I am building the hubConnection as follows.

this.hubConnection = new signalR.HubConnectionBuilder()
    .withUrl("<azure-functions-base-url>", {
        headers: {
            "x-functions-key": "<function key>",
            "x-application-user-id": "<user-id>"
        }
    })
    .withAutomaticReconnect()
    .configureLogging(signalR.LogLevel.Information)
    .build();

Once that is done and the connection is started, we should be seeing the UserId in the access token when the client negotiates with Azure SignalR Service (after negotiating with Function App and receiving the connection info).

Access Token

Now we just need to set the UserId when adding a SignalRMessage via the output binding in our Function App, something like this.

public static class SendApplicationUserNotificationFunction
{
    [FunctionName(FunctionNames.SendApplicationUserNotification)]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest request,
        [SignalR(
            HubName = "%SignalR_HubName%",
            ConnectionStringSetting = "SignalR_ConnectionString")]IAsyncCollector<SignalRMessage> signalRMessageCollector)
    {
        var requestBody = await new StreamReader(request.Body).ReadToEndAsync();
        ApplicationUserNotification applicationUserNotification = JsonConvert.DeserializeObject<ApplicationUserNotification>(requestBody);
 
        await signalRMessageCollector.AddAsync(new SignalRMessage
        {
            Target = SignalRTargets.AddApplicationUserNotification,
            Arguments = new[] { applicationUserNotification },
            UserId = applicationUserNotification.ApplicationUserId,
        });
 
        return new OkResult();
    }
}

So I have two users logged in to my client application, UserId: 1 and UserId: 110. And I am sending a message to UserId: 1.

I can see UserId: 1 received the message.

UserId: 1 received the message
But the UserId: 110 hasn't.

UserId: 110 didn't receive the message
Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, October 20, 2021

EF Core 6.0: Introducing Pre-convention Model Configuration

In this post let's have a look at another feature EF Core 6.0 brings to the table.

Consider that in your context you have a specific type, and you want to maintain the same configuration for properties of that type across all your entities.

Let's go with a small example. Say I have multiple decimal properties in my entities, and I want all of them to be decimal(18,4) instead of the default decimal(18,2).
public class OrderItem
{
    public int Id { get; set; }
 
    public int OrderId { get; set; }
 
    public decimal Amount { get; set; }
 
    public decimal Discount { get; set; }
}
Here I have two decimal properties in the same entity. If I want to configure them to be decimal(18,4), I need to update the configuration for each property.

EF Core 6.0 exposes a new method on DbContext, ConfigureConventions(ModelConfigurationBuilder configurationBuilder), which we can now override.
protected override void ConfigureConventions(ModelConfigurationBuilder configurationBuilder)
{
    configurationBuilder
       .Properties<decimal>()
       .HavePrecision(18, 4);
}
And if we create a migration for this, we can see EF is using decimal(18,4) for all my decimal properties without me having to configure each property individually.
ConfigureConventions
And of course, if you want, you can still override this behavior per property.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // other configuration
 
    modelBuilder.Entity<OrderItem>(builder =>
    {
        builder
            .Property(x => x.Discount)
            .HasPrecision(18, 2);
    });
}
And if you add in a migration now,
Override per property

You can see that the Scale has changed only for the Discount property.

There are other great use cases; you can read more at:
   Pre-convention model configuration

Happy Coding.

Regards,
Jaliya

Sunday, October 17, 2021

EF Core 6.0: Introducing Support for SQL Server Temporal Tables

In this post, let's have a look at one of the nicest features coming with EF Core 6.0. (Actually, I shouldn't say it's coming; it's already available. As of the day I am writing this post, we have EF Core 6.0 RC2; support for SQL Server Temporal Tables was initially introduced with EF Core 6.0 RC1.) It's one of the highly requested features in EF Core 6.0.

Consider I have the following Entity.
public class Category
{
    public int Id { get; set; }

    public string Name { get; set; }

    public string Description { get; set; }
}
We can enable Temporal Tables through the ModelBuilder or IEntityTypeConfiguration<TEntity> (these are just things you already know and that have been there forever). For simplicity, I will just use the ModelBuilder.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Category>(builder =>
    {
        builder
            .ToTable(x => x.IsTemporal());
    });
}
If we add in a migration for this, we will get something like this.
Create Table with Temporal Table Support
Here I am just creating a Category table with Temporal Table support in the first place. Often, we will already have entities for which we need to enable Temporal Table support. That's possible as well; just update the table with IsTemporal().
Alter Table to support Temporal Tables
You can see the period columns and the history table have default names. If you want, you can override those through the TemporalTableBuilder as follows.
builder
    .ToTable(tableBuilder => tableBuilder.IsTemporal(temporalTableBuilder =>
        {
            temporalTableBuilder.HasPeriodStart("PeriodFrom");
            temporalTableBuilder.HasPeriodEnd("PeriodTo");
            temporalTableBuilder.UseHistoryTable($"{nameof(Category)}HistoricalData");
        })
    );
For simplicity, I am going to use the defaults.

And once the migration is applied, you can see your table now has Temporal Table support.

EF Core 6.0 also provides support for querying historical data, which I personally find very helpful.

Consider the following.
Category category = new() { Name = "Category A", Description = "Category A Description" };
await context.Categories.AddAsync(category);
await context.SaveChangesAsync();

await Task.Delay(5000);

category.Description = "Category A Description Updated";
context.Categories.Update(category);
await context.SaveChangesAsync();
Here I am just creating a new Category, and then after 5 seconds, I am updating the Category's Description.

Now I can query the history data as follows.
var categoryHistories = await context.Categories
    .TemporalAll()
    .Where(x => x.Id == category.Id)
    .OrderBy(x => EF.Property<DateTime>(x, "PeriodStart"))
    .Select(x => new
    {
        Category = x,
        PeriodStart = EF.Property<DateTime>(x, "PeriodStart"),
        PeriodEnd = EF.Property<DateTime>(x, "PeriodEnd")
    })
    .ToListAsync();

foreach (var item in categoryHistories)
{
    Console.WriteLine($"Description: {item.Category.Description}, " +
        $"Start: {item.PeriodStart} - End: {item.PeriodEnd}");
}
This would give an output something like the following.
Querying Historical Data
The important part here is the TemporalAll() method. We have a couple of options there.
  • TemporalAll
    • Returns all rows in the historical data.
  • TemporalAsOf
    • Returns a single row that was active (current) at the given UTC time.
  • TemporalFromTo
    • Returns all rows that were active between two given UTC times.
  • TemporalBetween
    • The same as TemporalFromTo, except that rows which became active on the upper boundary are also included.
  • TemporalContainedIn
    • Returns all rows that started being active and ended being active between two given UTC times.
Hope this helps.

Happy Coding.

Regards,
Jaliya