Monday, September 12, 2022

C# 11.0: Required Members

With just a couple of months to go until the .NET 7 final release, as you might already know, some of the C# 11.0 features are already available.

In this post, let's go through a new feature that is ready to use from Visual Studio 2022 version 17.3: the ability to specify required members in a class, struct, or record (including record struct), using the brand-new modifier: required.

Let's go by an example. I will be using a class for simplicity. Consider the following Person class.
public class Person
{
    public string FirstName { get; set; }

    public string LastName { get; set; }

    public DateOnly? DateOfBirth { get; set; }
}
Generally, it's considered a bad practice to let someone create an object without satisfying the minimum requirements of that particular object. In our scenario, for example, we shouldn't let someone create a Person without specifying FirstName and LastName; those are mandatory for every person.
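For example, with the class above nothing stops a caller from constructing a Person in an invalid state (a quick sketch):

```csharp
// Compiles just fine, but the object is in an invalid state:
// FirstName and LastName were never set.
Person person = new();
```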

So what we usually do here is introduce a constructor and specify the parameters that are needed to create a valid object.
public class Person
{
    public string FirstName { get; set; }

    public string LastName { get; set; }

    public DateOnly? DateOfBirth { get; set; }

    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }
}
Now, this is looking good. But what if we remove the constructor? The caller would have no idea which basic properties need to be set when creating a Person object. So we need some other way to declare which properties are required.

So C# 11.0 introduces the brand-new required modifier, which can be used as follows.
public class Person
{
    public required string FirstName { get; set; }

    public required string LastName { get; set; }

    public DateOnly? DateOfBirth { get; set; }
}
Now the caller can create a Person object like below using object initialization.
Person person = new()
{
    FirstName = "John",
    LastName = "Doe"
};
And this is nice, isn't it? Now we don't even need to declare a constructor to accept the required parameters. I personally prefer object initialization over a constructor, because if you have a lot of required properties, your constructor is going to have a lengthy parameter list.

And if you attempt to create a Person without setting the required properties, the compiler will emit an error.
Person person = new();
// Required member 'Person.FirstName' must be set in the object initializer or attribute constructor.
// Required member 'Person.LastName' must be set in the object initializer or attribute constructor.
Now say you have some existing code where you are using a constructor to set the required properties, and you have updated those properties with the required modifier.
public class Person
{
    public required string FirstName { get; set; }

    public required string LastName { get; set; }

    public DateOnly? DateOfBirth { get; set; }

    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }
}
And the existing callers would be creating an object like below.
Person person = new("John", "Doe");
// Above will throw a compile error
Now here you are going to get a compile error, because the compiler doesn't know that your constructor sets values for the required properties. In this case, you need to decorate the constructor with the [SetsRequiredMembers] attribute like below.
[SetsRequiredMembers]
public Person(string firstName, string lastName)
{
    FirstName = firstName;
    LastName = lastName;
}
Note: The [SetsRequiredMembers] attribute needs to be used with care.

Let's say for some reason, you later decide DateOfBirth is going to be a required property as well. Basically something like below.
Person person = new("John", "Doe");
// Person is getting created using the constructor, but required DateOfBirth isn't being set
// No compile errors here because [SetsRequiredMembers] is masking the error
 
public class Person
{
    public required string FirstName { get; set; }

    public required string LastName { get; set; }

    public required DateOnly DateOfBirth { get; set; }

    [SetsRequiredMembers]
    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }
}
This code compiles just fine, but logically it isn't correct. The [SetsRequiredMembers] attribute is masking the expected error: that DateOfBirth isn't set. So that's something to keep in mind.
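If you do want to keep the [SetsRequiredMembers] attribute, one option (a sketch, assuming you are free to change the constructor signature) is to make the attributed constructor set every required member, so the promise the attribute makes to the compiler actually holds:

```csharp
// [SetsRequiredMembers] tells the compiler that ALL required members are set
// by this constructor, so the constructor should actually set all of them.
[SetsRequiredMembers]
public Person(string firstName, string lastName, DateOnly dateOfBirth)
{
    FirstName = firstName;
    LastName = lastName;
    DateOfBirth = dateOfBirth;
}
```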

Most of the time, required properties shouldn't be allowed to be mutated later, so we can write more complete code, something like below. Here for the FirstName and LastName properties, I have used the init keyword (introduced with C# 9.0) to specify that the required properties can only be set upon object construction.
Person person = new()
{
    FirstName = "John",
    LastName = "Doe"
};
 
public class Person
{
    public required string FirstName { get; init; }

    public required string LastName { get; init; }

    public DateOnly? DateOfBirth { get; set; }
}
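With init in place, any attempt to mutate these properties after construction becomes a compile-time error. For example (a small sketch):

```csharp
Person person = new()
{
    FirstName = "John",
    LastName = "Doe"
};

// The following line would not compile:
// person.FirstName = "Jane";
// Error CS8852: Init-only property or indexer 'Person.FirstName' can only be
// assigned in an object initializer, or on 'this' or 'base' in an instance
// constructor or an 'init' accessor.
```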
Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, September 6, 2022

Azure Logic Apps: Using HTTP Webhooks

In this post, let's see how we can use an HTTP Webhook in an Azure Logic App to wait for an external event. Note: I am using the Azure Logic Apps Standard model for the purpose of this post.

Imagine we have the following requirement.

  • There's a parent workflow, and it publishes a Webhook that consumers can call
    • Once the Webhook gets called by a consumer, the parent workflow needs to resume its execution
  • There is a consumer; once the consumer receives the Webhook information, they can invoke the Webhook at their own discretion

Let's start by creating the consumer side of things first. Here, in this case, the consumer is another workflow that has a Trigger of type "Request".
Consumer Workflow: When a HTTP request is received
So here my Consumer Workflow accepts a payload that matches the above JSON Schema. The webhookUrl property expects the URL that this consumer will call. Then I have a dummy Delay, and finally an HTTP action to make the call to the Webhook.
Consumer Workflow: HTTP Action to call the Webhook
Code
{
  "inputs": {
    "method": "POST",
    "uri": "@{triggerBody()?['webhookUrl']}",
    "body": {
      "messageFromWebhookConsumer": "Name formatting is completed",
      "fullName": "@concat(triggerBody()?['reqObject']?['firstName'], ' ', triggerBody()?['reqObject']?['lastName'])"
    }
  }
}
Here when the Consumer calls the Webhook, I am doing some data transformation and sending that as the payload for the Webhook.

Now we are all good with the consumer workflow.

Next, let's create the parent workflow.
Parent Workflow: When a HTTP request is received
Here in the Parent Workflow, I again have an HTTP trigger. It accepts a payload of the above schema. The idea is that we pass this data to the Webhook Consumer and wait for our Webhook to be called once it has done its job.

Now let's add the HTTP Webhook action.
Parent Workflow: HTTP Webhook
Code
{
  "inputs": {
    "subscribe": {
      "method": "POST",
      "uri": "https://logic-ju-test-001.azurewebsites.net:443/api/Webhook-Consumer/triggers/manual/invoke?api-version=2022-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=4rklm9-OkOiRHtj2A_cbW-Lifq6jd",
      "body": {
        "reqObject": "@triggerBody()",
        "webhookUrl": "@listCallbackUrl()"
      }
    },
    "unsubscribe": {}
  }
}
Here for the Subscribe - URI, I have specified the URI of the Consumer Workflow Trigger, and I am passing through the request object that was received by the current workflow along with the Webhook URL (callback URL), so the Consumer knows what to call.

To make things simpler, I am not using any Unsubscribe settings.

Now finally, I am introducing a dummy variable assignment to capture the request received by the Webhook.
Parent Workflow: Initialize variable
Code
{
  "inputs": {
    "variables": [
      {
        "name": "reqReceivedToWebhook",
        "type": "object",
        "value": "@body('HTTP_Webhook')"
      }
    ]
  }
}

And that's about it.

Now we can call the Parent Workflow via an HTTP Request and see how things are integrating together.
Parent Workflow: Run with payload
Now I can see the Parent Workflow runs for some time, because it has to call the Consumer and wait for it to call the exposed Webhook.
Parent Workflow: Running
And when the Webhook is called, the Parent Workflow will complete its execution.
Parent Workflow: Succeeded
And now when we check the Parent Workflow run, we can see the final output.
Parent Workflow: Run Details
Hope this helps.

Happy Coding.

Regards,
Jaliya

Saturday, September 3, 2022

Azure PowerShell: Utility Script to Add Current IP to Given Set of Azure SQL Servers

I recently had my public IP address refreshed, and the moment I saw that, I knew the pain that was coming.

I have a couple of Azure SQL Databases sitting on different Azure directories/tenants that I connect to regularly from my local SSMS/Azure Data Studio instances. Now I am going to have to remove the existing firewall rule for my previous public IP address and add a new firewall rule for the new IP address on each of these Azure SQL Servers. That's something I was very much frustrated with.

I finally decided to write a script to make things easier for me. I pushed it to a repo, and all the information is listed there (the link is at the bottom of the post).

And I no longer have to worry about my public IP address getting refreshed (in terms of accessing Azure SQL Servers, of course).
Execution
Script and instructions:

Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, September 1, 2022

Introducing Built-In Container Support for the .NET SDK

In this post, let's see how we can create a containerized version of our application by doing just dotnet publish. Note: In order to do this, you’ll need .NET SDK 7.0.100 preview 7 or greater installed, and currently this is only supported for web projects (Microsoft.NET.Sdk.Web).

Let's start with a simple example. Here I am creating a Blazor Server App using the default template.

dotnet new blazorserver -n hello-blazor-container
cd .\hello-blazor-container\

Next, we need to install this new package: Microsoft.NET.Build.Containers. This is the package that lets us build a container image from the project.

# add a reference to package that creates the container
dotnet add package Microsoft.NET.Build.Containers

And now this is what our .csproj looks like.

<Project Sdk="Microsoft.NET.Sdk.Web">
 
  <PropertyGroup>
    <TargetFramework>net7.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <RootNamespace>hello_blazor_container</RootNamespace>
  </PropertyGroup>
 
 <ItemGroup>
    <PackageReference Include="Microsoft.NET.Build.Containers" Version="0.1.8" />
  </ItemGroup>
  
</Project>

And that's mostly it. All we need to do now is run dotnet publish, setting PublishProfile=DefaultContainer.

# publish
dotnet publish --os linux --arch x64 -p:PublishProfile=DefaultContainer
dotnet publish
Alternatively, you can update the .csproj file as follows, and then you can just run dotnet publish without supplying any arguments.
<Project Sdk="Microsoft.NET.Sdk.Web">
 
  <PropertyGroup>
    <TargetFramework>net7.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <RootNamespace>hello_blazor_container</RootNamespace>
 
    <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
    <PublishProfile>DefaultContainer</PublishProfile>
  </PropertyGroup>
 
 <ItemGroup>
    <PackageReference Include="Microsoft.NET.Build.Containers" Version="0.1.8" />
  </ItemGroup>
  
</Project>
Now if we examine our local docker images, we can see the new image has been created.
docker images
Now let's create a container and run it.
# run the application as a container
docker run -it --rm -p 5000:80 hello-blazor-container:1.0.0
docker run
Now if I open up http://localhost:5000, I can see the application is up and running.
Running Container
You can customize the generated container through MSBuild properties. 
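For example (a sketch; these property names come from the early previews of Microsoft.NET.Build.Containers and may change between releases), you can override the generated image name and tag in the .csproj:

```xml
<PropertyGroup>
  <!-- Assumed container-customization properties from the preview package -->
  <ContainerImageName>hello-blazor-container</ContainerImageName>
  <ContainerImageTag>1.2.3</ContainerImageTag>
</PropertyGroup>
```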

To learn more about this feature, read:

Happy Coding.

Regards,
Jaliya

Wednesday, August 24, 2022

Connecting to Azure Cosmos DB API for MongoDB using Azure Data Studio

Did you know that we can connect to an Azure Cosmos DB API for MongoDB account using Azure Data Studio? There is an extension for Azure Data Studio: Azure Cosmos DB API for MongoDB. It's still in its preview stage, but it works great.

This extension in Azure Data Studio includes features such as,

  • Connection manager & query editor
  • Provisioning and scaling containers
  • Integrated terminal

Let's have a look at how easy it is to use this extension.

First, you need to download and install Azure Data Studio (if you haven't already). Then search for mongo under extensions. 
Install Azure Cosmos DB API for MongoDB extension
Select Azure Cosmos DB API for MongoDB, install it, and then reload/restart Azure Data Studio. Once that's done, click on New Connection and select Mongo account as the Connection Type.
Connect to MongoDB account
Now copy and paste the Connection string of your Azure Cosmos DB API for MongoDB account and click on Connect. Once you are connected, you will see all your databases.
Connected to Azure Cosmos DB API for MongoDB account
You can right-click on the connected account and select Manage to see more options. 
Manage Mongo account
Here I am opening an existing database and going to Open Mongo Shell to run a simple query on that database.
Executing queries on Mongo Shell
Do try it out, you are going to love it.

Happy Coding.

Regards,
Jaliya

Tuesday, August 16, 2022

Transitive Dependencies in Visual Studio 2022

Visual Studio 2022 17.3 and the latest preview of Visual Studio 2022, which is Visual Studio 2022 Version 17.4 Preview 1, were released last week.

I am seeing this fantastic feature in the latest preview: I can now see all the Transitive Dependencies of the installed packages in the NuGet Package Manager.
Transitive Dependencies in Visual Studio
This is very helpful in understanding the dependency tree of the packages. When I hover over a Transitive Dependency, I can see which Top-level package brought it in.
Transitive Dependency in Visual Studio
And say one of the Transitive Dependencies has a vulnerability; if an updated version of it is available, we can install it and promote it to a Top-level dependency.

According to the NuGet team, this feature was released as an experimental feature in Visual Studio 2022 17.3, and if you’re part of the Visual Studio experiment group, you’ll be able to see your transitive dependencies whenever you’re managing your NuGet packages in Visual Studio. I am surprised I hadn't seen this in a previous preview build; I only started seeing it in Visual Studio 2022 Version 17.4 Preview 1. I can't check whether it's there in Visual Studio 2022 17.3, as I am always on the latest preview. So maybe this feature became broadly available in Visual Studio 2022 Version 17.4 Preview 1, or I am part of the Visual Studio experiment group 😍.

Anyway, try to be on the latest Visual Studio Preview, you are going to love it.

Happy Coding.

Regards,
Jaliya

Monday, August 15, 2022

EF Core: OnDelete DeleteBehavior ClientSetNull Vs. SetNull in an Optional Relationship

In this post, let's have a look at the difference between DeleteBehavior.ClientSetNull and DeleteBehavior.SetNull when configuring the OnDelete action in an optional EF Core relationship. I find this important because it can sometimes confuse people, including myself.

Note: this post considers Microsoft SQL Server as the database provider.

So what's an Optional Relationship? Let's consider the following example.

public class Category
{
    public int Id { get; set; }

    public string Name { get; set; }

    public List<ToDo> ToDos { get; set; }
}

public class ToDo
{
    public int Id { get; set; }

    public string Title { get; set; }

    public int? CategoryId { get; set; }

    public Category Category { get; set; }
}

Here I have a simple ToDo context: I have ToDos, and a ToDo can optionally have a Category. And for a given Category, I can find all the ToDos under that particular Category. Now our scenario is: what needs to happen if someone deletes a Category? Here the Category entity is the Principal/Parent and the ToDo entity is the Dependent/Child.

We can configure the OnDelete behavior on Principal/Parent as follows.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Category>()
        .HasMany(x => x.ToDos)
        .WithOne(x => x.Category)
        .HasForeignKey(x => x.CategoryId)
        .OnDelete(/* DeleteBehavior */);
}

Here, since the relationship is optional, we have the option of not deleting the Dependents/Children when deleting a Principal/Parent or when severing the relationship from the Principal/Parent (we can sever the relationship by clearing out all the Dependents/Children from the Principal/Parent or by setting the Principal/Parent navigation property to null in each Dependent/Child).

The Default OnDelete behavior for Optional Relationships is DeleteBehavior.ClientSetNull.

First, let's see how the Dependent/Child table is created when we configure the OnDelete behavior with DeleteBehavior.ClientSetNull versus DeleteBehavior.SetNull.

DeleteBehavior.ClientSetNull

CREATE TABLE [ToDos] (
    [Id] int NOT NULL IDENTITY,
    [Title] nvarchar(max) NULL,
    [CategoryId] int NULL,
    CONSTRAINT [PK_ToDos] PRIMARY KEY ([Id]),
    CONSTRAINT [FK_ToDos_Categories_CategoryId] FOREIGN KEY ([CategoryId]) REFERENCES [Categories] ([Id])
);

DeleteBehavior.SetNull

CREATE TABLE [ToDos] (
    [Id] int NOT NULL IDENTITY,
    [Title] nvarchar(max) NULL,
    [CategoryId] int NULL,
    CONSTRAINT [PK_ToDos] PRIMARY KEY ([Id]),
    CONSTRAINT [FK_ToDos_Categories_CategoryId] FOREIGN KEY ([CategoryId]) REFERENCES [Categories] ([Id]) ON DELETE SET NULL
);

Here you can see that there is an important difference when DeleteBehavior.SetNull is used: the ON DELETE behavior is configured on the database side.

Now let's attempt to delete the Principal/Parent, in this context, it's a Category.

Category category = await context.Categories
    .FirstOrDefaultAsync();

context.Categories.Remove(category);
await context.SaveChangesAsync();

The query being generated is as follows.

--Executed DbCommand (6ms) [Parameters=[@p0='1'], CommandType='Text', CommandTimeout='30']
SET IMPLICIT_TRANSACTIONS OFF;
SET NOCOUNT ON;

DELETE FROM [Categories]
OUTPUT 1
WHERE [Id] = @p0;

When DeleteBehavior.ClientSetNull:

Here we are going to get a SqlException, which makes perfect sense if you have a look at how the Dependent/Child table got created above. With DeleteBehavior.ClientSetNull, no ON DELETE behavior was configured on the Foreign Key from the database side. So the database isn't happy, because there are Dependents/Children associated with the Principal/Parent being deleted and the database doesn't know what to do with them.

When DeleteBehavior.SetNull:

Here, the Delete operation will proceed without any errors, because the Foreign Key in the Dependent/Child table is configured with ON DELETE SET NULL, and the database will take care of the associated Dependents/Children by setting the FK to null.

Now, in order to have a successful Delete on the Principal/Parent when using DeleteBehavior.ClientSetNull, we need to explicitly load the Dependents/Children into the DbContext.

Category category = await context.Categories
    .Include(x => x.ToDos)
    .FirstOrDefaultAsync();

context.Categories.Remove(category);
await context.SaveChangesAsync();

Now this will cause EF Core to issue the following query when deleting the Principal/Parent.

--Executed DbCommand (4ms) [Parameters=[@p1='1', @p0=NULL (DbType = Int32), @p2='1'], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;

UPDATE [ToDos] SET [CategoryId] = @p0
OUTPUT 1
WHERE [Id] = @p1;

DELETE FROM [Categories]
OUTPUT 1
WHERE [Id] = @p2;

Now EF Core will first UPDATE Dependents/Children setting the Foreign Key to NULL and then DELETE the Principal/Parent. So now, the Delete operation will succeed.

Hope this helps.

More read:
   Cascade Delete

Happy Coding.

Regards,
Jaliya

Saturday, August 13, 2022

ASP.NET Core: Rate Limiting with .NET 7 Preview 6

With the upcoming .NET 7, there's a new NuGet package, System.Threading.RateLimiting, for managing Rate Limiting. We can use rate limiting to protect a resource from getting overwhelmed. There is a very detailed post, Announcing Rate Limiting for .NET, that you definitely should read.

In this post, let's see how we can use Rate Limiting quite easily in an ASP.NET Core application. You need to install the Microsoft.AspNetCore.RateLimiting NuGet package, which provides ASP.NET Core middleware for enforcing rate limiting in an application using System.Threading.RateLimiting.

Once that's installed, we need to start by defining RateLimiterOptions.
RateLimiterOptions rateLimiterOptions = new();
Here you can set up a variety of things, such as defining a GlobalLimiter, adding individual rate limiters, and customizing the behavior when a request is rejected by a Rate Limiter.
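For instance, a GlobalLimiter applying a fixed-window limit across all requests could look something like this rough sketch. Keep in mind this is a preview API, so names and signatures may differ between builds (for example, the partition factory methods were at one point renamed from Create… to Get…); the host-name partition key and the limits here are my own choices for illustration.

```csharp
// Assumption: preview-era API surface of System.Threading.RateLimiting.
// A partitioned limiter keyed by the request's Host header, applying a
// fixed-window limit of 100 requests per minute per host.
rateLimiterOptions.GlobalLimiter = PartitionedRateLimiter.Create<HttpContext, string>(
    httpContext => RateLimitPartition.CreateFixedWindowLimiter(
        partitionKey: httpContext.Request.Headers.Host.ToString(),
        factory: _ => new FixedWindowRateLimiterOptions(
            permitLimit: 100,
            queueProcessingOrder: QueueProcessingOrder.OldestFirst,
            queueLimit: 0,
            window: TimeSpan.FromMinutes(1))));
```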

Now let's start by adding a ConcurrencyLimiter, one of the built-in rate limiters.
rateLimiterOptions.AddConcurrencyLimiter(policyName: "ConcurrencyLimit",
    new ConcurrencyLimiterOptions(permitLimit: 5,
        queueProcessingOrder: QueueProcessingOrder.OldestFirst,
        queueLimit: 3)); 
Next, we need to enable the Rate Limiting middleware using the RateLimiterOptions we just created.
app.UseRateLimiter(rateLimiterOptions);
Next, we can use rate limiters we have configured in our endpoints, something like below.
app.MapGet("/ToDos", () =>
{
    return TypedResults.Ok();
 
})
.RequireRateLimiting("ConcurrencyLimit");
Here note:  the global limiter will be executed first, followed by the endpoint-specific limiter, if one exists.

Now let's try to hit the rate limit on the above endpoint by making parallel requests (a lazy way of course).
HttpClient _httpClient = new HttpClient
{
    BaseAddress = new Uri("https://localhost:44378"),
};
 
List<Task<HttpResponseMessage>> tasks = new List<Task<HttpResponseMessage>>();
 
for (int i = 0; i < 10; i++)
{
    Task<HttpResponseMessage> task = _httpClient.GetAsync("/Todos");
    tasks.Add(task);
}
 
var results = new List<HttpResponseMessage>(await Task.WhenAll(tasks));
 
foreach (HttpResponseMessage item in results)
{
    Console.WriteLine(item.StatusCode);
}
And I can see an output like below.
ServiceUnavailable
If I want to change the Response Code when the Rate limit is hit (the default is 503: ServiceUnavailable), I can do something like below.
rateLimiterOptions.RejectionStatusCode = (int)HttpStatusCode.TooManyRequests;
TooManyRequests
Hope this helps.

Most importantly, a must-read.
Happy Coding.

Regards,
Jaliya

Wednesday, July 20, 2022

Entity Framework Core 7 Preview 6: Performance Improvements

Entity Framework Core 7 (EF7) Preview 6 has been released, and it contains a bunch of performance improvements, achieved by reducing unnecessary roundtrips to the database and by generating improved queries. Please note: most of the improvements here are tied to Microsoft SQL Server.

First, let's start with a simple example.

Inserting a single record

Let's consider the following code that inserts a single record into the database.

ToDo toDo = new() { Title = "ToDo 1" };
await context.ToDos.AddAsync(toDo);
await context.SaveChangesAsync();
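(For context, the examples in this post assume a minimal model something like the sketch below; the entity and context shapes are my assumptions based on the generated SQL, and the connection string is a placeholder.)

```csharp
public class ToDo
{
    // By convention, an int Id becomes an IDENTITY primary key on SQL Server.
    public int Id { get; set; }

    public string Title { get; set; }
}

public class ToDoContext : DbContext
{
    public DbSet<ToDo> ToDos { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseSqlServer("<connection-string>");
}
```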

Now let's have a look at the SQL statements that are being executed when using both EF Core 6.0 and 7.0.

EF Core 6.0

dbug: 20/07/2022 07:49:14.672 RelationalEventId.TransactionStarted[20200] (Microsoft.EntityFrameworkCore.Database.Transaction)
      Began transaction with isolation level 'ReadCommitted'.
...
info: 20/07/2022 07:49:14.748 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
      Executed DbCommand (24ms) [Parameters=[@p0='ToDo 1' (Size = 4000)], CommandType='Text', CommandTimeout='30']
      SET NOCOUNT ON;
      INSERT INTO [ToDos] ([Title])
      VALUES (@p0);
      SELECT [Id]
      FROM [ToDos]
      WHERE @@ROWCOUNT = 1 AND [Id] = scope_identity();
...
dbug: 20/07/2022 07:49:14.779 RelationalEventId.TransactionCommitted[20202] (Microsoft.EntityFrameworkCore.Database.Transaction)
      Committed transaction.

So what's happening here?

  • Trip to the database to begin a transaction
  • Trip to the database with 2 SQL statements (I am ignoring SET NOCOUNT ON;)
    • Insert the record into the database
    • Select the Id of the inserted record (that's because when the Primary Key is an integer, by default it's set up as an IDENTITY column and we may require it for application logic)
  • Trip to the database to commit the transaction
An interesting question here: do we need a transaction that initiates 2 extra roundtrips to the database? Since there is only one atomic operation with a single statement, we really don't. It's either going to succeed or fail, and it's not going to cause any data integrity issues.

Now let's run the same code using EF Core 7.0.

EF Core 7.0

info: 20/07/2022 07:53:51.959 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
      Executed DbCommand (25ms) [Parameters=[@p0='ToDo 1' (Size = 4000)], CommandType='Text', CommandTimeout='30']
      SET IMPLICIT_TRANSACTIONS OFF;
      SET NOCOUNT ON;
      INSERT INTO [ToDos] ([Title])
      OUTPUT INSERTED.[Id]
      VALUES (@p0);

There's clearly a difference between the above SQL statement and the SQL statements generated when using EF Core 6.0.

  • A single trip to the database and there's no transaction
    • A new statement, SET IMPLICIT_TRANSACTIONS OFF, is executed. SQL Server has an opt-in “Implicit Transactions” mode, where executing a statement outside of a transaction won’t auto-commit but instead implicitly starts a new transaction. EF disables it so that the changes actually get saved.
    • Instead of inserting and then selecting the IDENTITY value, the new SQL uses an “OUTPUT clause” to tell SQL Server to send the value back directly from the INSERT.

Now, let's have a look at a bit more advanced example. That is what if we are doing multiple inserts?

Inserting multiple records

for (int i = 1; i <= 4; i++)
{
    ToDo toDo = new() { Title = $"ToDo {i}" };
    await context.ToDos.AddAsync(toDo);
}
 
await context.SaveChangesAsync();

Now again let's compare the SQL Statements from both EF Core 6.0 and 7.0.

EF Core 6.0

dbug: 20/07/2022 09:22:18.194 RelationalEventId.TransactionStarted[20200] (Microsoft.EntityFrameworkCore.Database.Transaction)
      Began transaction with isolation level 'ReadCommitted'.
...
info: 20/07/2022 09:22:18.259 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
      Executed DbCommand (27ms) [Parameters=[@p0='ToDo 1' (Size = 4000), @p1='ToDo 2' (Size = 4000), @p2='ToDo 3' (Size = 4000), @p3='ToDo 4' (Size = 4000)], CommandType='Text', CommandTimeout='30']
      SET NOCOUNT ON;
      DECLARE @inserted0 TABLE ([Id] int, [_Position] [int]);
      MERGE [ToDos] USING (
      VALUES (@p0, 0),
      (@p1, 1),
      (@p2, 2),
      (@p3, 3)) AS i ([Title], _Position) ON 1=0
      WHEN NOT MATCHED THEN
      INSERT ([Title])
      VALUES (i.[Title])
      OUTPUT INSERTED.[Id], i._Position
      INTO @inserted0;
 
      SELECT [i].[Id] FROM @inserted0 i
      ORDER BY [i].[_Position];
...
dbug: 20/07/2022 09:22:18.318 RelationalEventId.TransactionCommitted[20202] (Microsoft.EntityFrameworkCore.Database.Transaction)
      Committed transaction.

So what's happening here?

  • Trip to the database to begin a transaction
  • Trip to the database with 3 SQL statements
    • Declare a temporary table @inserted0
    • A single MERGE statement that uses an “OUTPUT clause” to insert the INSERTED.[Id] into the temporary table @inserted0
    • SELECT the Ids from the temporary table @inserted0
  • Trip to the database to commit the transaction

EF Core 7.0

info: 20/07/2022 09:25:56.651 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
      Executed DbCommand (22ms) [Parameters=[@p0='ToDo 1' (Size = 4000), @p1='ToDo 2' (Size = 4000), @p2='ToDo 3' (Size = 4000), @p3='ToDo 4' (Size = 4000)], CommandType='Text', CommandTimeout='30']
      SET IMPLICIT_TRANSACTIONS OFF;
      SET NOCOUNT ON;
      MERGE [ToDos] USING (
      VALUES (@p0, 0),
      (@p1, 1),
      (@p2, 2),
      (@p3, 3)) AS i ([Title], _Position) ON 1=0
      WHEN NOT MATCHED THEN
      INSERT ([Title])
      VALUES (i.[Title])
      OUTPUT INSERTED.[Id], i._Position;

Here again, we don't need a Transaction.

  • A single trip to the database and there's no transaction
    • A new statement SET IMPLICIT_TRANSACTIONS OFF is executed for the same reason mentioned in the initial scenario.
    • No use of temporary tables here. A single MERGE statement that uses an OUTPUT clause to tell SQL Server to send back all the INSERTED.[Id].

As you can see, with EF Core 7 unnecessary roundtrips to the database are reduced and the generated SQL statements are improved, giving us a nice performance boost.

Read More,
   Announcing Entity Framework Core 7 Preview 6: Performance Edition

Happy Coding.

Regards,
Jaliya