Wednesday, July 20, 2022

Entity Framework Core 7 Preview 6: Performance Improvements

Entity Framework Core 7 (EF7) Preview 6 has been released, and it contains a bunch of performance improvements, achieved by reducing unnecessary roundtrips to the database and by generating better queries. Please note: most of the improvements described here are tied to Microsoft SQL Server.

First, let's start with a simple example.

Inserting a single record

Let's consider the following code that inserts a single record into the database.

ToDo toDo = new() { Title = "ToDo 1" };
await context.ToDos.AddAsync(toDo);
await context.SaveChangesAsync();
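
For reference, the examples in this post assume a simple entity and DbContext roughly along the following lines. This is just a hedged sketch: the Id and Title property names come from the generated SQL below, while the class names and connection string are placeholders.

using Microsoft.EntityFrameworkCore;

public class ToDo
{
    // An int primary key is, by convention, mapped to an IDENTITY column on SQL Server
    public int Id { get; set; }
    public string Title { get; set; }
}

public class ToDoDbContext : DbContext
{
    public DbSet<ToDo> ToDos { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) =>
        optionsBuilder.UseSqlServer("<connection string>");
}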

Now let's have a look at the SQL statements that are being executed when using both EF Core 6.0 and 7.0.

EF Core 6.0

dbug: 20/07/2022 07:49:14.672 RelationalEventId.TransactionStarted[20200] (Microsoft.EntityFrameworkCore.Database.Transaction)
      Began transaction with isolation level 'ReadCommitted'.
...
info: 20/07/2022 07:49:14.748 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
      Executed DbCommand (24ms) [Parameters=[@p0='ToDo 1' (Size = 4000)], CommandType='Text', CommandTimeout='30']
      SET NOCOUNT ON;
      INSERT INTO [ToDos] ([Title])
      VALUES (@p0);
      SELECT [Id]
      FROM [ToDos]
      WHERE @@ROWCOUNT = 1 AND [Id] = scope_identity();
...
dbug: 20/07/2022 07:49:14.779 RelationalEventId.TransactionCommitted[20202] (Microsoft.EntityFrameworkCore.Database.Transaction)
      Committed transaction.

So what's happening here?

  • Trip to the database to begin a transaction
  • Trip to the database with 2 SQL statements (I am ignoring SET NOCOUNT ON;)
    • Insert the record into the database
    • Select the Id of the inserted record (that's because when the Primary Key is an integer, by default it's set up as an IDENTITY column and we may require it for application logic)
  • Trip to the database to commit the transaction
A couple of interesting questions arise here. Do we really need a transaction that costs two extra roundtrips to the database? Since there is only one atomic operation with a single statement, we really don't: it's either going to succeed or fail, and either way it's not going to cause any data integrity issue.

Now let's run the same code using EF Core 7.0.

EF Core 7.0

info: 20/07/2022 07:53:51.959 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
      Executed DbCommand (25ms) [Parameters=[@p0='ToDo 1' (Size = 4000)], CommandType='Text', CommandTimeout='30']
      SET IMPLICIT_TRANSACTIONS OFF;
      SET NOCOUNT ON;
      INSERT INTO [ToDos] ([Title])
      OUTPUT INSERTED.[Id]
      VALUES (@p0);

There's clearly a difference between the above SQL statement and the SQL statements generated when using EF Core 6.0.

  • A single trip to the database and there's no transaction
    • A new statement SET IMPLICIT_TRANSACTIONS OFF is executed. SQL Server has an opt-in “Implicit Transactions” mode, where executing a statement outside of a transaction won’t auto-commit, but will instead implicitly start a new transaction. EF disables it so that the INSERT is committed immediately.
    • Instead of inserting and then selecting the IDENTITY value, the new SQL uses an “OUTPUT clause” to tell SQL Server to send the value directly from the INSERT.
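
Either way, the database-generated key is read back and propagated into the tracked entity, so after SaveChangesAsync the Id is available in code:

ToDo toDo = new() { Title = "ToDo 1" };
await context.ToDos.AddAsync(toDo);
await context.SaveChangesAsync();

// The IDENTITY value returned by SQL Server has been written back to the entity
Console.WriteLine(toDo.Id);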

Now, let's have a look at a slightly more advanced example: what if we are doing multiple inserts?

Inserting multiple records

for (int i = 1; i <= 4; i++)
{
    ToDo toDo = new() { Title = $"ToDo {i}" };
    await context.ToDos.AddAsync(toDo);
}
 
await context.SaveChangesAsync();

Now again let's compare the SQL Statements from both EF Core 6.0 and 7.0.

EF Core 6.0

dbug: 20/07/2022 09:22:18.194 RelationalEventId.TransactionStarted[20200] (Microsoft.EntityFrameworkCore.Database.Transaction)
      Began transaction with isolation level 'ReadCommitted'.
...
info: 20/07/2022 09:22:18.259 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
      Executed DbCommand (27ms) [Parameters=[@p0='ToDo 1' (Size = 4000), @p1='ToDo 2' (Size = 4000), @p2='ToDo 3' (Size = 4000), @p3='ToDo 4' (Size = 4000)], CommandType='Text', CommandTimeout='30']
      SET NOCOUNT ON;
      DECLARE @inserted0 TABLE ([Id] int, [_Position] [int]);
      MERGE [ToDos] USING (
      VALUES (@p0, 0),
      (@p1, 1),
      (@p2, 2),
      (@p3, 3)) AS i ([Title], _Position) ON 1=0
      WHEN NOT MATCHED THEN
      INSERT ([Title])
      VALUES (i.[Title])
      OUTPUT INSERTED.[Id], i._Position
      INTO @inserted0;
 
      SELECT [i].[Id] FROM @inserted0 i
      ORDER BY [i].[_Position];
...
dbug: 20/07/2022 09:22:18.318 RelationalEventId.TransactionCommitted[20202] (Microsoft.EntityFrameworkCore.Database.Transaction)
      Committed transaction.

So what's happening here?

  • Trip to the database to begin a transaction
  • Trip to the database with 3 SQL statements
    • Declare a temporary table @inserted0
    • Single MERGE statement that uses an “OUTPUT clause” to write the INSERTED.[Id] values into the temporary table @inserted0
    • SELECT the Ids from the temporary table @inserted0
  • Trip to the database to commit the transaction

EF Core 7.0

info: 20/07/2022 09:25:56.651 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
      Executed DbCommand (22ms) [Parameters=[@p0='ToDo 1' (Size = 4000), @p1='ToDo 2' (Size = 4000), @p2='ToDo 3' (Size = 4000), @p3='ToDo 4' (Size = 4000)], CommandType='Text', CommandTimeout='30']
      SET IMPLICIT_TRANSACTIONS OFF;
      SET NOCOUNT ON;
      MERGE [ToDos] USING (
      VALUES (@p0, 0),
      (@p1, 1),
      (@p2, 2),
      (@p3, 3)) AS i ([Title], _Position) ON 1=0
      WHEN NOT MATCHED THEN
      INSERT ([Title])
      VALUES (i.[Title])
      OUTPUT INSERTED.[Id], i._Position;

Here again, we don't need a Transaction.

  • A single trip to the database and there's no transaction
    • A new statement SET IMPLICIT_TRANSACTIONS OFF is executed for the same reason mentioned in the initial scenario.
    • No use of temporary tables here. A single MERGE statement that uses an OUTPUT clause to tell SQL Server to send back all the INSERTED.[Id] values.
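
As a side note, how many statements EF Core packs into a single command when saving many records is configurable. Here is a minimal sketch, assuming SQL Server and purely illustrative values (not recommendations):

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) =>
    optionsBuilder.UseSqlServer("<connection string>", sqlServerOptions =>
    {
        // Maximum number of statements EF will batch into a single roundtrip
        sqlServerOptions.MaxBatchSize(100);

        // Below this many statements, EF won't bother batching at all
        sqlServerOptions.MinBatchSize(2);
    });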

As you can see, with EF Core 7 unnecessary roundtrips to the database are reduced and the generated SQL statements are improved, giving us better performance.

Read More,
   Announcing Entity Framework Core 7 Preview 6: Performance Edition

Happy Coding.

Regards,
Jaliya

Friday, July 15, 2022

Visual Studio 2022 Version 17.3 Preview 3: IEnumerable Debugger Visualizer Improvements

In this post, let's have a look at one of the nicest features in the Visual Studio 2022 Preview that got improved with Visual Studio 2022 version 17.3 Preview 3.

Visual Studio 2022 version 17.3 Preview 2 introduced the IEnumerable Debugger Visualizer, which can be used to view .NET collections in a separate window. The IEnumerable visualizer displays collections of objects in a table with a row per object and sortable columns for each field and property, and we were able to export the collection to an Excel file.

Visual Studio 2022 version 17.3 Preview 3 has some nice improvements to the IEnumerable Debugger Visualizer: now we can search the collection within the visualizer, and we can also export the collection to a CSV file.

Following is a quick tour of the newest features of the IEnumerable Debugger Visualizer (please click on the image to see it more clearly).
IEnumerable Debugger Visualizer
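If you want to try it yourself, any IEnumerable will do; for example, set a breakpoint after a collection like this hypothetical one is populated and open the visualizer from the magnifying glass icon next to the variable:

// Hypothetical sample data to inspect in the IEnumerable Debugger Visualizer
var people = Enumerable.Range(1, 100)
    .Select(i => new { Id = i, Name = $"Person {i}", DateOfBirth = DateTime.Today.AddYears(-i) })
    .ToList();

Console.WriteLine(people.Count); // set a breakpoint here and inspect 'people'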
Start using Visual Studio 2022 version 17.3 Preview 3 today.

Happy Coding.

Regards,
Jaliya

Wednesday, July 6, 2022

.NET 7 Preview 5: Using [AsParameters] Attribute for Parameter Binding in Minimal APIs

In this post, let's go through the new [AsParameters] attribute that got introduced in .NET 7 Preview 5. This attribute is specifically for Minimal APIs in ASP.NET Core.

Let's consider the following endpoint using Minimal APIs.

app.MapGet("ToDos"async (ToDoDbContext dbContextint offsetint limit, ILogger<Program> logger) =>
{
    logger.LogInformation("Getting ToDos with Offset: '{Offset}', Limit: '{Limit}'",
        offset,
        limit);
 
    List<ToDo> todos = await dbContext.ToDos
        .OrderBy(x => x.Id)
        .Skip(offset)
        .Take(limit)
        .ToListAsync();
 
    return TypedResults.Ok(todos);
});

With the new [AsParameters] attribute, we can move all the parameters into a POCO type.

app.MapGet("ToDos/WithRequest"async ([AsParameters] GetRequest request) =>
{
    request.Logger.LogInformation("Getting ToDos with Offset: '{Offset}', Limit: '{Limit}'",
        request.Offset,
        request.Limit);
 
    List<ToDo> todos = await request.DbContext.ToDos
        .OrderBy(x => x.Id)
        .Skip(request.Offset)
        .Take(request.Limit)
        .ToListAsync();
 
    return TypedResults.Ok(todos);
});
 
record struct GetRequest(ToDoDbContext DbContext, int Offset, int Limit,
    ILogger<GetRequest> Logger);

Here I have used a record struct, but classes are also supported. The recommendation is to use a record struct to avoid additional memory allocations.

And on top of that, we can apply other binding attributes (FromHeader, FromQuery, FromServices, etc.), something like below.

record struct GetRequest(ToDoDbContext DbContext,
    int Offset,
    int Limit,
    [FromHeader(Name = "X-OrganizationId")] string? OrganizationId,
    ILogger<GetRequest> Logger);

Here the OrganizationId will get picked from the HTTP request header (if present).

The following rules are applied during the parameter binding.

Structs

  • A declared public parameterless constructor will always be used if present.
  • A public parameterized constructor will be used if a single constructor is present and all arguments have a matching (case-insensitive) public property.
    • If a constructor parameter does not match with a property, InvalidOperationException will be thrown if binding is attempted.
  • Since a struct always has a default constructor, the default constructor will be used if it is the only one present or if more than one parameterized constructor is present.
  • When binding using a parameterless constructor, all public settable properties will be bound.

Classes

  • A public parameterless constructor will be used if present.
  • A public parameterized constructor will be used if a single constructor is present and all arguments have a matching (case-insensitive) public property.
    • If a constructor parameter does not match with a property, InvalidOperationException will be thrown if binding is attempted.
  • Throws an InvalidOperationException when more than one constructor is declared and a parameterless constructor is not present.
  • Throws an InvalidOperationException if a suitable constructor cannot be found.
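
To make the struct rules a bit more concrete, here is a hedged sketch (the PagingRequest type and endpoint are hypothetical): since the struct declares no constructor, binding uses the default constructor and populates the public settable properties, here from the query string.

app.MapGet("ToDos/Paged", async ([AsParameters] PagingRequest paging, ToDoDbContext dbContext) =>
{
    List<ToDo> todos = await dbContext.ToDos
        .OrderBy(x => x.Id)
        .Skip(paging.Offset)
        .Take(paging.Limit)
        .ToListAsync();

    return TypedResults.Ok(todos);
});

// No constructor declared, so the default constructor is used and
// the public settable properties are bound (here from the query string)
struct PagingRequest
{
    public int Offset { get; set; }
    public int Limit { get; set; }
}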

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, July 5, 2022

Received Microsoft MVP Award in Developer Technologies

I am humbled and honored once again to receive the precious Microsoft Most Valuable Professional (MVP) Award for the ninth consecutive year.

This year, the announcement of new MVPs took a bit longer (usually it's on 07/01), but I finally got the email.

As always looking forward to another great year on top of Microsoft Development Stack.
Microsoft Most Valuable Professional (MVP)
Thank you Microsoft for your appreciation and Thank you everyone for your continuous support.

Happy Coding.

Regards,
Jaliya

Tuesday, June 28, 2022

Session: Improve Developer Productivity and Code Quality with GitHub at the First Ever Microsoft Reactor Pop-Up, New Zealand

The first-ever Microsoft Reactor Pop-Up, New Zealand started today, and it's a three-day event.

Microsoft Reactor Pop-Up, New Zealand
Day 1 targeted Application Innovation with Azure, Day 2 Innovating with Data and AI, and Day 3 is all about the new Azure NZ North region.

I got the opportunity to deliver a session on the first day and the session was about Improving Developer Productivity and Code Quality with GitHub. There were also some amazing sessions delivered by Microsoft FTEs and MVPs.

In the session, I went through demoing GitHub Copilot using JavaScript, Python, and C# examples. Then I went through GitHub Advanced Security features, which are Code Scanning, Secret Scanning, Dependency Review, and Security Overview.

It was fun!

If you missed today, you can still join Day 2 and Day 3 to learn from an amazing list of speakers.

Happy Coding.

Regards,
Jaliya

Thursday, June 16, 2022

.NET 7 Preview 5: Improved Polymorphic Serialization/Deserialization Support in System.Text.Json

.NET 7 Preview 5 is out, and we now have improved polymorphic serialization/deserialization support in System.Text.Json.

Consider the following classes.
public class Person
{
    public string Name { get; set; }
}
 
public class Student : Person
{
    public int StudentId { get; set; }
}
 
public class Employee : Person
{
    public int EmployeeId { get; set; }
}
Now if I do something like the below, you can see it only serializes the properties that are in the parent class (I wrote a post about this a couple of months back: System.Text.Json.JsonSerializer: Serialize Properties of Derived Classes).
JsonSerializerOptions options = new() { WriteIndented = true };
 
Person person = new Student
{
    Name = "John Doe",
    StudentId = 1
};

string jsonString = JsonSerializer.Serialize<Person>(person, options);
Console.WriteLine(jsonString);
 
//{
//  "Name": "John Doe"
//}
We can change this behavior now using JsonDerivedType attribute annotations.
[JsonDerivedType(typeof(Student))]
[JsonDerivedType(typeof(Employee))]
public class Person
{
    public string Name { get; set; }
}
This configuration enables polymorphic serialization for Person, specifically when the runtime type is one of the derived classes.

Now if we do something like the below, we can see correct runtime types are getting serialized.
JsonSerializerOptions options = new() { WriteIndented = true };
 
Person person = new Student
{
    Name = "John Doe",
    StudentId = 1
};
string jsonString = JsonSerializer.Serialize<Person>(person, options);
Console.WriteLine(jsonString);
 
//{
//  "StudentId": 1,
//  "Name": "John Doe"
//}
 
person = new Employee
{
    Name = "Jane Doe",
    EmployeeId = 1
};
 
jsonString = JsonSerializer.Serialize<Person>(person, options);
Console.WriteLine(jsonString);
 
//{
//  "EmployeeId": 1,
//  "Name": "Jane Doe"
//}
If I try to serialize a derived type that's not annotated in the base class, I am going to see an exception like the one below.
System.NotSupportedException: Runtime type 'Teacher' is not supported by polymorphic type 'Person'
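For example, a hypothetical Teacher class that derives from Person but isn't listed via [JsonDerivedType] would trigger that exception:
public class Teacher : Person
{
    public int TeacherId { get; set; }
}

Person person = new Teacher { Name = "Jim Doe", TeacherId = 1 };

// Throws System.NotSupportedException, because Teacher isn't annotated on Person
string jsonString = JsonSerializer.Serialize<Person>(person, options);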
Now when deserializing, it would be as follows.
string jsonString = """
{
  "StudentId": 1,
  "Name": "John Doe"
}
""";
 
Person person = JsonSerializer.Deserialize<Person>(jsonString);
Console.WriteLine(person is Student); // false, polymorphic deserialization doesn't work
Here note that this does not enable polymorphic deserialization. 

In order to enable polymorphic deserialization, we need to specify a type discriminator.
[JsonDerivedType(typeof(Student), typeDiscriminator: "student")]
public class Person
{
    public string Name { get; set; }
}
Now when we serialize, the JSON will include the type discriminator.
Person person = new Student
{
    Name = "John Doe",
    StudentId = 1
};
string jsonString = JsonSerializer.Serialize<Person>(person, options);
Console.WriteLine(jsonString);
 
//{
//  "$type": "student",
//  "StudentId": 1,
//  "Name": "John Doe"
//}
Now when deserializing, we can see polymorphic deserialization works.
string jsonString = """
{
  "$type": "student",
  "StudentId": 1,
  "Name": "John Doe"
}
""";
 
Person person = JsonSerializer.Deserialize<Person>(jsonString);
Console.WriteLine(person is Student); // true, polymorphic deserialization works
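And since the correct runtime type comes back, we can pattern match on it and access the derived type's properties:
if (person is Student student)
{
    Console.WriteLine(student.StudentId); // 1
}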
Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, June 15, 2022

Azure Functions (In Process): Output Binding to a Dynamic Queue/Topic

In this post let's see how we can set up an Output Binding in an In Process Azure Function to send messages to a dynamic target Azure Service Bus Queue or Topic.

We usually use ICollector or IAsyncCollector to write values to an output binding. And oftentimes, the target queue/topic is set up through attributes, something like below:

[FunctionName(FunctionNames.SomeFunction)]
public async Task Run(
    [ServiceBusTrigger("%ServiceBus_ListeningQueue%", Connection = "ServiceBus_ConnectionString")] 
        ServiceBusReceivedMessage serviceBusReceivedMessage,
    [ServiceBus("%ServiceBus_TargetQueue%", Connection = "ServiceBus_ConnectionString")] 
        IAsyncCollector<MyOutputMessage> asyncCollector)
{
    // TODO: Some code

    await asyncCollector.AddAsync(new MyOutputMessage { });
}

The above function is listening to an Azure Service Bus Queue (ServiceBus_ListeningQueue) and outputting a message to another Queue (ServiceBus_TargetQueue). Here the target Queue is static and cannot be changed. 

What if we want to route the message to a target Queue or Topic based on a property of the incoming message (most of the time based on the ServiceBusReceivedMessage.ReplyTo property)?

In these cases, we can use Azure Functions imperative binding instead of declarative binding (which is done above).

[FunctionName(FunctionNames.SomeFunction)]
public async Task Run(
    [ServiceBusTrigger("%ServiceBus_ListeningQueue%", Connection = "ServiceBus_ConnectionString")]
        ServiceBusReceivedMessage serviceBusReceivedMessage,
    IBinder binder)
{
    // TODO: Some code
 
    IAsyncCollector<MyOutputMessage> asyncCollector =
        await binder.BindAsync<IAsyncCollector<MyOutputMessage>>(new ServiceBusAttribute(serviceBusReceivedMessage.ReplyTo)
        {
            // Note: This should be the App Setting Name that contains the Service Bus connection string
            Connection = "ServiceBus_ConnectionString",
        });
 
    await asyncCollector.AddAsync(new MyOutputMessage { });
}

And that's it. You can change the EntityType property of ServiceBusAttribute to ServiceBusEntityType.Topic if you want to output into a Topic (default is Queue).
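
For example, here is a hedged sketch of binding to a Topic instead (assuming ReplyTo carries the target topic name):

IAsyncCollector<MyOutputMessage> asyncCollector =
    await binder.BindAsync<IAsyncCollector<MyOutputMessage>>(new ServiceBusAttribute(serviceBusReceivedMessage.ReplyTo)
    {
        Connection = "ServiceBus_ConnectionString",
        // Output to a Topic instead of the default Queue
        EntityType = ServiceBusEntityType.Topic
    });

await asyncCollector.AddAsync(new MyOutputMessage { });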

Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, June 8, 2022

Update Azure APIM API from an OpenAPI Definition using az apim Command Group

This is a quick post on how you can update an Azure APIM API from an OpenAPI definition using the az apim command group.

Note: As of the day I am writing this post, the apim command group is experimental and under development.

The command we need to use is az apim api import.

$resourceGroup = "<Resource Group>"
$serviceName = "<APIM Name>"
$apiId = "<API Name>"
$path = "/<API URL suffix>"
$specificationUrl = "https://.../swagger.json"
 
az apim api import `
    --resource-group $resourceGroup `
    --service-name $serviceName `
    --api-id $apiId `
    --path $path `
    --specification-url $specificationUrl `
    --specification-format OpenApi

This will update the existing API while preserving existing policies that you may have on your API.

There is also the az apim api update command, which is to be used if you want to update the properties of the API.

This approach can be used if you are facing 504 Gateway Timeout errors when updating APIs that contain a large schema using Azure Resource Manager (ARM) APIs while on a Developer pricing tier APIM: Azure API Management with Developer Pricing Tier: 504 Gateway Timeout.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Monday, June 6, 2022

Playing Around with GitHub Copilot

I have been spending some quality time with GitHub Copilot, and so far it has made me fall out of my chair a couple of times. It truly is amazing. I thought of writing this post to share some of my experience, so whoever hasn't tried it yet can get amazed like me.

GitHub Copilot is a pair programmer that does way more than auto-completion, and it's completely powered by AI (Artificial Intelligence). I have been using Visual Studio for a long time, and probably since the day I started, we have had IntelliSense, which does basic auto-completion. Then a couple of years back, Microsoft announced IntelliCode, an AI-assisted tool that suggests the next piece of code based on where we are, and it was great.

Now GitHub Copilot places things on a whole different level. It basically can write code based on comments in natural language; can you believe that? So whether it's a comment, docstring, function name, or the code itself, GitHub Copilot uses those to synthesize code. Currently, it's still in its technical preview stage and has first-class support for the following programming languages.

  • Python
  • JavaScript
  • TypeScript
  • Ruby
  • Java
  • Go

At the same time, it understands dozens of other languages and can help you find your way around almost anything. I have tried it with C# and T-SQL, and it works really nicely.

As of today, GitHub Copilot is available as an extension for Visual Studio Code, Visual Studio 2022, Neovim, and JetBrains IDEs.

This is a simple Express app I have created using Visual Studio Code; I only wrote the comments, and the code was completely written by GitHub Copilot.
// Create an Express application
var express = require('express');
 
// Expose an endpoint
var app = express();
app.get('/', function (req, res) {
    res.send('Hello World!');
});
 
// Expose another endpoint
app.get('/api', function (req, res) {
    // Send a response of current time
    res.send(new Date());
});
 
// Run express application
app.listen(3000, function () {
    console.log('Listening on port ' + 3000);
});
I have also created a simple ASP.NET Core Minimal API with CRUD operations; again, I only wrote the comments in natural language, and GitHub Copilot wrote all the code for creating a class, the DbContext, and exposing the CRUD endpoints, and it was all functional.

So how does this work?

First, let's start with a bit of history. It all started with an AI research startup called OpenAI, co-founded by Elon Musk, Sam Altman, and others. In July 2019, Microsoft partnered up with OpenAI. In June 2020, OpenAI announced GPT-3, a language model trained on hundreds of billions of words from the Internet. In June 2021, OpenAI Codex was introduced, a model based on GPT-3 that translates natural language to code (and vice versa).

GitHub Copilot is powered by OpenAI Codex models.
How GitHub Copilot Works
OpenAI Codex was trained on publicly available source code (including code in public repositories on GitHub) and natural language, so it understands both programming and human languages. The GitHub Copilot editor extension sends our comments and code to the GitHub Copilot service, which then uses OpenAI Codex to synthesize and suggest individual lines and/or whole functions.

The whole GitHub Copilot experience is beyond words; you have to try it for yourself. Sign up for the technical preview now. You will have to be on the waitlist for some time, but I would definitely say it's worth the wait, because once you have access, it's going to blow your mind.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, May 31, 2022

Visual Studio 2022 17.3.0 Preview 1.1: Introducing Port Tunneling for ASP.NET Core Projects

Last week was Microsoft Build week, and I hope you have enjoyed it.

There were some nice announcements, and one of my favorites out of them all is a feature that got released with Visual Studio 2022 17.3.0 Preview 1.1: the private preview of port tunneling in Visual Studio for ASP.NET Core projects.

With this, I can run my web application locally and the URL it's running on is public and can be accessed from outside of my local environment. With most of us working from home, I am finding this very helpful. I can do things like:

  • Share the public URL with a colleague to test out the application. 
  • If it's a frontend web application, access the URL from my mobile and see how it's behaving.
  • Test a webhook with a third party, etc., without having to deploy the application.

In order to use this feature, there are a couple of things you need to do first.

The first thing is obviously you need to download and install the latest preview of Visual Studio 2022. And that's Visual Studio 2022 17.3.0 Preview 1.1.

Next, you need to sign up for the private preview program of port tunneling in Visual Studio. Otherwise, you are going to get an error like the one below when you try it.
Missing sign-up for Port Tunneling program
You can do it by filling out the form here: https://aka.ms/tunnels-signup. Something to note here: it's going to take some time for access to be granted to the private program, and at this time individual users will not be considered, only organizations with tenant IDs.

After signing up with the preview program, log in to Visual Studio with the email address you have used. Then under Tools -> Options -> Environment -> Preview Features, check Enable port tunneling for Web Applications.
Enable port tunneling for Web Applications
Now you are almost set.

Finally, create a new ASP.NET Core Web Application, and once the project is created, update the launchSettings.json as below.

launchSettings.json
{
  "$schema""https://json.schemastore.org/launchsettings.json",
  "iisSettings": {
    "windowsAuthentication"false,
    "anonymousAuthentication"true,
    "iisExpress": {
      "applicationUrl""http://localhost:4367",
      "sslPort": 44305
    }
  },
  "profiles": {
    "WebApplication1": {
      "commandName""Project",
      "dotnetRunMessages"true,
      "launchBrowser"true,
      "launchUrl""swagger",
      "applicationUrl""https://localhost:7015;http://localhost:5015",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT""Development"
      },
      "createTunnel"true
    },
    "IIS Express": {
      "commandName""IISExpress",
      "launchBrowser"true,
      "launchUrl""swagger",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT""Development"
      }
    }
  }
}
The only change I have done here is setting a new property "createTunnel": true under the profile I am using to run the application.

And that's it. Now you can launch the application.
Port Tunneling
Happy Coding.

Regards,
Jaliya