Wednesday, June 30, 2021

PowerShell: Running Multiple Invoke-Sqlcmd Within a Transaction

I had a requirement where I wanted to run a set of .sql scripts in a folder, where either all the scripts get executed or none of them do. So this is a quick post on how you can run multiple Invoke-Sqlcmd commands within a transaction using PowerShell. I was initially hoping Invoke-Sqlcmd would accept an argument for a transaction, but apparently it doesn't.

This is accomplished using the native System.Transactions namespace, and is inspired by this StackOverflow answer.

$filePath = "$(System.DefaultWorkingDirectory)\scripts"
$files = Get-ChildItem $filePath -Filter "*.sql"

$options = [System.Transactions.TransactionScopeOption]::RequiresNew
$timeout = New-Object System.TimeSpan -ArgumentList @(0, 10, 0) #10 minutes
$scope = New-Object System.Transactions.TransactionScope -ArgumentList ($options, $timeout)

Write-Host " -> Starting a transaction."

try {
    foreach ($f in $files) {
        Write-Host " -> Executing script $filePath\$f"

        Invoke-Sqlcmd `
            -ServerInstance "$(MsSqlServer)" `
            -Database "$(Database)" `
            -Username "$(Login)" `
            -Password "$(Password)" `
            -Inputfile "$filePath\$f" `
            -ConnectionTimeout 120 `
            -ErrorAction 'Stop'
    } 
    
    $scope.Complete()
    $scope.Dispose()

    Write-Host " -> Completed the transaction."
}
catch {
    Write-Host "Error Message: " $_.Exception.Message
    $scope.Dispose()

    # Rethrow so the caller (e.g. a CI/CD pipeline) sees the failure
    throw
}

So here, either all the changes in the SQL scripts get applied, or if any script errors out, none of them get applied.
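One quick way to convince yourself the rollback works is to drop two scripts like these into the scripts folder (the file names and table are hypothetical; the second script fails deliberately):

```sql
-- 001_create.sql: succeeds on its own
CREATE TABLE dbo.TransactionTest (Id INT NOT NULL);
INSERT INTO dbo.TransactionTest (Id) VALUES (1);
```

```sql
-- 002_fail.sql: deliberately violates the NOT NULL constraint
INSERT INTO dbo.TransactionTest (Id) VALUES (NULL);
```

After the run, dbo.TransactionTest should not exist at all, since the CREATE TABLE in the first script gets rolled back along with everything else (DDL in SQL Server is transactional).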

Happy Coding.

Regards,
Jaliya

Tuesday, June 22, 2021

EF Core and MS SQL Server: Adding Migrations for a Table that has System-Versioning Enabled

Note: this post is based on EF Core 5.x.

In this post, let's see how we can add an EF database migration for a table that has System-Versioning enabled. (If you don't know what System-Versioned tables are in Microsoft SQL Server, you can read Temporal tables for more information; but if you landed on this post, it's very likely you already know what they are).

In my case, I wanted to add a computed column to a table that has System-Versioning enabled. If I just try to add the column to the target table, EF will throw an error when applying the migration, something like "System-versioned table schema modification failed because adding computed column while system-versioning is ON is not supported". So what we need to do is first turn off system-versioning, make the necessary schema changes, and finally turn system-versioning back on.
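In raw T-SQL, that sequence looks roughly like this (a sketch using a hypothetical table dbo.MyTable with history table dbo.MyTableHistory and hypothetical columns):

```sql
-- Step 1: turn system-versioning off
ALTER TABLE dbo.MyTable SET (SYSTEM_VERSIONING = OFF);

-- Step 2: make the schema change on both the main table and the history table
ALTER TABLE dbo.MyTable ADD Total AS (Price * Quantity);
ALTER TABLE dbo.MyTableHistory ADD Total DECIMAL(18, 2) NULL;

-- Step 3: turn system-versioning back on, pointing at the same history table
ALTER TABLE dbo.MyTable
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyTableHistory));
```

Note that the history table needs a regular column of the matching type; a computed column on the main table materializes into plain values in the history table.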

So let's go with an example. I am adding a new computed column to my target table PropertyAccount, which is already System-Versioned.
public class PropertyAccountConfiguration : IEntityTypeConfiguration<PropertyAccount>
{
    public void Configure(EntityTypeBuilder<PropertyAccount> builder)
    {
        // some other configuration

        builder
            .Property(x => x.AccountNumber)
            .HasComputedColumnSql("RIGHT('000000'+CAST(Id AS VARCHAR(6)),6)")
            .HasMaxLength(6);
    }
}
So now if I add a migration, EF will generate something like this.
public partial class PT_AddAccountNumberToPropertyAccount : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            name: "AccountNumber",
            schema: "pt",
            table: "PropertyAccount",
            type: "nvarchar(6)",
            maxLength: 6,
            nullable: true,
            computedColumnSql: "RIGHT('000000'+CAST(Id AS VARCHAR(6)),6)");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(
            name: "AccountNumber",
            schema: "pt",
            table: "PropertyAccount");
    }
}
But if we try to apply the migration, it's going to throw the above error.

So now we need to turn off system-versioning, add the column to the target table and its history table, and then turn system-versioning back on, something like below.
public partial class PT_AddAccountNumberToPropertyAccount : Migration
{
    private const string PropertyAccountTableName = "pt.PropertyAccount";

    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DisableSystemVersioning(PropertyAccountTableName);

        migrationBuilder.AddColumn<string>(
            name: "AccountNumber",
            schema: "pt",
            table: "PropertyAccount",
            type: "nvarchar(6)",
            maxLength: 6,
            nullable: true,
            computedColumnSql: "RIGHT('000000'+CAST(Id AS VARCHAR(6)),6)");

        migrationBuilder.AddColumn<string>(
            name: "AccountNumber",
            schema: "pt",
            table: "PropertyAccountHistory",
            type: "varchar(6)",
            maxLength: 6,
            nullable: true);

        migrationBuilder.EnableSystemVersioning(PropertyAccountTableName);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DisableSystemVersioning(PropertyAccountTableName);

        migrationBuilder.DropColumn(
            name: "AccountNumber",
            schema: "pt",
            table: "PropertyAccount");

        migrationBuilder.DropColumn(
            name: "AccountNumber",
            schema: "pt",
            table: "PropertyAccountHistory");

        migrationBuilder.EnableSystemVersioning(PropertyAccountTableName);
    }
}
The two methods DisableSystemVersioning and EnableSystemVersioning are extension methods I have created on MigrationBuilder.
public static class MigrationBuilderExtensions
{
    public static void DisableSystemVersioning(this MigrationBuilder migrationBuilder, string tableName)
    {
        if (string.IsNullOrEmpty(tableName))
        {
            throw new ArgumentNullException(nameof(tableName));
        }

        migrationBuilder.Sql($"ALTER TABLE {tableName} SET (SYSTEM_VERSIONING = OFF);");
    }

    public static void EnableSystemVersioning(this MigrationBuilder migrationBuilder, string tableName, string historyTableName = null)
    {
        if (string.IsNullOrEmpty(tableName))
        {
            throw new ArgumentNullException(nameof(tableName));
        }

        if (string.IsNullOrEmpty(historyTableName))
        {
            historyTableName = $"{tableName}History";
        }

        migrationBuilder.Sql(@$"ALTER TABLE {tableName} SET
            (
                SYSTEM_VERSIONING = ON (HISTORY_TABLE = {historyTableName})
            );");
    }
}
And this works like a charm. 

Hope this helps.

Happy Coding.

Regards,
Jaliya

Monday, June 14, 2021

.NET 6 Preview 4: Async Streaming in ASP.NET Core

In this post, let's go through another feature that got introduced to ASP.NET Core in .NET 6 Preview 4. That is Async Streaming.

Before looking at how Async Streaming works in .NET 6, let's first have a look at how it works in the current version: .NET 5. I have created a simple API targeting .NET 5 that returns an IAsyncEnumerable<T>.
[ApiController]
[Route("[controller]")]
public class ValuesController : ControllerBase
{
    [HttpGet]
    public IAsyncEnumerable<int> Get()
    {
        IAsyncEnumerable<int> value = GetData();
        return value;
    }

    private static async IAsyncEnumerable<int> GetData()
    {
        for (var i = 1; i <= 10; i++)
        {
            await Task.Delay(1000);
            yield return i;
        }
    }
}
Here if you run this, the results are first buffered into memory before being written to the response. So you will get the whole response only after ~10 seconds. Something like this.
IAsyncEnumerable<T> streaming .NET 5
Now let's change the target framework to .NET 6 and run the same code. 
IAsyncEnumerable<T> streaming .NET 6
Isn't it nice? What's happening here is that IAsyncEnumerable<T> instances are no longer buffered into memory before being written out to the response. But there is something very important to note here: this only works if you are using System.Text.Json as the serializer, because the functionality comes from System.Text.Json's new support for streaming IAsyncEnumerable<T> types. If you are using Newtonsoft.Json as your serializer, things will not work as shown above.
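The client side can consume the stream incrementally too. Here is a minimal sketch using JsonSerializer.DeserializeAsyncEnumerable (available in System.Text.Json from .NET 6); the URL is a hypothetical local address for the API above:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Text.Json;

using var httpClient = new HttpClient();

// GetStreamAsync returns as soon as the response headers arrive,
// so we can start reading the body while the server is still writing it
using Stream stream = await httpClient.GetStreamAsync("https://localhost:5001/values");

// DeserializeAsyncEnumerable yields each array element as it is parsed,
// instead of waiting for the entire JSON payload
await foreach (int item in JsonSerializer.DeserializeAsyncEnumerable<int>(stream))
{
    Console.WriteLine($"{DateTime.Now:HH:mm:ss} -> {item}");
}
```

With the .NET 6 API above, you should see one line per second instead of all ten lines arriving together.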


Happy Coding.

Regards,
Jaliya

Monday, June 7, 2021

.NET 6 Preview 4: Introducing Minimal APIs in ASP.NET Core

In this post, let's see one of the nicest features coming in with ASP.NET Core in .NET 6. This feature is called Minimal APIs, the basic idea is being able to create a REST API with minimal code. 

Right now, if you create an ASP.NET Core Web API project using Visual Studio or dotnet CLI, you will see something like this, a project with multiple files.
Not Minimal API
Basically, you have a Program.cs, a Startup.cs and a Controller class. The ASP.NET team's motivation for introducing Minimal APIs is that we shouldn't have to go through that much ceremony to write a smaller API or a small microservice.

If you have installed .NET 6 Preview 4 and run dotnet new web, you should see something new.

dotnet new web -n MinimalApi
Minimal API
There is only going to be just one .cs file which is Program.cs.

Program.cs

using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Hosting;

var builder = WebApplication.CreateBuilder(args);
await using var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

app.MapGet("/", (Func<string>)(() => "Hello World!"));

await app.RunAsync();

This is just a simple API that returns "Hello World!". There is no Main method; it's using top-level statements (a C# 9.0 feature).
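For comparison, the same program written without top-level statements would look roughly like this (a sketch, not actual template output):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static async Task Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        await using var app = builder.Build();

        if (app.Environment.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.MapGet("/", (Func<string>)(() => "Hello World!"));

        await app.RunAsync();
    }
}
```

Top-level statements let the compiler generate this Main wrapper for us, which is what makes the one-file template possible.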

Here you can basically do all the things you used to do, like setting up dependencies and configuring the HTTP request pipeline (what we usually do in the ConfigureServices and Configure methods in Startup.cs respectively).

To give an example, something like this.

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);

// ConfigureServices
builder.Services.AddDbContext<EmployeeContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection")));

await using WebApplication app = builder.Build();

// Configure
if (app.Environment.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

// Set up routes and run the app

Here, I have set up an EF DbContext in the container, and in the HTTP request pipeline I have set up the DeveloperExceptionPage if the environment is Development (again, these are things we used to do in the ConfigureServices and Configure methods in Startup.cs respectively).

Consider the below EmployeeContext.

namespace MinimalApi
{
    public class EmployeeContext : DbContext
    {
        public EmployeeContext(DbContextOptions options) : base(options)
        {
        }

        public DbSet<Employee> Employees { get; set; }
    }

    public class Employee
    {
        public int Id { get; set; }

        public string Name { get; set; }
    }
}

I can basically create a CRUD API for Employees here easily, something like below.

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);

// ConfigureServices
builder.Services.AddDbContext<EmployeeContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection")));

await using WebApplication app = builder.Build();

// Configure
if (app.Environment.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

app.MapGet("/employees", async ([FromServices] EmployeeContext dbContext) =>
{
    return await dbContext.Employees.ToListAsync();
});

app.MapGet("/employees/{id}", async ([FromServices] EmployeeContext dbContext, int id) =>
{
    Employee employee = await dbContext.Employees.FindAsync(id);
    if (employee is null)
    {
        return NotFound();
    }

    return Ok(employee);
});

app.MapPost("/employees", async ([FromServices] EmployeeContext dbContext, Employee employee) =>
{
    await dbContext.Employees.AddAsync(employee);
    await dbContext.SaveChangesAsync();

    return Ok(employee);
});

app.MapPut("/employees/{id}", async ([FromServices] EmployeeContext dbContext, int id, Employee employee) =>
{
    if (id != employee.Id)
    {
        return BadRequest();
    }

    if (!await dbContext.Employees.AnyAsync(x => x.Id == id))
    {
        return NotFound();
    }

    dbContext.Entry(employee).State = EntityState.Modified;
    await dbContext.SaveChangesAsync();

    return NoContent();
});

app.MapDelete("/employees/{id}", async ([FromServices] EmployeeContext dbContext, int id) =>
{
    Employee employee = await dbContext.Employees.FindAsync(id);
    if (employee is null)
    {
        return NotFound();
    }

    dbContext.Employees.Remove(employee);
    await dbContext.SaveChangesAsync();

    return NoContent();
});

await app.RunAsync();

There is an interesting thing here: from the endpoints, all my return types are custom types that implement IResult, a new type coming with .NET 6.


My return types are mapped manually here, through this ResultMapper class.

public static class ResultMapper
{
    public static IResult BadRequest() => new StatusCodeResult(StatusCodes.Status400BadRequest);

    public static IResult NotFound() => new StatusCodeResult(StatusCodes.Status404NotFound);

    public static IResult NoContent() => new StatusCodeResult(StatusCodes.Status204NoContent);

    public static OkResult<T> Ok<T>(T value) => new(value);

    public class OkResult<T> : IResult
    {
        private readonly T _value;

        public OkResult(T value)
        {
            _value = value;
        }

        public Task ExecuteAsync(HttpContext httpContext)
        {
            return httpContext.Response.WriteAsJsonAsync(_value);
        }
    }
}

Hopefully, we will not have to do this in future releases.

So as developers or newbies, we can start with Minimal APIs and grow as we go, rather than having a somewhat complex structure from day one. Microsoft is going to provide tooling (I believe through Visual Studio and VS Code) so we can refactor the code into the current structure (separate Controllers etc.) as the code grows. An important thing is, this is not going to replace the existing code structure; it's just that we can start with as little code as possible and then grow as we go!

You can find the sample code here,
   https://github.com/jaliyaudagedara/minimal-api

In this sample code, you will see I am using global usings (a C# 10 feature), you can read more about that here: C# 10.0: Introducing Global Usings

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, June 1, 2021

Azure Durable Functions: Change the Storage Provider from Azure Storage to Microsoft SQL Server

During Microsoft Build 2021 last week, there was an important announcement related to Azure Durable Functions. For all this time, Azure Durable Functions maintained its state in Azure Storage, and we didn't have any control over that. That state was mainly blobs, queues and tables.

But now we have two other storage options. But note: as of the date I am writing this post, these are available as preview functionality.
  1. Netherite
  2. Microsoft SQL Server
    • This supports both on-premises and cloud-hosted deployments of SQL Server, including Azure SQL Database.
In this post, let's see how we can use Microsoft SQL Server as the state storage provider for an Azure Durable Function App. I am going to be using a .NET Function App with C#.

I have created an Azure Function App using Visual Studio, choosing Durable Functions Orchestrator as the template. (Note: Durable Functions are not yet supported on .NET 5; we still need to target .NET Core 3 (LTS), which is .NET Core 3.1.)

Then we need to install the Microsoft.DurableTask.SqlServer.AzureFunctions NuGet package (note: as of the date I am writing this post, it's still a pre-release version). Once that's installed, I am updating host.json as follows.

host.json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    },
    "logLevel": {
      "DurableTask.SqlServer": "Information",
      "DurableTask.Core": "Warning"
    }
  },
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "type": "mssql",
        "connectionStringName": "SQLDB_Connection"
      }
    }
  }
}
Here the most important change is in the extensions node: we have set the storageProvider type to mssql and configured a connectionStringName. There is also a slight modification to the logging configuration to reduce unnecessary noise, so we can see what's happening on the SQL side of things.

Now we need to specify the SQLDB_Connection key and its value. Since I am going to be running this function locally, I am going to add that key to local.settings.json. When deployed to Azure, you can maintain this setting in Application Settings or wherever you maintain your app settings.

Before that, let's create an empty database in our Microsoft SQL Server. I am just using my local MSSQL Server.
Empty Database
It's just an empty database; I just expanded Tables and Stored Procedures to be sure. Now I am copying the connection string for this database and updating local.settings.json as follows.

local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "SQLDB_Connection": "Data Source=RAVANA-TPP15\\MSSQLSERVER2019;Initial Catalog=AzDurableFuncApp;Integrated Security=True;"
  }
}
Here we still need the AzureWebJobsStorage key and value, because there are other things (secrets etc.) that need to be stored.

Now let's run the function app.
First run on the function app
You will see that some SQL scripts are getting executed. Now let's refresh the database.
Updated Database
There are a couple of tables and a bunch of Stored Procedures (amongst other things) that got added in order to manage the state. If you want to have a look at the SQL scripts that get executed, you can find them here: https://github.com/microsoft/durabletask-mssql/tree/main/src/DurableTask.SqlServer/Scripts.
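For example, after running an orchestration you can peek at the state the provider tracks with a simple query. This is a sketch assuming the dt schema and an Instances table like the ones created in my database; table and column names may differ between versions of the package, so check what actually got created in yours:

```sql
-- Inspect the most recent orchestration instances tracked by the MSSQL provider
SELECT TOP (10) InstanceID, Name, RuntimeStatus, CreatedTime
FROM dt.Instances
ORDER BY CreatedTime DESC;
```

Being able to query orchestration state with plain T-SQL like this is one of the nice side effects of the SQL storage provider.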

So why do we need to consider changing the storage provider for Azure Durable Functions in the first place, and which one should we use? You can find answers to those questions and more information by reading through the Durable Functions storage providers documentation.

Hope this helps.

Happy Coding.

Regards,
Jaliya