Thursday, October 21, 2021

Sending Per User Notifications using Azure Functions SignalR Service Output Binding

In this post, let's have a look at how to send per-user notifications using the Azure Functions SignalR Service output binding.

I already have an Azure SignalR Service and an Azure Function set up. My client app is an Angular application.

In my Azure Function, I have the Negotiate function as follows.

public static class NegotiateFunction
{
    [FunctionName(FunctionNames.Negotiate)]
    public static SignalRConnectionInfo Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest request,
        [SignalRConnectionInfo(
            HubName = "%SignalR_HubName%",
            UserId = "{headers.x-application-user-id}",
            ConnectionStringSetting = "SignalR_ConnectionString")] SignalRConnectionInfo connectionInfo)
    {
        return connectionInfo;
    }
}

Note that I am setting the UserId to a custom header (x-application-user-id), which I am expecting the client app to send.

Alternatively, you can use imperative binding instead of the declarative binding done above. This is helpful when you have clients that aren't sending the header you are using for the UserId. With declarative binding, if the x-application-user-id header isn't set, the Negotiate function will throw an error.

Imperative binding can be done as follows.

public class NegotiateFunction
{
    private readonly IConfiguration _configuration;
 
    public NegotiateFunction(IConfiguration configuration)
    {
        _configuration = configuration;
    }
 
    [FunctionName(FunctionNames.Negotiate)]
    public async Task<SignalRConnectionInfo> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest request,
        IBinder binder)
    {
        string userId = request.Headers["x-application-user-id"];
 
        SignalRConnectionInfoAttribute attribute = new SignalRConnectionInfoAttribute
        {
            HubName = _configuration.GetValue<string>("SignalR_HubName"),
            UserId = userId,
            ConnectionStringSetting = "SignalR_ConnectionString"
        };
        SignalRConnectionInfo connectionInfo = await binder.BindAsync<SignalRConnectionInfo>(attribute);
 
        return connectionInfo;
    }
}

And on my client app, I am building the hubConnection as follows.

this.hubConnection = new signalR.HubConnectionBuilder()
    .withUrl("<azure-functions-base-url>", {
        headers: {
            "x-functions-key": "<function key>",
            "x-application-user-id": "<user-id>"
        }
    })
    .withAutomaticReconnect()
    .configureLogging(signalR.LogLevel.Information)
    .build();

Once that is done and the connection is started, we should be seeing the UserId in the access token when the client negotiates with Azure SignalR Service (after negotiating with Function App and receiving the connection info).
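For completeness, on the client side we also need to register a handler for the target the Function App sends to, and then start the connection. Here is a sketch; the target name AddApplicationUserNotification and the notification shape are assumptions based on the server-side code later in this post:

```typescript
// Sketch: register a handler for the target used by the Function App's
// SignalR output binding, then start the connection. The target name and
// the notification shape are assumptions based on this post's server code.
this.hubConnection.on("AddApplicationUserNotification", (notification: any) => {
    // notification is the ApplicationUserNotification sent from the Function App
    console.log("Notification received:", notification);
});

this.hubConnection
    .start()
    .then(() => console.log("SignalR connection started."))
    .catch((error: any) => console.error("Error starting SignalR connection.", error));
```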

Access Token

Now we just need to set the UserId when adding a SignalRMessage via the output binding in our Function App, something like this.

public static class SendApplicationUserNotificationFunction
{
    [FunctionName(FunctionNames.SendApplicationUserNotification)]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest request,
        [SignalR(
            HubName = "%SignalR_HubName%",
            ConnectionStringSetting = "SignalR_ConnectionString")]IAsyncCollector<SignalRMessage> signalRMessageCollector)
    {
        var requestBody = await new StreamReader(request.Body).ReadToEndAsync();
        ApplicationUserNotification applicationUserNotification = JsonConvert.DeserializeObject<ApplicationUserNotification>(requestBody);
 
        await signalRMessageCollector.AddAsync(new SignalRMessage
        {
            Target = SignalRTargets.AddApplicationUserNotification,
            Arguments = new[] { applicationUserNotification },
            UserId = applicationUserNotification.ApplicationUserId,
        });
 
        return new OkResult();
    }
}
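With the function deployed, it can be invoked with a simple HTTP POST. Below is a sketch; the base URL and function key are placeholders, and the JSON property names are assumptions based on the ApplicationUserNotification type used above:

```shell
# Sketch: invoke the SendApplicationUserNotification function.
# The base URL and function key are placeholders; the JSON property names
# are assumptions based on the ApplicationUserNotification type above.
curl -X POST "https://<azure-functions-base-url>/api/SendApplicationUserNotification" \
    -H "x-functions-key: <function key>" \
    -H "Content-Type: application/json" \
    -d '{ "applicationUserId": "1", "message": "Hello User 1!" }'
```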

So I have two users logged in to my client application: UserId: 1 and UserId: 110. And I am sending a message to UserId: 1.

I can see UserId: 1 received the message.

UserId: 1 received the message
But UserId: 110 hasn't received it.

UserId: 110 didn't receive the message
Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, October 20, 2021

EF Core 6.0: Introducing Pre-convention Model Configuration

In this post let's have a look at another feature EF Core 6.0 brings to the table.

Consider a scenario where, in your context, you have a specific type, and you want to maintain a similar configuration for that type's properties throughout all your entities.

Let's go through a small example. Say I have multiple decimal properties in my entities, and I want all of them to be decimal(18,4) instead of decimal(18,2), which is the default.
public class OrderItem
{
    public int Id { get; set; }
 
    public int OrderId { get; set; }
 
    public decimal Amount { get; set; }
 
    public decimal Discount { get; set; }
}
Here I have two decimal properties in the same Entity. If I want to configure these to be decimal(18,4), I need to update the configuration for each property.

EF Core 6.0 exposes this new method ConfigureConventions(ModelConfigurationBuilder configurationBuilder) in DbContext which we can now override.
protected override void ConfigureConventions(ModelConfigurationBuilder configurationBuilder)
{
    configurationBuilder
       .Properties<decimal>()
       .HavePrecision(18, 4);
}
And if we create a migration for this, we can see EF is using decimal(18,4) for all my decimal properties without me having to configure each property.
ConfigureConventions
And of course, if you want, you can still override this behavior per property.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // other configuration
 
    modelBuilder.Entity<OrderItem>(builder =>
    {
        builder
            .Property(x => x.Discount)
            .HasPrecision(18, 2);
    });
}
And if you add a migration now,
Override per property

You can see that only the Discount property's scale has changed.

There are other great use cases; you can read more at:
   Pre-convention model configuration
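To give one example of those other use cases: the same mechanism can set a default maximum length for every string property in the model. Here is a sketch; the length of 200 is an arbitrary choice:

```csharp
protected override void ConfigureConventions(ModelConfigurationBuilder configurationBuilder)
{
    // Sketch: all string properties default to a max length of 200
    // (the value 200 is an arbitrary choice for illustration)
    configurationBuilder
        .Properties<string>()
        .HaveMaxLength(200);

    // All decimal properties default to decimal(18, 4), as before
    configurationBuilder
        .Properties<decimal>()
        .HavePrecision(18, 4);
}
```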

Happy Coding.

Regards,
Jaliya

Sunday, October 17, 2021

EF Core 6.0: Introducing Support for SQL Server Temporal Tables

In this post, let's have a look at one of the nicest features coming with EF Core 6.0. (Actually, I shouldn't say it's coming; it's already available. As of the day I am writing this post, we have EF Core 6.0 RC2, and support for SQL Server Temporal Tables was initially introduced with EF Core 6.0 RC1.) It's one of the most highly requested features in EF Core 6.0.

Consider I have the following Entity.
public class Category
{
    public int Id { get; set; }

    public string Name { get; set; }

    public string Description { get; set; }
}
We can enable Temporal Tables through the ModelBuilder or IEntityTypeConfiguration<TEntity> (things you already know and that have been around forever). For simplicity, I will just use the ModelBuilder.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Category>(builder =>
    {
        builder
            .ToTable(x => x.IsTemporal());
    });
}
If we add in a migration for this, we will get something like this.
Create Table with Temporal Table Support
Here I am just creating a Category table with Temporal Table support in the first place. Oftentimes, we will already have the entities for which we need to enable Temporal Table support. That's possible as well; just update the table with IsTemporal().
Alter Table to support Temporal Tables
You can see the period columns and the history table have default names. If you want, you can override those through the TemporalTableBuilder as follows.
builder
    .ToTable(x => x.IsTemporal(x =>
        {
            x.HasPeriodStart("PeriodFrom");
            x.HasPeriodEnd("PeriodTo");
            x.UseHistoryTable($"{nameof(Category)}HistoricalData");
        })
    );
For simplicity, I am going to use the defaults.

And once the migration is applied, you can see your table now has Temporal Table support.

EF Core 6.0 also provides support for querying historical data, which I personally find very helpful.

Consider the following.
Category category = new() { Name = "Category A", Description = "Category A Description" };
await context.Categories.AddAsync(category);
await context.SaveChangesAsync();

await Task.Delay(5000);

category.Description = "Category A Description Updated";
context.Categories.Update(category);
await context.SaveChangesAsync();
Here I am just creating a new Category, and then after 5 seconds, I am doing an update to the Category's Description.

Now I can query the history data as follows.
var categoryHistories = await context.Categories
    .TemporalAll()
    .Where(x => x.Id == category.Id)
    .OrderBy(x => EF.Property<DateTime>(x, "PeriodStart"))
    .Select(x => new
    {
        Category = x,
        PeriodStart = EF.Property<DateTime>(x, "PeriodStart"),
        PeriodEnd = EF.Property<DateTime>(x, "PeriodEnd")
    })
    .ToListAsync();

foreach (var item in categoryHistories)
{
    Console.WriteLine($"Description: {item.Category.Description}, " + $"Start: {item.PeriodStart} - End: {item.PeriodEnd}");
}
This would give an output something like the following.
Querying Historical Data
The important part here is the TemporalAll() method. We have a couple of options there.
  • TemporalAll
    • Returns all rows in the historical data.
  • TemporalAsOf
    • Returns rows that were active (current) at the given UTC time.
  • TemporalFromTo
    • Returns all rows that were active between two given UTC times.
  • TemporalBetween
    • The same as TemporalFromTo, except that rows that became active on the upper boundary are also included.
  • TemporalContainedIn
    • Returns all rows that started being active and stopped being active between two given UTC times.
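As a quick sketch of the point-in-time options (the timestamps below are arbitrary, for illustration only):

```csharp
// Sketch: the state of the Category as it was at a given UTC time
Category categoryAsOf = await context.Categories
    .TemporalAsOf(new DateTime(2021, 10, 17, 10, 0, 0, DateTimeKind.Utc))
    .SingleAsync(x => x.Id == category.Id);

// Sketch: all versions of the Category that were active within a UTC time range
List<Category> categoryVersions = await context.Categories
    .TemporalFromTo(
        new DateTime(2021, 10, 17, 0, 0, 0, DateTimeKind.Utc),
        new DateTime(2021, 10, 18, 0, 0, 0, DateTimeKind.Utc))
    .Where(x => x.Id == category.Id)
    .ToListAsync();
```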
Hope this helps.

Happy Coding.

Regards,
Jaliya

Friday, October 15, 2021

EF Core 5.0: Split Queries

In this post, let's have a look at the nice feature "Split Queries" that was introduced with EF Core 5.0.

Consider you have the following Models in your DbContext.

public class Category
{
    public int Id { get; set; }

    public string Name { get; set; }

    public ICollection<Product> Products { get; set; }
}

public class Product
{
    public int Id { get; set; }

    public string Name { get; set; }

    public int CategoryId { get; set; }

    public ICollection<Item> Items { get; set; }
}

public class Item
{
    public int Id { get; set; }

    public string Name { get; set; }

    public int ProductId { get; set; }
}

And if you do a query something like this,

List<Category> query = await context.Categories
    .Include(x => x.Products)
        .ThenInclude(x => x.Items)
    .ToListAsync();
The generated query would be something like this.
SELECT [c].[Id], [c].[Name], [t].[Id], [t].[CategoryId], [t].[Name], [t].[Id0], [t].[Name0], [t].[ProductId]
FROM [Categories] AS [c]
LEFT JOIN (
    SELECT [p].[Id], [p].[CategoryId], [p].[Name], [i].[Id] AS [Id0], [i].[Name] AS [Name0], [i].[ProductId]
    FROM [Products] AS [p]
    LEFT JOIN [Item] AS [i] ON [p].[Id] = [i].[ProductId]
) AS [t] ON [c].[Id] = [t].[CategoryId]
ORDER BY [c].[Id], [t].[Id], [t].[Id0]
And at the same time, you might see a warning like this.

Compiling a query which loads related collections for more than one collection navigation either via 'Include' or through projection but no 'QuerySplittingBehavior' has been configured. By default Entity Framework will use 'QuerySplittingBehavior.SingleQuery' which can potentially result in slow query performance. To identify the query that's triggering this warning call 'ConfigureWarnings(w => w.Throw(RelationalEventId.MultipleCollectionIncludeWarning))'

So why is that? 

Let's look at the result of this query.
Result: SingleQuery
As you can see, I have just one Category in the table, but the Category information is duplicated based on how many Products a particular Category has and how many Items each Product has. This duplication leads to a problem called "Cartesian explosion".

Now how to address this?

As mentioned in the warning, by default Entity Framework will use 'QuerySplittingBehavior.SingleQuery'. We can change this behavior,
  • Per query or
  • Change the default globally
Per query, we can achieve this by adding AsSplitQuery(),
List<Category> query = await context.Categories
    .Include(x => x.Products)
        .ThenInclude(x => x.Items)
    .AsSplitQuery()
    .ToListAsync();
And now the generated query would change as follows,
SELECT [c].[Id], [c].[Name]
FROM [Categories] AS [c]
ORDER BY [c].[Id]

SELECT [p].[Id], [p].[CategoryId], [p].[Name], [c].[Id]
FROM [Categories] AS [c]
INNER JOIN [Products] AS [p] ON [c].[Id] = [p].[CategoryId]
ORDER BY [c].[Id], [p].[Id]

SELECT [i].[Id], [i].[Name], [i].[ProductId], [c].[Id], [p].[Id]
FROM [Categories] AS [c]
INNER JOIN [Products] AS [p] ON [c].[Id] = [p].[CategoryId]
INNER JOIN [Item] AS [i] ON [p].[Id] = [i].[ProductId]
ORDER BY [c].[Id], [p].[Id]
This would result in
Result: SplitQuery
Or we can change the QuerySplittingBehavior globally.
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=EFCore5;Trusted_Connection=True", options =>
        {
            options.UseQuerySplittingBehavior(QuerySplittingBehavior.SplitQuery);
        })
        .EnableSensitiveDataLogging()
        .LogTo(Console.WriteLine, LogLevel.Information);
}
Then for some of the queries, if you want to use SingleQuery behavior, you can do AsSingleQuery().
List<Category> query = await context.Categories
    .Include(x => x.Products)
        .ThenInclude(x => x.Items)
    .AsSingleQuery()
    .ToListAsync();
Now there are some important factors to consider when using Split Queries.
  • If you use split queries over one-to-one related entities, it has no effect. Those are always loaded via JOINs in the same query, as that has no performance impact.
  • When the query is split, it means multiple round trips to the database.
  • There can be concerns over data consistency if the data is being updated concurrently, because multiple queries are involved.
  • Most databases allow only a single query to be active at any given point (notable exceptions being SQL Server with Multiple Active Result Sets enabled, and SQLite). So all results from earlier queries must be buffered in your application's memory, which leads to increased memory requirements.
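On SQL Server, Multiple Active Result Sets (MARS) can be enabled through the connection string, which avoids this buffering at the cost of the MARS session overhead. Here is a sketch; adjust the connection string to your own environment:

```csharp
// Sketch: enabling MARS via the connection string (adjust to your environment)
optionsBuilder
    .UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=EFCore5;Trusted_Connection=True;MultipleActiveResultSets=True");
```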
Hope this helps.
 
Happy Coding.

Regards,
Jaliya

Monday, October 4, 2021

Profiling Azure SQL Database

In this post, let's have a look at how to profile an Azure SQL Database.

If it's a database in an Azure SQL Managed Instance, we can use the SQL Server Management Studio (SSMS) Profiler, which I believe most of us are comfortable with. But if it's an Azure SQL Database, unfortunately, we can't use the SSMS Profiler. We are going to get an error like this.
SQL Server Profiler: Not Supported
But not to worry. There is still a way to profile an Azure SQL Database.

First, we need to download and install Azure Data Studio. Once that's done, you need to install the SQL Server Profiler extension (currently it's in Preview, but it works great).
SQL Server Profile extension
Hint: instead, you can install the Admin Pack for SQL Server extension, which will give you a couple of nice extensions (such as SQL Server dacpac, SQL Server Import, etc.) along with SQL Server Profiler.

Once the SQL Server Profiler extension is installed, first make a connection to the target server.

Then,
  • Open Profiler by pressing Alt+P on Windows, and Ctrl+Alt+P on macOS.
  • To Start/Stop Profiler, click the Start button or press Alt+S on Windows, or Ctrl+Alt+S on macOS.
  • Otherwise, open the command palette and type 'Profiler.'
Profiling Session
I just love this!

Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, September 30, 2021

Integrate API Management in an Internal VNET with Application Gateway

In this post, let's see how we can integrate API Management (APIM) in an internal Virtual Network (VNet) and front it with an Azure Application Gateway.

I am not going to explain what APIM, VNet, Application Gateway and the other services used are; I am assuming you are already comfortable with those areas.

In my current environment, I have an APIM exposing some services and it's publicly available.

  • Gateway URL: https://apim-centura.azure-api.net
  • Developer Portal: https://apim-centura.developer.azure-api.net
  • Management API URL: https://apim-centura.management.azure-api.net
I can use the APIM Gateway URL to call the underlying backends, pretty straightforward. And the Developer Portal and Management API URL are public as well.

All my resources are in the following Resource Group and Location. And I will be using these throughout.

Resource Group Name: rg-apim-internal
Location: East US 2
Calling API via APIM
Now let's start. The whole process has two parts. 

PART I: Create/Configure a VNet and Configure APIM to be in Internal VNet mode
PART II: Create and Configure Application Gateway to front APIM

PART I: Create/Configure a VNet and Configure APIM to be in Internal VNet mode


Let's create a VNet with two subnets. One for the Application Gateway (which we will create later) and the other for APIM.

Create Virtual Network
So I have the VNet named vnet-centura-eastus2-001 created,
  • snet-appgateway: Subnet for the Application Gateway
  • snet-apim: Subnet for the APIM
Now we need to update the APIM's VNet mode to Internal and place it inside our VNet's subnet for APIM.
APIM: Update to Internal Mode
Once this change is applied and saved, our APIM (or any of its access points) is no longer publicly available. The update is going to take some time, and when the service has been updated, make a note of the Private VIP.
APIM: Private VIP
Now let's create a Private DNS zone for DNS resolution in our virtual network. I am going to name it centura.net (some name that came to my mind).

Create a Private DNS zone
Once it's created, let's link our VNet to this private DNS zone.
Private DNS zone: Virtual Networks Links
Select the VNet and select Enable auto registration.
Private DNS zone: Add virtual network link
Next, let's add Record sets.

Private DNS zone: Overview
I am adding A records for the following hostnames, all pointing to the Private VIP of APIM we noted above.

  • gateway.centura.net: APIM Gateway
  • developer.centura.net: APIM Developer Portal
  • management.centura.net: APIM Management API
Let's start with the Gateway.

Private DNS zone: Add Record Set
Repeat the same for Developer Portal and Management API. 

Once that's done my Record sets are like below.
Private DNS zone: Overview
Now, before we move on to the next steps, we need to create some certificates. For demo purposes, I am creating certificates locally by running the following PowerShell script.
# create trusted root certificate
$trustedRootCert = New-SelfSignedCertificate `
    -Type Custom `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -KeySpec Signature `
    -Subject "CN=Centura (Fictional) CA" `
    -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 `
    -KeyLength 4096 `
    -KeyUsageProperty Sign `
    -KeyUsage CertSign `
    -NotAfter (Get-Date).AddMonths(24)
# create self-signed SSL server certificate for *.centura.net
$sslCert = New-SelfSignedCertificate `
    -Type Custom `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -KeySpec Signature `
    -Subject "CN=*.centura.net" `
    -DnsName "*.centura.net","centura.net" `
    -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 `
    -KeyLength 2048 `
    -Signer $trustedRootCert
# export trusted root certificate
Export-Certificate  `
    -Type CERT `
    -Cert $trustedRootCert `
    -FilePath .\centura-trustedroot.cer
# export self-signed SSL server certificate for *.centura.net
$pfxPassword = ConvertTo-SecureString -String "SomePassword123" -AsPlainText -Force
Export-PfxCertificate `
    -ChainOption BuildChain `
    -Cert $sslCert `
    -FilePath .\centura.pfx `
    -Password $pfxPassword
This will create a trusted root certificate and an SSL certificate for *.centura.net.
Created Certificates
Now let's navigate to the APIM and configure custom domain names.
APIM: Custom Domains
Let's add 3 custom hostnames, for the APIM Gateway, Developer Portal and Management API.

Starting with Gateway.

APIM: Add Custom Domain Name
I have given the Type, Hostname, the Certificate (SSL) we created and enabled Default SSL binding.

Repeat the same steps for Developer Portal and Management API.
APIM: Custom Domains
We have completed the first part.

Now let's start the second part.

PART II: Create and Configure Application Gateway to front APIM


Let's start by creating the Application Gateway.

Create Application Gateway: Basics
I have given a name, selected the Region, selected the virtual network and subnet. Note I have used the subnet we created for the Application Gateway.

Click on Next to set up the Frontends. 

Here our Application Gateway is going to be public, and I am adding a new Public IP address.

Create Application Gateway: Frontends
Now click on Next to configure the Backends.

Here I need to add three backend pools, targeting the APIM Gateway, Developer Portal, and Management API.
Create Application Gateway: Backends
Let's start with the Gateway.
Create Application Gateway: Add a backend pool
Repeat the same steps for Developer Portal and Management API. Change the IP address or FQDN respectively.

Once that is done, I have something like below.
Create Application Gateway: Backends
Now click on Next to configure the Application Gateway.
Create Application Gateway: Routing Rules
We need to have at least a single Routing Rule for us to be able to create the Application Gateway. Let's start with adding a routing rule for Gateway.
Create Application Gateway: Add Routing Rule -> Listener
Here for the Listener, I have selected the public Frontend IP, chosen the SSL certificate, and specified the hostname. Now move to the Backend targets tab. Here I am selecting the Target type as Backend pool and selecting the correct Backend target from the dropdown.
Create Application Gateway: Add Routing Rule -> Backend targets
Now let's add an HTTP setting for the Gateway route.
Create Application Gateway: Add Routing Rule -> Backend targets -> Http Setting
Note: here we aren't using a well-known CA certificate, so I have used the trusted root certificate we created before. The hostname will get picked up from the backend target, and I have said No to Create custom probes (we will be creating probes later).

Now complete the Add Routing Rule Wizard.
Create Application Gateway: Configuration
We have only created a routing rule for Gateway. That's enough for us to complete the Application Gateway creation, let's proceed and create the Application Gateway.

Once the Application Gateway is created, let's open it up.
Application Gateway
We are greeted with an error, but it's nothing to worry about for now. Let's check the Backend health.
It's Unhealthy, hence the error. Now, if you recall, we didn't add any health probe when we were creating the routing rule for the Gateway.

Let's add a Health Probe for the Gateway.
Application Gateway: Health probes
Application Gateway: Health Probes -> Add Health Probe
Note: Here I have selected the hostname to be picked up from the backend HTTP setting. The Path is /status-0123456789abcdef, which is the default health endpoint hosted on all API Management services. I have selected the HTTP setting we created as part of the Application Gateway creation, and I am skipping testing the backend health before adding the health probe.

Once the health probe is added, let's go back to Backend health.
Application Gateway: Backend Health
It's healthy, so we are good with the Gateway.

Now let's configure the backends for Developer Portal and Management API. Let's start by adding HTTP settings. Navigate to HTTP Settings.

Application Gateway: HTTP Settings
Let's first create an HTTP setting for the Developer Portal.
Application Gateway: Add HTTP Setting
It's just the same as adding an HTTP setting in the Application Gateway creation wizard; this time, we need to select the existing trusted root certificate and add it.

Repeat the same step for Management API and once completed, I have something like this.
Application Gateway: HTTP Settings
Now navigate to Listeners and let's add Listeners for Developer Portal and Management API.
Application Gateway: Listeners
Let's start by adding a listener for the Developer Portal.
Application Gateway: Listeners -> Add
Here again, it's the same as in the Application Gateway creation wizard; this time, I am selecting the existing SSL certificate. Note: I am using the respective Host name.

Repeat the same step for Management API and once completed, I have something like this.
Application Gateway: Listeners
Now we are all set to create routing rules for the Developer Portal and Management API. Navigate to Rules and add a new Request routing rule.
Application Gateway: Rules
Let's start with the Developer Portal again.
Application Gateway: Add Routing Rule -> Listener
I have selected the correct Listener. Then move to Backend targets tab.
Application Gateway: Add Routing Rule -> Backend targets
I selected the correct Backend target and HTTP setting from the respective dropdowns and clicked Add.

Repeat the same step for Management API and once completed, I have something like this.
Application Gateway: Rules
We are almost there. One last thing to configure: adding Health Probes for the Developer Portal and Management API. Navigate back to Health probes and click on Add.

Let's start with the Developer Portal.
Application Gateway: Health Probes -> Add
Note: here the Path is different; it's "/signin". Select the correct HTTP setting, skip the test, and click on Add.

Repeat the same steps for Management API. Its path is "/ServiceStatus".
Application Gateway: Health Probes
Now technically we should be all good. Let's go back to Backend health.

Application Gateway: Backend Health
That's it. Everything is Healthy.

Now comes the difficult part: testing to make sure everything is working as expected. Note down the Application Gateway Frontend Public IP.
Application Gateway: Public IP
Now for testing locally, add the following entries to hosts file (c:\Windows\System32\Drivers\etc\hosts).
104.46.6.83 gateway.centura.net
104.46.6.83 developer.centura.net
104.46.6.83 management.centura.net
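Alternatively, if you don't want to touch the hosts file, curl's --resolve option can pin a hostname to the Application Gateway's public IP for a single request. Here is a sketch; the API path and subscription key are placeholders, and -k is needed because we are using a self-signed certificate:

```shell
# Sketch: call the APIM Gateway through the Application Gateway without
# editing the hosts file. <api-path> and <subscription-key> are placeholders.
curl -k \
    --resolve gateway.centura.net:443:104.46.6.83 \
    -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
    "https://gateway.centura.net/<api-path>"
```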

Once that's done, let's do the same API call we did at the start of the post, instead, this time let's use the Gateway URL.

Calling API via Application Gateway -> APIM
It's working. Let's check the Developer Portal.
Developer Portal
Isn't it great?

Now if we want, we can introduce an Access Restriction on our APIs to only allow traffic from the APIM subnet in our virtual network, and deny everything else.

Add Access Restriction on App Services
Now that's a long post. Hope this helps!

If you want to do all this via PowerShell, you can find all the information here,