Tuesday, June 28, 2022

Session: Improve Developer Productivity and Code Quality with GitHub at the First Ever Microsoft Reactor Pop-Up, New Zealand

The first-ever Microsoft Reactor Pop-Up, New Zealand started today, and it's a three-day event.

Microsoft Reactor Pop-Up, New Zealand
Day 1 focused on Application Innovation with Azure, Day 2 on Innovating with Data and AI, and Day 3 is all about the new Azure NZ North region.

I got the opportunity to deliver a session on the first day and the session was about Improving Developer Productivity and Code Quality with GitHub. There were also some amazing sessions delivered by Microsoft FTEs and MVPs.

In the session, I demoed GitHub Copilot using JavaScript, Python, and C# examples, and then went through GitHub Advanced Security features: Code Scanning, Secret Scanning, Dependency Review, and Security Overview.

It was fun!

If you missed today, you can still join for Day 2 and Day 3 to learn from an amazing list of speakers.

Happy Coding.

Regards,
Jaliya

Thursday, June 16, 2022

.NET 7 Preview 5: Improved Polymorphic Serialization/Deserialization Support in System.Text.Json

.NET 7 Preview 5 is out and we now have improved Polymorphic Serialization/Deserialization Support in System.Text.Json.

Consider the following classes.
public class Person
{
    public string Name { get; set; }
}
 
public class Student : Person
{
    public int StudentId { get; set; }
}
 
public class Employee : Person
{
    public int EmployeeId { get; set; }
}
Now if I do something like the below, you can see it only serializes the properties that are in the parent class (I wrote a post about this a couple of months back: System.Text.Json.JsonSerializer: Serialize Properties of Derived Classes).
JsonSerializerOptions options = new() { WriteIndented = true };
 
Person person = new Student
{
    Name = "John Doe",
    StudentId = 1
};

string jsonString = JsonSerializer.Serialize<Person>(person, options);
Console.WriteLine(jsonString);
 
//{
//  "Name": "John Doe"
//}
We can now change this behavior using the JsonDerivedType attribute.
[JsonDerivedType(typeof(Student))]
[JsonDerivedType(typeof(Employee))]
public class Person
{
    public string Name { get; set; }
}
This configuration enables polymorphic serialization for Person, specifically when the runtime type is one of the derived classes.

Now if we do something like the below, we can see the correct runtime types are getting serialized.
JsonSerializerOptions options = new() { WriteIndented = true };
 
Person person = new Student
{
    Name = "John Doe",
    StudentId = 1
};
string jsonString = JsonSerializer.Serialize<Person>(person, options);
Console.WriteLine(jsonString);
 
//{
//  "StudentId": 1,
//  "Name": "John Doe"
//}
 
person = new Employee
{
    Name = "Jane Doe",
    EmployeeId = 1
};
 
jsonString = JsonSerializer.Serialize<Person>(person, options);
Console.WriteLine(jsonString);
 
//{
//  "EmployeeId": 1,
//  "Name": "Jane Doe"
//}
If I try to serialize a derived type that's not annotated in the base class, I am going to see an exception like the one below.
System.NotSupportedException: Runtime type 'Teacher' is not supported by polymorphic type 'Person'
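As a sketch, assuming a hypothetical Teacher class that derives from Person but isn't annotated on the base class:

```csharp
public class Teacher : Person
{
    public int TeacherId { get; set; }
}

Person person = new Teacher
{
    Name = "John Doe",
    TeacherId = 1
};

// Throws System.NotSupportedException, because Teacher is not
// declared with [JsonDerivedType] on Person
string jsonString = JsonSerializer.Serialize<Person>(person, options);
```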
Now when deserializing, it would be as follows.
string jsonString = """
{
  "StudentId": 1,
  "Name": "John Doe"
}
""";
 
Person person = JsonSerializer.Deserialize<Person>(jsonString);
Console.WriteLine(person is Student); // false, polymorphic deserialization doesn't work
Here note that this does not enable polymorphic deserialization. 

In order to enable polymorphic deserialization, we need to specify a type discriminator.
[JsonDerivedType(typeof(Student), typeDiscriminator: "student")]
public class Person
{
    public string Name { get; set; }
}
Now when we serialize, the JSON will include the type discriminator.
Person person = new Student
{
    Name = "John Doe",
    StudentId = 1
};
string jsonString = JsonSerializer.Serialize<Person>(person, options);
Console.WriteLine(jsonString);
 
//{
//  "$type": "student",
//  "StudentId": 1,
//  "Name": "John Doe"
//}
Now when deserializing, we can see polymorphic deserialization works.
string jsonString = """
{
  "$type": "student",
  "StudentId": 1,
  "Name": "John Doe"
}
""";
 
Person person = JsonSerializer.Deserialize<Person>(jsonString);
Console.WriteLine(person is Student); // true, polymorphic deserialization works
Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, June 15, 2022

Azure Functions (In Process): Output Binding to a Dynamic Queue/Topic

In this post let's see how we can set up an Output Binding in an In Process Azure Function to send messages to a dynamic target Azure Service Bus Queue or Topic.

We usually use ICollector or IAsyncCollector to write values to an output binding. And oftentimes, the target queue/topic is set up through attributes, something like below:

[FunctionName(FunctionNames.SomeFunction)]
public async Task Run(
    [ServiceBusTrigger("%ServiceBus_ListeningQueue%", Connection = "ServiceBus_ConnectionString")] 
        ServiceBusReceivedMessage serviceBusReceivedMessage,
    [ServiceBus("%ServiceBus_TargetQueue%", Connection = "ServiceBus_ConnectionString")] 
        IAsyncCollector<MyOutputMessage> asyncCollector)
{
    // TODO: Some code

    await asyncCollector.AddAsync(new MyOutputMessage { });
}

The above function is listening to an Azure Service Bus Queue (ServiceBus_ListeningQueue) and outputting a message to another Queue (ServiceBus_TargetQueue). Here the target Queue is static and cannot be changed. 

What if we want to route the message to a target Queue or Topic based on a property of the incoming message (most of the time based on the ServiceBusReceivedMessage.ReplyTo property)?

In these cases, we can use Azure Functions imperative binding instead of declarative binding (which is done above).

[FunctionName(FunctionNames.SomeFunction)]
public async Task Run(
    [ServiceBusTrigger("%ServiceBus_ListeningQueue%", Connection = "ServiceBus_ConnectionString")]
        ServiceBusReceivedMessage serviceBusReceivedMessage,
    IBinder binder)
{
    // TODO: Some code
 
    IAsyncCollector<MyOutputMessage> asyncCollector =
        await binder.BindAsync<IAsyncCollector<MyOutputMessage>>(new ServiceBusAttribute(serviceBusReceivedMessage.ReplyTo)
        {
            // Note: This should be the App Setting Name that contains the Service Bus connection string
            Connection = "ServiceBus_ConnectionString",
        });
 
    await asyncCollector.AddAsync(new MyOutputMessage { });
}

And that's it. You can change the EntityType property of ServiceBusAttribute to ServiceBusEntityType.Topic if you want to output into a Topic (default is Queue).
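For example, here is a minimal sketch of the same binding targeting a Topic instead, assuming the incoming message's ReplyTo property carries the target topic name:

```csharp
IAsyncCollector<MyOutputMessage> asyncCollector =
    await binder.BindAsync<IAsyncCollector<MyOutputMessage>>(new ServiceBusAttribute(serviceBusReceivedMessage.ReplyTo)
    {
        // Output to a Topic instead of the default Queue
        EntityType = ServiceBusEntityType.Topic,
        // App Setting Name that contains the Service Bus connection string
        Connection = "ServiceBus_ConnectionString",
    });

await asyncCollector.AddAsync(new MyOutputMessage { });
```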

Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, June 8, 2022

Update Azure APIM API from an OpenAPI Definition using az apim Command Group

This is a quick post on how you can update Azure APIM API from an OpenAPI definition using az apim command group. 

Note: As of the day I am writing this post, the az apim command group is experimental and under development.

The command we need to use is az apim api import.

$resourceGroup = "<Resource Group>"
$serviceName = "<APIM Name>"
$apiId = "<API Name>"
$path = "/<API URL suffix>"
$specificationUrl = "https://.../swagger.json"
 
az apim api import `
    --resource-group $resourceGroup `
    --service-name $serviceName `
    --api-id $apiId `
    --path $path `
    --specification-url $specificationUrl `
    --specification-format OpenApi

This will update the existing API while preserving existing policies that you may have on your API.

There is also the az apim api update command, which is to be used if you want to update the properties of the API.
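For instance, a hypothetical sketch of updating an API's display name and description with az apim api update (the placeholder values are assumptions):

```powershell
az apim api update `
    --resource-group $resourceGroup `
    --service-name $serviceName `
    --api-id $apiId `
    --display-name "<New Display Name>" `
    --description "<New Description>"
```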

This approach can be used if you are facing 504 Gateway Timeout errors when updating APIs that contain a large schema via Azure Resource Manager (ARM) APIs on a Developer pricing tier APIM instance: Azure API Management with Developer Pricing Tier: 504 Gateway Timeout.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Monday, June 6, 2022

Playing Around with GitHub Copilot

I have been spending some quality time with GitHub Copilot and so far it has made me fall out of my chair a couple of times. It truly is amazing. Thought of writing this post to share some experience, so whoever hasn't tried it so far, can get amazed like me.

GitHub Copilot is a pair programmer that does way more than auto-completion, and it's completely powered by AI (Artificial Intelligence). I have been using Visual Studio for a long time, and probably since the day I started, we have had IntelliSense, which does basic auto-completion. Then a couple of years back, Microsoft announced IntelliCode, an AI-assisted tool that suggests the next piece of code based on where we are, and it was great.

Now GitHub Copilot places things on a whole different level. It can write code based on comments written in natural language, can you believe that? So whether it's a comment, docstring, function name, or the code itself, GitHub Copilot uses those to synthesize code. Currently, it's still in its Technical Preview stage, and has first-class support for the following programming languages.

  • Python
  • JavaScript
  • TypeScript
  • Ruby
  • Java
  • Go

At the same time, it understands dozens of other languages and can help you find your way around almost anything. I have tried it with C# and T-SQL, and it works super nicely.

As of today, GitHub Copilot is available as an extension for editors including Visual Studio Code.

This is a simple Express app I created using Visual Studio Code. I only wrote the comments, and the code was completely written by GitHub Copilot.
// Create an Express application
var express = require('express');
 
// Expose an endpoint
var app = express();
app.get('/', function (req, res) {
    res.send('Hello World!');
});
 
// Expose another endpoint
app.get('/api', function (req, res) {
    // Send a response of current time
    res.send(new Date());
});
 
// Run express application
app.listen(3000, function () {
    console.log('Listening on port ' + 3000);
});
I have also created a simple ASP.NET Core Minimal API with CRUD operations. Again, I only wrote the comments in natural language, and GitHub Copilot wrote all the code: creating a class, the DbContext, and exposing fully functional CRUD endpoints.

So how does this work?

First, let's start with a bit of history. It all started with an AI Research Startup called OpenAI founded by Elon Musk and Sam Altman. In July 2019, Microsoft partnered up with OpenAI. In June 2020, OpenAI announced GPT-3, a language model trained on trillions of words from the Internet. In June 2021, OpenAI Codex was introduced, a model based on GPT-3 to translate natural language to code (and vice-versa).

GitHub Copilot is powered with OpenAI Codex models. 
How GitHub Copilot Works
OpenAI Codex was trained on publicly available source code (including code in public repositories on GitHub) and natural language, so it understands both programming and human languages. The GitHub Copilot editor extension sends our comments and code to the GitHub Copilot service, which then uses OpenAI Codex to synthesize and suggest individual lines and/or whole functions.

The whole GitHub Copilot experience is beyond words; you have to try it for yourself. Sign up for the Technical Preview now. You will have to be on the waitlist for some time, but I would definitely say it's worth the wait, because once you have access, it's going to blow your mind.

Hope this helps.

Happy Coding.

Regards,
Jaliya