Monday, May 22, 2023

Enabling CORS in Locally Running In-Process Azure Function

In this post, let's see how we can enable CORS in a locally running In-Process Azure Function.

I had a simple In-Process HTTP Trigger Azure Function that exposes SignalRConnectionInfo. When running locally, a web app (also running locally) was failing to call the endpoint. The fetch request to the HTTP endpoint (http://localhost:7071/api/negotiate?negotiateVersion=1) was failing with the status CORS error.
CORS Error
But when I sent a cURL request to the endpoint, it worked fine (which makes sense, since CORS is enforced by browsers, not by the server). When deployed to Azure, again everything worked as expected (after updating CORS policies, of course).

So something fishy was going on.

In Azure Functions Core Tools, when you are starting a function app, luckily you can configure CORS options.
func --help
So what I did was modify launchSettings.json by adding commandLineArgs as follows (I was using Visual Studio).
{
  "profiles": {
    "FunctionApp": {
      "commandName": "Project",
      "commandLineArgs": "--cors https://tenant1-dev20.localhost:4200 --cors-credentials true",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
The key things here to get it to work are, 
  • --cors: Don't specify star (*). Instead, provide comma-separated CORS origins with no spaces
  • --cors-credentials: Set this to true
And that did it.
CORS Working
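By the way, the same thing can also be configured in local.settings.json under the Host section, which Azure Functions Core Tools picks up when starting the host. Here is a minimal sketch (the Values section below is just a placeholder for whatever your app already has):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  },
  "Host": {
    "CORS": "https://tenant1-dev20.localhost:4200",
    "CORSCredentials": true
  }
}
```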
Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, May 18, 2023

Azure AD B2C Custom Policies: Include User EmployeeId in Claims on a UserJourney

In this post, let's see how we can include the EmployeeId from the Azure AD B2C User Profile in the claims for an Azure AD B2C Sign-In UserJourney (it could actually be any UserJourney).
AAD B2C User Profile - Employee ID
Unfortunately, the AAD-UserReadUsingObjectId technical profile only retrieves basic User Profile information, and EmployeeId is not part of it (read User profile attributes).

So in order to retrieve this additional property, we need to update our UserJourney by adding another OrchestrationStep that retrieves the user's EmployeeId by calling Microsoft Graph API. An important thing to note here is that users cannot obtain tokens for Microsoft Graph API using delegated permissions (read Working with MSAL.js and Azure AD B2C).

In that case, we can create a simple Minimal API or an HTTP-triggered Azure Function that accepts the user's ObjectId and, from there, calls Microsoft Graph API using Client-Credentials to retrieve the user's EmployeeId. You can have a look at this Minimal API for an example of how to call Graph API using Client-Credentials.

For simplicity, let's say we have the following endpoint.
WebApplicationBuilder builder = WebApplication.CreateBuilder(args);

WebApplication app = builder.Build();
app.UseHttpsRedirection();

app.MapPost("/claims", (GetClaimsRequest getClaimsRequest) =>
{
    // TODO: Get claims for the user by calling Graph API using ClientCredentials

    return new
    {
        employeeId = "EMP101"
    };
});

app.Run();

public class GetClaimsRequest
{
    public string ObjectId { get; set; }
}
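For completeness, the TODO above could be filled in by calling Microsoft Graph using Client-Credentials. Below is a rough sketch using the Azure.Identity and Microsoft.Graph (v5) packages. Note: the tenant/client values are placeholders, and this assumes the app registration has been granted the User.Read.All Application permission.

```csharp
using Azure.Identity;
using Microsoft.Graph;

// Client-Credentials flow; the three values below are placeholders.
var credential = new ClientSecretCredential(
    tenantId: "<tenant-id>",
    clientId: "<client-id>",
    clientSecret: "<client-secret>");

var graphClient = new GraphServiceClient(
    credential,
    new[] { "https://graph.microsoft.com/.default" });

app.MapPost("/claims", async (GetClaimsRequest getClaimsRequest) =>
{
    // Retrieve only the employeeId for the given user
    Microsoft.Graph.Models.User? user = await graphClient
        .Users[getClaimsRequest.ObjectId]
        .GetAsync(config => config.QueryParameters.Select = new[] { "employeeId" });

    return new
    {
        employeeId = user?.EmployeeId
    };
});
```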
Now the first thing to do is modify our B2C_1A_TrustFrameworkExtensions and define a ClaimType for employeeId.
<ClaimsSchema>
  ...
  <ClaimType Id="employeeId">
    <DisplayName>Employee Id</DisplayName>
    <DataType>string</DataType>
  </ClaimType>
</ClaimsSchema>

And then register a Restful ClaimsProvider to call our REST endpoint and pass the objectId.

<ClaimsProviders>
  ...
  <ClaimsProvider>
    <DisplayName>Get Additional Claims via REST</DisplayName>
    <TechnicalProfiles>
      <TechnicalProfile Id="REST-GetAdditionalClaims">
        <DisplayName>Get Additional Claims via REST call and transform claims</DisplayName>
        <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        <Metadata>
          <Item Key="ServiceUrl">https://743q57pk-7047.aue.devtunnels.ms/claims</Item>
          <Item Key="SendClaimsIn">Body</Item>
          <Item Key="AuthenticationType">None</Item>
          <Item Key="AllowInsecureAuthInProduction">true</Item>
        </Metadata>
        <InputClaims>
          <InputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="objectId" />
        </InputClaims>
        <OutputClaims>
          <OutputClaim ClaimTypeReferenceId="employeeId" PartnerClaimType="employeeId"/>
        </OutputClaims>
        <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
      </TechnicalProfile>
    </TechnicalProfiles>
  </ClaimsProvider>
</ClaimsProviders>

Here, I am mapping the employeeId that is being returned from the API to the ClaimTypeReferenceId="employeeId" that we created before.

The last step is to modify our UserJourney and add an OrchestrationStep to call the REST endpoint before sending the claims.
<OrchestrationSteps>
  ...
  <OrchestrationStep Order="4" Type="ClaimsExchange">
    <ClaimsExchanges>
      <ClaimsExchange Id="AADUserReadWithObjectId" TechnicalProfileReferenceId="AAD-UserReadUsingObjectId" />
    </ClaimsExchanges>
  </OrchestrationStep>
  <OrchestrationStep Order="5" Type="ClaimsExchange">
    <ClaimsExchanges>
      <ClaimsExchange Id="RESTGetAdditionalClaims" TechnicalProfileReferenceId="REST-GetAdditionalClaims" />
    </ClaimsExchanges>
  </OrchestrationStep>
  ...
</OrchestrationSteps>
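One more thing to remember: for employeeId to actually show up in the issued token/assertion, it also needs to be listed as an OutputClaim in the RelyingParty technical profile of your relying party file, something like the following (DefaultUserJourney ReferenceId here is a placeholder for your journey's Id):

```xml
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignIn" />
  <TechnicalProfile Id="PolicyProfile">
    ...
    <OutputClaims>
      ...
      <OutputClaim ClaimTypeReferenceId="employeeId" />
    </OutputClaims>
  </TechnicalProfile>
</RelyingParty>
```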
In my example, I am using SAML authentication, and I can see employeeId is now included in the SAML response.
EmployeeId Included In Claims
And that's about it.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, May 10, 2023

Visual Studio 2022: Apply File Scoped Namespaces using Code Cleanup Profile

In this post, let's see how we can apply File scoped namespaces to all the existing C# files using a Code Cleanup Profile.

Imagine you have an old .NET project that uses Block scoped namespaces, and you are upgrading the project to the latest .NET and want to use features like File scoped namespaces that got introduced as part of C# 10.0. If you have many files, you definitely don't want to be updating files one by one.

We can apply this change to all the files at once using a Visual Studio Code Cleanup profile.

First, we need to set Namespace declarations code style preference to File scoped (if you haven't already) by going into Tools -> Options -> Text Editor -> C# -> Code Style -> General.
Update Code Style
Then go to Analyze -> Code Cleanup -> Configure Code Cleanup and configure your profile, adding the following preference.
Configure Code Cleanup
You can create a new profile just for this or update an existing profile. And then just run your Code Cleanup profile. That should update all your files with File scoped namespaces.
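Tip: if you want this preference to travel with the repository (so it applies for everyone on the team, and for dotnet format as well), the same code style can be set in an .editorconfig file:

```
[*.cs]
csharp_style_namespace_declarations = file_scoped:suggestion
```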

Hope this helps.

Happy Coding.

Regards,
Jaliya

Saturday, May 6, 2023

SSH into Linux Container in Azure App Service from Local Machine

In this post, let's see how we can SSH into a Linux Container in an Azure App Service from our local machine. Azure Portal has an option to SSH within the browser itself, but I personally hate that experience.

SSH from Azure Portal
We can use az webapp create-remote-connection to create a remote connection using a TCP tunnel from our local machine to the Azure App Service.

az webapp create-remote-connection `
    --subscription <subscription-id> `
    --resource-group <resource-group-name> `
    --name <app-name>

Now while this is running and the connection is open, we can open up another terminal window and SSH through the tunnel using the following command.
ssh <username>@<addr> -p <port>

SSH from Local Machine
And I am connected.

Now I can even use tools like WinSCP to file transfer between the Linux Container and my local machine.
WinSCP Connection
WinSCP Connected
Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, April 27, 2023

C# 12.0 Preview: Primary Constructors for Classes and Structs

In this post let's have a look at "Primary Constructors for Classes and Structs", a preview feature in C# 12.0. You can try this out with the latest Preview versions of Visual Studio 2022 and .NET 8. And make sure to set LangVersion to preview.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <LangVersion>preview</LangVersion> ...
  </PropertyGroup>

</Project>

Consider the following example using C# latest stable (C# 11.0) version.

public class BlogPost
{
    public BlogPost(string title, IEnumerable<string> tags)
    {
        Title = title;
        Tags = tags;
    }

    public BlogPost(string title) : this(title, Enumerable.Empty<string>())
    {
    }

    public string Title { get; init; }

    public IEnumerable<string> Tags { get; set; }
}

I can write the same code as follows using C# 12.0 Preview.

public class BlogPost(string title, IEnumerable<string> tags)
{
    public BlogPost(string title) : this(title, Enumerable.Empty<string>())
    {
    }

    public string Title => title;

    public IEnumerable<string> Tags { get; set; } = tags;
}

The important thing to note here is that we have added parameters to the class declaration itself, and we can use these values in the class body. It's kind of the same as the primary constructors that were introduced for records in C# 9, but in records, properties are automatically created for primary constructor parameters. Due to complexity and use-case differences between records and classes/structs, primary constructors for classes and structs won't create properties automatically.
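For example, with the BlogPost class above, title and tags are only in scope inside the class body; unlike a record, no public properties are generated for them unless you declare them yourself:

```csharp
BlogPost post = new("C# 12.0 Preview", new[] { "csharp", "dotnet" });

Console.WriteLine(post.Title);    // works: we declared Title ourselves
// Console.WriteLine(post.title); // error: 'title' is a parameter, not a member
```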

A class with a primary constructor can have additional constructors just like we have above. But all those constructors must ensure the primary constructor is being called using this(...). It can be a direct call to the primary constructor or a call to another constructor that in turn calls the primary constructor, something like below.

public class BlogPost(string title, IEnumerable<string> tags)
{
    public BlogPost(string title) : this(title, Enumerable.Empty<string>())
    {
    }

    public BlogPost() : this("Default Title")
    {
    }

    ...
}

Read more,
   What’s new in C# 12: Primary Constructors

Hope this helps!

Happy Coding.

Regards,
Jaliya

Monday, April 10, 2023

Azure Kubernetes Service: Enable Ingress using Application Gateway

In this post, let's see how we can enable ingress in an AKS cluster using Application Gateway. We can easily do that using Application Gateway Ingress Controller (AGIC). 

Let's get to it.

Make sure you have the latest Azure CLI installed, logged in, and have selected the correct subscription. I am using Azure CLI in Windows.

Here I am going to be doing a Greenfield deployment to keep things clear. You can also do a Brownfield deployment.

First I am creating a Resource Group for our AKS cluster.

az group create `
    --name rg-aks-appgw-demo-001 `
    --location eastus2
Now, I am creating the AKS cluster in the Resource Group we just created.
az aks create `
    --resource-group rg-aks-appgw-demo-001 `
    --node-resource-group rg-mc-aks-appgw-demo-001 <# custom node resource group name #> `
    --name aks-appgw-demo-001 `
    --network-plugin azure <# only Azure CNI supports AGIC #> `
    --enable-managed-identity `
    --enable-addons ingress-appgw <# enable AGIC add-on #> `
    --appgw-name agw-aks-appgw-demo-001 <# name of the Application Gateway #> `
    --appgw-subnet-cidr "10.225.0.0/16" `
    --generate-ssh-keys

Here I have,

  • Explicitly specified the node resource group name as a personal preference
  • Specified azure as the network plugin. This is a must because only Azure CNI (Container Networking Interface) supports AGIC.
  • Then enabled add-on ingress-appgw
  • Specified appgw-name. Since I don't have an Application Gateway created before, this will create an Application Gateway as part of the cluster deployment. Application Gateway will be placed on the node resource group. You can also use an existing Application Gateway if you have one already.
  • Since I am creating a new Application Gateway, I am providing a Subnet CIDR through appgw-subnet-cidr to be used for a new subnet created to deploy the Application Gateway. The address space must be within the AKS virtual network without overlapping with AKS subnet. Since we haven't defined any particular virtual network to be used, defaults will be used, which are:
    • AKS virtual network: 10.224.0.0/12
    • AKS subnet: 10.224.0.0/16

Once the AKS is created, let's connect to it.

az aks get-credentials `
    --resource-group rg-aks-appgw-demo-001 `
    --name aks-appgw-demo-001

Now, I have the following K8s declarative configuration in a file named deployment.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: customer-service
  name: customer-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customer-service
  template:
    metadata:
      labels:
        app: customer-service
    spec:
      containers:
        - name: customer-service
          image: ravana.azurecr.io/aspnetcore/hello-world/api:latest
          imagePullPolicy: Always
          env:
            - name: ServiceName
              value: "Customer Service"

---

apiVersion: v1
kind: Service
metadata:
  name: customer-service
spec:
  selector:
    app: customer-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
  rules:
  - http:
      paths:
      - path: /api/customers/
        pathType: Prefix
        backend:
          service:
            name: customer-service
            port:
              number: 80

Here,

  • I have a deployment that will deploy a pod, containing a single container of an image of a simple ASP.NET Core Minimal API. I have the aspnetcore/hello-world/api image in my Azure Container Registry (which isn't public, unfortunately). You can find the sample code here. You can have it deployed to your ACR and attach the ACR to your AKS cluster. Or you can even use the mcr.microsoft.com/dotnet/samples:aspnetapp image.
  • Then I have a Cluster IP service, which targets TCP port 80 on any pod with the app: customer-service label.
  • And finally the Ingress for the Application Gateway Ingress Controller. I have a Prefix path mapping of /api/customers/ pointing to my service (important read: Behavioural Change Notice for AGIC 1.5.1 for paths). When I make any request starting with http://<INGRESS_IP>/api/customers/, the expectation is that all those requests get routed to my service. I have appgw.ingress.kubernetes.io/backend-path-prefix: "/" to rewrite the backend path specified in the ingress resource with "/".
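A side note: the kubernetes.io/ingress.class annotation used above is deprecated in newer Kubernetes versions. If I remember correctly, recent AGIC versions support spec.ingressClassName instead, with the class named azure-application-gateway by default:

```yaml
spec:
  ingressClassName: azure-application-gateway
```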
Now let's apply this configuration.
kubectl apply -f .\deployment.yaml
Now I can get the ingress information as follows.
kubectl get ingress
This is basically the Frontend public IP address of the Application Gateway that got created.
Frontend public IP address of the Application Gateway
Note down the IP address of the ingress. And I am doing 2 curl requests to make sure I can reach my service.
curl http://20.96.70.198/api/customers/
curl http://20.96.70.198/api/customers/fibonacci?number=5
Test the connectivity with curl
And it works.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, April 5, 2023

Azure Kubernetes Service: Running MongoDB as a StatefulSet with Azure Files as PersistentVolume

In this post let's see how we can run MongoDB as a StatefulSet in AKS and maintain its storage outside of the pods so data is safe from the ephemeral nature of pods. We are going to maintain the database storage in File Shares in an Azure Storage Account.

Preparation


In Azure, I already have an AKS created. 

And then I have created a simple ASP.NET Core Minimal API which reads and writes data from/to a MongoDB. We will use this API to test the functionality.
using KubeStorage.Mongo.Api.Models;
using KubeStorage.Mongo.Api.Services;
using Microsoft.AspNetCore.Http.HttpResults;

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);

builder.Services.Configure<CustomerDatabaseSettings>(builder.Configuration.GetSection("CustomersDatabase"));
builder.Services.AddSingleton<CustomersService>();

WebApplication app = builder.Build();

app.UseHttpsRedirection();

app.MapGet("/customers/{id}", async Task<Results<Ok<Customer>, NotFound>> (CustomersService customersService, string id) =>
{
    Customer? customer = await customersService.GetAsync(id);
    if (customer == null)
    {
        return TypedResults.NotFound();
    }

    return TypedResults.Ok(customer);
});

app.MapGet("/customers", async (CustomersService customersService) =>
{
    return TypedResults.Ok(await customersService.GetAsync());
});

app.MapPost("/customers", async (CustomersService customersService, Customer customer) =>
{
    await customersService.CreateAsync(customer);

    return TypedResults.Created($"/customers/{customer.Id}", customer);
});

app.Run();
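The CustomersService itself isn't shown in the post. For reference, a minimal sketch of what it might look like using the MongoDB.Driver package (the database and collection names below are assumptions, and Customer is assumed to have an Id property):

```csharp
using Microsoft.Extensions.Options;
using MongoDB.Driver;

public class CustomersService
{
    private readonly IMongoCollection<Customer> _customers;

    public CustomersService(IOptions<CustomerDatabaseSettings> settings)
    {
        // Connection string comes from the bound CustomersDatabase section
        var client = new MongoClient(settings.Value.ConnectionString);
        IMongoDatabase database = client.GetDatabase("customersdb"); // assumed name
        _customers = database.GetCollection<Customer>("customers");  // assumed name
    }

    public async Task<List<Customer>> GetAsync() =>
        await _customers.Find(_ => true).ToListAsync();

    public async Task<Customer?> GetAsync(string id) =>
        await _customers.Find(c => c.Id == id).FirstOrDefaultAsync();

    public async Task CreateAsync(Customer customer) =>
        await _customers.InsertOneAsync(customer);
}
```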

I have containerized this and have it available as a Docker image.

Now let's run MongoDB as a StatefulSet in AKS, maintain its storage in Azure File Shares, and use the above API to consume the database.

Start


First, let's start by creating a K8s namespace for the demo.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-mongodb
Now let's create a K8s StorageClass (SC). This will be used to dynamically provision storage.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-azurefile-mongo
provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
reclaimPolicy: Retain # default is Delete
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Standard_LRS
  location: eastus2
Here you can customize dynamic provisioning parameters and those are listed here: Create and use a volume with Azure Files in Azure Kubernetes Service (AKS): Dynamic provisioning parameters.  

And another important thing to note here, SCs are cluster-scoped resources.

The next step is creating a StatefulSet and its wrapper Service for MongoDB.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: demo-mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  serviceName: mongodb
  replicas: 3
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongodb
          image: mongo:latest
          command:
            - mongod
            - "--bind_ip_all"
            - "--replSet"
            - rs0
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: vol-azurefile-mongo
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: vol-azurefile-mongo
      spec:
        storageClassName: sc-azurefile-mongo
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

---

apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: demo-mongodb
spec:
  clusterIP: None
  ports:
    - name: tcp
      port: 27017
      protocol: TCP
  selector:
    app: mongodb
Here, I am creating a StatefulSet of mongo:latest with 3 replicas. One of the advantages of a StatefulSet is that, for a StatefulSet with n replicas, Pods are created sequentially when being deployed, ordered from {0..n-1}. Then I am setting up a ReplicaSet named rs0 by running the command section. The important section is volumeClaimTemplates, where we create storage using PersistentVolumes provisioned by our StorageClass sc-azurefile-mongo.

And then I have a headless service wrapping the MongoDB StatefulSet.

Now let's apply all these configurations to our AKS.
kubectl apply -f .
And let's make sure everything is created.
k get all --namespace demo-mongodb
And now if I check the node resource group for my AKS, I can see a new Storage Account is provisioned and Azure Files are created for each replica.
Azure File Shares Created For Each Replica
MongoDB Files
Now since the MongoDB replicas are running, let's configure the ReplicaSet. I am shelling into the primary replica.
kubectl exec --namespace demo-mongodb mongo-0 --stdin --tty  -- mongosh
Mongo Shell
Now run the following set of commands in the Mongo Shell.
rs.initiate()
var cfg = rs.conf()
cfg.members[0].host="mongo-0.mongodb:27017"
rs.reconfig(cfg)
rs.add("mongo-1.mongodb:27017")
rs.add("mongo-2.mongodb:27017")
rs.status()
And note here, the hostnames must follow the format below.
<mongo-pod-name>.<mongodb-headless-service>:<mongodb-port>
And make sure rs.status() is all successful. Something like below.
rs0 [direct: primary] test> rs.status()
{
  ...
  members: [
    {
      _id: 0,
      name: 'mongo-0.mongodb:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      ...
    },
    {
      _id: 1,
      name: 'mongo-1.mongodb:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      syncSourceHost: 'mongo-0.mongodb:27017',
      ...
    },
    {
      _id: 2,
      name: 'mongo-2.mongodb:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      ...
    }
  ],
  ok: 1,
  ...
}
Let's exit from the Shell. 

Finally, let's create a deployment and a service for our test API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-bridge-mongodb
  namespace: demo-mongodb
spec:
  selector:
    matchLabels:
      app: api-bridge-mongodb
  template:
    metadata:
      labels:
        app: api-bridge-mongodb
    spec:
      containers:
        - name: api-bridge-mongodb
          image: myacr.azurecr.io/demo/mongodb/api:dev
          imagePullPolicy: Always
          env:
            - name: CustomersDatabase__ConnectionString
              value: mongodb://mongo-0.mongodb:27017,mongo-1.mongodb:27017,mongo-2.mongodb:27017?replicaSet=rs0

---

apiVersion: v1
kind: Service
metadata:
  name: api-bridge-mongodb
  namespace: demo-mongodb
spec:
  type: LoadBalancer
  ports:
    - port: 5051
      targetPort: 80
      protocol: TCP
  selector:
    app: api-bridge-mongodb
Note here, for our test API, I am using the MongoDB cluster connection string, listing all the replica hostnames and the replica set name.
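The CustomerDatabaseSettings class the API binds to isn't shown in the post either; based on the Program.cs above it would look something like the following. Note how the CustomersDatabase__ConnectionString environment variable in the Deployment maps to the nested configuration key CustomersDatabase:ConnectionString, which is what GetSection("CustomersDatabase") reads.

```csharp
// A minimal sketch; property names other than ConnectionString are assumptions.
public class CustomerDatabaseSettings
{
    public string ConnectionString { get; set; } = null!;
    public string DatabaseName { get; set; } = null!;
    public string CustomersCollectionName { get; set; } = null!;
}
```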

Now let's test things out. I am getting the IP for our Test API.
k get services --namespace demo-mongodb
And let's hit the endpoints.
GET: /customers
No errors at least. Let's create a Customer.
POST: /customers
And now let's get all Customers.
GET: /customers
Hope this helps. You can find the complete code sample here,
   https://github.com/jaliyaudagedara/aks-examples/tree/main/storages/mongodb

Happy Coding.

Regards,
Jaliya