Thursday, April 27, 2023

C# 12.0 Preview: Primary Constructors for Classes and Structs

In this post, let's have a look at "Primary Constructors for Classes and Structs", a preview feature in C# 12.0. You can try this out with the latest Preview versions of Visual Studio 2022 and .NET 8. And make sure to set LangVersion to preview.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <LangVersion>preview</LangVersion> ...
  </PropertyGroup>

</Project>

Consider the following example using the latest stable version of C# (C# 11.0).

public class BlogPost
{
    public BlogPost(string title, IEnumerable<string> tags)
    {
        Title = title;
        Tags = tags;
    }

    public BlogPost(string title) : this(title, Enumerable.Empty<string>())
    {
    }

    public string Title { get; init; }

    public IEnumerable<string> Tags { get; set; }
}

I can write the same code as follows using C# 12.0 Preview.

public class BlogPost(string title, IEnumerable<string> tags)
{
    public BlogPost(string title) : this(title, Enumerable.Empty<string>())
    {
    }

    public string Title => title;

    public IEnumerable<string> Tags { get; set; } = tags;
}

The important thing to note here is that we have added parameters to the class declaration itself, and we can use these values in the class body. It's similar to the primary constructors that were introduced for records in C# 9, but for records, properties are automatically created for primary constructor parameters. Due to complexity and use case differences between records and classes/structs, primary constructors for classes and structs won't create properties automatically.
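To make the difference concrete, here is a minimal sketch contrasting the two (the record behavior is standard since C# 9; the class behavior follows the preview semantics described above):

```csharp
// With a record, the primary constructor parameter becomes a public
// init-only property automatically.
public record BlogPostRecord(string Title);

// With a class, the parameter is only captured for use in the body;
// no property is generated unless you declare one yourself.
public class BlogPostClass(string title)
{
    public string Title => title;
}

// new BlogPostRecord("Hello").Title  // OK: generated property.
// new BlogPostClass("Hello").Title   // OK: explicitly declared property.
// new BlogPostClass("Hello").title   // Won't compile: not a public member.
```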

A class with a primary constructor can have additional constructors, just like we have above. But all those constructors must ensure the primary constructor is being called using this(...). It can be a direct call to the primary constructor, or a call to another constructor that in turn calls the primary constructor, something like below.

public class BlogPost(string title, IEnumerable<string> tags)
{
    public BlogPost(string title) : this(title, Enumerable.Empty<string>())
    {
    }

    public BlogPost() : this("Default Title")
    {
    }

    ...
}

Read more,
   What’s new in C# 12: Primary Constructors

Hope this helps!

Happy Coding.

Regards,
Jaliya

Monday, April 10, 2023

Azure Kubernetes Service: Enable Ingress using Application Gateway

In this post, let's see how we can enable ingress in an AKS cluster using Application Gateway. We can easily do that using Application Gateway Ingress Controller (AGIC). 

Let's get to it.

Make sure you have the latest Azure CLI installed, logged in, and have selected the correct subscription. I am using Azure CLI in Windows.

Here I am going to be doing a Greenfield deployment to keep things clear. You can also do a Brownfield deployment.

First I am creating a Resource Group for our AKS cluster.

az group create `
    --name rg-aks-appgw-demo-001 `
    --location eastus2

Now, I am creating the AKS cluster in the Resource Group we just created.

az aks create `
    --resource-group rg-aks-appgw-demo-001 `
    --node-resource-group rg-mc-aks-appgw-demo-001 <# custom node resource group name #> `
    --name aks-appgw-demo-001 `
    --network-plugin azure <# only Azure CNI supports AGIC #> `
    --enable-managed-identity `
    --enable-addons ingress-appgw <# enable AGIC add-on #> `
    --appgw-name agw-aks-appgw-demo-001 <# name of the Application Gateway #> `
    --appgw-subnet-cidr "10.225.0.0/16" `
    --generate-ssh-keys

Here I have,

  • Explicitly specified the node resource group name as a personal preference
  • Specified azure as the network plugin. This is a must because only Azure CNI (Container Networking Interface) supports AGIC.
  • Then enabled the ingress-appgw add-on
  • Specified appgw-name. Since I don't have an Application Gateway created before, this will create an Application Gateway as part of the cluster deployment. The Application Gateway will be placed in the node resource group. You can also use an existing Application Gateway if you have one already.
  • Since I am creating a new Application Gateway, I am providing a Subnet CIDR through appgw-subnet-cidr to be used for a new subnet created to deploy the Application Gateway. The address space must be within the AKS virtual network without overlapping with the AKS subnet. Since we haven't defined any particular virtual network to be used, the defaults will be used, which are:
    • AKS virtual network: 10.224.0.0/12
    • AKS subnet: 10.224.0.0/16
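If you want to verify the address spaces after the deployment, you can list the subnets in the node resource group. This is a sketch assuming the resource names used above; the VNet name is auto-generated by AKS, so it's looked up rather than hard-coded.

```shell
# Look up the auto-generated AKS VNet in the node resource group.
VNET_NAME=$(az network vnet list \
    --resource-group rg-mc-aks-appgw-demo-001 \
    --query "[0].name" --output tsv)

# List subnets with their address prefixes; this should show the AKS
# subnet plus the Application Gateway subnet (10.225.0.0/16).
az network vnet subnet list \
    --resource-group rg-mc-aks-appgw-demo-001 \
    --vnet-name "$VNET_NAME" \
    --query "[].{Name:name, Prefix:addressPrefix}" --output table
```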

Once the AKS is created, let's connect to it.

az aks get-credentials `
    --resource-group rg-aks-appgw-demo-001 `
    --name aks-appgw-demo-001

Now, I have the following K8s declarative configuration in a file named deployment.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: customer-service
  name: customer-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customer-service
  template:
    metadata:
      labels:
        app: customer-service
    spec:
      containers:
        - name: customer-service
          image: ravana.azurecr.io/aspnetcore/hello-world/api:latest
          imagePullPolicy: Always
          env:
            - name: ServiceName
              value: "Customer Service"

---

apiVersion: v1
kind: Service
metadata:
  name: customer-service
spec:
  selector:
    app: customer-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
  rules:
  - http:
      paths:
      - path: /api/customers/
        pathType: Prefix
        backend:
          service:
            name: customer-service
            port:
              number: 80

Here,

  • I have a deployment that will deploy a pod, containing a single container of an image of a simple ASP.NET Core Minimal API. I have the aspnetcore/hello-world/api image in my Azure Container Registry (which isn't public, unfortunately). You can find the sample code here. You can have it deployed to your ACR and attach the ACR to your AKS cluster. Or you can even use the mcr.microsoft.com/dotnet/samples:aspnetapp image.
  • Then I have a Cluster IP service, which targets TCP port 80 on any pod with the app: customer-service label.
  • And finally the Ingress, handled by the Application Gateway Ingress Controller. I have a Prefix path mapping of /api/customers/ pointing to my service (important read: Behavioural Change Notice for AGIC 1.5.1 for paths). When I make any request starting with http://<INGRESS_IP>/api/customers/, the expectation is that all those requests should get routed to my service. I have appgw.ingress.kubernetes.io/backend-path-prefix: "/" to rewrite the backend path specified in the ingress resource with "/".
Now let's apply this configuration.

kubectl apply -f .\deployment.yaml

Now I can get the ingress information as follows.

kubectl get ingress
This is basically the Frontend public IP address of the Application Gateway that got created.

Note down the IP address of the ingress. And I am doing two curl requests to make sure I can reach my service.
curl http://20.96.70.198/api/customers/
curl http://20.96.70.198/api/customers/fibonacci?number=5
And it works.

Hope this helps.

You can find the complete code here:

Happy Coding.

Regards,
Jaliya

Wednesday, April 5, 2023

Azure Kubernetes Service: Running MongoDB as a StatefulSet with Azure Files as PersistentVolume

In this post, let's see how we can run MongoDB as a StatefulSet in AKS and maintain its storage outside of the pods, so data is safe from the ephemeral nature of pods. We are going to maintain the database storage in File Shares in an Azure Storage Account.

Preparation


In Azure, I already have an AKS created. 

And then I have created a simple ASP.NET Core Minimal API which reads and writes data from/to a MongoDB. We will use this API to test the functionality.
using KubeStorage.Mongo.Api.Models;
using KubeStorage.Mongo.Api.Services;
using Microsoft.AspNetCore.Http.HttpResults;

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);

builder.Services.Configure<CustomerDatabaseSettings>(builder.Configuration.GetSection("CustomersDatabase"));
builder.Services.AddSingleton<CustomersService>();

WebApplication app = builder.Build();

app.UseHttpsRedirection();

app.MapGet("/customers/{id}", async Task<Results<Ok<Customer>, NotFound>> (CustomersService customersService, string id) =>
{
    Customer? customer = await customersService.GetAsync(id);
    if (customer == null)
    {
        return TypedResults.NotFound();
    }

    return TypedResults.Ok(customer);
});

app.MapGet("/customers", async (CustomersService customersService) =>
{
    return TypedResults.Ok(await customersService.GetAsync());
});

app.MapPost("/customers", async (CustomersService customersService, Customer customer) =>
{
    await customersService.CreateAsync(customer);

    return TypedResults.Created($"/customers/{customer.Id}", customer);
});

app.Run();

I have containerized this and have it available as a Docker image.

Now let's run MongoDB as a StatefulSet in AKS, maintain its storage in Azure File Shares, and use the above API to consume the database.

Start


First, let's start by creating a K8s namespace for the demo.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-mongodb
Now let's create a K8s StorageClass (SC). This will be used to dynamically provision storage.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-azurefile-mongo
provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
reclaimPolicy: Retain # default is Delete
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Standard_LRS
  location: eastus2
Here you can customize dynamic provisioning parameters and those are listed here: Create and use a volume with Azure Files in Azure Kubernetes Service (AKS): Dynamic provisioning parameters.  

And another important thing to note here, SCs are cluster-scoped resources.

The next step is creating a StatefulSet and its wrapper Service for MongoDB.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: demo-mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  serviceName: mongodb
  replicas: 3
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongodb
          image: mongo:latest
          command:
            - mongod
            - "--bind_ip_all"
            - "--replSet"
            - rs0
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: vol-azurefile-mongo
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: vol-azurefile-mongo
      spec:
        storageClassName: sc-azurefile-mongo
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

---

apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: demo-mongodb
spec:
  clusterIP: None
  ports:
    - name: tcp
      port: 27017
      protocol: TCP
  selector:
    app: mongodb
Here, I am creating a StatefulSet of mongo:latest with 3 replicas. One of the advantages of a StatefulSet is that, for a StatefulSet with n replicas, when Pods are being deployed, they are created sequentially, ordered from {0..n-1}. And then I am setting up a ReplicaSet named rs0 by running the command section. The important section is volumeClaimTemplates, where we will create storage using PersistentVolumes provisioned by our StorageClass sc-azurefile-mongo.

And then I have a headless service wrapping the MongoDB StatefulSet.

Now let's apply all these configurations to our AKS.

kubectl apply -f .

And let's make sure everything is created.

kubectl get all --namespace demo-mongodb
And now if I check the node resource group for my AKS, I can see a new Storage Account is provisioned and Azure Files are created for each replica.
Azure File Shares Created For Each Replica
MongoDB Files
Now that the MongoDB replicas are running, let's configure the ReplicaSet. I am shelling into the primary replica.

kubectl exec --namespace demo-mongodb mongo-0 --stdin --tty -- mongosh
Mongo Shell
Now run the following set of commands in the Mongo Shell.
rs.initiate()
var cfg = rs.conf()
cfg.members[0].host="mongo-0.mongodb:27017"
rs.reconfig(cfg)
rs.add("mongo-1.mongodb:27017")
rs.add("mongo-2.mongodb:27017")
rs.status()
And note here, the hostnames must follow the format below.
<mongo-pod-name>.<mongodb-headless-service>:<mongodb-port>
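These names work because each StatefulSet pod gets a stable DNS record through the headless service. If you want to verify the resolution from inside the cluster, a throwaway pod can do a lookup (the busybox image here is just an assumption for a quick test):

```shell
# Within the demo-mongodb namespace, the short name <pod>.<service> resolves;
# the fully qualified form is <pod>.<service>.<namespace>.svc.cluster.local.
kubectl run dns-test --rm -it --restart=Never \
    --namespace demo-mongodb \
    --image=busybox:1.36 \
    -- nslookup mongo-0.mongodb
```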
And make sure rs.status() is all successful. Something like below.
rs0 [direct: primary] test> rs.status()
{
  ...
  members: [
    {
      _id: 0,
      name: 'mongo-0.mongodb:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      ...
    },
    {
      _id: 1,
      name: 'mongo-1.mongodb:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      syncSourceHost: 'mongo-0.mongodb:27017',
      ...
    },
    {
      _id: 2,
      name: 'mongo-2.mongodb:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      ...
    }
  ],
  ok: 1,
  ...
}
Let's exit from the Shell. 

Finally, let's create a deployment and a service for our test API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-bridge-mongodb
  namespace: demo-mongodb
spec:
  selector:
    matchLabels:
      app: api-bridge-mongodb
  template:
    metadata:
      labels:
        app: api-bridge-mongodb
    spec:
      containers:
        - name: api-bridge-mongodb
          image: myacr.azurecr.io/demo/mongodb/api:dev
          imagePullPolicy: Always
          env:
            - name: CustomersDatabase__ConnectionString
              value: mongodb://mongo-0.mongodb:27017,mongo-1.mongodb:27017,mongo-2.mongodb:27017/?replicaSet=rs0

---

apiVersion: v1
kind: Service
metadata:
  name: api-bridge-mongodb
  namespace: demo-mongodb
spec:
  type: LoadBalancer
  ports:
    - port: 5051
      targetPort: 80
      protocol: TCP
  selector:
    app: api-bridge-mongodb
    
Note here, for our test API, I am using the MongoDB ReplicaSet connection string.
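The CustomersService itself isn't shown in this post, but a minimal sketch of how it could consume that connection string with MongoDB.Driver looks like this (the database and collection names are assumptions, and Customer is the model type from the API above):

```csharp
using Microsoft.Extensions.Options;
using MongoDB.Driver;

public class CustomerDatabaseSettings
{
    public string ConnectionString { get; set; } = string.Empty;
}

public class CustomersService
{
    private readonly IMongoCollection<Customer> _customers;

    public CustomersService(IOptions<CustomerDatabaseSettings> settings)
    {
        // The driver takes the seed list from the connection string,
        // discovers the rest of the replica set, and routes writes
        // to the current primary automatically.
        var client = new MongoClient(settings.Value.ConnectionString);
        IMongoDatabase database = client.GetDatabase("customersdb");
        _customers = database.GetCollection<Customer>("customers");
    }

    public async Task<List<Customer>> GetAsync() =>
        await _customers.Find(_ => true).ToListAsync();
}
```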

Now let's test things out. I am getting the IP for our test API.

kubectl get services --namespace demo-mongodb
And let's hit the endpoints.
GET: /customers
No errors at least. Let's create a Customer.
POST: /customers
And now let's get all Customers.
GET: /customers
Hope this helps. You can find the complete code sample here,
   https://github.com/jaliyaudagedara/aks-examples/tree/main/storages/mongodb

Happy Coding.

Regards,
Jaliya

Saturday, April 1, 2023

Azure Kubernetes Service: Using Azure Files as a Persistent Volume

In this post, let's see how we can use Azure Files as a Persistent Volume in Azure Kubernetes Service (AKS).

Preparation

In Azure, I already have an AKS and a Storage Account created. In the Storage Account, I have created a File Share named k8-file-share.

File Share
And then I have created a simple ASP.NET Core Minimal API with just 2 endpoints, and using those I can read and write to application storage.
using System.Text;
using System.Text.Json;

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);

WebApplication app = builder.Build();

app.UseHttpsRedirection();

var dataPath = Path.Combine("files", "azure", "customers.json");

app.MapGet("/customers", () =>
{
    List<Customer>? customers = GetCustomers();

    return customers;
});

app.MapPost("/customers", (Customer customer) =>
{
    List<Customer>? customers = GetCustomers();
    customers.Add(customer);

    string jsonString = JsonSerializer.Serialize(customers, new JsonSerializerOptions()
    {
        WriteIndented = true
    });

    using StreamWriter streamWriter = new(dataPath);
    streamWriter.Write(jsonString);

    return customer;
});

app.Run();

List<Customer> GetCustomers()
{
    if (!File.Exists(dataPath))
    {
        using FileStream fileStream = File.Create(dataPath);
        byte[] info = new UTF8Encoding(true).GetBytes("[ ]");
        fileStream.Write(info, 0, info.Length);
    }

    using StreamReader streamReader = new(dataPath);
    string json = streamReader.ReadToEnd();
    return JsonSerializer.Deserialize<List<Customer>>(json)!;
}

internal record Customer(string FirstName, string LastName) { }

I have containerized this and have it available as a Docker image.

Now let's run the above application in AKS and use the Azure File Share (k8-file-share) as its Persistent Volume.

Start

First, let's start by creating a K8s namespace for the demo.

apiVersion: v1
kind: Namespace
metadata:
  name: demo-file-share

Now I am creating a K8s secret to maintain our Storage Account name and its key. I am naming it azure-storage-account-secret.

apiVersion: v1
kind: Secret
metadata:
  name: azure-storage-account-secret
  namespace: demo-file-share
stringData:
  azurestorageaccountname: <storage-account-name>
  azurestorageaccountkey: <storage-account-key>

Now I am creating a K8s PersistentVolume (PV).

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-file-share
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-csi
  csi:
    driver: file.csi.azure.com
    readOnly: false
    volumeHandle: pv-file-share-001  # make sure this volumeid is unique for every identical share in the cluster
    volumeAttributes:
      resourceGroup: rg-k8s-storage-demo  # optional, only set this when storage account is not in the same resource group as node
      shareName: k8-file-share
    nodeStageSecretRef:
      name: azure-storage-account-secret # secret name and its namespace
      namespace: demo-file-share
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=0
    - gid=0
    - mfsymlinks
    - cache=strict
    - nosharesock
    - nobrl

Here make sure to update resourceGroup, shareName, and nodeStageSecretRef.name/namespace as necessary. Something to note here, PVs are cluster-scoped resources.

Now let's create a K8s PersistentVolumeClaim (PVC) to use the above PV.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-file-share
  namespace: demo-file-share
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  volumeName: pv-file-share
  resources:
    requests:
      storage: 1Gi

Here volumeName should be the name of the PersistentVolume.
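Because the PVC names the PV explicitly, K8s binds the two statically instead of dynamically provisioning a new volume. Once both are applied, a quick sanity check (a sketch; the columns come from kubectl's default printer):

```shell
# Both should report STATUS 'Bound', and the PVC's VOLUME column
# should show pv-file-share.
kubectl get pv pv-file-share
kubectl get pvc pvc-file-share --namespace demo-file-share
```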

And finally, I have the following deployment, with a spec that uses the above PVC for my application.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-bridge-file-share
  namespace: demo-file-share
spec:
  selector:
    matchLabels:
      app: api-bridge-file-share
  template:
    metadata:
      labels:
        app: api-bridge-file-share
    spec:
      containers:
        - name: api-bridge-file-share
          image: informaacr.azurecr.io/demo/azure-file-share/api:dev
          imagePullPolicy: Always
          volumeMounts:
            - name: vol-file-share
              mountPath: /app/files/azure
      volumes:
        - name: vol-file-share
          persistentVolumeClaim:
            claimName: pvc-file-share

Here, I have defined a Volume pointing to the PVC. And inside the container, I am mounting it under my desired location (/app/files/azure, the location my API is using).
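To confirm the mount from inside the running container, you can exec into the pod; the pod name carries a generated suffix, so here it's looked up via the label selector.

```shell
# Find the pod created by the deployment via its label.
POD_NAME=$(kubectl get pods --namespace demo-file-share \
    --selector app=api-bridge-file-share \
    --output jsonpath='{.items[0].metadata.name}')

# List the mounted directory the API reads/writes; customers.json should
# appear here after the first request.
kubectl exec --namespace demo-file-share "$POD_NAME" -- ls /app/files/azure
```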

And that's about it. Now I can make a deployment to AKS. I am going to expose my pod via a Service so I can access the API, but I am not sharing it here.

Once deployed, let's describe the pod and see whether the volume mounts are all good.
kubectl describe pod
Now let's try the endpoints out. First I am doing an HTTP GET to the /customers endpoint. And I can see the file is getting created.
File Share
Now let's write some data by doing an HTTP POST to the /customers endpoint. And I can see the data is being written out.
customers.json
Hope this helps. You can find the complete code sample here,
   https://github.com/jaliyaudagedara/aks-examples/tree/main/storages/azure-file-share

More read,
   Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)

Happy Coding.

Regards,
Jaliya