Wednesday, January 22, 2020

Deploying an ASP.NET Core Application On a Local Kubernetes Cluster

In this post, let's see how we can deploy an ASP.NET Core application on a local Kubernetes (K8s) cluster.

So the first step is to set up a local K8s cluster, and luckily Docker Desktop includes a standalone K8s server and client, as well as Docker CLI integration. The K8s server runs locally within your Docker instance, is not configurable, and is a single-node cluster.

So I have Docker Desktop installed on my Windows 10 machine with Kubernetes enabled. Once enabled, make sure you can see "Kubernetes is running" in the lower-left corner of Docker Desktop and that docker-desktop is selected as the context.
Kubernetes enabled and running
Context
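You can also double-check the selected context from the command line; the following should print docker-desktop.
kubectl config current-context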
The next step is to create a Docker image of an ASP.NET Core application. Here I have created an ASP.NET Core Web API application with the default template and with Docker support enabled.
Solution
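I created the project through Visual Studio, but if you prefer the command line, something like the following (just a rough sketch; the Dockerfile shown below still has to be added by hand) gives an equivalent starting point.
dotnet new webapi -n Kube.Weather.Api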
And I have updated the default Dockerfile as follows.

Dockerfile
# Runtime base image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

# Build stage: restore and build using the SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Kube.Weather.Api.csproj", "./"]
RUN dotnet restore "./Kube.Weather.Api.csproj"
COPY . .
RUN dotnet build "Kube.Weather.Api.csproj" -c Release -o /app/build

# Publish stage: produce the output to be deployed
FROM build AS publish
RUN dotnet publish "Kube.Weather.Api.csproj" -c Release -o /app/publish

# Final stage: copy the published output into the runtime image
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Kube.Weather.Api.dll"]
And then I have created a Docker image by running the following command.
docker build -t kube-weather:dev .
And the image is created. And this is the image that we are going to deploy into K8s.
PS C:\Users\Jaliya\Desktop\Kube.Weather\Kube.Weather.Api> docker images
REPOSITORY      TAG        IMAGE ID         CREATED             SIZE
kube-weather    dev        d9cd7bec80e7     22 seconds ago      208MB
Actually, before deploying, it's worth checking whether the image works fine locally. I can spin up a new container from the image by running the command below.
docker run -it --rm -p 9090:80 kube-weather:dev
Here I am running the image kube-weather:dev in interactive mode and binding port 80 of the container to port 9090 on the host, so I can access the application through localhost:9090.
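You can also hit the API from another terminal to confirm it responds (this assumes the WeatherForecast controller that comes with the default template).
curl http://localhost:9090/weatherforecast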

And yes, it's working.
Running inside Docker

The next step is to deploy the image to our local K8s cluster. Let's start by adding a .yml configuration file named deployment.yml (you can name it whatever you like). To scaffold the content of this file, you can use the Kubernetes extension in VS Code: just type de... and it will scaffold the file for you.
deployment.yml
And I have updated the scaffolded content as below.

deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-weather-deployment
spec:
  selector:
    matchLabels:
      app: kube-weather-pod
  template:
    metadata:
      labels:
        app: kube-weather-pod
    spec:
      containers:
      - name: kube-weather-container
        image: kube-weather:dev
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
So here,
  1. A deployment named kube-weather-deployment is created.
  2. The selector field defines how the Deployment finds which Pods to manage. In this case, we have simply selected a label that is defined in the Pod template (app: kube-weather-pod).
  3. The template field contains the following sub-fields:
    1. The Pods are labeled app: kube-weather-pod using the labels field.
    2. The Pod template's specification, or .template.spec field, indicates that the Pods run one container, kube-weather-container, which runs kube-weather:dev, my local Docker image.
    3. The container is limited to 128Mi of memory (MiB = mebibyte; 1 MiB ≈ 1.049 MB) and 500m of CPU (the m suffix stands for millicpu, a thousandth of a core, so 500m is half a core).
    4. And finally, we are asking for port 80 of the container to be exposed.
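One thing to note: since the spec doesn't declare a replicas field, the Deployment defaults to a single replica. If you later want more Pods behind it, you can either add a replicas field to the spec or scale it imperatively, for example as below.
kubectl scale deployment kube-weather-deployment --replicas=3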
Next, let's apply this deployment configuration to our local K8s cluster. Before that, let's have a quick look at the status of nodes, deployments, and pods in our K8s cluster.
PS C:\Users\Jaliya\Desktop\Kube.Weather\Kube.Weather.Api> kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
docker-desktop   Ready    master   23h   v1.14.8
 
PS C:\Users\Jaliya\Desktop\Kube.Weather\Kube.Weather.Api> kubectl get deployment
No resources found.
 
PS C:\Users\Jaliya\Desktop\Kube.Weather\Kube.Weather.Api> kubectl get pods
No resources found.
So I have a single node in Ready status, and there are no deployments and thus no pods so far.

Let's apply the deployment configuration by running the following command.
PS C:\Users\Jaliya\Desktop\Kube.Weather\Kube.Weather.Api> kubectl apply -f .\deployment.yml
deployment.apps/kube-weather-deployment created
And the deployment is created. Now if I check the deployments and pods, I can see my deployment and a pod running.
PS C:\Users\Jaliya\Desktop\Kube.Weather\Kube.Weather.Api> kubectl get deployment
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-weather-deployment   1/1     1            1           86s

PS C:\Users\Jaliya\Desktop\Kube.Weather\Kube.Weather.Api> kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
kube-weather-deployment-d96787866-fhrdt   1/1     Running   0          94s
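If the pod hadn't reached the Running status (for example, ErrImagePull when the image name or tag is wrong), the usual way to troubleshoot would be something like the following.
kubectl describe pod kube-weather-deployment-d96787866-fhrdt
kubectl logs kube-weather-deployment-d96787866-fhrdt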
Still, the application running inside the pod is not accessible from outside the cluster. For that, we need to create a Service, which is an abstract way to expose an application running on a set of Pods as a network service. Let's create another .yml configuration file named service.yml (again, you can name it whatever you like). And again you can use the Kubernetes extension in VS Code: type se... and it will scaffold the file for you.
service.yml
I have updated the scaffolded content as below.

service.yml
apiVersion: v1
kind: Service
metadata:
  name: kube-weather-service
spec:
  selector:
    app: kube-weather-pod
  ports:
  - port: 9095
    targetPort: 80
  type: LoadBalancer
So here,
  1. This specification creates a new Service object named “kube-weather-service”. 
  2. The service targets port 80 (the targetPort) on any Pod having the label app: kube-weather-pod.
  3. Port 9095 (the port) is the port on which the service itself receives requests.
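As a side note, if you just need quick local access without a Service, kubectl port-forward is an alternative; the sketch below forwards local port 9095 straight to port 80 of the Deployment's Pods.
kubectl port-forward deployment/kube-weather-deployment 9095:80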
Next, before applying this service, let's check whether we have any existing services.
PS C:\Users\Jaliya\Desktop\Kube.Weather\Kube.Weather.Api> kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   24h
It shows only the default Kubernetes service. Now let's apply the configuration for our service.
PS C:\Users\Jaliya\Desktop\Kube.Weather\Kube.Weather.Api> kubectl apply -f .\service.yml
service/kube-weather-service created
Now let's look at the services again.
PS C:\Users\Jaliya\Desktop\Kube.Weather\Kube.Weather.Api> kubectl get services
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kube-weather-service   LoadBalancer   10.106.210.252   <pending>     9095:31888/TCP   4s
kubernetes             ClusterIP      10.96.0.1        <none>        443/TCP          24h
My service's EXTERNAL-IP is still pending. Running the command again after a little while.
PS C:\Users\Jaliya\Desktop\Kube.Weather\Kube.Weather.Api> kubectl get services
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kube-weather-service   LoadBalancer   10.106.210.252   localhost     9095:32226/TCP   11m
kubernetes             ClusterIP      10.96.0.1        <none>        443/TCP          24h
And now it's exposed on localhost. Let's navigate to localhost:9095.
Running inside K8s
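When you are done experimenting, you can remove everything that was created by using the same files.
kubectl delete -f .\service.yml
kubectl delete -f .\deployment.yml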
So that's it, we have an ASP.NET Core Web API application running inside a local K8s cluster. Hope this helps. Feedback is always welcome.

Happy Coding.

Regards,
Jaliya
