Deploying a Stateful Application on Azure Kubernetes Service (AKS)

Once you go through this Kubernetes tutorial, you’ll be able to follow the processes & ideas outlined here to deploy any stateful application on Azure Kubernetes Service (AKS).

In our previous post, we guided you through the process of deploying a stateful, Dockerized Node.js app on Google Kubernetes Engine! As an example application, we used our blog engine called Ghost. If you read that post, you’ll see that the cluster creation, disk provisioning, and MySQL database creation and handling are vendor specific, which also leaks into our Kubernetes objects. So let’s compare it to setting up an AKS cluster on Azure and deploying our Ghost there.

This article was written by Kristof Ivancza who is a software engineer at RisingStack & Tamas Kadlecsik, RisingStack’s CEO. In case you need guidance with Kubernetes or Node.js, feel free to ping us at info@risingstack.com

If you are not familiar with Kubernetes, I recommend reading our Getting started with Kubernetes article first.

What will we need to deploy a stateful app on Azure Kubernetes Service?

  • A cluster
  • Persistent disks to store our images and themes
  • A MySQL instance and a connection to it
  • A secret to store credentials
  • A deployment
  • A service to expose the application

Creating the Cluster

First, we need to create a cluster, set the default cluster for AKS and pass cluster credentials to kubectl.

 # create an Azure resource group
 $ az group create --name ghost-blog-resource --location eastus
 # locations: eastus, westeurope, centralus, canadacentral, canadaeast
 # ------
 # create a cluster
 $ az aks create --resource-group ghost-blog-resource --name ghost-blog-cluster --node-count 1 --generate-ssh-keys
 # this process could take several minutes
 # it will return a JSON with information about the cluster
 # ------
 # pass AKS Cluster credentials to kubectl
 $ az aks get-credentials --resource-group ghost-blog-resource --name ghost-blog-cluster
 # make sure it works
 $ kubectl get node

The Container and the Deployment

We’ll use the same image as before, and the Deployment will be the same as well. I’ll add it to this blogpost though, so you can see what it looks like.

# deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ghost-blog
  labels:
    app: ghost-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost-blog
  template:
    metadata:
      labels:
        app: ghost-blog
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368

Creating Persistent Disks to Store our Images and Themes

We’ll create our disk using Dynamic Provisioning again. This time, however, we won’t specify the storageClassName, as Kubernetes will use the default one when it’s omitted. We could have done this on GKE as well, but I wanted to provide a more detailed picture of the disk creation there. On GKE the default StorageClass is called standard; on AKS it is called default.
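You can check which StorageClass is the default on your cluster with `kubectl get storageclass` and look for the `(default)` marker. If you'd rather pin the class explicitly instead of relying on the default, the claim might look like this (a sketch; `default` is the class name AKS ships with, but verify against your own cluster's output first):

```yaml
# PersistentVolumeClaim with the StorageClass pinned explicitly
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pd-blog-volume-claim
spec:
  storageClassName: default   # AKS's default class; on GKE it would be "standard"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```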

# PersistentVolumeClaim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pd-blog-volume-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Submit this yaml with the following command:

$ kubectl apply -f PersistentVolumeClaim.yml
# make sure it is bound
$ kubectl get pvc
# it could take a few minutes to bind; if it's pending for more than a minute, run `kubectl describe` to make sure nothing fishy happened
$ kubectl describe pvc

The deployment should be updated as well, just as before:

# deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ghost-blog
  labels:
    app: ghost-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost-blog
  template:
    metadata:
      labels:
        app: ghost-blog
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368
        volumeMounts:
        # define persistent storage for themes and images
        - mountPath: /var/lib/ghost/content/
          name: pd-blog-volume
      volumes:
      - name: pd-blog-volume
        persistentVolumeClaim:
          claimName: pd-blog-volume-claim

Creating a MySQL Instance and Connecting to it Using SSL

  • First we need to add the MySQL extension for Azure Databases.
$ az extension add --name rdbms
  • Now we’re ready to create our MySQL server.
$ az mysql server create --resource-group ghost-blog-resource --name ghost-database --location eastus --admin-user admin --admin-password password --sku-name GP_Gen4_2 --version 5.7
# this could take several minutes to complete
  • Configuring the Firewall Rule
$ az mysql server firewall-rule create --resource-group ghost-blog-resource  --server ghost-database --name allowedIPrange --start-ip-address 0.0.0.0 --end-ip-address 255.255.255.255

This rule grants access to the database from every IP. It is certainly not recommended to open everything up like this. However, the Nodes in our cluster will have different IP addresses which are difficult to guess ahead of time. If we know that we will have a set number of Nodes, let’s say 3, we can specify those IP addresses. However, if we plan to use Node autoscaling, we will need to allow connections from a wide range of IPs. You can use this as a quick and dirty solution, but it is definitely better to use a VNet.
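If you do know your Node IPs up front, a small loop can generate one narrow firewall rule per Node instead of opening the whole range. This is only a sketch: the IP addresses below are hypothetical placeholders, and the commands are echoed rather than executed so you can review them first:

```shell
# hypothetical node IPs -- replace with your cluster's actual ones
NODE_IPS="40.71.200.10 40.71.200.11 40.71.200.12"

i=0
for ip in $NODE_IPS; do
  i=$((i + 1))
  # print the command for review; remove the `echo` to actually create the rule
  echo az mysql server firewall-rule create \
    --resource-group ghost-blog-resource \
    --server ghost-database \
    --name "allowNode$i" \
    --start-ip-address "$ip" \
    --end-ip-address "$ip"
done
```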

  • Configure Vnet service endpoints for Azure Database for MySQL

Virtual Network (VNet) service endpoint rules for MySQL are a firewall security feature. By using them, we can limit access to our Azure MySQL server so that it only accepts requests sent from a particular subnet in a virtual network. With VNet rules, we don’t have to configure firewall rules and add each and every Node’s IP to grant access to our Kubernetes cluster.

$ az extension add --name rdbms-vnet
# make sure it got installed
$ az extension list | grep "rdbms-vnet"
{ "extensionType": "whl", "name": "rdbms-vnet", "version": "10.0.0" }

The upcoming steps have to be done in the browser, as there is no documented way to do them through the CLI, so it is a lot more straightforward to do this on the UI.

  1. Go to the Azure Portal and log in to your account.
  2. In the search bar at the top, search for Azure Database for MySQL servers.
  3. Select the database you created (ghost-database).
  4. On the left sidebar, click Connection Security.
  5. You will find VNET Rules in the middle. Click + Add existing virtual network.
  • Give it a name (e.g.: myVNetSQLRule),
  • Select your subscription type,
  • Under Virtual Network, select the created resource group; the subnet name / address prefix will autocomplete with the IP range.
  • Click Enable.
  6. That’s it. 🙂

Security on Azure Kubernetes Service (AKS)

Now that we’re discussing security, let’s talk about SSL. By default it’s enforced, but you can disable or re-enable it with the following command (or in the Azure Portal under Connection Security):

$ az mysql server update --resource-group ghost-blog-resource --name ghost-database --ssl-enforcement Disabled
# pass `Enabled` instead of `Disabled` to turn enforcement back on

Download the cert file; we will use it later when we create the secrets. You can also verify the SSL connection via the MySQL client by using the cert file.

$ mysql -h ghost-database.mysql.database.azure.com -u admin@ghost-database -p --ssl-ca=BaltimoreCyberTrustRoot.crt.pem
mysql> status
# output should show: `SSL: Cipher in use is AES256-SHA`

Creating Secrets to Store Credentials

The secrets will store the sensitive data that we’ll need to pass on to our pods. As secret objects can store binary data as well, we need to base64 encode anything we store in them.

$ echo -n "transport" | base64
$ echo -n "service" | base64
$ echo -n "user" | base64
$ echo -n "pass" | base64

The -n option is needed so echo doesn’t add a \n at the end of the echoed string. Provide the base64 values for transport, service, user and pass:

# mail-secrets.yml
apiVersion: v1
kind: Secret
metadata:
 name: mail-credentials
type: Opaque
data:
 transport: QSBsbGFtYS4gV2hhdCBlbHNl
 service: VGhlIFJveWFsIFBvc3QuIE5vbmUgZWxzZSB3b3VsZCBJIHRydXN0 
 user: SXQncy1hIG1lISBNYXJpbw== 
 pass: WW91IHNoYWxsIG5vdA==
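If you want to sanity-check a value before it goes into a secret, the encoding round-trips cleanly (this assumes GNU coreutils' base64, which decodes with -d):

```shell
# encode without a trailing newline; a stray \n would end up inside the secret value
encoded=$(echo -n "pass" | base64)
# $encoded is now "cGFzcw=="

# decode it back to verify nothing extra sneaked in
decoded=$(echo "$encoded" | base64 -d)
# $decoded is now "pass" again
```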

Create another secret file and provide your credentials for MySQL.

# db-secrets.yml
apiVersion: v1
kind: Secret
metadata:
 name: db-credentials
type: Opaque
data:
 user: SXQncy1hIG1lISBNYXJpbw==
 host: QSB2ZXJ5IGZyaWVuZGx5IG9uZSwgSSBtaWdodCBhZGQ=
 pass: R2FuZGFsZiEgSXMgdGhhdCB5b3UgYWdhaW4/
 dbname: V2FuZGEsIGJ1dCBoZXIgZnJpZW5kcyBjYWxsIGhlciBFcmlj

Upload the secrets, so you can access them in your deployment.

$ kubectl create -f mail-secrets.yml -f db-secrets.yml

We need to create one more secret for the previously downloaded cert.

$ kubectl create secret generic ssl-cert --from-file=BaltimoreCyberTrustRoot.crt.pem

We will use these later in the deployment.

Creating the Deployment

Everything is set up, now we can create the deployment which will pull our app container and run it on Kubernetes.

# deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ghost-blog
  labels:
    app: ghost-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost-blog
  template:
    metadata:
      labels:
        app: ghost-blog
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        # envs to run ghost in production
        env:
        - name: mail__transport
          valueFrom:
            secretKeyRef:
              name: mail-credentials
              key: transport
        - name: mail__options__service
          valueFrom:
            secretKeyRef:
              name: mail-credentials
              key: service
        - name: mail__options__auth__user
          valueFrom:
            secretKeyRef:
              name: mail-credentials
              key: user
        - name: mail__options__auth__pass
          valueFrom:
            secretKeyRef:
              name: mail-credentials
              key: pass
        - name: mail__options__port
          value: "2525"
        - name: database__client
          value: mysql
        - name: database__connection__user
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: user
        - name: database__connection__password
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: pass
        - name: database__connection__host
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: host
        - name: database__connection__ssl__rejectunauthorized
          value: "true"
        - name: database__connection__ssl
          valueFrom:
            secretKeyRef:
              name: ssl-cert
              key: BaltimoreCyberTrustRoot.crt.pem
        - name: database__connection__database
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: dbname
        - name: url
          value: "http://your_url.com"
        - name: NODE_ENV
          value: production
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368
        volumeMounts:
        # define persistent storage for themes and images
        - mountPath: /var/lib/ghost/content/
          name: pd-blog-volume
          subPath: blog
        # resources ghost needs
        resources:
          requests:
            cpu: "130m"
            memory: "256Mi"
          limits:
            cpu: "140m"
            memory: "512Mi"
      volumes:
      - name: pd-blog-volume
        persistentVolumeClaim:
          claimName: pd-blog-volume-claim

Create the deployment with the following command:

$ kubectl apply -f deployment.yml
# you can run commands with the --watch flag, so you don’t have to rerun them to see changes
$ kubectl get pod -w
# if any error occurs
$ kubectl describe pod

Creating a Service to Expose our Blog

We can expose our application to the internet with the following command:

$ kubectl expose deployment ghost-blog --type="LoadBalancer" \
--name=ghost-blog-service --port=80 --target-port=2368

This will expose the ghost-blog deployment on port 80 as ghost-blog-service.

$ kubectl get service -w
# run get service with the --watch flag, so you will see when `ghost-blog-service` gets an `External-IP`

Creating a Service with Static IP

Now we want to point our DNS provider to our service, so we need a static IP.

# reserve a Static IP
$ az network public-ip create --resource-group MC_ghost-blog-resource_ghost-blog-cluster_eastus --name staticIPforGhost --allocation-method static
# get the reserved Static IP
$ az network public-ip list --resource-group MC_ghost-blog-resource_ghost-blog-cluster_eastus --query [0].ipAddress --output tsv

Now let’s create the following service.yml file and replace loadBalancerIP with yours. With this, you can always expose your application on the same IP address.

# service.yml
apiVersion: v1
kind: Service
metadata:
  name: ghost-blog-service
  labels:
    app: ghost-blog
spec:
  loadBalancerIP: 133.713.371.337 # your reserved IP
  type: LoadBalancer
  ports:
  - port: 80 # port where the service is exposed
    targetPort: 2368 # port where ghost runs
  selector:
    app: ghost-blog # must match the labels on the deployment's pods

Apply it with `kubectl apply -f service.yml`. It does the same as the kubectl expose command, but now our application is always available on the reserved static IP.

Final Thoughts on Deploying on Azure Kubernetes Service (AKS)

As you can see, even though Kubernetes abstracts away cloud providers and gives you a unified interface when you interact with your application, you still need to do quite a lot of vendor specific setup. Thus, if you are planning to move to the cloud, I highly suggest playing around with different providers so you can find the one that suits you best. Some might be easier to set up for one use case, but another might be cheaper.


Running a blog, or something similar, on several of the major platforms can help you figure out which one you should use for what, while the experimentation also gives you an idea about the actual costs you’ll pay in the long run. I know most of them have price calculators, but when it comes to running a whole cluster, you’ll face quite a lot of charges that you did not anticipate, or at least did not expect to be that high.
