Highly Available, Multi-Master, Multi-AZ PostgreSQL Cluster with KOPS

December 25, 2020

First of all, there are different ways to do this; what follows is simply my architecture.

This article shows how to create a highly available Kubernetes cluster on AWS with masters spread across multiple Availability Zones. In other words, we’ll create multiple Kubernetes master and worker nodes running across multiple zones.

To deploy an HA cluster, I’ll create three masters and three workers spread across three different availability zones (no superstition about the number three). That way, if a master or an entire availability zone goes down, the masters and workers in the other two zones keep the cluster running.

Prerequisites

  • AWS account
  • Ansible

Step I

Create an EC2 instance on AWS with Ansible.

---
- name: EC2
  hosts: localhost
  gather_facts: False

  vars:
      region: us-west-2
      instance_type: t2.medium
      ami: ami-06f2f779464715dc5 
      keypair: kks 
 
  tasks:
    - name: Create an EC2 instance
      ec2:
         key_name: "{{ keypair }}"
         instance_type: "{{ instance_type }}"
         image: "{{ ami }}"
         wait: true
         region: "{{ region }}"
         count: 1
         assign_public_ip: yes
         # assign_public_ip requires a subnet ID; replace with a subnet from your VPC
         vpc_subnet_id: subnet-xxxxxxxx
         instance_tags:
            Name: void
      register: ec2

I chose ami-06f2f779464715dc5 as the instance image, but that choice is optional. The instance type of this management machine doesn’t matter much either, though I’d recommend t2.medium for this job.

I also used my existing key pair ‘kks’. You must create your own.
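If you don’t have a key pair yet, here is a minimal sketch of creating one from the AWS CLI; the name my-kops-key is just a placeholder:

# Create a key pair in us-west-2 and save the private key locally (name is a placeholder)
aws ec2 create-key-pair --key-name my-kops-key --region us-west-2 \
    --query 'KeyMaterial' --output text > my-kops-key.pem
chmod 400 my-kops-key.pem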

Step II

We need DNS: all hosts on which we’ll deploy the Kubernetes HA cluster must be resolvable by a proper DNS server.

This can be done with Route 53. After registering the domain, you must create a hosted zone for it.
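For reference, the hosted zone can also be created from the CLI; the domain below is a placeholder for your own:

# Create a public hosted zone for the cluster's DNS name (placeholder domain)
aws route53 create-hosted-zone --name k8s.example.com --caller-reference $(date +%s)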

Step III

Create an S3 bucket for KOPS to use as its state store.
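As a sketch, the bucket can be created from the CLI (the bucket name is a placeholder); enabling versioning on the KOPS state store is a good idea so you can roll back the cluster state:

# Create the state-store bucket in us-west-2 (bucket name is a placeholder)
aws s3api create-bucket --bucket my-kops-state-store --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2

# Versioning lets you recover previous versions of the cluster state
aws s3api put-bucket-versioning --bucket my-kops-state-store \
    --versioning-configuration Status=Enabled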

Step IV

Install KOPS and other requirements for Kubernetes on the target machine via Ansible.

---
- name: KOPS & k8s install
  hosts: 34.220.51.161
  become: yes
  gather_facts: False

  tasks:

    - name: k8s repo
      shell: "{{item}}"
      with_items:
        - curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
        - echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list

    - name: Upgrade
      apt:
        upgrade: yes
        update_cache: yes
        force_apt_get: True

    - name: K8s components
      apt:
        pkg:
          - 'kubelet'
          - 'kubeadm'
          - 'kubectl'
        state: present

    - name: KOPS install
      get_url:
        url: https://github.com/kubernetes/kops/releases/download/1.13.0/kops-linux-amd64
        dest: /home/ubuntu/
        
    - name: Make the kops binary executable
      file:
        path: /home/ubuntu/kops-linux-amd64
        mode: '0775'

    - name: Move the kops binary into the PATH
      command: mv /home/ubuntu/kops-linux-amd64 /usr/local/bin/kops

    - name: Install AWS CLI
      apt:
        name: awscli
        state: present

With this playbook, the kubelet, kubeadm, kubectl, AWS CLI, and KOPS binaries are installed on the target machine.
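To run it, here is a minimal sketch, assuming the playbook is saved as kops-install.yml, the inventory file is named hosts, and the instance from Step I runs Ubuntu with the kks key pair:

# Contents of the assumed inventory file "hosts":
#   34.220.51.161 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/kks.pem

ansible-playbook -i hosts kops-install.yml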

Step V

Create a bash script to automate the AWS CLI profile configuration and the KOPS cluster installation with the necessary IAM policies, multi-master across the availability zones us-west-2a, us-west-2b, and us-west-2c.

#!/bin/bash
  
aws configure

aws iam create-group --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops

echo 'Enter S3 bucket name:'
read s3

export KOPS_STATE_STORE=s3://$s3

echo 'Enter the cluster name:'
read name
export KOPS_CLUSTER_NAME=$name

# The cluster definition must exist in the state store before an SSH secret can be added,
# so define the cluster first, attach the key, then apply the changes.
kops create cluster --name=$KOPS_CLUSTER_NAME --cloud=aws --zones=us-west-2a,us-west-2b,us-west-2c --master-size=t2.xlarge --node-count=3 --node-size=t2.xlarge --master-zones=us-west-2a,us-west-2b,us-west-2c --state=$KOPS_STATE_STORE

kops create secret --name $KOPS_CLUSTER_NAME sshpublickey admin -i ~/.ssh/id_rsa.pub

kops update cluster --name $KOPS_CLUSTER_NAME --state=$KOPS_STATE_STORE --yes

This script:

  • Configures the AWS profile
  • Creates the IAM user, group, and policies for KOPS
  • Defines the cluster name and the S3 state store
  • Deploys masters and workers across multiple availability zones in the us-west-2 region

Of course, you can do the same thing in the AWS IAM console with a JSON policy import.
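One detail the script leaves implicit: KOPS should run with the kops user’s credentials. A hedged sketch of generating access keys for that user and exporting them, along the lines of the KOPS getting-started docs:

# Generate access keys for the kops IAM user (note the two values in the output)
aws iam create-access-key --user-name kops

# Export the new keys in the shell that will run kops (values come from the output above)
export AWS_ACCESS_KEY_ID=<access key id>
export AWS_SECRET_ACCESS_KEY=<secret access key>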

Step VI

Let’s see the magic.

kops validate cluster

In the validation output you can see that I chose the t2.xlarge instance type, to avoid starving PostgreSQL of resources in Kubernetes.
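To double-check the multi-AZ spread after validation, something like the following works (a sketch; the exact zone label can differ between Kubernetes versions):

# List the nodes together with their availability-zone labels
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone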

Step VII

Create another bash script to install and configure Helm.

#!/bin/bash
  
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

kubectl create -f role.yaml
kubectl create serviceaccount -n kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

helm init --service-account tiller
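Once helm init finishes, it’s worth confirming that Tiller came up before installing charts (role.yaml itself isn’t shown in this post; it is assumed to hold the RBAC definitions for Tiller):

# The tiller-deploy pod should be Running in kube-system
kubectl get pods -n kube-system | grep tiller

# Once Tiller is ready, both client and server versions are reported
helm version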

Step VIII

I used the Stolon chart; Stolon is a cloud-native PostgreSQL manager for PostgreSQL high availability.

helm install --name stolon2 stable/stolon
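A rough sketch of inspecting what the chart created; the exact resource names depend on the chart version, but they include the release name stolon2:

# Keeper, sentinel and proxy pods plus their services
kubectl get pods,svc | grep stolon2

# Reach PostgreSQL locally through the proxy (service name assumed)
kubectl port-forward svc/stolon2-stolon-proxy 5432:5432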

Step IX

Our expectation is that if we kill any PostgreSQL instance (pod), a replacement pod should come up immediately and the Stolon cluster should stay available.
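A minimal sketch of that test; the pod name below is a placeholder taken from kubectl get pods:

# Delete one of the PostgreSQL keeper pods
kubectl delete pod <stolon keeper pod name>

# Watch the replacement pod come up within seconds
kubectl get pods -w | grep stolon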

rockz!

Step X

If you check a few nodes at random, you should be able to confirm that the containers are actually running inside them.
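One way to do that check (a sketch; node names and IPs come from kubectl get nodes -o wide, and the SSH user is typically admin on the default kops Debian images):

# See which pods were scheduled on a randomly chosen node
kubectl get pods --all-namespaces -o wide | grep <node name>

# Or SSH to the node and list the running containers
ssh admin@<node public ip> docker ps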

Seems good.

Step XI

The Cluster Autoscaler for AWS integrates with Auto Scaling groups. I’ll show the Auto-Discovery deployment; normally there are four different setup options:

  • One Auto Scaling group
  • Multiple Auto Scaling groups
  • Master Node setup
  • Auto Discovery

Let’s get the Cluster Autoscaler deployment manifest (the Auto-Discovery example for AWS from the kubernetes/autoscaler repository).

In that manifest, you have to replace <YOUR CLUSTER NAME> in the following flag with your own cluster name:

--node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME>

Another point: the ssl-certs mount path must be changed to /etc/ssl/certs/ca-certificates.crt instead of /etc/ssl/certs/ca-bundle.crt, because of the host distro’s certificate layout. If you are working with EKS, it should be the /etc/kubernetes/pki directory instead.

The last point: I recommend using the k8s.gcr.io/cluster-autoscaler:v1.13.2 image instead of k8s.gcr.io/cluster-autoscaler:v1.2.2, due to a small bug related to Auto Scaling Groups.
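Putting the three changes together, the relevant part of the Auto-Discovery deployment would look roughly like this (a sketch, not the full manifest; <YOUR CLUSTER NAME> is a placeholder):

# Excerpt from the cluster-autoscaler Deployment spec (sketch)
      containers:
        - name: cluster-autoscaler
          image: k8s.gcr.io/cluster-autoscaler:v1.13.2      # instead of v1.2.2
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME>
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt      # changed from ca-bundle.crt
              readOnly: true
      volumes:
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs/ca-certificates.crt

For Auto-Discovery to find anything, the Auto Scaling groups that KOPS created must actually carry the k8s.io/cluster-autoscaler/enabled and k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME> tags.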

Bon Appétit!



Written by Deniz Parlak