Sunday, April 1, 2018

AWS Kubernetes + MongoDB cluster network configuration

In this tutorial I will quickly go through the steps I took to set up two different clusters and configure both of them in a way that suits my needs .. here is what I wanted to have in the end:

- Set up a kubernetes cluster with one master and 2 nodes
- Set up a mongodb cluster with one primary and 2 secondary instances
- Be able to connect from my application running on kubernetes to the mongodb servers
- Keep the mongodb servers private and not publicly exposed .. for security reasons

Ready? Let's go ;)

1. Create kubernetes cluster using kops

Note: If your cluster does not get created successfully .. you may need to request limit increases for the instance types you configured for the master and nodes of the cluster ;)
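
For reference, the kops command for this setup looks roughly like the following .. the cluster name, S3 state store bucket, zone, and instance sizes are placeholders for my own values, so adjust them to yours:

kops create cluster \
  --name=mycluster.k8s.local \
  --state=s3://my-kops-state-store \
  --zones=eu-west-1a \
  --master-size=t2.medium \
  --node-size=t2.medium \
  --node-count=2 \
  --yes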

2. Create mongo-db cluster

I just followed this awesome guide, which uses a basic AWS template to create all the needed resources and configuration.
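
If you prefer the CLI over the console, launching a CloudFormation template generally looks like the sketch below .. the stack name, template URL, and parameters here are placeholders, not the actual values from the guide:

aws cloudformation create-stack \
  --stack-name mongodb-cluster \
  --template-url https://s3.amazonaws.com/my-bucket/mongodb-template.json \
  --parameters ParameterKey=KeyPairName,ParameterValue=mykeypair \
  --capabilities CAPABILITY_IAM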

3. Connect both clusters together

Now at this step you can easily SSH into your bastion instance, connect to your mongodb service, and test your credentials and so on.

Let's try to connect from our publicly available bastion server:

se7so@se7so:~/Desktop/se7so-workspace$ ssh -i "mykeypair.pem" ubuntu@ec2-bastian-****
Last login: Sun Apr  1 17:15:28 2018 from *.*.*.*
ubuntu@linux-bastian:~$ ssh -i "mykeypair.pem" ec2-user@mongodb-primary
Last login: Sun Apr  1 17:15:28 2018 from *.*.*.*
[ec2-user@mongodb-primary ~]$ mongo
MongoDB shell version v3.4.14
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.14

But you may want to connect to your DB directly from your application, which will later run as a container in a K8s POD.

In this template, however, the mongodb instances only have private IPs and are not exposed publicly through a DNS name or an Elastic IP .. they also belong to a different VPC than our K8s cluster ...

You need to configure two things in order to successfully connect to your mongodb instances from your K8s cluster: VPC peering (plus routes) and security group rules.

3.1 Create a VPC peering connection between both VPCs (make sure you pick the VPC ID that your mongodb instances belong to and the one that your K8s instances belong to).
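
With the AWS CLI the request looks like this .. both VPC IDs are placeholders:

aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-11111111 \
  --peer-vpc-id vpc-22222222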

3.2 Once requested .. you need to accept it: in the VPC console, right-click the peering connection and choose Accept Request.
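
Or from the CLI, using the pcx-* ID returned by the previous command (a placeholder here as well):

aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-11223344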

3.3 Once you accept the request .. you then need to adjust the route tables of both VPCs by adding a new route (the destination should be the CIDR of the other VPC, and the target should be the ID of the peering connection you created earlier).
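
As a CLI sketch, that's one route per VPC .. the route table IDs are placeholders, and I'm assuming the mongodb VPC uses 10.0.0.0/16 and the K8s VPC uses the kops default 172.20.0.0/16, so check your own CIDRs:

# route from the K8s VPC to the mongodb VPC
aws ec2 create-route \
  --route-table-id rtb-11111111 \
  --destination-cidr-block 10.0.0.0/16 \
  --vpc-peering-connection-id pcx-11223344

# route from the mongodb VPC back to the K8s VPC
aws ec2 create-route \
  --route-table-id rtb-22222222 \
  --destination-cidr-block 172.20.0.0/16 \
  --vpc-peering-connection-id pcx-11223344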

At this step you have configured both VPCs to route traffic to each other's IP ranges .. but that's not enough to reach our mongodb cluster, either over SSH or on the mongodb port(s).

3.4 Go to the security group of your mongodb primary instance and edit the inbound rules to allow the security group of the K8s cluster instances to connect on the mongodb ports; you can also allow SSH.

Note: Outbound accepts all traffic by default, so no need to change anything there.
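
The equivalent CLI call is roughly the following .. both security group IDs are placeholders, and 27017 is the default mongodb port:

aws ec2 authorize-security-group-ingress \
  --group-id sg-11111111 \
  --protocol tcp \
  --port 27017 \
  --source-group sg-22222222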

4. Connect from K8s cluster

Now you can SSH into any of your K8s nodes or the master and reach the mongodb primary instance via its private IP.

root@k8-master:/# telnet 10.*.*.* 27017
Trying 10.*.*.*...
Connected to 10.*.*.*.
Escape character is '^]'.


5. Test connection from a running POD on K8s

Now create a kubernetes pod and exec into it to test your connection to the mongodb instance.
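
If you don't have a pod around, a throwaway nginx one will do .. these are generic kubectl commands, and since the nginx image doesn't ship with telnet you need to install it inside the container first:

# from your workstation
kubectl run nginx --image=nginx
kubectl exec -it <your-nginx-pod> bash
# then inside the container
apt-get update && apt-get install -y telnet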

root@nginx-7c87f569d-9mp8q:/# telnet 10.*.*.* 27017
Trying 10.*.*.*...

Oops, it doesn't work :S

After many tries and a lot of research, I found some suggestions to manually adjust iptables rules so that traffic to the mongodb cluster's IP ranges gets routed through the host machine .. but I didn't like that solution, as it's not sustainable enough for me ..

Now it's time to use my kubernetes tricks :) .. what I came up with is the following ;)

5.1 Create a kubernetes service without a selector .. pointing to the mongodb IP/PORT .. here is my configuration:

service.yaml:
---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 27017

endpoints.yaml:
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 10.0.4.205
    ports:
      - port: 27017

Run:

se7so@se7so:~/Desktop/se7so-workspace/$ kubectl apply -f .
endpoints "my-service" created
service "my-service" created

Exec into your K8s POD again and run:

se7so@se7so:~/Desktop/se7so-workspace/$ kubectl exec -it nginx-7c87f569d-9mp8q bash
root@nginx-7c87f569d-9mp8q:/# telnet my-service.default.svc.cluster.local 80
Trying 100.71.103.125...
Connected to my-service.default.svc.cluster.local.
Escape character is '^]'.

Using mongo CLI:

Note: I'm mapping port 80 to 27017 here, so don't get confused .. not sure why I did it this way :P

root@nginx-7c87f569d-9mp8q:/# mongo --host my-service.default.svc.cluster.local --port 80
MongoDB shell version: 3.2.11
connecting to: my-service.default.svc.cluster.local:80/test
Welcome to the MongoDB shell.
s0:PRIMARY>
Note: you don't have to use the `.default.svc.cluster.local` suffix, you can just use `my-service`.
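
And if the 80 → 27017 mapping bugs you, you can expose the service on 27017 directly .. a variant of the same service.yaml is sketched below (the Endpoints object stays unchanged), after which you can connect with plain `mongo --host my-service`, since 27017 is the default port:

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017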


Bingo, it works, and it's a much more sustainable solution ;) .. I really hope this will help you save some hours of research.