LAUNCHING A COMPLETE SERVER ENVIRONMENT ON AN EKS CLUSTER

Akshat Soni
6 min read · Jul 11, 2020

What is an EKS cluster?

Amazon Web Services (AWS) provides a managed service, Amazon EKS, that takes care of the Kubernetes master nodes and lets us run worker (slave) nodes on top of the AWS cloud. This means we can run multiple nodes on AWS infrastructure, which is far more efficient than running these clusters on our personal laptops. The hardware the cloud provides is much better than a normal working laptop, so it increases the performance of our web applications.

Now I’ll start by creating an admin user in AWS. Go to the IAM service, click on Users, and then click on the Create User icon.
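If you prefer the CLI over the console, a rough equivalent of these console steps looks like this (the user name eks-admin is just a placeholder I chose for illustration):

aws iam create-user --user-name eks-admin
aws iam attach-user-policy --user-name eks-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-access-key --user-name eks-admin

The last command prints the access key ID and secret access key, which is the same information as the credentials file you download from the console.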

After this, proceed as shown above: give the user admin access and download the credentials file for the new user.

After creating the user, give its credentials from your Command Prompt as shown, and give ap-south-1 (Mumbai) as the region name. From now on, every command will run with that user’s permissions.
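This is done with aws configure; the key values below are placeholders, only the region matches what I used:

aws configure
AWS Access Key ID [None]: AKIA................
AWS Secret Access Key [None]: ....................
Default region name [None]: ap-south-1
Default output format [None]: json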

Now after this, put the eksctl binary into the minikube folder on the C drive, and then add that folder to the PATH environment variable.
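Assuming the binary was copied to C:\minikube (use whatever folder you actually picked), adding it to the user PATH from Command Prompt and verifying could look like this:

setx PATH "%PATH%;C:\minikube"
eksctl version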

Now create a cluster.yml file and run it as shown. Here I have used a static cluster, or you can say an on-spot cluster.
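I could not reproduce the exact file from the screenshot, but a minimal cluster.yml for eksctl along these lines fits the description (cluster name, node group name, instance type, and capacity are my own example values):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: mycluster
  region: ap-south-1

nodeGroups:
  - name: ng1
    instanceType: t2.micro
    desiredCapacity: 2

Then launch it with:

eksctl create cluster -f cluster.yml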

You can see that after creating the cluster we can check the node group from the CLI as well as the web UI. Internally, the cluster provisions resources from services like EC2, EBS, and CloudFormation as the requirement comes up.
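For example, assuming the cluster name used above, the CLI checks would be:

eksctl get cluster --region ap-south-1
eksctl get nodegroup --cluster mycluster --region ap-south-1

In the web UI you can see the same things under the EKS, EC2, and CloudFormation consoles.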

The cluster comes with Kubernetes installed and with all the necessary network plugins for connectivity between the pods, so we do not have to install them manually. We do, however, need to update the kube-config file so that we can create pods by giving commands from the CLI. To update the config file, run the command shown below.
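The screenshot of the command is not included here, but with my example cluster name it would be:

aws eks update-kubeconfig --name mycluster --region ap-south-1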

Now, after updating the configuration file, we will first create a namespace of our own so that the whole project runs inside that space, and make it the default for our context so we don’t have to specify it every time.
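A sketch of those two steps, with an example namespace name of my own:

kubectl create namespace eksns
kubectl config set-context --current --namespace=eksns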

After creating the namespace we will change the storage class. For this, create a storage class file, sc.yml, as sketched below.
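I do not have the original screenshot, but a minimal EBS-backed storage class along these lines matches the description (the class name is my choice):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: myebs-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain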

Run the sc.yml file using kubectl create -f sc.yml; it will create a new class as shown, but after creating it we still need to do some manual settings to make it the default storage class for our project. Use kubectl edit sc gp2, copy the annotation part, and paste it into the edit view of your own storage class as shown above.
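The annotation you copy from gp2 is the default-class marker; in the new class (using my example name) it looks roughly like this:

metadata:
  name: myebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"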

Now two initial requirements for the project are completed. We can also do auto scaling of the nodes, but for that we need to check the number of IPs a node can provide.
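The per-node limit comes from the instance type: roughly, max pods = number of ENIs x (IPs per ENI - 1) + 2 with the standard EKS AMI and VPC CNI. One way to see what a running node actually reports is to look at the Allocatable pods field (the node name is whatever kubectl get nodes prints):

kubectl get nodes
kubectl describe node <node-name>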

Now we will move towards launching our web apps and database. There will be multiple pods running at one time, and different users will get different pods in different zones, so we need centralized storage for that. I have created a PVC file, and it takes its storage from the EBS service.
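The PVC file from the screenshot is not reproduced here, but a claim along these lines matches what is described (the name and size are example values; with the storage class above it ends up backed by an EBS volume):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi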

After running this, the status of the PVC might show as Pending: the claim is only bound once a pod actually uses it, and only then is the storage provisioned and handed to that pod.

A Secret file is used to keep the username and password safe, as there may be a number of different people working on one project. A kustomization file is used to run multiple files at one time (a sketch follows below). Run the kustomization file using kubectl create -k .
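A rough sketch of such a kustomization.yaml, where the file names and the password literal are my own examples rather than the ones from the repo:

secretGenerator:
  - name: db-pass
    literals:
      - password=changeme   # example only, keep real values out of the repo
resources:
  - pvc.yml
  - mysql-deployment.yml
  - drupal-deployment.yml

With this in the current folder, kubectl create -k . creates the secret and all the listed resources in one go.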

After running the deployment file you need to make some changes in it. We need to provide the volume mount path in the deployment, so use kubectl edit deploy deploymentname and, inside the spec block, give the volume name and the claimName; it will then allocate the block storage to the pods. I was not able to take a screenshot at that moment, but I have provided a test.yml file in my GitHub repo in which I have marked where you need to make the changes.
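Inside kubectl edit deploy, the part you add under the pod spec looks roughly like this (the container, volume, claim, and mount path names here are illustrative, not the ones from test.yml):

spec:
  template:
    spec:
      containers:
        - name: mysql
          volumeMounts:
            - name: db-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: db-storage
          persistentVolumeClaim:
            claimName: drupal-pvc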

After doing this you can check the details of all the pods you launched. In my case I have used drupal:8.6 as the frontend web app, and we can use postgres:10 as well as mysql:5.6 as the backend database. But we need to expose the ports of the databases (see the sketch below).
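The database only needs to be reachable inside the cluster, so a plain ClusterIP service is enough; assuming the deployments are named mysql or postgres, something like:

kubectl expose deploy mysql --port=3306
kubectl expose deploy postgres --port=5432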

Now after doing this we need to expose the port for Drupal so that the outside world can connect to it. For this, use kubectl expose deploy deployname --type=LoadBalancer --port=80
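Once the load balancer is provisioned, its address shows up in the EXTERNAL-IP column:

kubectl get svc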

After connecting to Drupal, fill in all the necessary details to make the connection to the database.

Use the eksctl delete cluster -f cluster.yml command to destroy the complete environment, and then check manually whether the CloudFormation stacks have been deleted completely.

There is one more approach to this part: we can use the EFS service instead of EBS, but on the first go I was not able to understand it completely, and there is also some part of Helm I was not able to understand during the training. I am trying to learn about them, and as soon as I understand those concepts I will create a new blog and try to explain everything in it.

THANK YOU

Github link: https://github.com/akshat-crypto/EKS_TRAINING.git

LinkedIn profile: https://www.linkedin.com/in/akshat-soni-011b461a6
