Kubernetes-basic-Learnings


I created this repo to share my recent basic understanding of Kubernetes.

Kubernetes is a portable, extensible, open-source container orchestration tool that lets you run and manage container-based workloads and services.
It automates the deployment, scaling, and management of containerized applications.

Problems with containers:

1. A container's scope is limited to a single host

Containers are ephemeral (short-lived) by nature. They can die and restart at any time: if resources run low or something goes wrong (for example, an image fails to pull), the container dies immediately.

2. No auto healing

If someone kills a container, the application becomes inaccessible. A user has to notice this and restart it manually.

3. No auto scaling

Containers cannot automatically scale out or balance load when traffic increases.

4. No enterprise support

Load balancing, firewalls, API gateways, auto scaling, and auto healing are not provided by containers on their own.


Components of Kubernetes Cluster

Why Kubernetes?
In a production environment, you need to manage the containers that run the applications and ensure there is no downtime. For example, if a container goes down, another container needs to start in its place.
Kubernetes provides a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more.

1. Cluster:
By default, Kubernetes runs as a cluster (a group of nodes). If a node becomes faulty, Kubernetes immediately moves its pods to another node.
2. Auto scaling:
Kubernetes has a ReplicaSet (the successor of the ReplicationController), so there is no need to deploy new containers by hand: in the Deployment YAML file we can increase the replica count manually.
Kubernetes also supports automatic scaling through the HPA (Horizontal Pod Autoscaler): if the load increases, it spins up new pods.
3. Auto healing:
Whenever there is damage, Kubernetes detects and repairs it. When the API server receives a signal that a pod is going down, Kubernetes rolls out a replacement pod, even before the old one is fully gone.
4. Enterprise support:
Kubernetes is an enterprise-level container orchestration platform, providing the load balancing, scaling, and healing features that plain containers lack.
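The manual-replica and HPA ideas above can be sketched in YAML. A minimal, hypothetical example (the `web` name, nginx image, and thresholds are illustrative, not from this repo): a Deployment whose ReplicaSet keeps 3 pods running, with a liveness probe for auto healing, plus an HPA that adds pods when CPU load rises.

```yaml
# Hypothetical Deployment: 3 replicas managed by a ReplicaSet,
# with a liveness probe so Kubernetes can auto-heal failed pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # increase this number to scale manually
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        livenessProbe:   # restart the container if this check fails
          httpGet:
            path: /
            port: 80
---
# Hypothetical HorizontalPodAutoscaler: spins up new pods when load increases.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

Applying this with `kubectl apply -f` gives both behaviors at once: deleting a pod triggers the ReplicaSet to recreate it, and sustained CPU load above 80% triggers the HPA to scale out.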

What is a Pod, and why do we deploy applications as pods in Kubernetes?
A Pod is described as a "definition of how to run a container."
A pod can contain a single container or multiple containers. In Kubernetes, everything is driven by YAML files: a pod's YAML file describes how to run its container(s).
If we put 2 containers inside a pod, Kubernetes ensures those containers share a network namespace and can share storage.

Example: containers A and B inside a single pod can talk to each other over localhost.

Kubernetes allocates a cluster-internal IP address to each pod; we can access the application inside the container using this pod IP address. kube-proxy handles the routing of traffic to these IPs.
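The shared-network behavior can be sketched as a manifest. A minimal, hypothetical example (pod name and images are illustrative): two containers in one pod, where the second container reaches the first over localhost because they share a network namespace.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod   # hypothetical name
spec:
  containers:
  - name: container-a
    image: nginx:1.25        # serves HTTP on port 80
  - name: container-b
    image: busybox:1.36
    # Same network namespace: container-b reaches container-a via localhost,
    # no Service or pod IP needed.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]
```

Both containers also die and get rescheduled together, which is why tightly coupled helpers (sidecars) go in the same pod while independent applications get their own pods.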


Kubectl:

kubectl is the command-line tool for Kubernetes. We interact with clusters through kubectl.

Install kubectl for Linux (Ubuntu 22.04):

```shell
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```

Deploying a Kubernetes Cluster on Virtual Machines (Ubuntu 22.04)
Prerequisites:
  • 2 CPUs
  • Min. 8 GB RAM
  • Two virtual machines (a control plane and a worker node)

Installation and setup of the Kubernetes nodes:
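The usual kubeadm flow for a two-VM setup can be sketched as below. This is only an assumed outline with placeholder values, not the exact steps from the referenced guide; container runtime and network-plugin installation are omitted.

```shell
# --- On the control-plane VM: initialize the cluster ---
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for your user on the control plane.
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# --- On the worker VM: join the cluster ---
# Placeholder values; copy the exact command printed by `kubeadm init`.
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# --- Back on the control plane: verify both nodes registered ---
kubectl get nodes
```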

Reference:

Deploying a Kubernetes Cluster
