Deploying and Managing Applications with Kubernetes

Over the weekend, I was invited as a speaker at the @spaceyatech event, where I was tasked with introducing Kubernetes to the community. In this blog, I will discuss Kubernetes, its key features, and benefits. Additionally, we will dive into a demo and explore a local cluster together.
Overview of the Blog
1. Intro to Kubernetes
2. Kubernetes Architecture
3. Main Kubernetes components
4. Demo project
What is Kubernetes?
The official definition of Kubernetes is that it is an open-source container orchestration framework, originally developed by Google in 2014 and now maintained by the Cloud Native Computing Foundation.
At its foundation, Kubernetes manages containers: whether they are Docker containers or built with any other container technology, Kubernetes can manage them.
What this means is that Kubernetes helps you manage applications that are made up of many containers across different deployment environments, whether on-premises, bare metal, cloud, hybrid cloud, or even edge devices.
What problems does Kubernetes solve?
The emergence of container technology in the industry has brought many advantages, effectively shifting the story from “it works on my machine” to “it works everywhere.”
Containers enable you to package your application into a single, portable image, which can be deployed across various environments.
The rise of containerization technology has resulted in a significant increase in the number of containers that we are running, as applications are now decoupled into smaller, more manageable components.
However, this surge in containerization also introduces challenges, such as managing a growing number of containers, addressing security concerns, and navigating complex networking issues.
As we continue to embrace the benefits of containerization, it is crucial to find solutions that effectively mitigate these challenges and maintain the efficiency and reliability of our applications; that is exactly what Kubernetes solves.
Features of an Orchestration Tool
Orchestration tools offer the following key features:
- High Availability: Ensuring that applications have no downtime and remain accessible to users at all times.
- Scalability and Performance: Allowing applications to quickly scale up when facing increased demand, and scale down when demand decreases, providing flexibility to adapt to fluctuating loads.
- Disaster Recovery: Implementing mechanisms to restore infrastructure in case of issues such as data loss or server failure, ensuring minimal impact on application availability and functionality.

Kubernetes Architecture
A Kubernetes cluster consists of at least one master node, which is connected to several worker nodes. Each worker node runs a process called the kubelet.
The kubelet is the component of the Kubernetes ecosystem that facilitates communication within the cluster and executes tasks on its node, such as running and restarting application containers.
Each worker node hosts containers from different applications, so depending on how the workload is distributed, you will have a different number of containers running on each worker node.
The worker nodes are where our applications actually run, so you might wonder what is in the master node.

Master Node:
The master node is responsible for managing and controlling the Kubernetes cluster. It comprises several processes that work together to maintain the cluster’s overall health and functionality:
- API Server: The API Server is the process that serves as the primary entry point into the Kubernetes cluster. Clients reach it through the Kubernetes dashboard (a user interface), through API scripts, or through the command-line tool kubectl.
- Controller Manager: The Controller Manager monitors the state of the cluster, ensuring that any issues are addressed promptly. For example, it can detect when a container has failed and initiate the process to create a new one.
- Scheduler: The Scheduler intelligently assigns containers to worker nodes based on their workloads. It considers factors such as resource requirements and current utilization to determine the most suitable worker node for deploying the next container.
- etcd Key-Value Store: The etcd datastore maintains the current state of the cluster at any given time. It serves as a central source of truth for the cluster’s configuration and status.
Virtual Network:
The virtual network plays a crucial role in facilitating communication within the Kubernetes cluster. By interconnecting all nodes, it creates a powerful, unified machine that enables seamless interaction between containers and efficient distribution of workloads across the cluster.
Main Kubernetes Components
In this section, we will explore some of the primary Kubernetes components that you’ll frequently encounter, such as pods, services, deployments, secrets, and configmaps. To facilitate a better understanding, we will walk through a simple use case involving a web application and a database. I will demonstrate how each Kubernetes component plays a role in deploying and managing this setup, and discuss the specific responsibilities of each component.
Pod
The pod is the smallest deployable unit in Kubernetes. It serves as an abstraction layer over a container, encapsulating one or more closely related containers within a single unit.
If you have experience with Docker containers or images, you can think of a pod as a runtime environment or an additional layer built on top of a container.
This design choice enables Kubernetes to abstract away from the underlying container technology, allowing for flexibility and the possibility of using alternative container technologies without direct interaction.
Typically, pods are designed to run a single container. However, in some cases, you may need to run multiple containers within a pod. This could include a helper container, which assists the primary container with specific tasks, or an init container, which performs setup tasks before the main container starts running.
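As a minimal sketch, here is what a single-container pod looks like as a manifest; the pod name and image are illustrative, not part of the demo later in this post:

```shell
# Illustrative single-container pod manifest; the name and image are made up for this sketch.
cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
EOF

# With a running cluster you would create and inspect it like this:
#   kubectl apply -f nginx-pod.yaml
#   kubectl get pods
#   kubectl describe pod nginx-pod
grep -c 'name:' nginx-pod.yaml   # sanity check: the manifest names the pod and its container
```

In practice (as we will see in the Deployment section), you rarely create bare pods like this; deployments manage them for you.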

How does communication take place?
Kubernetes provides a built-in virtual network that gives each pod (not each container) its own unique IP address.
This internal IP address facilitates communication between pods, allowing components like application containers to interact with database containers seamlessly.
Pods in Kubernetes are ephemeral, meaning they can be terminated or fail very easily.
For instance, if a database container crashes due to an issue with the application inside it, a new container will be created to replace it.
However, this new container will be assigned a different IP address, which is inconvenient if other components communicate with the database using its IP address; continuously adjusting the IP address each time a pod restarts is impractical.
To address this issue, Kubernetes introduces another component called a “service”.

Service
A service in Kubernetes essentially provides a stable or permanent IP address that can be associated with each pod. In our example, the web application pod and the database pod would each have their own dedicated service. The key advantage here is that the lifecycles of a service and a pod are not interconnected. Even if a pod fails or is terminated, the service and its IP address remain unaffected, so there is no need to continuously update the communication endpoint.
To enable access to your application via a browser, you will need to create an external service in Kubernetes.
External services allow communication with external sources, making your application publicly accessible.
However, it’s important to keep your database secure and not expose it to public requests. To achieve this, you can create an internal service for your database. The type of service (external or internal) can be specified during the service creation process, ensuring appropriate access and security for each component of your application.
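As a sketch of what an internal service might look like, here is an illustrative manifest (the `db-service` name and `app: db` selector are made up for this example). Leaving out the `type` field gives the default, ClusterIP, which is only reachable from inside the cluster:

```shell
# Illustrative internal service manifest; 'db-service' and the 'app: db' selector are made-up names.
cat > db-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  # no 'type' field: defaults to ClusterIP, reachable only from inside the cluster
  selector:
    app: db
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
EOF

# An external service would instead set 'type: NodePort' (or 'LoadBalancer' on a cloud provider).
# With a running cluster: kubectl apply -f db-service.yaml
grep -q 'kind: Service' db-service.yaml && echo "manifest looks ok"
```

We will see a real external (NodePort) service in the demo project below.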
We have now seen some of the basic components of Kubernetes: a single server with a couple of containers running and some services. Nothing really cool yet, but we are getting there.
ConfigMaps
Pods in Kubernetes use services to communicate with one another. For example, a web application might connect to its database using a service endpoint called mongo-db-service. Traditionally, configuring the database URL would require updating an application properties file or the build image. If the service endpoint changes, this process can be time-consuming and inefficient.
Kubernetes addresses this issue with ConfigMaps, which allow you to store external configurations, such as database URLs, separately from the application. ConfigMaps are connected to pods, so when you need to update the service endpoint, you can simply adjust the ConfigMap without rebuilding the application or going through a complex deployment cycle.
While ConfigMaps are great for storing general configuration data, they may not be suitable for storing sensitive information like database usernames and passwords. Storing such credentials in ConfigMaps could be insecure. To handle sensitive data securely, Kubernetes offers another component called Secrets, which are specifically designed for storing and managing sensitive information.
Secret
Secrets in Kubernetes serve a similar purpose to ConfigMaps, but they are specifically designed for storing sensitive data such as credentials. Although Secrets store data in base64 encoding, this does not provide automatic encryption or absolute security. To ensure that sensitive data stored in Secrets is properly encrypted and secure, it is recommended to use third-party encryption tools or other security measures in conjunction with Kubernetes Secrets.
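Because base64 is trivially reversible, treating it as a security measure is a mistake. A quick terminal check, using the same demo password that appears later in this post, makes the point:

```shell
# base64 is an encoding, not encryption: anyone who can read the Secret can decode it.
encoded=$(printf '%s' 'mongopassword' | base64)
echo "$encoded"    # bW9uZ29wYXNzd29yZA==

decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"    # mongopassword
```

This is why tools such as encrypted secret stores or sealed secrets are recommended on top of plain Kubernetes Secrets.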
Volumes

In our current setup, if the database pod used by the application restarts, any generated application data would be lost. This is inconvenient, as you’d want your data to persist reliably for the long term. In Kubernetes, you can achieve this using volumes.
Volumes work by attaching a physical storage device, such as a hard drive, to your pod. This storage can be located on the same server node, on a remote machine outside the Kubernetes cluster, in cloud storage, or on-premises storage. When the pod restarts, all data remains persistent.
It’s important to understand that Kubernetes does not directly manage data persistence. Instead, you can think of the storage as an external drive plugged into the cluster. As a Kubernetes administrator, it’s your responsibility to ensure that data is backed up and properly managed.
Deployment
To minimize downtime and take advantage of distributed systems and containers, we can replicate our application across multiple servers. Instead of relying on a single pod, we can create replicas running on different nodes, all connected to the same service. Since the service has a static IP address and acts as a load balancer, it can distribute incoming requests to the least busy pod.
Rather than manually creating multiple pods, we can define a blueprint of the application pod and specify the number of replicas we want to run. This blueprint is called a “deployment.”
In practice, you will create deployments instead of individual pods, as deployments allow you to specify the number of replicas and easily scale up or down as needed.
Remember pods are an abstraction over containers, while deployments are an abstraction over pods, making it more convenient to interact with, replicate, and configure pods. If one pod fails, the service will forward requests to another available pod, ensuring the application remains accessible to users.
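Scaling a deployment is then a one-line operation. This is a cluster-dependent sketch, and the deployment name below is illustrative:

```shell
# Requires a running cluster; 'webapp-deployment' is an illustrative name.
#   kubectl scale deployment webapp-deployment --replicas=3   # ask for 3 replicas
#   kubectl get pods -l app=webapp                            # all replicas share the same label
#   kubectl scale deployment webapp-deployment --replicas=1   # scale back down
echo "Kubernetes converges the live state to the requested replica count"
```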
Statefulset
You might be wondering about the database pod and how to handle it in case of failure. Since databases have a state (the data), replicating them using deployments is not feasible. When creating replicas of a database, they all need access to the same shared data storage. To avoid data inconsistency, a mechanism is required to manage which pods are writing to the storage and which are reading from it. In Kubernetes, this component is called a StatefulSet.
StatefulSets are designed for stateful applications or databases, such as MySQL, MongoDB, or any other stateful service. Instead of using deployments, you should create stateful applications using StatefulSets. Just like deployments, StatefulSets handle replicating the pods and scaling them up or down. However, they also ensure that database read and write operations are synchronized to avoid any data inconsistency issues.
With two replicas of our application and two replicas of the database, along with load balancing, our setup is more robust. If one node fails and nothing runs on it, we still have the other node running both the application and the database. This ensures that the application remains accessible to users, preventing downtime and enhancing the overall reliability of the system.
Utilizing these core components, you can construct powerful applications that are highly available, scalable, and resilient. By leveraging Kubernetes and its various components, you can create efficient, robust, and flexible systems that can adapt to your application’s requirements and provide a seamless user experience.
Kubernetes Configuration
All configurations within the Kubernetes cluster are managed through the master node via a process known as the API server. Various Kubernetes clients, such as the Kubernetes UI dashboard, API scripts, or command line tools like kubectl, communicate with the API server and send configuration requests to it. The API server serves as the primary and sole entry point into the cluster. These requests must be formatted in YAML or JSON, which are the accepted file formats for Kubernetes configuration files.
Configurations in Kubernetes are declarative, which means you specify your desired outcome in a YAML file, and Kubernetes works to achieve that state. For example, if you declare that you want two replicas of a pod and one of them fails, the controller manager detects that one pod is not functioning properly. It then takes appropriate action to restore the desired state, ensuring that your declared configuration is maintained and the system remains consistent with your intentions.
Every Kubernetes configuration file has three main parts:
- Metadata: This includes information such as the name of the resource being defined.
- Specification: Each component in Kubernetes has its own set of specifications, which are the properties or attributes you want to apply to the resource.
- Status (optional): This section represents the current state of the resource within the cluster. It is usually managed by Kubernetes itself and might not be included in your initial configuration files.
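As a minimal sketch (the resource name and values are illustrative, and a real Deployment needs more fields), the three parts map onto a config file like this:

```shell
# Illustrative skeleton of a Kubernetes config file; 'example' is a made-up name.
cat > skeleton.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:          # part 1: metadata, e.g. the resource name
  name: example
spec:              # part 2: the desired specification for this kind of resource
  replicas: 2
# status:          # part 3: maintained by Kubernetes itself, not written by you
EOF
grep -q 'metadata:' skeleton.yaml && echo "skeleton written"
```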
Yaml Configuration
The format of Kubernetes configuration files is YAML, which is straightforward to understand but very strict with indentation. If your file has incorrect indentation, it will be considered invalid. However, once you become familiar with the syntax, it’s fairly simple to use.
You can store the configuration files alongside your application code since the deployment and service configurations are applied to your cluster as part of the infrastructure-as-code (IaC) approach. This practice helps ensure consistency and maintainability across your entire application and its infrastructure.
Minikube and Kubectl — Local Setup
What is minikube?
In the Kubernetes world, when setting up a cluster, you typically need multiple control plane and worker nodes. However, if you want to experiment locally, there’s an open-source tool called Minikube that simplifies the process. Minikube creates a single-node cluster where both master and worker processes run on the same node. This node has a Docker runtime installed, allowing you to easily test and learn Kubernetes concepts in a local environment.
How to install Minikube
On macOS, you can easily install Minikube with Homebrew by running
brew install minikube
Minikube needs a driver to run, so you will also need Docker or a virtual machine installed.
What is Kubectl?
kubectl is a command-line interface (CLI) tool that provides powerful capabilities for interacting with Kubernetes clusters. It allows you to perform a wide range of actions, from creating and managing resources like pods, services, and deployments to scaling applications up or down, rolling out updates, and monitoring cluster health.
As the primary tool for administering and managing Kubernetes clusters, kubectl offers a flexible and extensible platform for building and deploying modern, cloud-native applications. It provides a simple and consistent interface for interacting with your cluster, regardless of whether you are running it on-premises or in the cloud, and offers a wide range of options for customizing and fine-tuning your Kubernetes deployment to meet your specific needs.
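A few everyday kubectl commands, as a cluster-dependent sketch (resource and file names are illustrative):

```shell
# All of these assume a running cluster; 'webapp-deployment' is an illustrative name.
#   kubectl get nodes                               # list cluster nodes
#   kubectl get pods -o wide                        # list pods with their node and IP
#   kubectl logs <pod-name>                         # print a pod's container logs
#   kubectl describe deployment webapp-deployment   # show detailed state and events
#   kubectl apply -f my-config.yaml                 # apply a declarative config file
echo "kubectl talks to the API server on the master node"
```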
Lens

Lens is a Kubernetes UI that provides a user-friendly interface for viewing and managing your cluster. Unlike tools such as Visual Studio that serve multiple purposes, Lens is specifically designed for Kubernetes, with an intuitive and easy-to-use graphical user interface.
K8sgpt

This is a powerful new tool that leverages OpenAI’s ChatGPT. To use it, simply connect it to the ChatGPT API. It can scan your entire cluster in seconds and provide valuable feedback and recommendations, which can greatly simplify your work with Kubernetes.
Starting Minikube
minikube start
This will start Minikube and create a local cluster on our machine.

We can then view the status of our cluster by running
minikube status

Now we can start interacting with our cluster using kubectl, which comes bundled with Minikube.
Let’s view the running nodes:
kubectl get nodes
We can see that the node is up and running, so we can go ahead and deploy an application using Kubernetes.

Demo Project Overview
We will deploy a MongoDB database and a web application that connects to it, pulling the external configuration (the DB URL and credentials) from a ConfigMap and a Secret. Finally, we will make our web application accessible externally in the browser.
Let’s generate the configuration needed for deploying our application. We’ll begin by creating a ConfigMap to hold the database endpoint information, followed by a Secret to store the MongoDB username and password. Finally, we’ll create a Deployment and Service for both the MongoDB and web application.
mongo-config.yaml
Creating it is simple; you can refer to the Kubernetes documentation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
data: holds all the key-value pairs that we define as external configuration. Here the value is the name of the service we will create for the MongoDB database.
mongo-secret.yaml
You can refer to the documentation on how to create a secret
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
The kind is Secret.
The type is Opaque, the default for arbitrary key-value pairs.
The values in a Secret are base64-encoded. Encoding them is simple: go to the terminal and run
echo -n mongouser | base64
for the Mongo username, and
echo -n mongopassword | base64
for the Mongo password.
When we create the deployment, we can reference any of these values.
mongo.yaml
We will create one file for the Mongo deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongodb
        image: mongo:5.0
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-password
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
spec:
  selector:
    app: mongo
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
template: the configuration of the pod within the deployment.
containers: which image will be used and which ports will be exposed.
Labels
Labels are key-value pairs attached to Kubernetes resources, meaning you can label any component, from a pod to a deployment.
They serve as additional identifiers for our components; for example, we can identify all pod replicas by a specific label that they all share.
Selector matchLabels
The selector tells the deployment which pods belong to it: it will match all pods created with this label.
webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nanajanashia/k8s-demo-app:v1.0
        ports:
        - containerPort: 3000
        env:
        - name: USER_NAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-user
        - name: USER_PWD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-password
        - name: DB_URL
          valueFrom:
            configMapKeyRef:
              name: mongo-config
              key: mongo-url
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30100
Deploy all resources
Deploying the resources is easy; run the following:
kubectl apply -f mongo-config.yaml
kubectl apply -f mongo-secret.yaml
kubectl apply -f mongo.yaml
kubectl apply -f webapp.yaml
After running these commands, we can check that everything is running in our cluster:
kubectl get all

Next, we can access the application using the service we created:
kubectl get svc
Next, get the IP address of Minikube:
minikube ip
Grab the Minikube IP, open it in the browser, and add the port that our service is listening on (30100).
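Alternatively, Minikube can assemble the URL for you. This sketch requires the running cluster from the steps above; the service name matches the one defined in webapp.yaml:

```shell
# Requires a running Minikube cluster with webapp-service deployed.
#   minikube ip                             # e.g. 192.168.49.2
#   minikube service webapp-service --url   # prints the reachable URL directly
echo "URL shape: http://<minikube-ip>:30100"
```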

Thank you for reading to the end. It was great learning together, and I hope this inspires you to play around with Kubernetes and see what it has to offer.