How Kubernetes Is Used in Industry and Its Use Cases👩🏻‍🏫

Gulsha Chawla
11 min read · Aug 24, 2021

What is Kubernetes?

Kubernetes (also known as k8s or “Kube”) is an open-source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.

✒️In other words, you can cluster together groups of hosts running Linux® containers, and Kubernetes helps you easily and efficiently manage those clusters.

✒️Kubernetes clusters can span hosts across on-premises, public, private, or hybrid clouds. For this reason, Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling, like real-time data streaming through Apache Kafka.

✒️Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers. (This is the technology behind Google’s cloud services.)

✒️Google generates more than 2 billion container deployments a week, all powered by its internal platform, Borg. Borg was the predecessor to Kubernetes, and the lessons learned from developing Borg over the years became the primary influence behind much of Kubernetes technology.

What can you do with Kubernetes?

✒️The primary advantage of using Kubernetes in your environment, especially if you are optimizing app dev for the cloud, is that it gives you the platform to schedule and run containers on clusters of physical or virtual machines (VMs).

✒️More broadly, it helps you fully implement and rely on a container-based infrastructure in production environments. And because Kubernetes is all about automation of operational tasks, you can do many of the same things other application platforms or management systems let you do — but for your containers.

✒️Developers can also create cloud-native apps with Kubernetes as a runtime platform by using Kubernetes patterns. Patterns are the tools a Kubernetes developer needs to build container-based applications and services.

With Kubernetes you can:

✏️Orchestrate containers across multiple hosts.
✏️Make better use of hardware by maximizing the resources available to your enterprise apps.
✏️Control and automate application deployments and updates.
✏️Mount and add storage to run stateful apps.
✏️Scale containerized applications and their resources on the fly.
✏️Declaratively manage services, which guarantees the deployed applications are always running the way you intended them to run.
✏️Health-check and self-heal your apps with auto-placement, auto-restart, auto-replication, and autoscaling.
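Several of the points above come together in a single manifest. Here is a minimal sketch of a Deployment (the name `web-frontend` and the nginx image are hypothetical, chosen only for illustration) that declares a desired state of three replicas, sets resource requests so the scheduler can pack hardware efficiently, and adds a health check so Kubernetes can self-heal:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend        # hypothetical name for this example
spec:
  replicas: 3               # declared desired state: keep 3 pods running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: nginx:1.21   # example image; swap in your own app
        resources:
          requests:         # the scheduler uses these to place pods efficiently
            cpu: "250m"
            memory: "128Mi"
        livenessProbe:      # health check: restart the container if it stops responding
          httpGet:
            path: /
            port: 80
```

You could apply this with `kubectl apply -f deployment.yaml` and later scale it on the fly with `kubectl scale deployment web-frontend --replicas=5`; if a pod dies, Kubernetes replaces it automatically to restore the declared replica count.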

However, Kubernetes relies on other projects to fully provide these orchestrated services. With the addition of other open-source projects, you can fully realize the power of Kubernetes. These necessary pieces include (among others):

✏️Registry, through projects like Docker Registry.
✏️Networking, through projects like Open vSwitch and intelligent edge routing.
✏️Telemetry, through projects such as Kibana, Hawkular, and Elastic.
✏️Security, through projects like LDAP, SELinux, RBAC, and OAuth with multitenancy layers.
✏️Automation, with the addition of Ansible playbooks for installation and cluster life cycle management.
✏️Services, through a rich catalog of popular app patterns.

Let’s break down some of the more common terms to help you better understand Kubernetes.

Control plane: The collection of processes that control Kubernetes nodes. This is where all task assignments originate.

Nodes: These machines perform the requested tasks assigned by the control plane.

Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage from the underlying container. This lets you move containers around the cluster more easily.
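As a sketch, a minimal pod manifest with two containers sharing the same network namespace might look like this (the names and images are hypothetical, picked just to illustrate the idea):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical name
  labels:
    app: demo
spec:
  containers:
  - name: app
    image: nginx:1.21       # example image
    ports:
    - containerPort: 80
  - name: log-sidecar       # sidecar sharing the pod's IP, IPC, and hostname
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
```

Because both containers share the pod's IP address, they can reach each other over localhost, which is what makes sidecar patterns like this possible.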

Replication controller: This controls how many identical copies of a pod should be running somewhere on the cluster.

Service: This decouples work definitions from the pods. Kubernetes service proxies automatically route service requests to the right pod, no matter where it moves in the cluster or even if it's been replaced.
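To illustrate, a minimal Service (the name, label, and port values here are assumptions for the example) selects pods by label, so requests keep reaching the right pods even as they move around the cluster or are replaced:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service        # hypothetical name
spec:
  selector:
    app: demo               # routes traffic to any pod carrying this label
  ports:
  - port: 80                # port the service exposes inside the cluster
    targetPort: 80          # port the pod's container listens on
```

Running `kubectl get endpoints demo-service` would show which pod IPs currently back the service; that list updates automatically as matching pods come and go.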

Kubelet: This service runs on each node, reads the container manifests, and ensures the defined containers are started and running.

kubectl: The command-line configuration tool for Kubernetes.

How does Kubernetes work?

✒️A working Kubernetes deployment is called a cluster. You can visualize a Kubernetes cluster as two parts: the control plane and the compute machines, or nodes.

✒️Each node is its own Linux environment and could be either a physical or virtual machine. Each node runs pods, which are made up of containers.

✒️The control plane is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use. Compute machines actually run the applications and workloads.

✒️Kubernetes runs on top of an operating system (Red Hat® Enterprise Linux®, for example) and interacts with pods of containers running on the nodes.

✒️The Kubernetes control plane takes the commands from an administrator (or DevOps team) and relays those instructions to the computing machines.

✒️This handoff works with a multitude of services to automatically decide which node is best suited for the task. It then allocates resources and assigns the pods in that node to fulfill the requested work.

✒️The desired state of a Kubernetes cluster defines which applications or other workloads should be running, along with which images they use, which resources should be made available to them, and other such configuration details.

✒️From an infrastructure point of view, there is little change to how you manage containers. Your control over containers just happens at a higher level, giving you better control without the need to micromanage each separate container or node.

✒️Your work involves configuring Kubernetes and defining nodes, pods, and the containers within them. Kubernetes handles orchestrating the containers.

✒️Where you run Kubernetes is up to you. This can be on bare metal servers, virtual machines, public cloud providers, private clouds, and hybrid cloud environments. One of Kubernetes’ key advantages is it works on many different kinds of infrastructure.

Why do you need Kubernetes?

✒️Kubernetes can help you deliver and manage containerized, legacy, and cloud-native apps, as well as those being refactored into microservices.

✒️To meet changing business needs, your development team needs to be able to rapidly build new applications and services. Cloud-native development starts with microservices in containers, which enables faster development and makes it easier to transform and optimize existing applications.

✒️Production apps span multiple containers, and those containers must be deployed across multiple server hosts. Kubernetes gives you the orchestration and management capabilities required to deploy containers, at scale, for these workloads.

✒️Kubernetes orchestration allows you to build application services that span multiple containers, schedule those containers across a cluster, scale those containers, and manage the health of those containers over time. With Kubernetes, you can take effective steps toward better IT security.

✒️Kubernetes also needs to integrate with networking, storage, security, telemetry, and other services to provide a comprehensive container infrastructure.

✒️Linux containers give your microservice-based apps an ideal application deployment unit and self-contained execution environment. And microservices in containers make it easier to orchestrate services, including storage, networking, and security.

✒️This significantly multiplies the number of containers in your environment, and as those containers accumulate, the complexity also grows.

✒️Kubernetes fixes a lot of common problems with container proliferation by sorting containers together into “pods.” Pods add a layer of abstraction to grouped containers, which helps you schedule workloads and provide necessary services — like networking and storage — to those containers.

✒️Other parts of Kubernetes help you balance loads across these pods and ensure you have the right number of containers running to support your workloads.

✒️With the right implementation of Kubernetes — and with the help of other open-source projects like Open vSwitch, OAuth, and SELinux — you can orchestrate all parts of your container infrastructure.

Evolution of K8s

Kubernetes & Docker🐋


✒️Docker can be used as a container runtime that Kubernetes orchestrates. When Kubernetes schedules a pod to a node, the kubelet on that node will instruct Docker to launch the specified containers.

✒️The kubelet then continuously collects the status of those containers from Docker and aggregates that information in the control plane. Docker pulls containers onto that node and starts and stops those containers.

✒️The difference when using Kubernetes with Docker is that an automated system asks Docker to do those things instead of the admin doing so manually on all nodes for all containers.

Kubernetes & Pinterest👩🏻‍💻

Challenge

After eight years in existence, Pinterest had grown to 1,000 microservices, multiple layers of infrastructure, and a diverse set of setup tools and platforms. In 2016 the company launched a roadmap toward a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.

Solution

The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes.

Influence

“By moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins,” says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest. “We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent fewer instance-hours per day when compared to the previous static cluster.”

“So far it’s been good, especially the elasticity around how we can configure our Jenkins workloads on that Kubernetes shared cluster. That is the win we were pushing for.”

~MICHEAL BENEDICT, PRODUCT MANAGER

With such growth came layers of infrastructure and a diverse set of setup tools and platforms for the different workloads, resulting in an inconsistent and complex end-to-end developer experience and, ultimately, less velocity in getting to production. That experience is what drove the 2016 roadmap toward a new compute platform and its vision of the fastest path from an idea to production.

The first phase involved moving to Docker. “Pinterest has been heavily running on virtual machines, on EC2 instances directly, for the longest time,” says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group. “To solve the problem around packaging software and not make engineers own portions of the fleet and those kinds of challenges, we standardized the packaging mechanism and then moved that to the container on top of the VM. Not many drastic changes. We didn’t want to boil the ocean at that point.”

The first service that was migrated was the monolith API fleet that powers most of Pinterest. At the same time, Benedict’s infrastructure governance team built chargeback and capacity planning systems to analyze how the company uses its virtual machines on AWS. “It became clear that running on VMs is just not sustainable with what we’re doing,” says Benedict. “A lot of resources were underutilized. There were efficiency efforts, which worked fine at a certain scale, but now you have to move to a more decentralized way of managing that. So orchestration was something we thought could help solve that piece.”

By the end of Q1 2018, the team had successfully migrated Jenkins Master to run natively on Kubernetes and also collaborated on the Jenkins Kubernetes Plugin to manage the lifecycle of workers. “We’re currently building the entire Pinterest JVM stack (one of the larger monorepos at Pinterest, which was recently bazelized) on this new cluster,” says Benedict. “At peak, we run thousands of pods on a few hundred nodes.”

Benedict points to a “pretty robust roadmap” going forward. In addition to the Pinterest big data team’s experiments with Spark on Kubernetes, the company collaborated with Amazon’s EKS team on an ENI/CNI plug-in.

Once the Jenkins cluster is up and running out of dark mode, Benedict hopes to establish best practices, including having governance primitives established — including integration with the chargeback system — before moving on to migrating the next service. “We have a healthy pipeline of use cases to be onboarded. After Jenkins, we want to enable support for TensorFlow and Apache Spark. At some point, we aim to move the company’s monolithic API service. If we move that and understand the complexity around that, it builds our confidence,” says Benedict. “It sets us up for migration of all our other services.”

After years of being a cloud-native pioneer, Pinterest is eager to share its ongoing journey. “We are in the position to run things at scale, in a public cloud environment, and test things out in a way that a lot of people might not be able to do,” says Benedict. “We’re in a great position to contribute back some of those learnings.”


Conclusion

✒️It has self-healing abilities. It’s important to know that, by default, Kubernetes self-heals only at the pod level; a Kubernetes solution provider may integrate further self-healing layers to ensure application reliability.

✒️K8s abstracts the underlying computing resources, allowing developers to deploy workloads to the entire cluster rather than to a particular server.

✒️Kubernetes enables workload portability without limiting the types of applications it supports: if an application can run in a container, it can run on Kubernetes.

✒️For swift load balancing, K8s gives every pod its own IP address and a set of pods a single DNS name.

✒️Kubernetes is a portable and cost-effective platform.

✒️Debugging and troubleshooting Kubernetes requires experience and extensive training.

✒️The benefits of K8s are abundant, but realizing them can consume a lot of time, effort, and resources. Teams need to plan time to invest in and familiarize themselves with new processes and workflows.

Thank you, everyone, for reading my article😇

Keep Learning🤩Keep Sharing 🤝🏻

Good Day!
