What Is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google has with running production workloads at scale, combined with best-of-breed ideas and practices from the community.

Kubernetes, at its most basic level, is a system for running and coordinating containerized applications across a cluster of machines. It is a platform designed to completely manage the life cycle of containerized applications and services using methods that provide predictability, scalability, and high availability.

As a Kubernetes user, you can define how your applications should run and the ways they should be able to interact with other applications or the outside world. You can scale your services up or down, perform graceful rolling updates, and switch traffic between different versions of your applications to test features or roll back problematic deployments. Kubernetes provides interfaces and composable platform primitives that allow you to define and manage your applications with a high degree of flexibility, power, and reliability.
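
As a rough sketch of that declarative style, the hypothetical manifest below (the name web, the image nginx:1.25, and the replica count are all placeholder assumptions) asks Kubernetes to keep three copies of an application running and to replace them gradually during an update:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                     # hypothetical application name
spec:
  replicas: 3                   # desired number of identical copies
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1         # at most one copy taken down at a time
      maxSurge: 1               # at most one extra copy started during the update
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25       # placeholder container image
        ports:
        - containerPort: 80

Editing the replica count or the image tag and re-applying the file is the usual way to scale or roll out a new version, and a command such as kubectl rollout undo deployment/web can revert a rollout that misbehaves.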

Why Do You Need Kubernetes?

Real production apps span multiple containers. Those containers must be deployed across multiple server hosts. Security for containers is multilayered and can be complicated. That’s where Kubernetes can help. Kubernetes gives you the orchestration and management capabilities required to deploy containers, at scale, for these workloads. Kubernetes orchestration allows you to build application services that span multiple containers, schedule those containers across a cluster, scale those containers, and manage the health of those containers over time. With Kubernetes you can take real steps towards better IT security.
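
One concrete piece of that health management is a liveness probe. The sketch below is only illustrative (the pod name, port, and probe path are assumptions); it asks Kubernetes to restart the container if its health check stops answering:

apiVersion: v1
kind: Pod
metadata:
  name: api                     # hypothetical name
spec:
  containers:
  - name: api
    image: nginx:1.25           # placeholder container image
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                 # health-check endpoint; a real app might expose /healthz
        port: 80
      initialDelaySeconds: 5    # wait before the first check
      periodSeconds: 10         # then check every ten seconds

Kubernetes probes the endpoint on that schedule and replaces the container when the checks fail, which is how container health is managed over time without manual intervention.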

Kubernetes also needs to integrate with networking, storage, security, telemetry and other services to provide a comprehensive container infrastructure.

What Is Container Orchestration?

Containers support VM-like separation of concerns but with far less overhead and far greater flexibility. As a result, containers have reshaped the way people think about developing, deploying, and maintaining software. In a containerized architecture, the different services that constitute an application are packaged into separate containers and deployed across a cluster of physical or virtual machines. But this gives rise to the need for container orchestration—a tool that automates the deployment, management, scaling, networking, and availability of container-based applications.
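
As a minimal, hedged illustration of that idea, the sketch below packages two parts of a hypothetical application, a web front end and a cache, as separate Deployments (all names and images are placeholders) that the orchestrator can schedule anywhere in the cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                # hypothetical front-end service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:1.25       # placeholder image for the web tier
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cache                   # hypothetical caching service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cache
  template:
    metadata:
      labels:
        app: cache
    spec:
      containers:
      - name: cache
        image: redis:7          # placeholder image for the cache tier

Each service is built, versioned, scaled, and replaced independently, which is exactly the coordination work the orchestrator takes over.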

Kubernetes Architecture

To understand how Kubernetes is able to provide these capabilities, it is helpful to get a sense of how it is designed and organized at a high level. Kubernetes can be visualized as a system built in layers, with each higher layer abstracting the complexity found in the lower levels.

At its base, Kubernetes brings together individual physical or virtual machines into a cluster using a shared network to communicate between each server. This cluster is the physical platform where all Kubernetes components, capabilities, and workloads are configured.

The machines in the cluster are each given a role within the Kubernetes ecosystem. One server (or a small group in highly available deployments) functions as the master server. This server acts as a gateway and brain for the cluster by exposing an API for users and clients, health checking other servers, deciding how best to split up and assign work (known as “scheduling”), and orchestrating communication between other components. The master server acts as the primary point of contact with the cluster and is responsible for most of the centralized logic Kubernetes provides.

The other machines in the cluster are designated as nodes: servers responsible for accepting and running workloads using local and external resources. To help with isolation, management, and flexibility, Kubernetes runs applications and services in containers, so each node needs to be equipped with a container runtime (like Docker or rkt). The node receives work instructions from the master server and creates or destroys containers accordingly, adjusting networking rules to route and forward traffic appropriately.
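
To make the scheduling relationship between master and nodes concrete, the hedged sketch below constrains where a workload may land; the disktype: ssd label is purely illustrative and would have to be attached to a node first (for example with kubectl label nodes <node-name> disktype=ssd):

apiVersion: v1
kind: Pod
metadata:
  name: fast-storage            # hypothetical name
spec:
  nodeSelector:
    disktype: ssd               # only nodes carrying this label are eligible
  containers:
  - name: app
    image: nginx:1.25           # placeholder container image

The master's scheduler compares the requested label against the labels reported by each node and only assigns the workload to a machine that matches.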

How Does Kubernetes Work?

As mentioned above, the applications and services themselves run on the cluster within containers, while the underlying components make sure that the desired state of the applications matches the actual state of the cluster. Users interact with the cluster by communicating with the main API server, either directly or through clients and libraries.
To start up an application or service, a declarative plan is submitted in JSON or YAML defining what to create and how it should be managed.
The master server then takes that plan and figures out how to run it on the infrastructure by examining the requirements and the current state of the system. This group of user-defined applications running according to a specified plan represents Kubernetes’ final layer.
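
For instance, the short manifest below (reusing the hypothetical app: web label from the earlier sketch) declares a Service that routes traffic to whichever containers currently carry that label; submitted with kubectl apply -f service.yaml, it leaves the scheduling and wiring details to the master:

apiVersion: v1
kind: Service
metadata:
  name: web                     # hypothetical service name
spec:
  selector:
    app: web                    # send traffic to pods labelled app: web
  ports:
  - port: 80                    # port exposed inside the cluster
    targetPort: 80              # container port that receives the traffic

Because the plan is declarative, re-applying an edited version of the same file updates the running state rather than creating a duplicate.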

Thus, Kubernetes simplifies the management of storage, secrets, and other application-related resources.