Engineering
Kubernetes
Microservices

Kubernetes service discovery

Kubernetes is a pioneering open-source system that automates the deployment, management, and scaling of containerized applications. In this post, we'll go over a few fundamentals of containerized applications and how service discovery works in Kubernetes.

By Cortex - March 23, 2022

Containerized applications are being adopted by an increasing number of companies worldwide. According to a Gartner report, more than 75% of global organizations will be running containerized applications in production by 2022. Unlike traditional virtualization or physical application hosting, containerized applications are efficient in both storage and CPU usage. This helps developers reduce operational issues in their applications, providing a smoother, more seamless user experience.

Kubernetes, more popularly known as K8s, is a pioneering open-source system that automates the deployment, management, and scaling of containerized applications. To execute commands against Kubernetes clusters, you must use kubectl, the Kubernetes command-line tool. 

But first, what exactly is a service in Kubernetes? To answer that, let's go over a few fundamentals of containerized applications.

The unique problem of service discovery within Kubernetes 

Containerized applications, as the name suggests, are built from small ‘containers’ that can be dynamically created and destroyed to provide functionality to an application. A small container, for example, can be created to display an image file to the user. In Kubernetes, the basic computational units that wrap these containers are called Pods.

A Pod is a group of one or more containers with shared network and storage resources and specific instructions on how to run those containers, complete with its own Pod IP address. A Pod's contents are always co-located and co-scheduled and run in a shared context. In a Kubernetes deployment, the set of Pods running at one moment can be different from the set of Pods running that application a moment later. This leads to a functional problem.

Source: https://kubernetes.io/docs/concepts/services-networking/service/
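To make this concrete, here is a minimal sketch of a Pod manifest with two containers that share the Pod's network and storage context. The Pod name, labels, and images below (web, nginx, busybox) are illustrative placeholders, not something prescribed by Kubernetes itself.

```yaml
# A minimal multi-container Pod: both containers are co-located and
# co-scheduled, share the Pod's single IP address, and can reach each
# other over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web                      # hypothetical Pod name
  labels:
    app: web                     # label that a Service can later select on
spec:
  containers:
    - name: frontend
      image: nginx:1.25          # example image serving content on port 80
      ports:
        - containerPort: 80
    - name: log-agent            # sidecar running alongside the frontend container
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]
```

You could create this Pod with kubectl apply -f pod.yaml and inspect it with kubectl get pod web; both containers appear under the same Pod and share its IP address.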

If one set of Pods provides functionality to another set of Pods inside your cluster, how exactly does the dependent set find the providing set in order to access its resources? In theory, there needs to be a set of defined parameters that helps the dependent set find the IP addresses of the sets it depends on.

Kubernetes presents an ingenious solution to this problem in the form of Services. Services allow Kubernetes to implement an efficient and agile microservice discovery mechanism that optimizes client-side and server-side operations. 

Types of Services in Kubernetes 

A service is an abstraction with a logical pattern that bundles a specific group of Pods together. These groups perform network microservices for users, like performing a calculation or playing a video. Kubernetes services can be categorized into four broad types depending on their structure and use cases.

  1. ClusterIP 

ClusterIP is the default and, by extension, most commonly used service type in Kubernetes. These services are assigned a cluster-internal IP address, making them reachable only from other services within their cluster. ClusterIP services are used for intra-cluster communication, for example when the frontend of an application sends requests to the backend.
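As a rough sketch, assuming a set of backend Pods labeled app: backend that listen on port 8080 (hypothetical names and ports), a ClusterIP service could be declared like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend                # reachable inside the cluster as "backend"
spec:
  type: ClusterIP              # the default type, so this line may be omitted
  selector:
    app: backend               # selects Pods labeled app=backend
  ports:
    - port: 80                 # port exposed on the cluster-internal IP
      targetPort: 8080         # port the selected Pods actually listen on
```

Only workloads inside the cluster can reach this service, for example a frontend Pod calling http://backend:80.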

  2. NodePort

NodePort services are extensions of ClusterIP services. Each NodePort service consists of a ClusterIP service plus a cluster-wide port, opened on every node, that routes external requests to services within the cluster. All of these requests have to enter through that single port reserved for the NodePort service. NodePort services can be contacted from outside the cluster by requesting <NodeIP>:<NodePort>. NodePort services are used to enable external connectivity on a cluster of services. They can also serve as the basis for your own load balancing solution, which helps configure environments that are not fully supported by Kubernetes.
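Building on the same hypothetical backend Pods, a NodePort service adds a port that is opened on every node. The value 30080 below is an illustrative choice within the default 30000-32767 NodePort range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-nodeport
spec:
  type: NodePort
  selector:
    app: backend
  ports:
    - port: 80                 # ClusterIP port, still usable inside the cluster
      targetPort: 8080         # container port on the selected Pods
      nodePort: 30080          # external port; omit it to let Kubernetes pick one
```

External clients can then reach the service at <NodeIP>:30080 on any node in the cluster.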

  3. LoadBalancer

LoadBalancer services are an extension of NodePort services. Each one consists of a NodePort service (complete with its own ClusterIP service and cluster-wide port) to which an external load balancer routes traffic. Each cloud provider (like AWS or Azure) provisions its own native load balancer, which routes requests to your Kubernetes services. LoadBalancer services are the natural choice when using a cloud service to host your Kubernetes cluster.
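A LoadBalancer manifest looks almost identical to the NodePort sketch; the difference is that the cloud provider hosting the cluster provisions an external load balancer and publishes its address in the service's status. The names are again hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-lb
spec:
  type: LoadBalancer           # the cloud provider allocates an external load balancer
  selector:
    app: backend
  ports:
    - port: 80                 # port exposed by the external load balancer
      targetPort: 8080
```

Once the load balancer is provisioned, its address shows up in the EXTERNAL-IP column of kubectl get service backend-lb.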

  4. ExternalName

These services map onto a DNS name instead of a typical selector (such as my-service). The service is mapped to the contents of the externalName field by returning a CNAME record with that value. ExternalName is most commonly used in Kubernetes to represent an external datastore, such as a database that runs outside of Kubernetes.
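A minimal ExternalName sketch, assuming an external database reachable at db.example.com (a placeholder hostname):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-database            # name that in-cluster clients resolve
spec:
  type: ExternalName
  externalName: db.example.com # cluster DNS answers with a CNAME to this hostname
```

Note that no selector is involved here: the cluster DNS simply returns a CNAME record pointing at the external hostname, so in-cluster clients can keep using the name my-database even if the external database moves.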

How service discovery works in Kubernetes 

Here are some of the key components within the Kubernetes service discovery system that you should be aware of:

  • Pods - The smallest computational units in Kubernetes; the Kubernetes equivalent of a container. 
  • Nodes - A Pod always runs on a Node. A Node is a worker machine in Kubernetes and, depending on the cluster, may be either a virtual or a physical machine.
  • Kubernetes Service - An abstraction containing one or several similar Pods and a policy to access them.

Kubernetes lets you attach key-value tags to each API object (including Nodes and Pods). These tags contain metadata that helps identify and group objects with a common attribute. On the object itself these tags are called labels; the matching criteria on the client side are called selectors. Kubernetes uses these labels to associate a specific service with a set of Pods, thereby forming an abstract encapsulation.

These services can be thought of as basic C++ or Java functions in their structure. Each service is simply a set of Pods and logical links between them, working towards a pre-defined purpose. 

Internal or external clients invoke a service via a service manifest that contains selectors, which act as criteria when searching for relevant Pods. Any Pod whose labels match the required selector is discovered and picked up by the service. This multi-level, layered architecture provides faster, more agile service discovery in Kubernetes.
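The sketch below, with hypothetical names and a placeholder image, shows that matching end to end: the Pod carries the label, and the service manifest carries the selector that must match it.

```yaml
# The Pod declares a label...
apiVersion: v1
kind: Pod
metadata:
  name: video-processor-1
  labels:
    app: video-processor       # label attached to the Pod
spec:
  containers:
    - name: worker
      image: example/video-processor:1.0   # placeholder image
      ports:
        - containerPort: 8080
---
# ...and the Service's selector names the same key-value pair, so the
# Pod above is discovered and becomes one of the service's endpoints.
apiVersion: v1
kind: Service
metadata:
  name: video-processor
spec:
  selector:
    app: video-processor       # must match the Pod's labels
  ports:
    - port: 80
      targetPort: 8080
```

Any additional Pod created with the label app: video-processor is picked up automatically, which is why the service keeps working even as individual Pods are created and destroyed.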

Why choose Kubernetes 

Kubernetes is a state-of-the-art solution that helps manage your containerized applications more efficiently than ever before. It is designed to optimize memory storage and CPU usage, substantially reducing the footprint and processing times associated with containerized applications.

With a global community of over 5.6 million developers, Kubernetes is trusted by professionals from around the globe. Switch to Kubernetes to get the best of containerized application orchestration and management.

Summary 

  • It is difficult to orchestrate and arrange all functioning containers within containerized applications to fulfill user requests. 
  • Kubernetes solves this problem by implementing a dynamic, Pod-based structure that divides the application into microservices. 
  • These microservices are tagged using labels and selectors to aid better discovery of service clusters. 
  • The four main categories of services in Kubernetes are ClusterIP, NodePort, LoadBalancer, and ExternalName. 
  • Each service functions according to a specific structure that decides what service requests it is exposed to and how it responds. 
  • Kubernetes’s efficient and agile service discovery mechanism makes it an industry favorite.