Figure 1: Simplified Kubernetes architecture and components
Control Plane, Nodes, and Persistent Storage
Kubernetes’ basic architecture comprises a number of cooperating components, described below.
API Server
The Kubernetes API server handles all requests coming into the cluster. Users, as well as other Kubernetes components, send requests to the Kubernetes API server over HTTPS (port 443). The API server processes each request, performs authentication and authorization according to how the cluster was configured, and updates the Kubernetes persistent data store, usually etcd.
Control Plane Datastore
All Kubernetes cluster data is stored in a single datastore; the default is etcd. A best practice is to run multiple etcd instances to ensure high availability of the cluster. A loss of etcd storage results in a loss of the cluster’s state, so etcd should be backed up for disaster recovery. Since etcd is the single source of truth for the cluster, it is imperative to secure it against malicious actors.
Controllers
Kubernetes uses a control plane with several types of controllers: non-terminating control loops that observe the state of Kubernetes objects and reconcile it with the desired state. The control plane also performs a wide variety of related functions, such as invoking pod admission controllers, setting pod defaults, and injecting sidecar containers into pods, all according to the configuration of the cluster.
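The reconcile pattern behind these control loops can be sketched as follows. `ReplicaState` and `reconcile` are illustrative names for this sketch, not part of any Kubernetes API:

```python
from dataclasses import dataclass

@dataclass
class ReplicaState:
    desired: int   # replicas requested in the object's spec
    observed: int  # replicas currently running

def reconcile(state: ReplicaState) -> list:
    """One pass of a non-terminating control loop: compare observed
    state to desired state and emit the actions needed to converge."""
    diff = state.desired - state.observed
    if diff > 0:
        return [f"create pod #{i}" for i in range(diff)]
    if diff < 0:
        return [f"delete pod #{i}" for i in range(-diff)]
    return []  # already converged; nothing to do

# A real controller would run this repeatedly, re-reading both states
# from the API server on every iteration.
print(reconcile(ReplicaState(desired=3, observed=1)))
# ['create pod #0', 'create pod #1']
```

Note that the loop never terminates in practice: even after convergence, the controller keeps watching so that any later drift (e.g., a crashed pod) is corrected.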
Kubelet
Each node in a Kubernetes cluster runs an agent called the kubelet. The kubelet manages the container runtime (e.g., containerd) on its node: it reads workload definitions from the API server, ensures the requested workloads run and stay healthy, and reports the status of both workloads and the node itself back to the API server.
Scheduler
The Kubernetes scheduler decides which worker node each pod runs on. Scheduling takes into account attributes of a given workload (e.g., resource requirements) and applies custom prioritization settings (e.g., affinity rules) to provision pods on particular Kubernetes worker nodes.
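The two scheduling phases — filtering out nodes that cannot fit a workload, then scoring the remainder — can be sketched as below. The function, node fields, and the flat affinity bonus are illustrative assumptions, not the real kube-scheduler's plugin API:

```python
from typing import Optional

def schedule(pod_cpu_request: int, nodes: dict) -> Optional[str]:
    """Pick a node for a pod requesting pod_cpu_request millicores of CPU."""
    # Filtering phase: drop nodes without enough free CPU.
    feasible = {name: n for name, n in nodes.items()
                if n["free_cpu"] >= pod_cpu_request}
    if not feasible:
        return None  # no node fits; the pod would stay Pending

    # Scoring phase: prefer free capacity, with a bonus for affinity matches.
    def score(n: dict) -> int:
        return n["free_cpu"] + (300 if n.get("affinity_match") else 0)

    return max(feasible, key=lambda name: score(feasible[name]))

nodes = {
    "node-a": {"free_cpu": 500},
    "node-b": {"free_cpu": 300, "affinity_match": True},
    "node-c": {"free_cpu": 100},
}
print(schedule(250, nodes))  # node-b: it fits, and affinity outweighs raw capacity
```

In this toy example node-c is filtered out, and node-b beats node-a on score because the affinity bonus outweighs node-a's extra free CPU.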
Network Proxy
Each Kubernetes worker node runs a network proxy that provides connectivity to application services in the cluster. The proxy reflects the services defined in the cluster and enables simple stream and round-robin forwarding to a set of backends, regardless of the node on which they reside in the cluster.
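Round-robin forwarding of the kind described above can be sketched as a rotation over a service's backend endpoints. The class name and the endpoint addresses here are made up for illustration:

```python
from itertools import cycle

class RoundRobinService:
    """Minimal sketch of round-robin backend selection for one service."""

    def __init__(self, endpoints: list):
        self._backends = cycle(endpoints)

    def pick_backend(self) -> str:
        # Each new connection to the service is handed to the next
        # backend pod in turn, wherever in the cluster it runs.
        return next(self._backends)

svc = RoundRobinService(["10.0.1.5:8080", "10.0.2.9:8080", "10.0.3.2:8080"])
print([svc.pick_backend() for _ in range(4)])
# ['10.0.1.5:8080', '10.0.2.9:8080', '10.0.3.2:8080', '10.0.1.5:8080']
```

The fourth pick wraps around to the first endpoint, which is the essential round-robin property.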
Other Kubernetes Components (Platform Applications)
Even the most basic Kubernetes cluster requires a number of additional components and platform applications, described below.
DNS
Production deployments of Kubernetes include a DNS server, which is used for service discovery. All Services and pods in a cluster are assigned a domain name that the DNS server resolves. This is ordinarily handled by Kubernetes DNS, which by default is backed by popular services such as CoreDNS (coredns.io). With Kubernetes DNS configured, Services (and pods) can be addressed by a defined naming convention in the form of A/AAAA or SRV records, giving clients and other Services in the cluster a predictable way to reach them.
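The naming convention for a Service's A/AAAA record can be sketched as a simple string template. `cluster.local` is the default cluster domain; real clusters may configure a different one, and the function name here is illustrative:

```python
def service_dns_name(service: str, namespace: str,
                     cluster_domain: str = "cluster.local") -> str:
    """Build the DNS name under which a Service is resolvable in-cluster."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_dns_name("my-service", "my-namespace"))
# my-service.my-namespace.svc.cluster.local
```

A client pod in the same namespace can usually use the short name `my-service` alone, because the cluster DNS search path fills in the rest.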
kubectl
The official command-line tool for Kubernetes is kubectl. All standard Kubernetes commands start with kubectl, followed by a subcommand and, usually, a resource type (e.g., kubectl get pods).
Metrics Server and Metrics API
Kubernetes has two components for exposing usage metrics to users and tools. First, a cluster can include the metrics server, which is the centralized aggregation point for resource metrics collected from each node's kubelet. Second is the Kubernetes metrics API, which provides API access to these aggregated metrics for clients such as kubectl top.
Web UI (Dashboard)
Kubernetes has an official GUI called Dashboard. That GUI is distinct from vendor GUIs that have been designed for specific Kubernetes derivative products. Note that the Kubernetes Dashboard release version does not necessarily match the Kubernetes release version.