Static Pods, Manual Scheduling, Labels, and Selectors — Kubernetes

Always learning
5 min read · Jul 31, 2024


Static pods are pods created and managed directly by the kubelet daemon on a specific node, without the API server observing them. If a static pod crashes, the kubelet restarts it.

The control plane is not involved in the lifecycle of a static pod. The kubelet also tries to create a mirror pod on the Kubernetes API server for each static pod, so that static pods are visible there.

i.e., when you run kubectl get pods, the mirror object of the static pod is also listed.

You rarely have to deal with static pods. Static pods are usually used by software bootstrapping Kubernetes itself.

For example, kubeadm uses static pods to bring up Kubernetes control plane components such as the API server and the controller manager.

The kubelet can watch a directory on the host file system (configured using the --pod-manifest-path argument to the kubelet), or periodically sync pod manifests from a web URL (configured using the --manifest-url argument).

When kubeadm brings up the Kubernetes control plane, it generates pod manifests for the API server, controller manager, and the other control plane components in a directory that the kubelet is monitoring. The kubelet then brings up these components as static pods.
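As a minimal sketch (the filename, pod name, and image are illustrative, not from kubeadm), a manifest dropped into the kubelet's watched directory — /etc/kubernetes/manifests by default with kubeadm — is started by the kubelet as a static pod, with no Deployment or scheduler involved:

```yaml
# /etc/kubernetes/manifests/static-web.yaml (hypothetical example)
# The kubelet on this node picks up the file and runs the pod;
# removing the file stops the pod.
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx:latest
    ports:
    - containerPort: 80
```

The mirror pod then appears in kubectl get pods with the node name appended to the pod name, but it cannot be deleted through the API server — only by removing the manifest file on the node.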

Kubernetes scheduling is the process of assigning pods to matching nodes in a cluster. The scheduler watches for newly created pods and finds the best node for each of them.

Manual scheduling means running a pod on a particular node by explicitly setting the nodeName field in the pod's YAML configuration file.

Here are some common reasons you may want to schedule pods manually:

  1. You want pods with high resource requirements to run on larger nodes.
  2. You want to spread pods across nodes for high availability.
  3. You want pods to run on nodes with special hardware like GPUs.
  4. You want a pod locality with services already running on a node.

Methods Used for Kubernetes Pod Scheduling

  1. Node Selector
  2. Node Affinity/Anti-Affinity
  3. Taints and Tolerations
  4. Taints/Tolerations combined with Node Affinity

NodeSelector is the simplest recommended way to schedule a pod on a specific node. nodeSelector is a field of PodSpec that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must carry each of the indicated key-value pairs as labels.
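As a sketch, assuming a node labeled disktype=ssd (the label key/value and node name are illustrative), a pod using nodeSelector would look like:

```yaml
# First label a node (hypothetical node name):
#   kubectl label nodes cka-cluster-worker disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  nodeSelector:
    disktype: ssd        # pod is only eligible for nodes carrying this label
  containers:
  - name: nginx
    image: nginx:latest
```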

Node Affinity/Anti-Affinity is a method to set rules for which nodes the scheduler selects. This feature is a generalization of the node selector.

The rules are defined using the familiar concepts of custom labels on nodes and selectors specified in pods, and they can be either required or preferred, depending on how strictly you want the scheduler to enforce them.

Taints are applied to nodes in a Kubernetes cluster to repel pods from being scheduled on those nodes, except for pods that explicitly tolerate the taint.

Taints are typically used to mark nodes with specific attributes or limitations, such as reserving certain nodes for particular workloads or preventing pods from being placed on nodes with specific hardware characteristics.


Tolerations are applied to pods and indicate that the pod may be scheduled on nodes with specific taints. A toleration allows, but does not require, the pod to land on a node with a matching taint.

By setting tolerations, you can allow certain pods onto nodes with specific attributes or restrictions, even if those nodes are tainted.
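A sketch of the pair (the node name and the gpu=true key/value are illustrative): a node is tainted to repel general workloads, and a pod's toleration lets it land there:

```yaml
# Taint the node (hypothetical node name):
#   kubectl taint nodes cka-cluster-worker2 gpu=true:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"   # tolerates the NoSchedule taint applied above
  containers:
  - name: cuda
    image: nvidia/cuda:12.2.0-base-ubuntu22.04
```

Note that the toleration alone does not pin the pod to the tainted node; to guarantee placement there, combine the toleration with node affinity, as in method 4 above.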

Node affinity allows pods to specify rules about which nodes they can run on, based on node labels.

There are two types of node affinity:

  1. Required
  2. Preferred.

With required node affinity, pods can only be placed on nodes matching the stated rules.

With preferred node affinity, the scheduler tries to match the rules but is not obligated to.
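Both flavors in one sketch (the label keys and values are illustrative): the required rule is a hard constraint, while the preferred rule is weighted advice to the scheduler:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-affinity
spec:
  affinity:
    nodeAffinity:
      # Hard rule: the node MUST carry disktype=ssd
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
      # Soft rule: prefer nodes in this zone when possible
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["us-east-1a"]
  containers:
  - name: nginx
    image: nginx:latest
```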

Labels are key-value pairs attached to Kubernetes objects such as pods and nodes. They carry identifying metadata that is meaningful to users and tools, and you can add or change them at any time without affecting the object itself.

Selectors are queries that match objects by their labels. Controllers such as Services, Deployments, and ReplicaSets use label selectors to find the pods they manage, and you can use selectors yourself to filter objects in API queries.
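For example (the names here are illustrative), a Service uses a label selector to find its backend pods; any pod carrying the label app: web is picked up as an endpoint:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web            # matches pods labeled app=web
  ports:
  - port: 80
    targetPort: 80
# The equivalent ad-hoc query from the CLI:
#   kubectl get pods -l app=web
```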

The static pods that are primarily responsible for managing cluster resources and networking are created under the kube-system namespace. List all the pods in this namespace:

kubectl get pods -n kube-system

Master node components

  1. kube-apiserver-kind-control-plane → The core API server that serves the Kubernetes API.
  2. kube-controller-manager-kind-control-plane → Manages the core control loops.
  3. kube-proxy-v2zlv → Acts as a network proxy. It manages network rules.
  4. kube-scheduler-kind-control-plane → Responsible for scheduling pods across nodes.

Control-plane node

In a kind cluster, the control-plane node runs as a Docker container.

Here the container name is cka-cluster1-control-plane. Exec into it:

docker exec -it cka-cluster1-control-plane bash

Go to the /etc/kubernetes/manifests directory to see the YAML configurations for our static pods:

  1. etcd.yaml
  2. kube-controller-manager.yaml
  3. kube-apiserver.yaml
  4. kube-scheduler.yaml

Schedule a pod manually

Create a YAML config for our pod and save it as manual-pod.yml. We want the pod to be created on the cka-cluster-worker2 node.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-manual
spec:
  nodeName: cka-cluster-worker2
  containers:
  - name: nginx
    image: nginx:latest

Run the below command from the directory where the file is saved:

kubectl apply -f manual-pod.yml

Verify that the nginx-manual pod was created on the cka-cluster-worker2 node:

kubectl describe pods/nginx-manual

Thank you 🙏 for taking the time to read our blog.


Always learning

A student who keeps learning...