- to test if required ports are open
- set up a listener in the target vm:
nc -l <port_number>
- connect from another vm:
nc <target_vm_ip> <port_number>
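- nc can also probe without holding a session open; a quick sketch, assuming the openbsd netcat that ships with ubuntu (the ports shown are among the control-plane ports kubeadm checks):
nc -zv <target_vm_ip> 6443    # kube-apiserver; -z scans without sending data, -v is verbose
nc -zv <target_vm_ip> 10250   # kubelet API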
- swap needs to be disabled as it affects resource management, performance, and stability of kubernetes (by default the kubelet will not start while swap is enabled).
sudo swapoff -a
- disables swap until reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
- disables swap permanently by commenting out any line in /etc/fstab that contains the word swap
- check that it's disabled by inspecting the file to confirm, as well as running
free -h
after reboot.
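- with swap fully off, the Swap row should read all zeros, e.g.:
free -h | grep -i swap   # expect something like: Swap:  0B  0B  0B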
- install the br_netfilter and overlay kernel modules
- the br_netfilter module is necessary for pod communication and for implementing network policies. it allows k8s to apply iptables rules to bridged traffic, ensuring that the desired network policies are enforced.
sudo modprobe br_netfilter
lsmod | grep br_netfilter
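- a useful side check: the bridge sysctls configured further below only exist once this module is loaded
ls /proc/sys/net/bridge/   # bridge-nf-call-iptables etc. appear only after br_netfilter is loaded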
- the overlay filesystem is used by container runtimes to manage images and their filesystems. among other functions, it enables efficient container storage and fast container creation, which in turn means fast container startup times.
sudo modprobe overlay
- modprobe loads the module into the kernel
lsmod | grep overlay
- if the module is loaded we should see an entry for it
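- not required for the install, but a minimal sketch of the copy-on-write behaviour runtimes rely on (all paths here are made up for the demo):
mkdir -p /tmp/ovl/{lower,upper,work,merged}
echo "from lower" > /tmp/ovl/lower/file.txt
sudo mount -t overlay overlay -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work /tmp/ovl/merged
cat /tmp/ovl/merged/file.txt                          # read falls through to the lower layer
echo "changed" | sudo tee /tmp/ovl/merged/file.txt    # write triggers a copy-up into the upper layer
ls /tmp/ovl/upper                                     # the modified copy lives here; lower is untouched
sudo umount /tmp/ovl/merged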
- for persistence between reboots create this conf file
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
- create a kubernetes.conf sysctl file with the following settings
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
- enables bridged IPv6 traffic to be processed by ip6tables.
- enables bridged IPv4 traffic to be processed by iptables
- enables IP forwarding, allowing the system to forward packets between NICs
sudo sysctl --system
- loads and applies kernel parameters from all configuration files, typically /etc/sysctl.conf and any files in the /etc/sysctl.d/ directory
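- quick check that the new values took effect:
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# net.bridge.bridge-nf-call-iptables = 1
# net.ipv4.ip_forward = 1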
- install container runtime: containerd
curl -LO https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-2.0.4-linux-amd64.tar.gz
curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
mkdir -p /usr/local/lib/systemd/system/
mv containerd.service /usr/local/lib/systemd/system/
# generate containerd config
mkdir -p /etc/containerd/
containerd config default > /etc/containerd/config.toml
# enable `systemd` cgroup driver
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd
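- sanity check before moving on (version output assumes the v2.0.4 tarball above):
systemctl is-active containerd   # expect: active
sudo ctr version                 # client and server should both respond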
- control groups (cgroups) are used to constrain the resources allocated to processes. both the kubelet and the underlying container runtime interface with cgroups to enforce resource management for pods and containers. to interface with cgroups a cgroup driver is required, and it's critical that the kubelet and the container runtime use the same driver. on systems that use systemd as the init system it is recommended to use the systemd cgroup driver instead of the default cgroupfs driver.
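- a quick way to confirm the two sides agree on systemd; note the kubelet config file only exists after kubeadm init further below:
grep SystemdCgroup /etc/containerd/config.toml        # containerd side: SystemdCgroup = true
sudo grep cgroupDriver /var/lib/kubelet/config.yaml   # kubelet side: cgroupDriver: systemd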
- install runc
curl -LO https://github.com/opencontainers/runc/releases/download/v1.2.6/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc
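- verify the binary landed on the path:
sudo runc --version   # should report 1.2.6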
- install CNI plugins
curl -LO https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz
mkdir -p -m 755 /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz
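- the standard plugins should now be unpacked:
ls /opt/cni/bin   # expect bridge, host-local, loopback, portmap, and friends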
- install kubeadm, kubelet, kubectl
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gpg
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# prevent these utilities from being upgraded along with system upgrades
sudo apt-mark hold kubelet kubeadm kubectl
kubeadm version
kubelet --version
kubectl version --client
- configure crictl / set runtime-endpoint
sudo crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
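- with the endpoint set, crictl should talk to containerd without warnings; empty lists are normal before the cluster is initialised:
sudo crictl ps -a    # containers known to the runtime
sudo crictl images   # images pulled through the CRI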
- initialise control plane
- for --pod-network-cidr we used 192.168.0.0/16 because that is the default CIDR used by the pod network add-on we'll install later, Calico.
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=10.0.0.4 --node-name master --cri-socket=unix:///var/run/containerd/containerd.sock --v=5
I0320 19:52:31.668395 7087 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0320 19:52:31.668453 7087 version.go:192] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
[init] Using Kubernetes version: v1.32.3
[preflight] Running pre-flight checks
I0320 19:52:31.985986 7087 checks.go:561] validating Kubernetes and kubeadm version
I0320 19:52:31.986013 7087 checks.go:166] validating if the firewall is enabled and active
I0320 19:52:31.997432 7087 checks.go:201] validating availability of port 6443
I0320 19:52:31.997588 7087 checks.go:201] validating availability of port 10259
I0320 19:52:31.997612 7087 checks.go:201] validating availability of port 10257
I0320 19:52:31.997636 7087 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0320 19:52:31.997648 7087 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0320 19:52:31.997659 7087 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0320 19:52:31.997665 7087 checks.go:278] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0320 19:52:31.997672 7087 checks.go:428] validating if the connectivity type is via proxy or direct
I0320 19:52:31.997685 7087 checks.go:467] validating http connectivity to first IP address in the CIDR
I0320 19:52:31.997701 7087 checks.go:467] validating http connectivity to first IP address in the CIDR
I0320 19:52:31.997711 7087 checks.go:102] validating the container runtime
I0320 19:52:31.998199 7087 checks.go:637] validating whether swap is enabled or not
I0320 19:52:31.998245 7087 checks.go:368] validating the presence of executable ip
I0320 19:52:31.998268 7087 checks.go:368] validating the presence of executable iptables
I0320 19:52:31.998288 7087 checks.go:368] validating the presence of executable mount
I0320 19:52:31.998304 7087 checks.go:368] validating the presence of executable nsenter
I0320 19:52:31.998321 7087 checks.go:368] validating the presence of executable ethtool
I0320 19:52:31.998334 7087 checks.go:368] validating the presence of executable tc
I0320 19:52:31.998348 7087 checks.go:368] validating the presence of executable touch
I0320 19:52:31.998364 7087 checks.go:514] running all checks
I0320 19:52:32.006702 7087 checks.go:399] checking whether the given node name is valid and reachable using net.LookupHost
I0320 19:52:32.011428 7087 checks.go:603] validating kubelet version
I0320 19:52:32.051712 7087 checks.go:128] validating if the "kubelet" service is enabled and active
I0320 19:52:32.069583 7087 checks.go:201] validating availability of port 10250
I0320 19:52:32.069662 7087 checks.go:327] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0320 19:52:32.069695 7087 checks.go:201] validating availability of port 2379
I0320 19:52:32.069717 7087 checks.go:201] validating availability of port 2380
I0320 19:52:32.069742 7087 checks.go:241] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0320 19:52:32.071283 7087 checks.go:832] using image pull policy: IfNotPresent
W0320 19:52:32.071714 7087 checks.go:846] detected that the sandbox image "" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
I0320 19:52:32.072198 7087 checks.go:863] image exists: registry.k8s.io/kube-apiserver:v1.32.3
I0320 19:52:32.072497 7087 checks.go:863] image exists: registry.k8s.io/kube-controller-manager:v1.32.3
I0320 19:52:32.072748 7087 checks.go:863] image exists: registry.k8s.io/kube-scheduler:v1.32.3
I0320 19:52:32.073450 7087 checks.go:863] image exists: registry.k8s.io/kube-proxy:v1.32.3
I0320 19:52:32.074076 7087 checks.go:863] image exists: registry.k8s.io/coredns/coredns:v1.11.3
I0320 19:52:32.074394 7087 checks.go:863] image exists: registry.k8s.io/pause:3.10
I0320 19:52:32.074778 7087 checks.go:863] image exists: registry.k8s.io/etcd:3.5.16-0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0320 19:52:32.074826 7087 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0320 19:52:32.575652 7087 certs.go:473] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 10.0.0.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0320 19:52:33.209193 7087 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0320 19:52:33.272572 7087 certs.go:473] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0320 19:52:33.417716 7087 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0320 19:52:33.638010 7087 certs.go:473] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [10.0.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [10.0.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0320 19:52:34.291364 7087 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0320 19:52:34.689345 7087 kubeconfig.go:111] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0320 19:52:34.782313 7087 kubeconfig.go:111] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0320 19:52:34.868510 7087 kubeconfig.go:111] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0320 19:52:35.129108 7087 kubeconfig.go:111] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0320 19:52:35.231768 7087 kubeconfig.go:111] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0320 19:52:35.300030 7087 local.go:66] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0320 19:52:35.300060 7087 manifests.go:104] [control-plane] getting StaticPodSpecs
I0320 19:52:35.300199 7087 certs.go:473] validating certificate period for CA certificate
I0320 19:52:35.300260 7087 manifests.go:130] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0320 19:52:35.300271 7087 manifests.go:130] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0320 19:52:35.300276 7087 manifests.go:130] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0320 19:52:35.300284 7087 manifests.go:130] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0320 19:52:35.300302 7087 manifests.go:130] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0320 19:52:35.301060 7087 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0320 19:52:35.301080 7087 manifests.go:104] [control-plane] getting StaticPodSpecs
I0320 19:52:35.301228 7087 manifests.go:130] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0320 19:52:35.301239 7087 manifests.go:130] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0320 19:52:35.301251 7087 manifests.go:130] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0320 19:52:35.301257 7087 manifests.go:130] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0320 19:52:35.301265 7087 manifests.go:130] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0320 19:52:35.301272 7087 manifests.go:130] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0320 19:52:35.301281 7087 manifests.go:130] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0320 19:52:35.301910 7087 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0320 19:52:35.301948 7087 manifests.go:104] [control-plane] getting StaticPodSpecs
I0320 19:52:35.302100 7087 manifests.go:130] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0320 19:52:35.302522 7087 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0320 19:52:35.302538 7087 kubelet.go:70] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0320 19:52:35.640359 7087 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0320 19:52:35.640382 7087 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0320 19:52:35.640388 7087 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0320 19:52:35.640393 7087 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.0013663s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 4.501888099s
I0320 19:52:41.145464 7087 kubeconfig.go:665] ensuring that the ClusterRoleBinding for the kubeadm:cluster-admins Group exists
I0320 19:52:41.146496 7087 kubeconfig.go:738] creating the ClusterRoleBinding for the kubeadm:cluster-admins Group by using super-admin.conf
I0320 19:52:41.159147 7087 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0320 19:52:41.195435 7087 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0320 19:52:41.210074 7087 uploadconfig.go:132] [upload-config] Preserving the CRISocket information for the control-plane node
I0320 19:52:41.210548 7087 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "master" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: glz0qu.mo5ix3t6353mx2db
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0320 19:52:41.279428 7087 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I0320 19:52:41.279838 7087 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0320 19:52:41.280030 7087 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0320 19:52:41.286001 7087 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0320 19:52:41.346317 7087 request.go:661] Waited for 60.22878ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s
I0320 19:52:41.545685 7087 request.go:661] Waited for 194.258284ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s
I0320 19:52:41.550524 7087 kubeletfinalize.go:91] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0320 19:52:41.551362 7087 kubeletfinalize.go:145] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
I0320 19:52:42.174855 7087 request.go:661] Waited for 138.332974ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s
I0320 19:52:42.346486 7087 request.go:661] Waited for 158.32817ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s
I0320 19:52:42.546015 7087 request.go:661] Waited for 194.337232ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.4:6443 --token glz0qu.mo5ix3t6353mx2db \
  --discovery-token-ca-cert-hash sha256:672af8eb1f8bd4e6b89549ee0cc0b5c16ee01d3afea954706e45409e38127850
- if anything goes wrong with the init process, first run
kubeadm reset
before starting over
- install pod network add-on, Calico in this case
curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/calico.yaml -O
kubectl apply -f calico.yaml
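- to confirm the add-on came up (pod names will differ per cluster), wait for the calico pods to reach Running and the node to report Ready:
kubectl get pods -n kube-system
kubectl get nodes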