The purpose of this guide is to set up a simple Kubernetes cluster with one control-plane node (or “master” node) and one worker node (or “minion” node) on CentOS. This guide is not intended for production use; however, it could serve as a starting point for building a production cluster. It also assumes that you are installing the Kubernetes cluster on freshly installed hosts with no pre-existing software or configuration. You should have your networking and hostnames configured, SELinux disabled, and, for this simple dev cluster, the IP-to-hostname mappings already set up in /etc/hosts.
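As a sketch of those prerequisites, the steps might look like the following; the hostnames and IP addresses shown here are placeholders, so substitute your own:

```shell
# Disable SELinux now and across reboots (placeholder steps; adjust to your policy)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Map hostnames to IPs on every host -- these names and addresses are hypothetical
cat <<EOF >> /etc/hosts
192.168.1.10 k8s-master
192.168.1.11 k8s-worker1
EOF
```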
You will need to turn swap off on your hosts. Kubelet will not work otherwise.
swapoff -a
sed -i '/swap/d' /etc/fstab
Setup the Docker repo for each host:
sudo yum -y install yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker on each host:
sudo yum -y install docker-ce
sudo systemctl start docker
sudo systemctl enable docker
We are now ready to begin installing Kubernetes.
Setup the Kubernetes repo on each host:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install the following packages on each host:
yum install -y kubelet kubeadm kubectl
Enable and start the kubelet service on each host:
systemctl enable kubelet
systemctl start kubelet
On the master node, set the following firewall rules:
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload
On the worker node, set the following firewall rules:
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload
On each host, set net.bridge.bridge-nf-call-iptables to ‘1’ in your sysctl config so that bridged packets are properly processed by iptables for filtering and port forwarding:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
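Note that these settings only exist once the br_netfilter kernel module is loaded. If `sysctl --system` complains that the net.bridge keys are unknown, a minimal sketch of the fix is:

```shell
# Load the bridge netfilter module now...
modprobe br_netfilter

# ...and on every boot (systemd reads this directory at startup)
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf

# Re-apply the sysctl settings
sysctl --system
```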
On the master node, initialize the cluster with a pod network CIDR of 10.244.0.0/16, which flannel requires:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
*** IMPORTANT *** Make note of the last line of output (kubeadm join …). You will need to run this on the worker node later.
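The join line will look roughly like the following; the address, token, and hash here are placeholders only, not values you can reuse:

```shell
# Hypothetical example -- use the exact line printed by your own kubeadm init
kubeadm join 192.168.1.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```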
On the master node, to start using the cluster as your regular (non-root) user, set up the kubeconfig file:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Set Up Pod Network
A Pod Network allows nodes within the cluster to communicate. We’re using flannel for this purpose.
On the master node, install flannel with the command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Wait a few seconds, and then, on the master node, confirm that everything is “Ready” or “Running”. If not, check back a minute later.
kubectl get nodes
kubectl get pods --all-namespaces
On your worker nodes, run the “kubeadm join” line you got from “kubeadm init” earlier.
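If you no longer have that join line, you can generate a fresh one on the master node (join tokens expire after 24 hours by default):

```shell
# Print a new, complete join command, including a new token and the CA cert hash
kubeadm token create --print-join-command
```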
Flush iptables on all nodes
sudo -s
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
echo "iptables --flush" >> /etc/rc.d/rc.local
echo "iptables -t nat --flush" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
systemctl enable rc-local
exit
Back on the master node, check that all node statuses are “Ready”:
kubectl get nodes -o wide
You have now successfully set up a simple Kubernetes bare metal cluster.
The post Kubernetes Bare Metal Walkthrough appeared first on San Diego Linux - Linux Consultant.