
Installing Kubernetes 1.8.1 on CentOS 7 with flannel

Prerequisites:-

You should have at least two VMs (1 master and 1 slave) before creating the cluster, in order to test the full functionality of k8s.

1] Master :-

Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD ( suggested )

2] Slave :-

Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD ( suggested )

3] Also, make sure of the following things.

  • Network interconnectivity between VMs.

  • hostnames

  • Prefer to assign static IPs.

  • DNS entries ( see the /etc/hosts example after the firewall commands )

  • Disable SELinux


$ vi /etc/selinux/config
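Inside /etc/selinux/config, the line to change looks like this once SELinux is disabled ( a reboot is needed for the file change to take effect; "setenforce 0" in step 1 disables enforcement for the current session ):

SELINUX=disabled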

  • Disable and stop the firewall. ( If you are not familiar with opening the required ports in firewalld )


$ systemctl stop firewalld

$ systemctl disable firewalld
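For the hostname and DNS entries mentioned above, the simplest option on a small cluster is to add entries to /etc/hosts on every VM. The IPs and hostnames below are only placeholders for illustration:

$ cat <<EOF >> /etc/hosts
172.31.14.55   k8s-master
172.31.14.56   k8s-slave1
EOF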

The following steps create a k8s cluster on the above VMs using kubeadm on CentOS 7.

Step 1] Installing kubelet and kubeadm on all your hosts

$ ARCH=x86_64

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-${ARCH}
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

$ setenforce 0

$ yum install -y docker kubelet kubeadm kubectl kubernetes-cni

$ systemctl enable docker && systemctl start docker

$ systemctl enable kubelet && systemctl start kubelet
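Optionally, confirm that docker is running and the k8s binaries are installed ( exact versions will differ on your system ):

$ systemctl status docker

$ kubeadm version

$ kubectl version --client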

You might have an issue where the kubelet service does not start. You can see the error in /var/log/messages; if you see an error like the following:
Oct 16 09:55:33 k8s-master kubelet: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Oct 16 09:55:33 k8s-master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE

Then you will have to run "kubeadm init" first, as in the next step, and then start the kubelet service.
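Note:- kubeadm 1.8 preflight checks may also complain if swap is enabled on the VM. If "kubeadm init" reports an error about swap, turn it off ( and comment out the swap line in /etc/fstab to make it permanent across reboots ):

$ swapoff -a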

Step 2.1] Initializing your master

$ kubeadm init

Note:-

  1. Execute the above command on the master node. It will pick one of the network interfaces to be advertised as the API server address. If you want to use a different interface, pass "--apiserver-advertise-address=<ip-address>" as an argument, so the whole command will be like this-


$ kubeadm init --apiserver-advertise-address=<ip-address>

 

  2. K8s gives you the flexibility to use a pod network of your choice, like flannel, calico etc. I am using the flannel network. For flannel we need to pass the pod network CIDR explicitly, so now the whole command will be-


$ kubeadm init --apiserver-advertise-address=<ip-address> --pod-network-cidr=10.244.0.0/16

Example:- $ kubeadm init --apiserver-advertise-address=172.31.14.55 --pod-network-cidr=10.244.0.0/16

Step 2.2] Start using the cluster

$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf
-> Use the same network CIDR as above; it is also configured in the flannel yaml file that we are going to apply in step 3.

-> At the end of the "kubeadm init" output you will get a token along with the join command; make a note of it, as it will be used to join the slaves.
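Once KUBECONFIG is set, you can quickly confirm that kubectl is talking to the new cluster, for example:

$ kubectl cluster-info

$ kubectl get nodes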

 

Step 3] Installing a pod network

Different pod networks are supported by k8s; which one to use is up to you. For this demo I am using the flannel network. As of k8s 1.6, the cluster is more secure by default: it uses RBAC ( Role Based Access Control ), so make sure that the network you are going to use supports RBAC and k8s 1.6+.

  1. Create the flannel RBAC resources :


$ kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Check whether the pods are being created :

$ kubectl get pods --all-namespaces

  2. Create the flannel pods :


$ kubectl apply -f   https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check whether the pods are being created :

$ kubectl get pods --all-namespaces -o wide

-> At this stage all your pods should be in the Running state.

-> The option "-o wide" gives more details, like the pod IP and the slave node where it is deployed.
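If you only want to check the flannel and DNS pods, you can filter the output, for example:

$ kubectl get pods --all-namespaces | grep -E 'flannel|dns'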

 

Step 4] Joining your nodes

 

SSH to the slave and execute the following command to join the existing cluster.

$ kubeadm join --token <token> <master-ip>:<master-port>

You might also have a ca-cert-hash; make sure you copy the entire join command from the "kubeadm init" output to join the nodes.
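For reference, on k8s 1.8 the full join command printed by "kubeadm init" usually has the following shape ( all values below are placeholders ):

$ kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>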

Go to the master node and check whether the new slave has joined:

$ kubectl get nodes

-> If the slave is not Ready, wait a few seconds; it will become Ready soon.

 

Step 5] Verify your cluster by running a sample nginx application

$ vi  sample_nginx.yaml

---------------------------------------------
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
------------------------------------------------------

$ kubectl create -f sample_nginx.yaml

 

Verify whether the pods are getting created:

$ kubectl get pods

$ kubectl get deployments
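You can also wait for the deployment to finish rolling out with, for example:

$ kubectl rollout status deployment/nginx-deployment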

 

Now, let's expose the deployment so that the service is accessible to other pods in the cluster.

$ kubectl expose deployment nginx-deployment --name=nginx-service --port=80 --target-port=80 --type=NodePort

 

The above command will create a service with the name "nginx-service". The service will be accessible on the port given by the "--port" option and will forward traffic to "--target-port", which is the port of the pod's container. By default the service is accessible within the cluster only; in order to access it using your host IP, the "NodePort" type is used.

 

--type=NodePort :- when this option is given, k8s tries to find a free port in the range 30000-32767 and binds the underlying service to it on all the VMs of the cluster. If no such port is found, it will return an error.

 

Check whether the service is created:

$ kubectl get svc
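To find the node port that was allocated to the service ( used in the curl commands below ), one option is:

$ kubectl describe svc nginx-service | grep NodePort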

 

Try to curl -

$ curl <service-IP>:80

Run this from all the VMs, including the master; the Nginx welcome page should be accessible.

$ curl <master-ip>:<nodePort>

$ curl <slave-IP>:<nodePort>

Execute these from all the VMs; the Nginx welcome page should be accessible.

Also, access the nginx home page in a browser using <node-IP>:<nodePort>.
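Once the verification is done, the demo deployment and service can be removed with:

$ kubectl delete service nginx-service

$ kubectl delete deployment nginx-deployment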
