Kubernetes
Kubernetes online training courses
https://www.edx.org/course/introduction-to-kubernetes
https://kubernetes.io/docs/tutorials/
Kubernetes Administration training
Day 2
Two machines/nodes. On the Master node:
sudo apt-get update
sudo apt-get install -y apt-transport-https
sudo su -
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker.service

# Run kubeadm init and apply only ONE of the following three network plugins:

# Weave
kubeadm init
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Flannel
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f "https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml"

# Calico
kubeadm init --pod-network-cidr=10.233.64.0/18
kubectl apply -f "https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml"
kubectl apply -f "https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml"
To configure your kubectl client, on the Master node run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Test that kubectl is working by listing the nodes:
kubectl get nodes
If you made a mistake with kubeadm init, you can reset it using:
kubeadm reset
On the Worker Node:
sudo apt-get update
sudo apt-get install -y apt-transport-https
sudo su -
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubernetes-cni
systemctl enable docker.service
kubeadm join <connection parameters>
The last command, with its parameters, is printed by kubeadm init on the master. For example:
kubeadm join 172.31.44.155:6443 --token gf65x4.cyvrzvyx0w530iwe --discovery-token-ca-cert-hash sha256:6da727e3db049c66d2e26913a413c5188e36791d16ad1e5c6306a638a49ef15d
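If the join command is no longer at hand, a new one can be printed on the master (this creates a fresh token):
kubeadm token create --print-join-command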
On the Master Node
Create the following file marti.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: martiusername
  namespace: martinamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: martiusername
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: martiusername
  namespace: martinamespace
Create the namespace first if it does not exist, then apply the file:
kubectl create namespace martinamespace
kubectl create -f marti.yaml
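To verify that both objects were created:
kubectl get serviceaccount martiusername -n martinamespace
kubectl get clusterrolebinding martiusername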
Instead of applying the upstream dashboard manifest directly:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
download https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml and change the Service definition at the end so it uses a NodePort (keep the existing selector):
spec:
  type: NodePort
  ports:
  - port: 8443
    nodePort: 30080
Then apply the edited file:
kubectl apply -f kubernetes-dashboard-EDITED.yaml
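To confirm the Service is now exposed on the NodePort:
kubectl get svc kubernetes-dashboard -n kube-system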
Using Firefox, try to connect to the dashboard at https://<public-ip>:30080
To find the token to use for login, run on the master:
kubectl -n martinamespace describe secret $(kubectl -n martinamespace get secret | grep martiusername | awk '{print $1}')
For troubleshooting:
docker ps
kubectl get svc -n kube-system
kubectl get pods -n kube-system
To see why a pod crashed:
kubectl describe -n kube-system pod kubernetes-dashboard-57df4db6b-n5wmm
kubectl describe -n kube-system pod coredns-86c58d9df4-j2vkj

To delete the dashboard we created:
kubectl delete svc kubernetes-dashboard -n kube-system
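The pod logs usually show the actual crash reason; --previous prints the logs of the last terminated container:
kubectl logs -n kube-system kubernetes-dashboard-57df4db6b-n5wmm
kubectl logs -n kube-system kubernetes-dashboard-57df4db6b-n5wmm --previous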
Create the following file, for example netpol.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-pod
spec:
  podSelector:
    matchLabels:
      app: destination-pod
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: source-pod
    ports:
    - protocol: TCP
      port: 80
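Apply and list the policies (note that NetworkPolicy is only enforced if the network plugin supports it, e.g. Calico or Weave but not plain Flannel):
kubectl apply -f netpol.yaml
kubectl get networkpolicy
With pods labeled app=source-pod and app=destination-pod, traffic to port 80 of the destination should only be allowed from the source pod; everything else is blocked by default-deny.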
nginx-deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
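Assuming the file is saved as nginx-deployment.yaml, apply it and check the objects:
kubectl apply -f nginx-deployment.yaml
kubectl get deployment nginx-deployment
kubectl get svc nginx-svc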
ubuntu-pod:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command: ["/bin/bash"]
        args: ["-c", "while true; do sleep 3600; done"]
To connect to a Pod:
kubectl get pods
kubectl exec -it ubuntu-deployment-5b6896777c-4lv8g bash
kubectl describe pod ubuntu-deployment-5b6896777c-4lv8g
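From inside the ubuntu pod, the nginx Service can be reached by name through cluster DNS (curl is not in the base ubuntu image, so install it first):
apt-get update && apt-get install -y curl
curl http://nginx-svc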
Day 3
Download the Helm binary from: https://github.com/helm/helm/releases
kubectl create ns tiller
./helm init --tiller-namespace tiller
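To check that Tiller started in its namespace:
kubectl get pods -n tiller
./helm version --tiller-namespace tiller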
Copy the ingress chart to local disk:
./helm fetch <reponame>/<chartname>
./helm fetch stable/nginx-ingress
helm install -f values.yaml --name <deploymentname> <reponame>/<chartname>
./helm install -f values.yaml --name ingressmarti stable/nginx-ingress --tiller-namespace tiller
A dry run shows what would be installed without deploying anything:
./helm install -f values.yaml stable/nginx-ingress --name nginx --dry-run --tiller-namespace tiller
To scaffold a new chart:
./helm create <chartname>
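helm create generates a standard chart skeleton, roughly:
<chartname>/
  Chart.yaml      chart metadata (name, version)
  values.yaml     default configuration values
  charts/         chart dependencies
  templates/      Kubernetes manifest templates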
Deployment: https://pastebin.com/S6GmXdYm
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxdemos/hello:plain-text
        ports:
        - containerPort: 80
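To exercise the RollingUpdate strategy, change the image and watch the rollout (the :latest tag is only an example):
kubectl set image deployment/coffee coffee=nginxdemos/hello:latest
kubectl rollout status deployment/coffee
kubectl rollout undo deployment/coffee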
Ingress: https://pastebin.com/Rd7aNTF7
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
Service: https://pastebin.com/qvU2Q5u4
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: coffee
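Apply all three manifests, then test the route through the nginx ingress controller installed earlier; the file names and the controller's NodePort below are assumptions, check the real port with kubectl get svc:
kubectl apply -f coffee-deployment.yaml -f coffee-svc.yaml -f cafe-ingress.yaml
kubectl get ingress cafe-ingress
curl -H "Host: cafe.example.com" http://<node-ip>:<ingress-nodeport>/coffee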