Installing and Deploying a Kubernetes Cluster on Ubuntu 18, and Installing Helm

First, a note on my environment: I built an Ubuntu 18 VM on Windows 10, and for various reasons I am not subject to network restrictions, so the installation went smoothly.

Install 

1. Install and enable Docker

sudo apt install docker.io
sudo systemctl enable docker
docker --version


2. Add the Kubernetes signing key and repository

sudo apt install curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"


3. Install kubeadm

sudo apt install kubeadm
kubeadm version
# Handy commands for restarting the kubelet service:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
# or, step by step:
sudo systemctl stop kubelet
sudo systemctl enable kubelet
sudo systemctl start kubelet

4. Disable swap

sudo swapoff -a

sudo sed -i '/ swap / s/^/#/' /etc/fstab
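As a sanity check, the sed expression above just comments out any line mentioning swap so it is ignored on the next boot. A minimal sketch of its effect on a made-up fstab (the /tmp/fstab.sample path and its entries are hypothetical):

```shell
# Build a throwaway fstab with one swap entry (contents are made up)
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Same substitution as above: prefix '#' to every line containing " swap "
sed -i '/ swap / s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

The root filesystem entry is left untouched; only the swap line gains a leading `#`.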


I only ran the commands above on one Ubuntu machine (if you have several machines, you need to run them on all of them; I handled this by copying the VM).

5. Prepare two VMs, k8s-master and k8s-node (I renamed the machine above k8s-master, copied it, and named the copy k8s-node)

sudo hostnamectl set-hostname k8s-master  # run on k8s-master (IP: 192.168.255.229)
sudo hostnamectl set-hostname k8s-node    # run on k8s-node (IP: 192.168.255.230)

Deploy

1. Initialize Kubernetes on the master

Running kubeadm config print init-defaults prints the default configuration kubeadm uses to initialize a cluster. A cluster initialized with those defaults taints the master node with node-role.kubernetes.io/master:NoSchedule, which stops the master from being scheduled to run workloads. Since this test environment has only two nodes, change the taint's effect from NoSchedule to PreferNoSchedule, and also pin the Kubernetes version to v1.15.2. Save the following as kubeadm.yaml:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.7
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.2
networking:
  podSubnet: 10.244.0.0/16
# First (failed) attempt: sudo kubeadm init --pod-network-cidr=192.168.255.229/2
kubeadm init --config kubeadm.yaml  # --ignore-preflight-errors=Swap

# sudo kubeadm init --pod-network-cidr=192.168.100.0/2 — don't base the pod network on the existing machine's IP; the fix is described under "Kubernetes-dashboard pod is crashing again and again" below.

If during execution you hit detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd",

follow the steps in the Container runtimes documentation:

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["192.168.100.20:80"]
}
EOF
# While at it, I also added my private Docker registry here.
mkdir -p /etc/systemd/system/docker.service.d

# Restart docker.
systemctl daemon-reload
systemctl restart docker

If you hit a port 10251 and 10252 are in use error, run netstat -lnp | grep 1025 and kill the reported process IDs.
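To sketch what "kill the process ID" means here: the PID is the number before the slash in netstat's last column. The line below is made up for illustration:

```shell
# A sample line of the kind `netstat -lnp | grep 1025` prints (made up)
line='tcp6       0      0 :::10251        :::*      LISTEN      4301/kube-scheduler'

pid=${line##* }   # keep the last whitespace-separated field: 4301/kube-scheduler
pid=${pid%%/*}    # drop everything from the first slash onward: 4301
echo "$pid"       # this is the PID you would pass to `sudo kill`
```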

2. On the master node, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Check the master
kubectl get nodes

3. Deploy a Pod network & view the status

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods --all-namespaces
sudo kubectl get nodes


4. Add the worker node

Run on k8s-node:

kubeadm join 192.168.254.229:6443 --token ewlb93.v0ohocpvncaxgl16 --discovery-token-ca-cert-hash sha256:2522834081168fbe4b5b05854b964e76f1ea8bac6f8fc5e2be21c93c6a27c427


Check the node info on k8s-master:

 

5. Install the Dashboard add-on & check its status

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml 
kubectl get deployment kubernetes-dashboard -n kube-system
kubectl get svc kubernetes-dashboard -n kube-system
#kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml 

To keep things simple, I granted the Dashboard admin privileges directly (otherwise access is denied); see https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Start the proxy

kubectl proxy
# Or, to expose it to other machines, specify an address:
kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='^*$'

Visit http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

But visiting that address gave me the following error:

root@k8s-master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS             RESTARTS   AGE
kube-system   coredns-5c98db65d4-97xrf                1/1     Running            0          83m
kube-system   coredns-5c98db65d4-vvtfc                1/1     Running            0          83m
kube-system   etcd-k8s-master                         1/1     Running            0          83m
kube-system   kube-apiserver-k8s-master               1/1     Running            0          83m
kube-system   kube-controller-manager-k8s-master      1/1     Running            0          83m
kube-system   kube-flannel-ds-amd64-gbg49             1/1     Running            1          80m
kube-system   kube-flannel-ds-amd64-hmrcp             1/1     Running            0          82m
kube-system   kube-proxy-lbp5k                        1/1     Running            0          80m
kube-system   kube-proxy-szkb8                        1/1     Running            0          83m
kube-system   kube-scheduler-k8s-master               1/1     Running            0          83m
kube-system   kubernetes-dashboard-7d75c474bb-p5nz5   0/1     CrashLoopBackOff   6          10m
root@k8s-master:~# kubectl logs kubernetes-dashboard-7d75c474bb-p5nz5 -n kube-system
2019/08/07 11:34:33 Using in-cluster config to connect to apiserver
2019/08/07 11:34:33 Using service account token for csrf signing
2019/08/07 11:34:33 Starting overwatch
2019/08/07 11:35:03 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
root@k8s-master:~#

I later found the solution at https://stackoverflow.com/questions/52273029/kubernetes-dashboard-pod-is-crashing-again-and-again:

Make sure you understand the difference between Host network, Pod network and Service network. These 3 networks can not overlap. For example --pod-network-cidr=192.168.0.0/16 must not include the IP address of your host, change it to 10.0.0.0/16 or something smaller if necessary.

So the cluster has to be initialized with a pod network CIDR that does not overlap the host network — here, the podSubnet of 10.244.0.0/16 set in kubeadm.yaml above — and then everything was okay.
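The non-overlap requirement can be checked up front. A small sketch (assumes python3 is installed; the addresses are the ones used in this walkthrough):

```shell
host_ip=192.168.100.7    # the master's address in this walkthrough
pod_cidr=10.244.0.0/16   # the podSubnet from kubeadm.yaml, flannel's default

# Exit status is 0 (ok) when the host IP is NOT inside the pod CIDR
if python3 -c "import ipaddress, sys; sys.exit(ipaddress.ip_address('$host_ip') in ipaddress.ip_network('$pod_cidr'))"; then
  echo "ok: host IP is outside the pod CIDR"
else
  echo "conflict: pick a different --pod-network-cidr"
fi
```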

Create an account

kubectl create serviceaccount dashboard -n default
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode

Copy the decoded token and use it to log in to the dashboard.
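The token in the secret is just base64-encoded text, which is why base64 --decode recovers it. A tiny round-trip sketch with a stand-in string (not a real token):

```shell
# Encode a stand-in string the way Kubernetes stores tokens in secrets
sample=$(printf 'not-a-real-token' | base64)
echo "$sample"

# Decode it exactly as in the kubectl pipeline above
printf '%s' "$sample" | base64 --decode; echo
```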

 

 

The Kubernetes installation creates many service accounts by default, each with different access rights. To find the corresponding token, you can use:

kubectl -n kube-system get secret
kubectl -n kube-system describe secret certificate-controller-token-4xr9x

Above we logged in with a token. Now let's use the dashboard account created above to generate a kubeconfig file, and log in with that file from now on:

# Configure the cluster info
kubectl config set-cluster kubernetes --server=192.168.100.11:6443 --kubeconfig=/root/dashboard.conf
# Write the token into the credentials (print the token with:)
# kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
kubectl config set-credentials dashboard --token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRhc2hib2FyZC10b2tlbi12dDl4OSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkYXNoYm9hcmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxZmM1ZDc1ZS0xMjE2LTQwMDgtYThhOS03ZjEwZGQ1NWJjNWEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkYXNoYm9hcmQifQ.Neb5_blRSig6IU5oPtRIQlQhcELaeI8uu7jeiVEdiR3CLiCZYyiI7X6uNsrpGKAkR-OkGM1gOp09-pmxFI6m4lKHYu9S7R1MNigmQrxfZB4RJ-iYZCNp3Rra7mFrluwY_yMbzuZ__XeYShSOiO1VAS2ezWFGk9adgtbiWZkef_NxmYdwEmTAGkmazhatK9SGDWBea-1seoJx-SGFyA9j0gNcWqNrX93ozFmuNtYrPZSwhYkul-q4NHOz4Dp4Ux1C7gZzTIgBySaYZd5tiJIAmZ-6CV-ukmPtFn7tVlNaDkK4K5N6jzyDttlvHZJtWqBR7iWTyamAKAbycm_BmaQR4Q --kubeconfig=/root/dashboard.conf
# Configure the context and the current context
kubectl config set-context dashboard@kubernetes --cluster=kubernetes --user=dashboard --kubeconfig=/root/dashboard.conf
kubectl config use-context dashboard@kubernetes --kubeconfig=/root/dashboard.conf
kubectl config view --kubeconfig=/root/dashboard.conf

Copy /root/dashboard.conf to the host machine; in the browser, choose Kubeconfig authentication, load the file, and click Sign in to get access, as shown:

 

 

 

Tearing down the cluster

To undo what kubeadm did, first cordon off the node and make sure it is drained before shutting it down.

On the master node, run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Then, on the node being removed, reset kubeadm's installed state:

kubeadm reset

If, after rebooting the master node, you hit the following error:

The connection to the server 10.2.0.165:6443 was refused - did you specify the right host or port? (the troubleshooting article of the same name in the references below has the solution), run:

sudo -i
swapoff -a
exit
strace -eopenat kubectl version

# If the above doesn't fix it, run the following:
systemctl status docker      # check Docker's status
systemctl status kubelet     # check the kubelet's status
netstat -pnlt | grep 6443    # port 6443 should now be listening
journalctl -xeu kubelet      # read the kubelet logs to diagnose the problem

systemctl restart kubelet    # finally, restart the kubelet

-------------------------2019-08-20--------------------------

Deploying common Kubernetes components

Installing Helm

Helm consists of the helm command-line client and the server-side tiller, and is very simple to install. Download the helm CLI into /usr/local/bin on the master node; here we use version 2.14.1:

curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/

To install the server-side tiller, this machine also needs kubectl and a kubeconfig file set up so that kubectl can reach the apiserver and work normally. The master node here already has kubectl configured.

Because the Kubernetes APIServer has RBAC access control enabled, we need to create a service account for tiller (tiller) and assign it a suitable role; see Role-based Access Control in the Helm docs for details. For simplicity, we bind the built-in cluster-admin ClusterRole to it directly. Create a helm-rbac.yaml file:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

Next, deploy tiller with helm; by default it is deployed into the kube-system namespace:

kubectl create -f helm-rbac.yaml
helm init --service-account tiller --skip-refresh
kubectl get pod -n kube-system -l app=helm
helm version

Finally, on k8s-master, point the helm chart repository at the mirror provided by Azure:

helm repo add stable http://mirror.azure.cn/kubernetes/charts

Deploying Nginx Ingress with Helm

To expose services in the cluster to the outside world we need an Ingress, so next we deploy Nginx Ingress onto Kubernetes with Helm. The Nginx Ingress Controller is deployed on the cluster's edge node; for high availability of Kubernetes edge nodes, see the earlier write-up on HA for Kubernetes Ingress edge nodes in a bare-metal environment. The Ingress Controller uses hostNetwork.

We use k8s-master (192.168.100.7) as the edge node and give it a label:

root@k8s-master:~/linux-amd64# kubectl label node k8s-master node-role.kubernetes.io/edge=
node/k8s-master labeled
root@k8s-master:~/linux-amd64# kubectl get node
NAME         STATUS   ROLES         AGE   VERSION
k8s-master   Ready    edge,master   34m   v1.15.2
k8s-node     Ready    <none>        28m   v1.15.2

The values file ingress-nginx.yaml for the stable/nginx-ingress chart is as follows:

controller:
  replicaCount: 1
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule

The nginx ingress controller's replicaCount is 1, so it will be scheduled onto the k8s-master edge node. We do not set externalIPs on the nginx ingress controller service; instead, hostNetwork: true makes the controller use the host machine's network.

helm repo update
helm install stable/nginx-ingress -n nginx-ingress --namespace ingress-nginx -f ingress-nginx.yaml
helm repo add stable http://mirror.azure.cn/kubernetes/charts

If visiting http://192.168.100.7 returns default backend, the deployment is complete.

kubectl -n ingress-nginx exec nginx-ingress-controller-xxxxx -- cat /etc/nginx/nginx.conf
kubectl get pod -n ingress-nginx -o wide

Deploying the dashboard with Helm

kubernetes-dashboard.yaml

image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
  - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
  - secretName: frognew-com-tls-secret
    hosts:
    - k8s.frognew.com
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: PreferNoSchedule
rbac:
  clusterAdminRole: true

Install:

helm install stable/kubernetes-dashboard -n kubernetes-dashboard --namespace kube-system -f kubernetes-dashboard.yaml
kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubectl describe -n kube-system secret/kubernetes-dashboard-token-xxx
helm search kubernetes-dashboard
kubectl get pods -n kube-system -o wide

Edit the local hosts file:

192.168.100.11 k8s.frognew.com

 

 

References:

Some problems encountered installing Kubernetes

K8s tinkering diary (0): installing and deploying a K8s cluster on Ubuntu 18.04

Install and Deploy Kubernetes on Ubuntu 18.04 LTS

Install and Configure Kubernetes (k8s) 1.13 on Ubuntu 18.04 LTS / Ubuntu 18.10

Deploy Kubernetes cluster using kubeadmin on Ubuntu Server

Kubernetes on Ubuntu 18.04 - Master and Dashboard setup

Installing Kubernetes 1.11 with kubeadm

Dashboard problems on a kubeadm-deployed k8s cluster (1.11.1)

Understand Docker and K8s in 10 minutes

18 children's illustrations to instantly understand Kubernetes

Several ways to access Kubernetes from outside

Going deep with K8s: how the external network reaches business applications

How the external network accesses business applications in k8s

Troubleshooting kubectl Error: The connection to the server x.x.x.x:6443 was refused – did you specify the right host or port?

The road to learning Kubernetes (19): authenticated access to the Kubernetes dashboard
