Part 1. Components and machine environment (every step below must be performed on each master and node)
OS: CentOS 7.5 x86_64
Container runtime: Docker 18.06-ce
Kubernetes: 1.14.3
IP address      | Hostname         | Role   | CPU  | Memory |
192.168.100.150 | master.ilinux.io | master | >=2c | >=2G   |
192.168.100.156 | node01.ilinux.io | node   | >=2c | >=2G   |
192.168.100.157 | node02.ilinux.io | node   | >=2c | >=2G   |
1. Edit /etc/hosts on the master and every node so the names resolve as follows:
192.168.100.150 master.ilinux.io master
192.168.100.156 node01.ilinux.io node01
192.168.100.157 node02.ilinux.io node02
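The entries above can be sanity-checked with a short loop. This is only a sketch run against a temporary copy, so it is safe to try anywhere; on a real node, point HOSTS_FILE at /etc/hosts instead.

```shell
# Sketch: verify that all three cluster hostnames are present in a hosts file.
# Uses a temporary copy so the check is harmless; set HOSTS_FILE=/etc/hosts
# on a real master or node.
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
192.168.100.150 master.ilinux.io master
192.168.100.156 node01.ilinux.io node01
192.168.100.157 node02.ilinux.io node02
EOF

for h in master.ilinux.io node01.ilinux.io node02.ilinux.io; do
  if grep -qw "$h" "$HOSTS_FILE"; then
    echo "$h: OK"
  else
    echo "$h: MISSING"
  fi
done
rm -f "$HOSTS_FILE"
```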
2. Synchronize host time (here against Internet NTP servers):
[root@master ~]# systemctl start chronyd.service && systemctl enable chronyd.service
[root@master ~]# systemctl status chronyd.service
3. Disable the firewall and SELinux:
[root@master ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]# setenforce 0
[root@master ~]# vim /etc/selinux/config
SELINUX=disabled
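The vim edit above can also be scripted. The sketch below shows the non-interactive equivalent on a temporary copy; on a real node the target would be /etc/selinux/config.

```shell
# Sketch: non-interactive equivalent of editing /etc/selinux/config in vim.
# Demonstrated on a temporary copy so it can be run safely anywhere.
CONF=$(mktemp)
echo 'SELINUX=enforcing' > "$CONF"

# Replace the enforcing setting with disabled, exactly what the manual edit does.
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$CONF"

cat "$CONF"
rm -f "$CONF"
```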
4. Disable swap devices (kubeadm's preflight checks fail while swap is enabled, unless explicitly overridden):
[root@master ~]# swapoff -a
[root@master ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
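The sed above comments out every fstab line that mentions swap, which keeps swap disabled across reboots. A sketch on a temporary copy of a typical fstab makes the effect visible without touching the real file:

```shell
# Sketch: what the sed above does to /etc/fstab, shown on a temporary copy
# with a made-up CentOS-style swap entry.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Comment out every line containing "swap" (same command as above).
sed -i 's/.*swap.*/#&/' "$FSTAB"

grep swap "$FSTAB"   # the swap line now starts with '#'
rm -f "$FSTAB"
```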
Part 2. Deploying the Kubernetes cluster
5. Install Docker, kubelet, and kubeadm on the master and every node, and run Docker and kubelet as daemons.
For installing Docker, see the earlier post: http://www.javashuo.com/article/p-tihursoc-by.html
(1) Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains (on every master and node):
[root@master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master ~]# sysctl --system
(2) Configure a domestic Kubernetes yum repository. Google's repositories cannot be reached directly from mainland China, so use the Aliyun mirror (on every master and node):
[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
(3) Install the packages:
[root@master ~]# yum install -y kubelet kubeadm kubectl
[root@node01 ~]# yum install -y kubelet kubeadm
kubelet communicates with the rest of the cluster and manages the lifecycle of the Pods and containers on its node. kubeadm is Kubernetes' automated deployment tool; it lowers the difficulty of deployment and speeds it up. kubectl is the Kubernetes cluster management CLI.
Tip: if yum complains that packages or mirrors cannot be found, run yum makecache to refresh the repository metadata.
(4) [root@master ~]# systemctl daemon-reload
(5) [root@master ~]# systemctl start kubelet && systemctl enable kubelet   // start kubelet on the master and every node
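After `sysctl --system`, the two bridge settings should report 1. The sketch below checks the keys in a temporary copy of k8s.conf, since the live values (under /proc/sys/net/bridge/) only exist on a node with the br_netfilter module loaded:

```shell
# Sketch: confirm the two bridge settings were written as intended.
# Parses a temporary copy of k8s.conf; on a real node you would instead
# read /proc/sys/net/bridge/bridge-nf-call-iptables after `sysctl --system`.
K8S_CONF=$(mktemp)
cat > "$K8S_CONF" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

for key in net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-iptables; do
  # Split "key = value" and print the value for the matching key.
  val=$(awk -F' = ' -v k="$key" '$1 == k {print $2}' "$K8S_CONF")
  echo "$key=$val"
done
rm -f "$K8S_CONF"
```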
6. Initialize the cluster by running kubeadm init on the master:
[root@master ~]# kubeadm init --kubernetes-version=1.14.3 \
  --apiserver-advertise-address=192.168.100.150 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
// part of the output after the command completes:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.150:6443 --token cxins6.pxbyomo4pp1mnrao \
    --discovery-token-ca-cert-hash sha256:35876ef6f2e5fe7eb5c7bb709dbd5e09d0e9e7d3adf41cbe708eec4fb586c8d6
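If the join command scrolls away, it can be recovered from a saved copy of the init log; on a live master, `kubeadm token create --print-join-command` also regenerates a fresh join command. The sketch below parses the log line shown above (saved here into a temporary file for illustration):

```shell
# Sketch: recover the join token and CA-cert hash from a saved
# `kubeadm init` log. The sample content is the join line shown above.
INIT_LOG=$(mktemp)
cat > "$INIT_LOG" <<'EOF'
kubeadm join 192.168.100.150:6443 --token cxins6.pxbyomo4pp1mnrao \
    --discovery-token-ca-cert-hash sha256:35876ef6f2e5fe7eb5c7bb709dbd5e09d0e9e7d3adf41cbe708eec4fb586c8d6
EOF

# GNU grep with PCRE lookbehind pulls out the value after each flag.
token=$(grep -oP '(?<=--token )\S+' "$INIT_LOG")
hash=$(grep -oP '(?<=--discovery-token-ca-cert-hash )\S+' "$INIT_LOG")
echo "token: $token"
echo "hash:  $hash"
rm -f "$INIT_LOG"
```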
7. Configure the kubectl client:
[root@master ~]# mkdir -p /root/.kube
[root@master ~]# sudo cp /etc/kubernetes/admin.conf /root/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok
A STATUS of "Healthy" above means the component is working. Otherwise, investigate the error; if it cannot be resolved, run "kubeadm reset" to reset the cluster and then initialize it again.
[root@master ~]# kubectl get nodes
NAME               STATUS     ROLES    AGE   VERSION
master.ilinux.io   NotReady   master   10m   v1.14.3
The master is "NotReady" at this point because no network add-on has been installed yet; it becomes Ready once the network is deployed. Next, deploy flannel.
8. Deploy the flannel network add-on (on the master only):
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Now check the cluster state:
[root@master ~]# kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
master.ilinux.io   Ready    master   17m   v1.14.3
The cluster is now Ready, and worker nodes can join it.
9. Join the worker nodes to the cluster:
[root@node01 ~]# kubeadm join 192.168.100.150:6443 --token 2dt1wp.oudskargctjss991 \
    --discovery-token-ca-cert-hash sha256:15aa0537c14d50df4fc9f45b6bdff0c30f8ef7114463a12e022e33619936266c
// part of the output:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Wait a moment after the join completes, then check the cluster state on the master. At this point, a minimal cluster containing the core components is up!
[root@master ~]# kubectl get nodes
NAME               STATUS   ROLES    AGE     VERSION
master.ilinux.io   Ready    master   34m     v1.14.3
node01.ilinux.io   Ready    <none>   6m14s   v1.14.3
node02.ilinux.io   Ready    <none>   6m8s    v1.14.3
Part 3. Installing additional add-on components
10. Check the cluster's API advertise address:
[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.100.150:6443
KubeDNS is running at https://192.168.100.150:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Check the cluster version:
[root@master ~]# kubectl version --short
Client Version: v1.14.3
Server Version: v1.14.3
11. Install the Dashboard to manage the cluster through a web UI.
(1) Download the Dashboard yaml file:
[root@master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
(2) Modify part of the file: point the image at a Docker Hub mirror (the default k8s.gcr.io registry is unreachable from mainland China) and expose the Service on NodePort 30001:
[root@master ~]# sed -i 's/k8s.gcr.io/loveone/g' kubernetes-dashboard.yaml
[root@master ~]# sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' kubernetes-dashboard.yaml
(3) Deploy the Dashboard:
[root@master ~]# kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
(4) After creation, check that the services are running:
[root@master ~]# kubectl get deployment kubernetes-dashboard -n kube-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           89s
[root@master ~]# kubectl get services -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   61m
kubernetes-dashboard   NodePort    10.102.234.209   <none>        443:30001/TCP            16m
[root@master ~]# netstat -ntlp | grep 30001
tcp6       0      0 :::30001       :::*       LISTEN      17306/kube-proxy
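The second sed in step (2) is easy to misread. The sketch below applies the same command to a minimal, made-up Service fragment (not the real kubernetes-dashboard.yaml) so its effect is visible: after the targetPort line it inserts a nodePort and switches the Service type to NodePort.

```shell
# Sketch: effect of the nodePort sed above, demonstrated on a minimal
# Service fragment instead of the full kubernetes-dashboard.yaml.
SVC=$(mktemp)
cat > "$SVC" <<'EOF'
spec:
  ports:
    - port: 443
      targetPort: 8443
EOF

# Same sed as in step (2): GNU sed's a\ appends text after the match;
# "\ " preserves leading spaces and "\n" starts a second appended line.
sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' "$SVC"

cat "$SVC"
rm -f "$SVC"
```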
Open the Dashboard in Firefox at https://192.168.100.150:30001
Other browsers such as Chrome will refuse the connection with a security warning, because the Dashboard serves a self-signed certificate.
Create an admin service account and retrieve its token for logging in to the Dashboard:
[root@master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-9hglw
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 30efdd50-92bd-11e9-91e3-000c296bd9bc

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tOWhnbHciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzBlZmRkNTAtOTJiZC0xMWU5LTkxZTMtMDAwYzI5NmJkOWJjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.Bg9FOIr6RkepjCFav8tbkbTALGEX7bZJMNOYMOrYhFPhnhCs1RSxop7pCGBtdjug_Zpsb9UJ1WNWTsCInUlMYtSHkbaqVLZQEdIgD6jGb177CxIZBcCuxmxxQm0JMJdYjc6Y_1wYSTJGHtmWOHa70pUEcKo9I0LonTUfHCZh5PgS3JrwiTrsqe1RGyz3Jz4p9EIVPfcxmKCowSuapinOTezAWK2XAUhk2h5utXgag6RRnrPcHtlncZzW5fMTSfdAZv5xlaI64AM__qiwOTqyK-14xkda5nbk9DGhN5UwhkHzyvU6ApGT7A9Tr3j3QkMov9gEyVIDbSbBaSj8xBt36Q
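The awk in the command above selects the secret's name from the first column of `kubectl get secret` output. A sketch against a made-up sample of that output shows exactly what it extracts:

```shell
# Sketch: how awk '/dashboard-admin/{print $1}' picks the secret name out
# of `kubectl get secret` output. The sample below imitates that output;
# the second entry is invented to show that only the matching row is printed.
SAMPLE=$(mktemp)
cat > "$SAMPLE" <<'EOF'
NAME                          TYPE                                  DATA   AGE
dashboard-admin-token-9hglw   kubernetes.io/service-account-token   3      1m
default-token-abcde           kubernetes.io/service-account-token   3      60m
EOF

awk '/dashboard-admin/{print $1}' "$SAMPLE"
rm -f "$SAMPLE"
```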