Two CentOS 7.5 hosts
cat /etc/hosts
192.168.100.101 k8s-master
192.168.103.102 k8s-node1
Service network: 10.96.0.0/16
Pod network: 10.81.0.0/16
Set the hostnames:
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
Disable the firewall and SELinux:
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Adjust kernel parameters:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
Turn off swap:
swapoff -a
Edit /etc/fstab and remove the swap entry (a sketch of commenting it out follows below).
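To make the swap change persistent across reboots, the swap line in /etc/fstab has to be removed or commented out. A minimal sketch, assuming a standard fstab where the swap entry has a whitespace-delimited "swap" field (back the file up first):
cp /etc/fstab /etc/fstab.bak
# comment out every line whose fields include "swap"
sed -ri '/\sswap\s/s/^#*/#/' /etc/fstab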
yum install -y epel-release
yum install -y wget
1. Back up the existing repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
2. Download the new CentOS-Base.repo into /etc/yum.repos.d/
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
3. Then rebuild the yum cache with yum makecache
yum clean all
yum makecache
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
Install Docker from the yum repository
https://yq.aliyun.com/articles/110806
Install the required utility packages
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Enable the stable repository
yum-config-manager --enable docker-ce-stable
Disable it
yum-config-manager --disable docker-ce-stable
List the available Docker versions
yum list docker-ce --showduplicates | sort -r
Install the latest version, or pin a specific one
yum install docker-ce -y
yum install -y docker-ce-18.03.0.ce
Verify
docker info
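The steps above do not show starting the Docker daemon; on a fresh install it usually has to be started before docker info will return anything. A minimal sketch:
systemctl enable docker
systemctl start docker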
Use a third-party image registry, or download the images in advance.
If you are installing the latest version of Kubernetes, you can also use the Alibaba mirror (a sketch for pre-pulling the images follows below).
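If k8s.gcr.io is unreachable, one option is to pre-pull the control-plane images from the Alibaba Cloud mirror and retag them to the names kubeadm expects. A sketch for v1.13.1; the tags below are illustrative, so verify the exact list with kubeadm config images list once kubeadm is installed in the next step:
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 kube-scheduler:v1.13.1 \
           kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24 coredns:1.2.6; do
  docker pull $MIRROR/$img                    # pull from the mirror
  docker tag  $MIRROR/$img k8s.gcr.io/$img    # retag to the name kubeadm looks for
  docker rmi  $MIRROR/$img                    # optional: drop the mirror tag
done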
CentOS / RHEL / Fedora:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Then install directly with yum:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
# Note: kubelet will not start successfully at this point; it starts automatically once the cluster is initialized.
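The install above pulls the newest packages in the repo. To match the --kubernetes-version used in the init command below, the package versions can be pinned instead (a sketch, assuming v1.13.1 packages are available in the repo):
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1 --disableexcludes=kubernetes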
kubeadm init --kubernetes-version=v1.13.1 --pod-network-cidr=10.81.0.0/16 --apiserver-advertise-address=192.168.100.101
Prepare the kubeconfig file for kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Command parameters (a combined example follows the list):
--pod-network-cidr: the Pod network CIDR, default 192.168.0.0/16.
--service-cidr: the Service network CIDR, default 10.96.0.0/12.
--kubernetes-version: the Kubernetes version to deploy; kubeadm builds from different times support different Kubernetes versions, and an unsupported version produces an error.
--apiserver-advertise-address: the IP address the API server advertises it is listening on; if unset, the default network interface is used.
--apiserver-bind-port: the port the API server listens on, default 6443.
--ignore-preflight-errors: ignore the named preflight errors; by default, enabled swap causes an error, and this flag can be omitted if swap is disabled.
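For reference, a hypothetical kubeadm init invocation combining the flags above (the values are illustrative and mirror this environment):
kubeadm init \
  --kubernetes-version=v1.13.1 \
  --pod-network-cidr=10.81.0.0/16 \
  --service-cidr=10.96.0.0/16 \
  --apiserver-advertise-address=192.168.100.101 \
  --apiserver-bind-port=6443 \
  --ignore-preflight-errors=Swap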
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Note: the default Calico pool is 192.168.0.0/16; if the host IPs are also in that range there will be a conflict, so edit the manifest by hand. For example, change it to 10.81.0.0/16:
- name: CALICO_IPV4POOL_CIDR
  value: "10.81.0.0/16"
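A sketch of editing the pool before applying: download the manifest, replace the CIDR, then apply (the sed pattern assumes 192.168.0.0/16 only appears as the CALICO_IPV4POOL_CIDR value):
curl -o calico.yaml https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's#192.168.0.0/16#10.81.0.0/16#g' calico.yaml
kubectl apply -f calico.yaml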
In a Kubernetes cluster, the kube-proxy component provides the load balancing for Services. It defaults to iptables mode; ipvs is recommended for production.
1. Add the module-loading script /etc/sysconfig/modules/ipvs.modules:
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
[root@linux-node1 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@linux-node1 ~]# source /etc/sysconfig/modules/ipvs.modules
Check that the modules loaded:
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
2. Modify the kube-proxy configuration, setting mode to "ipvs":
[root@linux-node1 ~]# kubectl edit cm kube-proxy -n kube-system
…
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
After saving the change, restart kube-proxy (a sketch follows below).
3. Install ipvsadm to verify:
[root@k8s-master ~]# yum install -y ipvsadm
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.100.101:6443         Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.81.0.2:53                 Masq    1      0          0
  -> 10.81.0.3:53                 Masq    1      0          0
TCP  10.105.54.12:5473 rr
UDP  10.96.0.10:53 rr
  -> 10.81.0.2:53                 Masq    1      0          0
  -> 10.81.0.3:53                 Masq    1      0          0
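One common way to restart kube-proxy after editing the ConfigMap is to delete its pods so the DaemonSet recreates them with the new configuration; a sketch, assuming the kubeadm default label k8s-app=kube-proxy:
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
kubectl -n kube-system get pod -l k8s-app=kube-proxy   # confirm the pods come back Running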
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
k8s-master   Ready    master   40m   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-master,node-role.kubernetes.io/master=
k8s-node1    Ready    <none>   37m   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-node1
[root@k8s-master ~]# kubectl label nodes k8s-node1 node-role.kubernetes.io/node=
node/k8s-node1 labeled
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
k8s-master   Ready    master   42m   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-master,node-role.kubernetes.io/master=
k8s-node1    Ready    node     39m   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-node1,node-role.kubernetes.io/node=
Format: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
How to look up the token:
[root@k8s-node1 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   USAGES/DESCRIPTION
39p4bn.spf1yvgt3n7qc933   <invalid>   2019-01-22T16:34:05+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
[root@k8s-master ~]# kubeadm token create
1xlw8v.0iv91yae7c4yw3t0
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
1xlw8v.0iv91yae7c4yw3t0   23h   2019-07-05T18:13:18+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
## token-ca-cert-hash
[root@k8s-node1 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
896ead0cc384e0e41139544e01049948c7b878732216476c2d5608c94c919ed6
[root@k8s-node1 ~]# kubeadm join 192.168.100.102:6443 --token 39p4bn.spf1yvgt3n7qc933 --discovery-token-ca-cert-hash sha256:896ead0cc384e0e41139544e01049948c7b878732216476c2d5608c94c919ed6
You can also skip CA verification:
kubeadm join 192.168.100.101:6443 --token add3mn.tnorrntsgfo64tku --discovery-token-unsafe-skip-ca-verification
If the token has expired, generate a new one first:
kubeadm token create
kubeadm token list
kubeadm join 192.168.100.101:6443 --token ijluj4.zkmo9aneom8yw3tz --discovery-token-unsafe-skip-ca-verification
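A convenient shortcut, assuming the installed kubeadm supports the flag, is to have kubeadm print the complete join command with a fresh token and the CA cert hash already filled in:
kubeadm token create --print-join-command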
Delete a node
kubectl delete node <node name>
scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
Command format:
1. kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
Example:
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   16h   v1.13.1
k8s-slave1   Ready    master   16h   v1.13.1
[root@k8s-master ~]# kubectl drain k8s-master --delete-local-data --force --ignore-daemonsets
node/k8s-master cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-proxy-x7qgx, weave-net-hgx5z
pod/coredns-86c58d9df4-z6gms evicted
pod/coredns-86c58d9df4-ft67k evicted
node/k8s-master evicted
[root@k8s-master ~]# kubectl drain k8s-slave1 --delete-local-data --force --ignore-daemonsets
node/k8s-slave1 cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-proxy-gb5bg, weave-net-dgjll
pod/coredns-86c58d9df4-xf884 evicted
pod/coredns-86c58d9df4-mrgbl evicted
node/k8s-slave1 evicted
Format:
2. kubectl delete node <node name>
Example:
[root@k8s-master ~]# kubectl delete node k8s-master
node "k8s-master" deleted
[root@k8s-master ~]# kubectl delete node k8s-node1
node "k8s-node1" deleted
Verify:
[root@k8s-master ~]# kubectl get nodes
No resources found.
3. kubeadm reset
Example:
[root@k8s-master ~]# kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0125 10:01:24.805582 8782 reset.go:213] [reset] Unable to fetch the kubeadm-config ConfigMap, using etcd pod spec as fallback: failed to get node registration: failed to get corresponding node: nodes "k8s-master" not found
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
[root@k8s-slave1 ~]# kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0125 10:01:47.326336 115631 reset.go:213] [reset] Unable to fetch the kubeadm-config ConfigMap, using etcd pod spec as fallback: failed to get config map: Get https://192.168.103.200:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 192.168.103.200:6443: connect: connection refused
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
[root@k8s-node1 ~]#
4. iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
When using k8s, images are usually pulled from your own Harbor registry. To avoid logging in with a password, you can configure a secret.
#kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
#docker-registry creates a secret for use with a Docker registry
[root@k8s-master values.yaml]# kubectl create secret docker-registry registry-secret --docker-server=dev-hub.xx.net --docker-username=admin --docker-password=Harbor12345 --docker-email=admin@dev-hub.xx.net
secret/registry-secret created
[root@k8s-master values.yaml]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-znlmw   kubernetes.io/service-account-token   3      20m
registry-secret       kubernetes.io/dockerconfigjson        1      8s
[root@k8s-master values.yaml]# kubectl get secret registry-secret
NAME              TYPE                             DATA   AGE
registry-secret   kubernetes.io/dockerconfigjson   1      20s
[root@k8s-master values.yaml]# kubectl get secret registry-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkZXYtaHViLmppYXR1aXl1bi5uZXQiOnsiVXNlcm5hbWUiOiJhZG1pbiIsIlBhc3N3b3JkIjoiSGFyYm9yMTIzNDUiLCJFbWFpbCI6ImFkbWluQGRldi1odWIuamlhdHVpeXVuLm5ldCJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2019-06-14T08:57:50Z"
  name: registry-secret
  namespace: default
  resourceVersion: "2113"
  selfLink: /api/v1/namespaces/default/secrets/registry-secret
  uid: 7d2c98ef-8e82-11e9-8cc4-000c29a74c85
type: kubernetes.io/dockerconfigjson
[root@k8s-master ~]# kubectl get secret registry-secret -o yaml > registry-secret-dev.yaml
Remove the extra fields:
[root@k8s-master ~]# cat registry-secret-dev.yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkZXYtaHViLmppYXR1aXl1bi5uZXQiOnsiVXNlcm5hbWUiOiJhZG1pbiIsIlBhc3N3b3JkIjoiSGFyYm9yMTIzNDUiLCJFbWFpbCI6ImFkbWluQGRldi1odWIuamlhdHVpeXVuLm5ldCJ9fX0=
kind: Secret
metadata:
  name: registry-secret
  namespace: default
type: kubernetes.io/dockerconfigjson
[root@k8s-master ~]# echo "eyJhdXRocyI6eyJkZXYtaHViLmppYXR1aXl1bi5uZXQiOnsiVXNlcm5hbWUiOiJhZG1pbiIsIlBhc3N3b3JkIjoiSGFyYm9yMTIzNDUiLCJFbWFpbCI6ImFkbWluQGRldi1odWIuamlhdHVpeXVuLm5ldCJ9fX0=" | base64 -d
{"auths":{"dev-hub.xx.net":{"Username":"admin","Password":"Harbor12345","Email":"admin@dev-hub.xx.net"}}}
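The secret only takes effect once a workload references it. A minimal sketch of wiring it into a Pod via imagePullSecrets (the pod and image names below are hypothetical):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: harbor-pull-demo                               # hypothetical demo pod
spec:
  containers:
  - name: app
    image: dev-hub.xx.net/library/myapp:latest         # hypothetical image in the Harbor registry
  imagePullSecrets:
  - name: registry-secret                              # the secret created above
EOF
Alternatively, the secret can be attached to the default service account so every Pod in the namespace uses it automatically:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "registry-secret"}]}'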