1. Traditional approach: all of the following components run at the system level (installed as yum or rpm packages), as system-level daemons.
2. kubeadm approach: the components on both the master and the nodes run as pod containers; k8s itself is also hosted as pods.
a) master/nodes:
Install kubelet, kubeadm, and docker.
b) master: kubeadm init (completes cluster initialization)
c) nodes: kubeadm join
Host        IP
k8smaster   192.168.2.19
k8snode01   192.168.2.21
k8snode02   192.168.2.5
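The commands below address the nodes by the short names node01/node02, so every machine needs those names to resolve. A minimal sketch that appends the mappings to /etc/hosts on each host; the alias names are assumptions chosen to match the commands used in this article:
# Run on every machine; alias names are assumed to match later commands
cat >> /etc/hosts << EOF
192.168.2.19 k8smaster master
192.168.2.21 k8snode01 node01
192.168.2.5  k8snode02 node02
EOF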
Versions:
docker: 18.06.1.ce-3.el7
OS: CentOS release 7
kubernetes: 1.11.2
kubeadm 1.11.2
kubectl 1.11.2
kubelet 1.11.2
etcdctl: 3.2.15
Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
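An optional quick check, not in the original steps, that swap is really off; swapon -s prints nothing when no swap device is active:
swapon -s
free -m   # the Swap row should show 0 total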
Disable SELinux and the firewall
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
Allow forwarding through the firewall
iptables -P FORWARD ACCEPT
Install required and commonly used tools
yum install -y epel-release vim vim-enhanced lrzsz unzip ntpdate sysstat dstat wget mlocate mtr lsof iotop bind-utils git net-tools
Other settings
# Time settings
timedatectl set-timezone Asia/Shanghai
ntpdate ntp1.aliyun.com
rm -f /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Log and shell-history format
systemctl restart rsyslog
echo 'export HISTTIMEFORMAT="%m/%d %T "' >> ~/.bashrc
source ~/.bashrc
# Raise process and file-descriptor limits
cat >> /etc/security/limits.conf << EOF
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
EOF
echo "ulimit -SHn 65535" >> /etc/profile
echo "ulimit -SHn 65535" >> /etc/rc.local
ulimit -SHn 65535
sed -i 's/4096/10240/' /etc/security/limits.d/20-nproc.conf
modprobe ip_conntrack
modprobe br_netfilter
cat >> /etc/rc.d/rc.local <<EOF
modprobe ip_conntrack
modprobe br_netfilter
EOF
chmod 744 /etc/rc.d/rc.local
# Kernel parameters
cat <<EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
kernel.panic = 10
kernel.panic_on_oops = 1
kernel.pid_max = 4194303
vm.max_map_count = 655350
fs.aio-max-nr = 524288
fs.file-max = 6590202
EOF
sysctl -p /etc/sysctl.conf
# vim /etc/sysconfig/kubelet
# Cloud images often have no swap partition; disable the kubelet swap check
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# Set the kube-proxy mode to ipvs
KUBE_PROXY_MODE=ipvs
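For the ipvs proxy mode to actually take effect, the IPVS kernel modules must be loaded. A minimal sketch; the module list is an assumption based on what kube-proxy's ipvs mode generally requires, not something stated in the original:
# Load the IPVS modules and verify they are present
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe $mod
done
lsmod | grep ip_vs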
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat > kubernetes.repo << EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
EOF
wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import yum-key.gpg
rpm --import rpm-package-key.gpg
Since this example is a lab environment, the files are copied by hand; in production, if a distribution tool such as SaltStack or Ansible is available, use it instead, as sketched below.
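A minimal sketch using Ansible ad-hoc commands; the inventory group name k8s_nodes is an assumption, not part of the original setup:
# Push the repo files to every node in one shot (group name k8s_nodes is assumed)
ansible k8s_nodes -m copy -a "src=/etc/yum.repos.d/kubernetes.repo dest=/etc/yum.repos.d/"
ansible k8s_nodes -m copy -a "src=/etc/yum.repos.d/docker-ce.repo dest=/etc/yum.repos.d/"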
1. Generate a key pair on the master and distribute the public key to the nodes:
node01: mkdir /root/.ssh -p
node02: mkdir /root/.ssh -p
master:
ssh-keygen -t rsa
scp .ssh/id_rsa.pub node01:/root/.ssh/authorized_keys
scp .ssh/id_rsa.pub node02:/root/.ssh/authorized_keys
2. Distribute the repo files and the GPG verification keys:
scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node01:/etc/yum.repos.d/
scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node02:/etc/yum.repos.d/
scp /etc/yum.repos.d/yum-key.gpg /etc/yum.repos.d/rpm-package-key.gpg node01:/root
scp /etc/yum.repos.d/yum-key.gpg /etc/yum.repos.d/rpm-package-key.gpg node02:/root
On each node, import the keys:
rpm --import yum-key.gpg
rpm --import rpm-package-key.gpg
Install the packages on the master:
yum install docker-ce kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2 -y
After installation the cluster must be initialized, which requires pulling images.
Because Google's registry (k8s.gcr.io) is not reachable from mainland China, there are two workarounds:
1. Pull the images from another registry and retag them.
A. Run the following shell script.
#!/bin/bash
images=(kube-proxy-amd64:v1.11.2
        kube-scheduler-amd64:v1.11.2
        kube-controller-manager-amd64:v1.11.2
        kube-apiserver-amd64:v1.11.2
        etcd-amd64:3.2.18
        coredns:1.1.3
        pause-amd64:3.1
        kubernetes-dashboard-amd64:v1.8.3
        k8s-dns-sidecar-amd64:1.14.9
        k8s-dns-kube-dns-amd64:1.14.9
        k8s-dns-dnsmasq-nanny-amd64:1.14.9)
for imageName in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
done
# kubeadm expects k8s.gcr.io/pause:3.1, so retag the pause image
# (the original referenced its image ID da86e6ba6ca1 directly)
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
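A quick way, optional and not in the original, to confirm the retagged images are in place before running kubeadm init:
# List the local k8s.gcr.io images that kubeadm will use
docker images | grep k8s.gcr.io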
B. Pull the mirrored images from Docker Hub (mirrorgooglecontainers).
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.2 k8s.gcr.io/kube-proxy-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.11.2 k8s.gcr.io/kube-scheduler-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.11.2 k8s.gcr.io/kube-apiserver-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.2 k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
C. Pull from Aliyun (note: this example targets a v1.10.0 cluster; substitute the v1.11.2 tags used in this article):
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
2. Use a proxy server.
vim /usr/lib/systemd/system/docker.service
# Add under the [Service] section:
Environment="HTTPS_PROXY=$PROXY_SERVER_IP:PORT"
Environment="NO_PROXY=127.0.0.0/8,192.168.0.0/16"
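After editing the unit file, an optional check (not in the original) that systemd picked the variables up; systemctl show prints the environment configured for the unit:
systemctl daemon-reload
systemctl show --property=Environment docker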
Docker itself and many domestic cloud providers offer registry mirror (accelerator) services for users in China, for example:
If image pulls fail after configuring one mirror address, switch to another.
All the major domestic cloud providers offer a Docker registry mirror service; pick the one matching the cloud platform your Docker hosts run on.
Using the Aliyun accelerator or others: https://yeasy.gitbooks.io/docker_practice/content/install/mirror.html
To configure a registry mirror, edit the daemon configuration file /etc/docker/daemon.json:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://registry-mirror.qiniu.com","https://zsmigm0p.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
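To confirm the mirrors are active (an optional check, not in the original), docker info lists them under Registry Mirrors:
docker info | grep -A 2 "Registry Mirrors"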
Enable the K8S services at boot
systemctl enable kubelet
systemctl enable docker
On the master, initialize the cluster:
kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all
Overview of the process
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0829 17:45:00.722283   19542 kernel_validator.go:81] Validating kernel version
I0829 17:45:00.722398   19542 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
############ certificates ##############
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.2.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
############ config files ##############
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.503085 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: 0tjvur.56emvwc4k4ghwenz
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS    # the official default name as of v1.11; earlier DNS evolution: skyDNS -> kubeDNS -> CoreDNS
[addons] Applied essential addon: kube-proxy # runs as an add-on self-hosted on K8S, dynamically generating ipvs (iptables) rules for Service resources; IPVS is the default from 1.11 and falls back to iptables when unsupported

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube                                       # create the .kube directory under the home directory
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   # admin.conf holds the credentials kubectl uses to connect to K8S
  sudo chown $(id -u):$(id -g) $HOME/.kube/config            # fix the owner and group

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.2.19:6443 --token 0tjvur.56emvwc4k4ghwenz --discovery-token-ca-cert-hash sha256:f768ad7522e7bb5ccf9d0e0590c478b2a7161912949d1be3d5cd3f46ace7f4bf

Note: --discovery-token-ca-cert-hash is the hash a node uses to verify the master's CA certificate.
As the output instructs, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
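At this point kubectl should be able to reach the API server; a quick sanity check, optional and not part of the original walkthrough:
kubectl cluster-info
kubectl get componentstatus   # scheduler, controller-manager and etcd should report Healthy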
Deploy the network add-on on the master
CNI (container network plugins):
flannel      # does not support network policies
calico
canal
kube-router
Note: with a proxy server this step deploys directly; without one, the nodes must pull the images first before the add-on can be deployed.
nodes:
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.2
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.2 k8s.gcr.io/kube-proxy-amd64:v1.11.2
master:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
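Once applied, flannel runs as a DaemonSet in kube-system; an optional check (not in the original) that its pods come up on every node:
# The kube-flannel pods should reach Running on the master and each joined node
kubectl get pods -n kube-system -o wide | grep flannel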
Run the following command on every node.
yum install docker-ce kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2 -y
On the master:
scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
scp /etc/sysconfig/kubelet node01:/etc/sysconfig/
scp /etc/sysconfig/kubelet node02:/etc/sysconfig/
On the nodes:
systemctl start docker
systemctl enable docker kubelet
Join each node using the token printed at the end of kubeadm init on the master:
kubeadm join 192.168.2.19:6443 --token 0tjvur.56emvwc4k4ghwenz --discovery-token-ca-cert-hash sha256:f768ad7522e7bb5ccf9d0e0590c478b2a7161912949d1be3d5cd3f46ace7f4bf
########### If the token has expired ##############
[root@master ~]# kubeadm token create
0wnvl7.kqxkle57l4adfr53
[root@node02 yum.repos.d]# kubeadm join 192.168.2.19:6443 --token 0wnvl7.kqxkle57l4adfr53 --discovery-token-unsafe-skip-ca-verification
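Alternatively (a sketch, not from the original article), kubeadm can print a complete join command with a fresh token, and the CA cert hash can be recomputed on the master instead of skipping verification:
# Print a ready-to-use join command, including a new token and the CA hash
kubeadm token create --print-join-command
# Or recompute the value for --discovery-token-ca-cert-hash by hand
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'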
Now run the following on the master; the master and the nodes should all show a status of Ready.
# kubectl get nodes
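If a node stays NotReady, checking the system pods usually reveals why; an optional troubleshooting step, not in the original:
# All kube-system pods (flannel, kube-proxy, coredns, ...) should be Running
kubectl get pods -n kube-system -o wide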