OS | Role | IP | Memory
---|---|---|---
CentOS 7 | master01 | 192.168.25.30 | 4G
CentOS 7 | node01 | 192.168.25.31 | 4G
CentOS 7 | node02 | 192.168.25.32 | 4G
sed -i "s/SELINUX\=.*/SELINUX=disabled/g" /etc/selinux/config
systemctl disable firewalld && systemctl stop firewalld
hostnamectl set-hostname <role_name>   # master01, node01, or node02, per the table above
echo -e "192.168.25.30 master01\n192.168.25.31 node01\n192.168.25.32 node02" >> /etc/hosts
Set kernel parameters
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
Load the kernel module
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/rc.local
Apply the kernel parameters
sysctl -p /etc/sysctl.d/k8s.conf
Turn off swap:
swapoff -a
Then edit /etc/fstab to disable automatic mounting of the swap partition.
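For example, the swap entry can be commented out in place (a minimal sketch; adjust the pattern to your fstab layout):
# comment out any line whose mount point is swap
sed -i '/ swap / s/^/#/' /etc/fstab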
Set the iptables FORWARD policy to ACCEPT and persist it across reboots:
/sbin/iptables -P FORWARD ACCEPT
echo "sleep 60 && /sbin/iptables -P FORWARD ACCEPT" >> /etc/rc.local
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget
yum -y install ntpdate
/usr/sbin/ntpdate -u ntp1.aliyun.com
Note: this does not need to be installed on the master node.
Remove the bundled Docker packages
yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-selinux \
    docker-engine-selinux \
    docker-engine
Install dependencies
yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
Add the Docker yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install docker-ce
yum -y install docker-ce
Start Docker and enable it at boot (install and configure flanneld first, then start Docker)
systemctl start docker && systemctl enable docker
Install cfssl
export CFSSL_URL="https://pkg.cfssl.org/R1.2"
wget "${CFSSL_URL}/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "${CFSSL_URL}/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
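To confirm the tools are installed and on the PATH (an optional check, not part of the original steps):
cfssl version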
Kubernetes components encrypt their communication with TLS certificates. This document uses CloudFlare's cfssl toolkit to generate the Certificate Authority (CA) certificate and key. The CA certificate is self-signed and is used to sign all TLS certificates created afterwards.
如下操做都在master節點上執行,證書只須要建立一次便可,之後新增節點時,只須要將/etc/kubernetes/目錄下的證書拷貝到新節點便可。
1. Create the CA configuration file
mkdir /root/ssl
cd /root/ssl
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
2. Create the CA certificate signing request
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
3. Generate the CA certificate and private key
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2018/03/29 14:38:31 [INFO] generating a new CA key and certificate from CSR
2018/03/29 14:38:31 [INFO] generate received request
2018/03/29 14:38:31 [INFO] received CSR
2018/03/29 14:38:31 [INFO] generating key: rsa-2048
2018/03/29 14:38:31 [INFO] encoded CSR
2018/03/29 14:38:31 [INFO] signed certificate with serial number 438768005817886692243142700194592359153651905696
4. Create the kubernetes certificate signing request file
cat > kubernetes-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.25.30",
    "192.168.25.31",
    "192.168.25.32",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
5. Generate the kubernetes certificate and private key
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
2018/03/29 14:46:12 [INFO] generate received request
2018/03/29 14:46:12 [INFO] received CSR
2018/03/29 14:46:12 [INFO] generating key: rsa-2048
2018/03/29 14:46:12 [INFO] encoded CSR
2018/03/29 14:46:12 [INFO] signed certificate with serial number 6955479006214073693226115919937339031303355422
2018/03/29 14:46:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
# ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
6. Create the admin certificate signing request file
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
7. Generate the admin certificate and private key
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2018/03/29 14:57:01 [INFO] generate received request
2018/03/29 14:57:01 [INFO] received CSR
2018/03/29 14:57:01 [INFO] generating key: rsa-2048
2018/03/29 14:57:02 [INFO] encoded CSR
2018/03/29 14:57:02 [INFO] signed certificate with serial number 356467939883849041935828635530693821955945645537
2018/03/29 14:57:02 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements")
# ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem
8. Create the kube-proxy certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
9. Generate the kube-proxy client certificate and private key
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2018/03/29 15:09:36 [INFO] generate received request
2018/03/29 15:09:36 [INFO] received CSR
2018/03/29 15:09:36 [INFO] generating key: rsa-2048
2018/03/29 15:09:36 [INFO] encoded CSR
2018/03/29 15:09:36 [INFO] signed certificate with serial number 225974417080991591210780916866547658424323006961
2018/03/29 15:09:36 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
10. Distribute the certificates
Copy the generated certificates and key files (those with the .pem suffix) to the /etc/kubernetes/ssl directory on every machine.
mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
ssh node01 "mkdir -p /etc/kubernetes/ssl"
scp *.pem node01:/etc/kubernetes/ssl
ssh node02 "mkdir -p /etc/kubernetes/ssl"
scp *.pem node02:/etc/kubernetes/ssl
etcd must be installed on all three nodes; perform the following steps on every machine.
1. Download the etcd release and install the binaries
wget https://github.com/coreos/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
tar -xvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/etcd* /usr/local/bin
# This installs the following two binaries:
# etcd  etcdctl
2. Create the working directory
mkdir -p /var/lib/etcd
3. Create the systemd service files
master01
cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name master01 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.25.30:2380 \\
  --listen-peer-urls https://192.168.25.30:2380 \\
  --listen-client-urls https://192.168.25.30:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.25.30:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master01=https://192.168.25.30:2380,node01=https://192.168.25.31:2380,node02=https://192.168.25.32:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
node01
cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name node01 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.25.31:2380 \\
  --listen-peer-urls https://192.168.25.31:2380 \\
  --listen-client-urls https://192.168.25.31:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.25.31:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master01=https://192.168.25.30:2380,node01=https://192.168.25.31:2380,node02=https://192.168.25.32:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
node02
cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name node02 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.25.32:2380 \\
  --listen-peer-urls https://192.168.25.32:2380 \\
  --listen-client-urls https://192.168.25.32:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.25.32:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master01=https://192.168.25.30:2380,node01=https://192.168.25.31:2380,node02=https://192.168.25.32:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
4. Start the etcd service
cp etcd.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
5. Verify the etcd cluster
# etcdctl \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    cluster-health
member 2ea4d6efe7f32da is healthy: got healthy result from https://192.168.25.32:2379
member 5246473f59267039 is healthy: got healthy result from https://192.168.25.31:2379
member be723b813b44392b is healthy: got healthy result from https://192.168.25.30:2379
cluster is healthy
Flannel must be deployed on every node.
1. Download and install Flannel
wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
mkdir flannel
tar -xzvf flannel-v0.9.1-linux-amd64.tar.gz -C flannel
cp flannel/{flanneld,mk-docker-opts.sh} /usr/local/bin
2. Write the pod network configuration into etcd (this only needs to be run on one node)
etcdctl --endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    mkdir /kubernetes/network
etcdctl --endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    mk /kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
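Optionally, read the key back to confirm it was written, using the same etcdctl v2 flags as above:
etcdctl --endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    get /kubernetes/network/config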
3. Create the service file
cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  -etcd-endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \\
  -etcd-prefix=/kubernetes/network
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
4. Start the flanneld service
mv flanneld.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
5. Check the flanneld service status
# /usr/local/bin/etcdctl \
    --endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.30.82.0-24
/kubernetes/network/subnets/172.30.1.0-24
/kubernetes/network/subnets/172.30.73.0-24
6. Configure Docker to use the flannel network
Edit /usr/lib/systemd/system/docker.service:
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# modified
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
# added
EnvironmentFile=/run/flannel/docker
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
7. Start Docker
systemctl daemon-reload && systemctl start docker && systemctl enable docker
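To confirm Docker picked up the flannel subnet, check that docker0 falls inside the subnet assigned to flannel.1 (a quick sanity check, not part of the original steps):
# options written by mk-docker-opts.sh
cat /run/flannel/docker
# both interfaces should sit in the same 172.30.x.0/24 subnet
ip addr show flannel.1 | grep inet
ip addr show docker0 | grep inet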
kubectl is the Kubernetes cluster-management tool; any node with kubectl can manage the entire cluster. This document deploys it on master01. After deployment, the file /root/.kube/config is generated; kubectl reads the kube-apiserver address, certificates, user name, and other information from it.
1. Download the package
wget https://dl.k8s.io/v1.8.6/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
sudo cp kubernetes/client/bin/kube* /usr/local/bin/
chmod a+x /usr/local/bin/kube*
export PATH=/usr/local/bin:$PATH
2. Create the /root/.kube/config file
# Set cluster parameters; --server points at the master node IP
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.25.30:6443
# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem
# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin
# Set the default context
kubectl config use-context kubernetes
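The master components are not deployed yet at this point, so API calls will fail; the local config itself can still be inspected as a quick check:
kubectl config view
kubectl config current-context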
3. Create the bootstrap.kubeconfig file
# Generate the token variable
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
mv token.csv /etc/kubernetes/
# Set cluster parameters; --server is the master node IP
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.25.30:6443 \
  --kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
mv bootstrap.kubeconfig /etc/kubernetes/
4. Create kube-proxy.kubeconfig
# Set cluster parameters; --server is the master IP
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.25.30:6443 \
  --kubeconfig=kube-proxy.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
mv kube-proxy.kubeconfig /etc/kubernetes/
5. Copy the generated config files to the other nodes
scp /etc/kubernetes/kube-proxy.kubeconfig node01:/etc/kubernetes/
scp /etc/kubernetes/kube-proxy.kubeconfig node02:/etc/kubernetes/
scp /etc/kubernetes/bootstrap.kubeconfig node01:/etc/kubernetes/
scp /etc/kubernetes/bootstrap.kubeconfig node02:/etc/kubernetes/
Download the Kubernetes server binaries and install the master components:
wget https://dl.k8s.io/v1.8.6/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
Create the kube-apiserver service file
cat > kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --logtostderr=true \\
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \\
  --advertise-address=192.168.25.30 \\
  --bind-address=192.168.25.30 \\
  --insecure-bind-address=127.0.0.1 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
  --kubelet-https=true \\
  --enable-bootstrap-token-auth \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --service-node-port-range=8400-10000 \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --etcd-servers=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/lib/audit.log \\
  --event-ttl=1h \\
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
By default, Kubernetes objects are stored under the /registry path in etcd; this can be changed with the --etcd-prefix flag.
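Once kube-apiserver is running, that prefix can be inspected with the same etcdctl v2 flags used earlier (an optional check):
etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    ls /registry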
Start the service and enable it at boot
cp kube-apiserver.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
Create the kube-controller-manager service file
cat > kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --logtostderr=true \\
  --address=127.0.0.1 \\
  --master=http://127.0.0.1:8080 \\
  --allocate-node-cidrs=true \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --cluster-cidr=172.30.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start the service
cp kube-controller-manager.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
Configure kube-scheduler
cat > kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --logtostderr=true \\
  --address=127.0.0.1 \\
  --master=http://127.0.0.1:8080 \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start kube-scheduler
cp kube-scheduler.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
Verify the master components:
# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests.
Grant the permission (run once, on the master)
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
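To verify the binding was created (an optional check):
kubectl get clusterrolebinding kubelet-bootstrap -o yaml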
Download and install kubelet and kube-proxy
wget https://dl.k8s.io/v1.8.6/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-proxy,kubelet} /usr/local/bin/
Create the kubelet working directory
mkdir /var/lib/kubelet
Configure kubelet
master01 (on node01 and node02, change --address and --hostname-override to that node's IP)
cat > kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
  --address=192.168.25.30 \\
  --hostname-override=192.168.25.30 \\
  --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \\
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --require-kubeconfig \\
  --cert-dir=/etc/kubernetes/ssl \\
  --container-runtime=docker \\
  --cluster-dns=10.254.0.2 \\
  --cluster-domain=cluster.local \\
  --hairpin-mode promiscuous-bridge \\
  --allow-privileged=true \\
  --serialize-image-pulls=false \\
  --register-node=true \\
  --logtostderr=true \\
  --cgroup-driver=cgroupfs \\
  --v=2
Restart=on-failure
KillMode=process
LimitNOFILE=65536
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start the kubelet service
cp kubelet.service /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
The first time kubelet starts, it sends a certificate signing request to kube-apiserver; the node is added to the cluster only after the request has been approved.
List the pending signing requests
# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-450A0zCMYrGxWozsNukv6vh2NdBspA-hr6Rsz-LA9ro   3m        kubelet-bootstrap   Pending
node-csr-5t_AUkaEhT98xX1g7zTpzaNzRB9rXh453i2Fu_yxvvs   3m        kubelet-bootstrap   Pending
node-csr-p9r9gusX2kTGpyYFPlkoaSGyatLQmtDmL8NBee2D_s8   3m        kubelet-bootstrap   Pending
Approve the signing requests
# kubectl certificate approve node-csr-450A0zCMYrGxWozsNukv6vh2NdBspA-hr6Rsz-LA9ro
certificatesigningrequest "node-csr-450A0zCMYrGxWozsNukv6vh2NdBspA-hr6Rsz-LA9ro" approved
# kubectl certificate approve node-csr-5t_AUkaEhT98xX1g7zTpzaNzRB9rXh453i2Fu_yxvvs
certificatesigningrequest "node-csr-5t_AUkaEhT98xX1g7zTpzaNzRB9rXh453i2Fu_yxvvs" approved
# kubectl certificate approve node-csr-p9r9gusX2kTGpyYFPlkoaSGyatLQmtDmL8NBee2D_s8
certificatesigningrequest "node-csr-p9r9gusX2kTGpyYFPlkoaSGyatLQmtDmL8NBee2D_s8" approved
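With many nodes, the pending requests can also be approved in one pass; a sketch using standard shell tools:
# approve every CSR currently in Pending state
kubectl get csr | grep Pending | awk '{print $1}' | xargs -n 1 kubectl certificate approve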
List all cluster nodes
# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.25.30   Ready     <none>    15m       v1.8.6
192.168.25.31   Ready     <none>    15m       v1.8.6
192.168.25.32   Ready     <none>    15m       v1.8.6
Create the kube-proxy working directory
mkdir -p /var/lib/kube-proxy
Configure the kube-proxy service (on node01 and node02, change --bind-address and --hostname-override to that node's IP)
cat > kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --bind-address=192.168.25.30 \\
  --hostname-override=192.168.25.30 \\
  --cluster-cidr=10.254.0.0/16 \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Start the kube-proxy service
cp kube-proxy.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
Because the default images are hosted on Google's registry, they need to be replaced; mirrored copies were pushed to Docker Hub as a relay. The modified yaml files can be downloaded from:
Baidu Netdisk (extraction code: o3z9)
wget https://github.com/kubernetes/kubernetes/releases/download/v1.8.6/kubernetes.tar.gz
tar xzvf kubernetes.tar.gz
cd /root/kubernetes/cluster/addons/dns
mv kubedns-svc.yaml.sed kubedns-svc.yaml
# Replace $DNS_SERVER_IP in the file with 10.254.0.2
sed -i 's/$DNS_SERVER_IP/10.254.0.2/g' ./kubedns-svc.yaml
mv ./kubedns-controller.yaml.sed ./kubedns-controller.yaml
# Replace $DNS_DOMAIN with cluster.local
sed -i 's/$DNS_DOMAIN/cluster.local/g' ./kubedns-controller.yaml
ls *.yaml
kubedns-cm.yaml  kubedns-controller.yaml  kubedns-sa.yaml  kubedns-svc.yaml
kubectl create -f .
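To verify that kube-dns resolves service names, a throwaway busybox pod can be used (a quick check assuming the busybox image can be pulled; not part of the original steps):
kubectl run dns-test -it --rm --restart=Never --image=busybox -- nslookup kubernetes.default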
Download the deployment file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.1/src/deploy/recommended/kubernetes-dashboard.yaml
Modify the Service section of the deployment file
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  # added
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # added
      nodePort: 8510
  selector:
    k8s-app: kubernetes-dashboard
Create the dashboard resources
kubectl create -f kubernetes-dashboard.yaml
Deploy the authorization binding for the dashboard
cat > ./kubernetes-dashboard-admin.rbac.yaml << EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
EOF
kubectl create -f kubernetes-dashboard-admin.rbac.yaml
Access URL (currently only Firefox can open it):
https://192.168.25.30:8510
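Since the cluster-admin role above is bound to the kubernetes-dashboard service account, its token can be used to log in. The token secret's name is generated, so it is looked up first (a sketch; the secret name pattern is an assumption):
# print the token of the kubernetes-dashboard service account
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}')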
Download and install Heapster
wget https://github.com/kubernetes/heapster/archive/v1.5.0.tar.gz
tar xzvf ./v1.5.0.tar.gz
cd ./heapster-1.5.0/
kubectl create -f deploy/kube-config/influxdb/
kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml
Confirm that all pods are running
kubectl get pods --all-namespaces
Deployment file for an Nginx test:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: registry.cn-qingdao.aliyuncs.com/k8/nginx:1.9.0
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
    # Map container port 80 to port 8888 on the host
    - port: 80
      nodePort: 8888
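Assuming the manifest above is saved as nginx.yaml (the file name is arbitrary), it can be deployed and tested like this:
kubectl create -f nginx.yaml
kubectl get pods -o wide
# the NodePort is reachable on any node IP
curl http://192.168.25.30:8888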
Deployment file for a MySQL test:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
    - name: mysql
      image: mysql
      # Environment variables
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
      imagePullPolicy: IfNotPresent
      # Port exposed by the container
      ports:
        - containerPort: 3306
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: mysql
  ports:
    - port: 3306
      nodePort: 9306
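Similarly, assuming the manifest is saved as mysql.yaml and a mysql client is installed on the machine doing the test:
kubectl create -f mysql.yaml
# connect through the NodePort using the root password set above
mysql -h 192.168.25.30 -P 9306 -uroot -p123456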
Useful commands for checking logs and pod status:
journalctl -u kubelet -f
kubectl get pods --all-namespaces