In an earlier post I covered automated installation of Kubernetes with kubeadm, but because every component runs there as a container, the specific configuration details got little attention. To better understand the role each Kubernetes component plays, this post installs a Kubernetes cluster from the binary packages and explains the configuration of each component in detail.
Version 1.10 has been gradually phasing out the insecure port (8080 by default), so this post builds the cluster with mutual CA certificate authentication, which makes the configuration somewhat more involved.
1. Two CentOS 7 hosts. Make the hostnames resolvable, disable the firewall and SELinux, and synchronize the system time:
10.0.0.1 node-1 Master
10.0.0.2 node-2 Node
Deployed on the Master: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
Deployed on the Node: docker, kubelet, kube-proxy
2. Download the official release from https://github.com/kubernetes/kubernetes/ . Here we use the binary packages of version 1.10.2:
Since these are binary packages, simply extract them and copy the files into an executable path:
# tar xf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp `ls|egrep -v "*.tar|*_tag"` /usr/bin/
The configuration of each service is described below.
The etcd service is the core datastore of a Kubernetes cluster and must be installed and started before the other services. A single etcd node is deployed here for demonstration, although a 3-node cluster can also be configured. For an even simpler setup, installing via yum is recommended.
# wget https://github.com/coreos/etcd/releases/download/v3.2.20/etcd-v3.2.20-linux-amd64.tar.gz
# tar xf etcd-v3.2.20-linux-amd64.tar.gz
# cd etcd-v3.2.20-linux-amd64
# cp etcd etcdctl /usr/bin/
# mkdir /var/lib/etcd
# mkdir /etc/etcd
Edit the systemd unit file:
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
Start the service:
systemctl daemon-reload
systemctl start etcd
systemctl status etcd.service
Check the service status:
[root@node-1 ~]# netstat -lntp|grep etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      18794/etcd
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      18794/etcd
[root@node-1 ~]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379
cluster is healthy
Note: etcd listens on two ports: 2380 is the cluster (peer) communication port and 2379 is the client service port. To run an etcd cluster, edit the configuration file and set the listen IPs and ports.
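As a rough sketch, a 3-node cluster could be configured through the standard ETCD_* environment variables in /etc/etcd/etcd.conf (the unit file above already loads it via EnvironmentFile). The member names and the third IP below are hypothetical; only two hosts exist in this setup:

# /etc/etcd/etcd.conf -- sample for node-1 in a 3-node cluster (node-3/10.0.0.3 is hypothetical)
ETCD_NAME=node-1
ETCD_DATA_DIR=/var/lib/etcd
ETCD_LISTEN_PEER_URLS=http://10.0.0.1:2380
ETCD_LISTEN_CLIENT_URLS=http://10.0.0.1:2379,http://127.0.0.1:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.0.0.1:2380
ETCD_ADVERTISE_CLIENT_URLS=http://10.0.0.1:2379
ETCD_INITIAL_CLUSTER=node-1=http://10.0.0.1:2380,node-2=http://10.0.0.2:2380,node-3=http://10.0.0.3:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER_STATE=new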
1. Edit the systemd unit file for kube-apiserver:
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://kubernetes.io/docs/concepts/overview
After=network.target
After=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Configure the parameter file (the configuration directory must be created first):
# cat /etc/kubernetes/apiserver
KUBE_API_ARGS="--storage-backend=etcd3 \
--etcd-servers=http://127.0.0.1:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--service-cluster-ip-range=10.222.0.0/16 \
--service-node-port-range=1-65535 \
--client-ca-file=/etc/kubernetes/ssl/ca.crt \
--tls-private-key-file=/etc/kubernetes/ssl/server.key \
--tls-cert-file=/etc/kubernetes/ssl/server.crt \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
3. Create the log and certificate directories, plus the configuration directory if it does not exist yet:
mkdir /var/log/kubernetes
mkdir /etc/kubernetes
mkdir /etc/kubernetes/ssl
1. Configure the systemd unit file for kube-controller-manager:
# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://kubernetes.io/docs/setup
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Configure the startup parameter file:
# cat /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--master=https://10.0.0.1:6443 \
--service-account-private-key-file=/etc/kubernetes/ssl/server.key \
--root-ca-file=/etc/kubernetes/ssl/ca.crt \
--kubeconfig=/etc/kubernetes/kubeconfig"
1. Configure the systemd unit file for kube-scheduler:
# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://kubernetes.io/docs/setup
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Configure the parameter file:
# cat /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--master=https://10.0.0.1:6443 --kubeconfig=/etc/kubernetes/kubeconfig"
Both kube-controller-manager and kube-scheduler authenticate to the API server through the shared kubeconfig file referenced above:

# cat /etc/kubernetes/kubeconfig
apiVersion: v1
kind: Config
users:
- name: controllermanager
  user:
    client-certificate: /etc/kubernetes/ssl/cs_client.crt
    client-key: /etc/kubernetes/ssl/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.crt
contexts:
- context:
    cluster: local
    user: controllermanager
  name: my-context
current-context: my-context
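Instead of writing this file by hand, the same kubeconfig can be generated with kubectl config; a sketch (the result is equivalent, except that it also records the server address):

# kubectl config set-cluster local \
    --certificate-authority=/etc/kubernetes/ssl/ca.crt \
    --server=https://10.0.0.1:6443 \
    --kubeconfig=/etc/kubernetes/kubeconfig
# kubectl config set-credentials controllermanager \
    --client-certificate=/etc/kubernetes/ssl/cs_client.crt \
    --client-key=/etc/kubernetes/ssl/cs_client.key \
    --kubeconfig=/etc/kubernetes/kubeconfig
# kubectl config set-context my-context --cluster=local --user=controllermanager \
    --kubeconfig=/etc/kubernetes/kubeconfig
# kubectl config use-context my-context --kubeconfig=/etc/kubernetes/kubeconfig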
1. Generate the CA certificate and private key for kube-apiserver:
# cd /etc/kubernetes/ssl/
# openssl genrsa -out ca.key 2048
# openssl req -x509 -new -nodes -key ca.key -subj "/CN=10.0.0.1" -days 5000 -out ca.crt    # the CN is the Master's IP address
# openssl genrsa -out server.key 2048
2. Create the master_ssl.cnf file:
# cat master_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s_master
IP.1 = 10.222.0.1    # the ClusterIP of the kubernetes service
IP.2 = 10.0.0.1      # the Master's IP address
3. Based on the file above, create server.csr and server.crt with the following commands:
# openssl req -new -key server.key -subj "/CN=node-1" -config master_ssl.cnf -out server.csr    # the CN is the hostname
# openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile master_ssl.cnf -out server.crt
Tip: after running the commands above, six files are generated: ca.crt, ca.key, ca.srl, server.crt, server.csr, server.key.
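Before moving on, it is worth a quick sanity check (not part of the original steps) that the server certificate chains to the CA and carries the expected SANs:

# openssl verify -CAfile ca.crt server.crt          # should print: server.crt: OK
# openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"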
4. Generate the client certificate for kube-controller-manager:
# cd /etc/kubernetes/ssl/
# openssl genrsa -out cs_client.key 2048
# openssl req -new -key cs_client.key -subj "/CN=node-1" -out cs_client.csr    # the CN is the hostname
# openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cs_client.crt -days 5000
5. Make sure the /etc/kubernetes/ssl/ directory contains the following files:
[root@node-1 ssl]# ll
total 36
-rw-r--r-- 1 root root 1090 May 25 15:34 ca.crt
-rw-r--r-- 1 root root 1675 May 25 15:33 ca.key
-rw-r--r-- 1 root root   17 May 25 15:41 ca.srl
-rw-r--r-- 1 root root  973 May 25 15:41 cs_client.crt
-rw-r--r-- 1 root root  887 May 25 15:41 cs_client.csr
-rw-r--r-- 1 root root 1675 May 25 15:40 cs_client.key
-rw-r--r-- 1 root root 1192 May 25 15:37 server.crt
-rw-r--r-- 1 root root 1123 May 25 15:36 server.csr
-rw-r--r-- 1 root root 1675 May 25 15:34 server.key
1. Start kube-apiserver:
# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl start kube-apiserver
Note: kube-apiserver listens on two ports by default (8080 and 6443). Port 8080 is the insecure port the components historically used to talk to each other; it is rarely used in newer versions. The host running kube-apiserver is generally called the Master. The other port, 6443, is the HTTPS port providing authentication and authorization.
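As a quick test of the secure port, the API can be queried over HTTPS with any client certificate signed by the CA; a sketch reusing the controller-manager client pair generated earlier (it assumes the default authorization mode, which allows any authenticated client):

# curl --cacert /etc/kubernetes/ssl/ca.crt \
       --cert   /etc/kubernetes/ssl/cs_client.crt \
       --key    /etc/kubernetes/ssl/cs_client.key \
       https://10.0.0.1:6443/version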
2. Start kube-controller-manager:
# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl start kube-controller-manager
Note: this service listens on port 10252.
3. Start kube-scheduler:
# systemctl daemon-reload
# systemctl enable kube-scheduler
# systemctl start kube-scheduler
Note: this service listens on port 10251.
4. After starting each service, check its logs and status to confirm it started without errors:
# systemctl status KUBE-SERVICE-NAME
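Once all three services are up, their health can also be checked through the API server itself. A sketch, assuming the insecure local port (8080) has not been disabled, so no kubeconfig is needed on the Master:

# kubectl get componentstatuses    # scheduler, controller-manager and etcd-0 should all report Healthy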
Deploying the services on a Node is much simpler: only docker, kubelet, and kube-proxy are required.
First, configure the following kernel parameters:
# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
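For these settings to take effect, the br_netfilter module must be loaded and the sysctl files reloaded:

# modprobe br_netfilter
# sysctl --system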
Upload the Kubernetes Node binary package, extract it, and run the following commands:
tar xf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin
cp kubectl kubelet kube-proxy /usr/bin/
mkdir /var/lib/kubelet
mkdir /var/log/kubernetes
mkdir /etc/kubernetes
1. Install Docker 17.03:
yum install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
yum install docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm -y
2. Configure the startup parameters:
vim /usr/lib/systemd/system/docker.service
...
ExecStart=/usr/bin/dockerd --registry-mirror https://qxx96o44.mirror.aliyuncs.com
...
3. Start Docker:
systemctl daemon-reload
systemctl enable docker
systemctl start docker
A kubelet client certificate must be configured on every Node.
Copy ca.crt and ca.key from the Master into the Node's ssl directory, then run the following commands to generate kubelet_client.csr and kubelet_client.crt:
# cd /etc/kubernetes/ssl/
# openssl genrsa -out kubelet_client.key 2048
# openssl req -new -key kubelet_client.key -subj "/CN=10.0.0.2" -out kubelet_client.csr    # the CN is the Node's IP address
# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000
1. Configure the kubelet unit file:
# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://kubernetes.io/doc
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubeconfig.yaml --logtostderr=false --log-dir=/var/log/kubernetes --v=2
Restart=on-failure

[Install]
WantedBy=multi-user.target
2. The kubeconfig file:
# cat /etc/kubernetes/kubeconfig.yaml
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/kubelet_client.crt
    client-key: /etc/kubernetes/ssl/kubelet_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.crt
    server: https://10.0.0.1:6443
contexts:
- context:
    cluster: local
    user: kubelet
  name: my-context
current-context: my-context
3. Start the service:
# systemctl daemon-reload
# systemctl start kubelet
# systemctl enable kubelet
4. Verify on the Master:
[root@node-1 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node-2    Ready     <none>    36m       v1.10.2
Note: kubelet acts as an agent; once it is installed, the node's information becomes visible on the Master. The kubelet configuration file is in YAML format, and the master address must be specified in it. By default, kubelet listens on ports 10248, 10250, 10255, and 4194.
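A quick local check on the Node confirms that kubelet itself is healthy (10248 is its healthz port; the expected response is a plain "ok"):

# curl http://127.0.0.1:10248/healthz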
1. Create the systemd unit file for kube-proxy:
# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy
Documentation=https://kubernetes.io/doc
After=network.service
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Create the parameter file:
# cat /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--master=https://10.0.0.1:6443 --kubeconfig=/etc/kubernetes/kubeconfig.yaml"
3. Start the service:
# systemctl daemon-reload
# systemctl start kube-proxy
# systemctl enable kube-proxy
Note: once started, the service listens on ports 10249 and 10256 by default.
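Both can be checked locally: 10256 serves a /healthz endpoint and 10249 serves Prometheus metrics:

# curl http://127.0.0.1:10256/healthz
# curl http://127.0.0.1:10249/metrics | head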
With the deployment above complete, applications can now be created. Before starting, however, every Node must have the pause image available locally; otherwise, since the Google registry cannot be reached from mainland China, Pod creation will fail.
Run the following commands on each Node to work around the image problem:
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
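Alternatively, the standard kubelet flag --pod-infra-container-image can point directly at the mirror, avoiding the re-tag step; a sketch of the adjusted ExecStart line in kubelet.service:

ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubeconfig.yaml \
    --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.1 \
    --logtostderr=false --log-dir=/var/log/kubernetes --v=2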
Below, a simple application is created to verify that the cluster works properly.
1. Edit the nginx.yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: nginx
        ports:
        - containerPort: 80
2. Create it:
# kubectl create -f nginx.yaml
3. Check the status:
[root@node-1 ~]# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
myweb     2         2         2         3h
[root@node-1 ~]# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
myweb-qtgrv   1/1       Running   0          1h
myweb-z9d2c   1/1       Running   0          1h
[root@node-2 ~]# docker ps|grep nginx
067db96d0c97   nginx@sha256:0fb320e2a1b1620b4905facb3447e3d84ad36da0b2c8aa8fe3a5a81d1187b884   "nginx -g 'daemon ..."   About an hour ago   Up About an hour   k8s_myweb_myweb-qtgrv_default_3213ec67-5fef-11e8-9e43-000c295f81fb_0
dd8f7458e410   nginx@sha256:0fb320e2a1b1620b4905facb3447e3d84ad36da0b2c8aa8fe3a5a81d1187b884   "nginx -g 'daemon ..."   About an hour ago   Up About an hour   k8s_myweb_myweb-z9d2c_default_3214600e-5fef-11e8-9e43-000c295f81fb_0
4. Create a Service that maps the application to a port on each host:
# cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort        # expose the Service outside the cluster
  ports:
  - port: 80
    nodePort: 30001     # the port mapped on each host for external access
  selector:
    app: myweb

# Create the Service
# kubectl create -f nginx-service.yaml

# Verify:
[root@node-1 ~]# kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.222.0.1     <none>        443/TCP        1d
myweb        NodePort    10.222.35.97   <none>        80:30001/TCP   1h
5. Port 30001 is now mapped on every node running the kube-proxy service; accessing this port serves the default nginx welcome page.
# netstat -lntp|grep 30001
tcp6       0      0 :::30001                :::*                    LISTEN      7713/kube-proxy
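A final check from any machine that can reach node-2 (10.0.0.2), assuming nginx answers on the NodePort:

# curl -I http://10.0.0.2:30001    # expect an HTTP/1.1 200 OK response from nginx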
The steps above give us a working Kubernetes cluster, but in practice we still need a network add-on so that Pods can communicate with one another. Kubernetes does not provide networking itself; it relies on third-party network plugins, of which several are available. A later post will cover Kubernetes networking.