Kubernetes 1.11 Binary Installation

Tags (space-separated): k8s
June 13, 2019

This article is excerpted from https://k.i4t.com


For more articles, please follow https://i4t.com

What is Kubernetes?

Kubernetes is a complete platform for supporting distributed systems. It provides full cluster management capabilities, including multi-level security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in intelligent load balancer, powerful fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource-scheduling mechanism, and fine-grained resource quota management. Kubernetes also ships with a complete set of management tools covering development, testing, deployment, and operations monitoring. In short, Kubernetes is a new distributed-architecture solution built on container technology, and a one-stop, complete platform for developing and supporting distributed systems.

###Kubernetes Basic Services Overview
Here we only give a brief introduction to the basic Kubernetes components; later articles will cover them in detail!

####Kubernetes Service Overview
The Service is the core of the distributed cluster architecture. A Service object has the following key characteristics:

(1) It has a unique, assigned name (for example, mysql-server)
(2) It has a virtual IP (Cluster IP, Service IP, or VIP) and a port number
(3) It provides some kind of remote service capability
(4) It is mapped onto a group of container applications that provide this capability

Service processes currently serve requests over sockets, for example redis, memcache, MySQL, a web server, or a specific TCP server implementing some business logic. Although a Service is usually backed by multiple related service processes, each with its own Endpoint (IP + port), Kubernetes lets us reach a Service through its virtual Cluster IP + Service Port. Thanks to Kubernetes' built-in transparent load balancing and failure recovery, no matter how many backend processes there are, or whether a process is redeployed to another machine after a failure, our calls to the service are unaffected. More importantly, the Service itself does not change once created, which means we no longer have to worry about service IP addresses changing inside a Kubernetes cluster.


####Kubernetes Pod Overview

Pod concept
A Pod runs in an environment we call a Node, which can be a virtual machine or a physical machine in a private or public cloud; typically several hundred Pods run on one Node. Each Pod runs a special container called Pause, while the other containers are the business containers. The business containers share the Pause container's network stack and Volume mounts, so communication and data exchange between them is very efficient. When designing applications we can take advantage of this by putting a group of closely related service processes into the same Pod.

Not every Pod and the containers running in it are mapped to a Service; only the groups of Pods that provide a service (whether internal or external) are mapped to one.
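As a minimal sketch of the shared-network idea above (the pod name, labels, and images are only illustrative, not part of this installation), two containers in one Pod can reach each other over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # hypothetical name, for illustration only
  labels:
    app: demo
spec:
  containers:
  - name: web
    image: nginx:1.13.5-alpine
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # shares the Pause container's network namespace, so nginx is reachable on 127.0.0.1:80
    command: ["sh", "-c", "while true; do wget -qO- http://127.0.0.1:80 >/dev/null; sleep 10; done"]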

####How Service and Pod Are Associated

Containers provide strong isolation, so it makes sense to isolate the processes that serve a Service inside containers. Kubernetes designed the Pod object to wrap each service process into a Pod, where it runs as a Container. To establish the association between a Service and its Pods, Kubernetes first attaches a Label to each Pod: a Pod running MySQL gets the label name=mysql, a Pod running PHP gets name=php. The corresponding Service then defines a Label Selector; for example, the MySQL Service's selector condition is name=mysql, meaning that the Service applies to every Pod carrying the label name=mysql. This elegantly solves the Service-to-Pod association problem.
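A minimal sketch of that wiring, reusing the name=mysql label from the example above (the Service name and port are illustrative only):

apiVersion: v1
kind: Service
metadata:
  name: mysql-service          # illustrative name
spec:
  selector:
    name: mysql                # selects every Pod carrying the label name=mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306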

####Kubernetes RC Overview

About RC
In a Kubernetes cluster, you only need to create a Replication Controller (RC) for the Pods associated with a Service that needs to scale; scaling that Service, and later upgrading it, then become easy to handle.
An RC definition file includes the following three key points (see the example sketch below):

  • (1) The definition of the target Pod
  • (2) The number of replicas the target Pod should run (Replicas)
  • (3) The label (Label) of the target Pods to monitor

Once the RC is created and the system has created the Pods, Kubernetes uses the Label defined in the RC to select the matching Pod instances and continuously monitors their state and count. If the number of instances falls below the defined number of Replicas, a new Pod is created from the Pod template defined in the RC and scheduled onto a suitable Node, until the number of Pod instances reaches the target. The whole process is fully automated and needs no human intervention; scaling is just a matter of changing the replica count in the RC.
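A minimal ReplicationController sketch covering the three points above; the image and names are examples only:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc               # illustrative name
spec:
  replicas: 3                  # (2) desired number of Pod replicas
  selector:
    app: nginx                 # (3) label of the target Pods to monitor
  template:                    # (1) definition of the target Pod
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.5-alpine
        ports:
        - containerPort: 80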


####Kubernetes Master Overview

About the Master
The Master in Kubernetes is the cluster control node. Every Kubernetes cluster needs a Master node responsible for managing and controlling the entire cluster; essentially all Kubernetes control commands are sent to it, and it performs the actual work. All of the commands we run later are basically executed on the Master node. If the Master goes down or becomes unavailable, management of the containers in the cluster fails.

The following key processes run on the Master node:

  • [ ] Kubernetes API Server (kube-apiserver): the key process exposing the HTTP REST interface; it is the single entry point for create, delete, update, and query operations on all Kubernetes resources, and also the entry point for cluster control
  • [ ] Kubernetes Controller Manager (kube-controller-manager): the automated control center for all resource objects in Kubernetes
  • [ ] Kubernetes Scheduler (kube-scheduler): the process responsible for resource scheduling (Pod scheduling)

The Master node also needs to run an etcd service, because all resource object data in Kubernetes is stored in etcd.


####Kubernetes Node Overview

About Node
Besides the Master, the other machines in the cluster are called Nodes. Each Node is assigned some workload (Docker containers) by the Master; when a Node goes down, its workloads are automatically moved to other nodes by the Master.

Each Node runs the following key processes:

  • [x] kubelet: responsible for creating and stopping the containers of a Pod, and works closely with the Master to implement basic cluster management
  • [x] kube-proxy: the key component implementing Kubernetes Service communication and load balancing.
  • [x] Docker Engine (Docker): the Docker engine, responsible for creating and managing containers on the local machine.



####How Master and Node Work in Kubernetes

For cluster management, Kubernetes divides the machines in a cluster into one Master node and a group of worker nodes (Nodes). The Master runs the cluster-management processes kube-apiserver, kube-controller-manager, and kube-scheduler, which implement resource management, Pod scheduling, elastic scaling, security control, monitoring, and error correction for the whole cluster, all fully automatically. The Node is the worker node of the cluster and runs the actual applications; the smallest unit Kubernetes manages on a Node is the Pod. Nodes run the kubelet and kube-proxy service processes, which are responsible for creating, starting, monitoring, restarting, and destroying Pods, and for implementing software-mode load balancing.
For a detailed introduction to k8s, see https://k.i4t.com


Friendly reminder: for the whole environment you only need to change the IP addresses! Do not delete anything else.


1. Environment Preparation

This installation of Kubernetes is a single-master setup rather than a clustered one.
The environment preparation must be performed on both the master and the node.

The environment is as follows:

IP / Hostname / Role / Services
192.168.60.24 master master etcd, kube-apiserver, kube-controller-manager, kube-scheduler (if Node components are not installed on the master, the following are not needed: docker, kubelet, kube-proxy, calico)
192.168.60.25 node node docker, kubelet, kube-proxy, nginx (the node components on the master can skip installing nginx)
  • [ ] k8s component version: v1.11
  • [ ] docker version: v17.03
  • [ ] etcd version: v3.2.22
  • [ ] calico version: v3.1.3
  • [ ] dns version: 1.14.7

To avoid differences between your system and mine, I provide a CentOS 7.4 download link here; please keep your version consistent with mine.
Baidu Cloud, password: q2xj

Kubernetes version
This installation uses v1.11

Check the OS and kernel version

➜ cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

➜ uname -a
3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

#We will upgrade the kernel version

Friendly reminder: the following operations must be performed on both servers

Set the hostname

➜ hostnamectl set-hostname [master|node]
➜ bash

Set up SSH trust on the master

➜ yum install expect wget -y
➜ for i in 192.168.60.25;do
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.60.25
        expect {
                \"*yes/no*\" {send \"yes\r\"; exp_continue}
                \"*password*\" {send \"123456\r\"; exp_continue}
                \"*Password*\" {send \"123456\r\";}
        } "
done

Set hosts entries

➜ echo "192.168.60.25 node" >>/etc/hosts
➜ echo "192.168.60.24 master" >>/etc/hosts

Set up time synchronization

yum -y install ntp
 systemctl enable ntpd
 systemctl start ntpd
 ntpdate -u cn.pool.ntp.org
 hwclock --systohc
 timedatectl set-timezone Asia/Shanghai

Disable the swap partition

➜ swapoff -a     #temporarily disable swap
➜ vim /etc/fstab  #permanently disable swap
swap was on /dev/sda11 during installation
UUID=0a55fdb5-a9d8-4215-80f7-f42f75644f69 none  swap    sw      0       0
#comment out the swap entry, and you are done
#if you skip this and kubelet fails to start, you are on your own

Configure Yum repositories

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
 wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
 yum makecache
 yum install wget vim lsof net-tools lrzsz -y

Disable the firewall

systemctl stop firewalld
 systemctl disable firewalld
 setenforce 0
 sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

Upgrade the kernel

Don't ask why
yum update 
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml -y&&
sed -i s/saved/0/g /etc/default/grub&&
grub2-mkconfig -o /boot/grub2/grub.cfg && reboot

#Takes effect only after a reboot!

Kubernetes kernel upgrade failure (troubleshooting reference)

Check the kernel

➜ uname -a
Linux master 4.17.6-1.el7.elrepo.x86_64 #1 SMP Wed Jul 11 17:24:30 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

Set kernel parameters

echo "* soft nofile 190000" >> /etc/security/limits.conf
echo "* hard nofile 200000" >> /etc/security/limits.conf
echo "* soft nproc 252144" >> /etc/security/limits.conf
echo "* hard nproc 262144" >> /etc/security/limits.conf
tee /etc/sysctl.conf <<-'EOF'
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

net.ipv4.tcp_tw_recycle = 0
net.ipv4.ip_local_port_range = 10000 61000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_forward = 1
net.core.netdev_max_backlog = 2000
net.ipv4.tcp_mem = 131072  262144  524288
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_low_latency = 0
net.core.rmem_default = 256960
net.core.rmem_max = 513920
net.core.wmem_default = 256960
net.core.wmem_max = 513920
net.core.somaxconn = 2048
net.core.optmem_max = 81920
net.ipv4.tcp_mem = 131072  262144  524288
net.ipv4.tcp_rmem = 8760  256960  4088000
net.ipv4.tcp_wmem = 8760  256960  4088000
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_syn_retries = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
echo "options nf_conntrack hashsize=819200" >> /etc/modprobe.d/mlx4.conf 
modprobe br_netfilter
sysctl -p

2. Kubernetes Install

Master configuration

2.1 Install the CFSSL Tools

Tool notes:
client certificate: used by the server side to authenticate clients, e.g. etcdctl, etcd proxy, fleetctl, the docker client
server certificate: used by the server side; clients use it to verify the server's identity, e.g. the docker daemon, kube-apiserver
peer certificate: a dual-purpose certificate used for communication between etcd cluster members

Install the CFSSL tools

➜ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/bin/cfssl

➜ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/bin/cfssljson

➜ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2.2 Generate ETCD Certificates

etcd is the main database of the Kubernetes cluster, so it must be installed and started before the various Kubernetes services

Create the CA certificate

#Create the etcd directory, used for generating the etcd certificates; please keep the steps consistent with mine
➜ mkdir /root/etcd_ssl && cd /root/etcd_ssl

cat > etcd-root-ca-csr.json << EOF
{
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "O": "etcd",
      "OU": "etcd Security",
      "L": "beijing",
      "ST": "beijing",
      "C": "CN"
    }
  ],
  "CN": "etcd-root-ca"
}
EOF

etcd cluster certificate signing configuration

cat >  etcd-gencert.json << EOF  
{                                 
  "signing": {                    
    "default": {                  
      "expiry": "87600h"           
    },                            
    "profiles": {                 
      "etcd": {             
        "usages": [               
            "signing",            
            "key encipherment",   
            "server auth", 
            "client auth"  
        ],  
        "expiry": "87600h"  
      }  
    }  
  }  
}  
EOF

# The expiry has been set to 87600h
ca-config.json: multiple profiles can be defined, each with its own expiry, usage scenario, and other parameters; a specific profile is then selected when signing certificates;
signing: indicates the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
server auth: indicates the client can use this CA to verify certificates presented by servers;
client auth: indicates the server can use this CA to verify certificates presented by clients;

etcd certificate signing request

cat > etcd-csr.json << EOF
{
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "O": "etcd",
      "OU": "etcd Security",
      "L": "beijing",
      "ST": "beijing",
      "C": "CN"
    }
  ],
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "192.168.60.24"
  ]
}
EOF

$ hosts: fill in the master's address

Create the root CA

cfssl gencert --initca=true etcd-root-ca-csr.json \
| cfssljson --bare etcd-root-ca

Generate the etcd certificate

cfssl gencert --ca etcd-root-ca.pem \
--ca-key etcd-root-ca-key.pem \
--config etcd-gencert.json \
-profile=etcd etcd-csr.json | cfssljson --bare etcd

The certificates needed by ETCD are as follows

➜ ll
total 36
-rw-r--r-- 1 root root 1765 Jul 12 10:48 etcd.csr
-rw-r--r-- 1 root root  282 Jul 12 10:48 etcd-csr.json
-rw-r--r-- 1 root root  471 Jul 12 10:48 etcd-gencert.json
-rw------- 1 root root 3243 Jul 12 10:48 etcd-key.pem
-rw-r--r-- 1 root root 2151 Jul 12 10:48 etcd.pem
-rw-r--r-- 1 root root 1708 Jul 12 10:48 etcd-root-ca.csr
-rw-r--r-- 1 root root  218 Jul 12 10:48 etcd-root-ca-csr.json
-rw------- 1 root root 3243 Jul 12 10:48 etcd-root-ca-key.pem
-rw-r--r-- 1 root root 2078 Jul 12 10:48 etcd-root-ca.pem
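Optionally, you can dump a generated certificate with cfssl-certinfo (installed in 2.1) to confirm the hosts, usages, and expiry came out as intended; a quick check might look like this:

cfssl-certinfo -cert etcd.pem              # the signed etcd certificate
cfssl-certinfo -cert etcd-root-ca.pem      # the root CA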

2.3 Install and Start ETCD

Only the apiserver and the Controller Manager need to connect to ETCD

yum install etcd -y   # or upload the rpm package and install it with rpm -ivh

Distribute the etcd certificates
 ➜ mkdir -p /etc/etcd/ssl && cd /root/etcd_ssl

Check the etcd certificates
➜ ll /root/etcd_ssl/
total 36
-rw-r--r--. 1 root root 1765 Jul 20 10:46 etcd.csr
-rw-r--r--. 1 root root  282 Jul 20 10:42 etcd-csr.json
-rw-r--r--. 1 root root  471 Jul 20 10:40 etcd-gencert.json
-rw-------. 1 root root 3243 Jul 20 10:46 etcd-key.pem
-rw-r--r--. 1 root root 2151 Jul 20 10:46 etcd.pem
-rw-r--r--. 1 root root 1708 Jul 20 10:46 etcd-root-ca.csr
-rw-r--r--. 1 root root  218 Jul 20 10:40 etcd-root-ca-csr.json
-rw-------. 1 root root 3243 Jul 20 10:46 etcd-root-ca-key.pem
-rw-r--r--. 1 root root 2078 Jul 20 10:46 etcd-root-ca.pem

Copy the certificates to the target directory
mkdir /etc/etcd/ssl
\cp *.pem /etc/etcd/ssl/
chown -R etcd:etcd /etc/etcd/ssl
chown -R etcd:etcd /var/lib/etcd
chmod -R 644 /etc/etcd/ssl/
chmod 755 /etc/etcd/ssl/

Modify the ETCD configuration on the master
➜ cp /etc/etcd/etcd.conf{,.bak} && >/etc/etcd/etcd.conf

cat >/etc/etcd/etcd.conf <<EOF
# [member]
ETCD_NAME=etcd
ETCD_DATA_DIR="/var/lib/etcd/etcd.etcd"
ETCD_WAL_DIR="/var/lib/etcd/wal"
ETCD_SNAPSHOT_COUNT="100"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.60.24:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.60.24:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
#ETCD_CORS=""

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.60.24:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd=https://192.168.60.24:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.60.24:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"

# [proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"

# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_PEER_AUTO_TLS="true"

# [logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF

###Change 192.168.60.24 to the master's address

Start etcd

systemctl daemon-reload
systemctl restart etcd
systemctl enable etcd

Test that etcd is working

export ETCDCTL_API=3
etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.60.24:2379 endpoint health

##When testing, just replace the IP with the master's IP; separate multiple IPs with commas

A healthy state looks like this:

[root@master ~]# export ETCDCTL_API=3
[root@master ~]# etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.60.24:2379 endpoint health
https://192.168.60.24:2379 is healthy: successfully committed proposal: took = 643.432µs
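Beyond the health check, a quick write/read round-trip also confirms the store accepts data; the key name /i4t-test below is arbitrary and only used for this test:

export ETCDCTL_API=3
etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.60.24:2379 put /i4t-test ok
etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.60.24:2379 get /i4t-test
etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.60.24:2379 del /i4t-test   # clean up the test key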

Check ETCD port 2379

➜ netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 192.168.60.24:2379      0.0.0.0:*               LISTEN      2016/etcd           
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      2016/etcd           
tcp        0      0 192.168.60.24:2380      0.0.0.0:*               LISTEN      2016/etcd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      965/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1081/master         
tcp6       0      0 :::22                   :::*                    LISTEN      965/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1081/master         
udp        0      0 127.0.0.1:323           0.0.0.0:*                           721/chronyd         
udp6       0      0 ::1:323                 :::*                                721/chronyd

########### ETCD installation and configuration is complete ###############

2.4 Install Docker

Download the Docker packages

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm

Downloads from the network often time out, so we have already uploaded the packages; you can simply download the ones I provide and install them.
Docker and K8S package download, password: 1zov

Install and adjust the configuration

➜ yum install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
➜ yum install docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm -y

Enable Docker at boot and start it
systemctl enable docker 
systemctl start docker 

Adjust the Docker configuration
sed -i '/ExecStart=\/usr\/bin\/dockerd/i\ExecStartPost=\/sbin/iptables -I FORWARD -s 0.0.0.0\/0 -d 0.0.0.0\/0 -j ACCEPT' /usr/lib/systemd/system/docker.service
sed -i '/dockerd/s/$/ \-\-storage\-driver\=overlay2/g' /usr/lib/systemd/system/docker.service

Restart Docker
systemctl daemon-reload 
systemctl restart docker

If an old version was installed before, you can remove it and install the new one

yum remove docker \
                      docker-common \
                      docker-selinux \
                      docker-engine

2.5 Install Kubernetes

How to download Kubernetes

The kubernetes.tar.gz archive contains the Kubernetes service binaries, documentation, and examples; kubernetes-src.tar.gz contains the full source code. You can also just use kubernetes-server-linux-amd64.tar.gz from Server Binaries, which contains all of the service binaries Kubernetes needs to run.

Kubernetes download: https://github.com/kubernetes/kubernetes/releases

GitHub download

Docker and K8S package download, password: 1zov

Kubernetes setup

tar xf kubernetes-server-linux-amd64.tar.gz
for i in hyperkube kube-apiserver kube-scheduler kubelet kube-controller-manager kubectl kube-proxy;do
cp ./kubernetes/server/bin/$i /usr/bin/
chmod 755 /usr/bin/$i
done

2.6 Generate and Distribute the Kubernetes Certificates

Set up the certificate directory

mkdir /root/kubernets_ssl && cd /root/kubernets_ssl

k8s-root-ca-csr.json

cat > k8s-root-ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

k8s-gencert.json

cat >  k8s-gencert.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

kubernetes-csr.json

$ In the hosts field, list all the node IPs you will use (the master). Create the kubernetes certificate signing request file kubernetes-csr.json:

cat >kubernetes-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "10.254.0.1",
        "192.168.60.24",
        "localhost",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

kube-proxy-csr.json

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

admin-csr.json

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

Generate the Kubernetes certificates

➜ cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca

➜ for targetName in kubernetes admin kube-proxy; do
    cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName
done

#Generate the bootstrap configuration

export KUBE_APISERVER="https://127.0.0.1:6443"
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "Tokne: ${BOOTSTRAP_TOKEN}"
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Configure the kubeconfig files

# On the Master this address should be https://MasterIP:6443

Enter the Kubernetes certificate directory /root/kubernets_ssl

export KUBE_APISERVER="https://127.0.0.1:6443"

# Set cluster parameters

kubectl config set-cluster kubernetes \
  --certificate-authority=k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# echo "Create kube-proxy kubeconfig..."

kubectl config set-cluster kubernetes \
  --certificate-authority=k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# kube-proxy

kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# kube-proxy_config

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig 
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Generate the advanced audit configuration

cat >> audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF

#Distribute the Kubernetes certificates#####

cd /root/kubernets_ssl
mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
\cp *.kubeconfig token.csv audit-policy.yaml /etc/kubernetes
useradd -s /sbin/nologin -M kube
chown -R kube:kube /etc/kubernetes/ssl

# Generate the kubectl configuration

cd /root/kubernets_ssl
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443

kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem

kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

kubectl config use-context kubernetes

# Set log directory permissions

mkdir -p /var/log/kube-audit /usr/libexec/kubernetes
chown -R kube:kube /var/log/kube-audit /usr/libexec/kubernetes
chmod -R 755 /var/log/kube-audit /usr/libexec/kubernetes

2.7 Service Configuration

Master operations

With the certificates and rpms installed, you only need to modify the configuration (located in the /etc/kubernetes directory) and start the components.

cd /etc/kubernetes

config: common configuration

For the following, keep the defaults where nothing is noted; anything that must be changed is called out in the comments.

cat > /etc/kubernetes/config <<EOF
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
EOF

apiserver configuration

cat > /etc/kubernetes/apiserver <<EOF
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=0.0.0.0 --insecure-bind-address=0.0.0.0 --bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS=--etcd-servers=https://192.168.60.24:2379

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC,Node \
               --endpoint-reconciler-type=lease \
               --runtime-config=batch/v2alpha1=true \
               --anonymous-auth=false \
               --kubelet-https=true \
               --enable-bootstrap-token-auth \
               --token-auth-file=/etc/kubernetes/token.csv \
               --service-node-port-range=30000-50000 \
               --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
               --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
               --client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
               --service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
               --etcd-quorum-read=true \
               --storage-backend=etcd3 \
               --etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem \
               --etcd-certfile=/etc/etcd/ssl/etcd.pem \
               --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
               --enable-swagger-ui=true \
               --apiserver-count=3 \
               --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
               --audit-log-maxage=30 \
               --audit-log-maxbackup=3 \
               --audit-log-maxsize=100 \
               --audit-log-path=/var/log/kube-audit/audit.log \
               --event-ttl=1h "
EOF

#The only address that needs changing is etcd's; for a cluster, separate the entries with commas
Change 192.168.60.24:2379 to the master's IP

controller-manager configuration

cat > /etc/kubernetes/controller-manager <<EOF
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \
                              --service-cluster-ip-range=10.254.0.0/16 \
                              --cluster-name=kubernetes \
                              --cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                              --cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                              --service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                              --root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                              --leader-elect=true \
                              --node-monitor-grace-period=40s \
                              --node-monitor-period=5s \
                              --pod-eviction-timeout=60s"
EOF

scheduler configuration

cat >scheduler <<EOF
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"
EOF

Set up the service unit files
The Kubernetes component configuration has been generated; next we configure the systemd units that start the components.
###kube-apiserver.service unit###

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=root
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

###kube-controller-manager.service unit###

vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=root
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

###kube-scheduler.service unit###

vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=root
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start kube-apiserver, kube-controller-manager, and kube-scheduler

systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler

Enable them at boot
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler

Tip: kube-apiserver is the primary service; if the apiserver fails to start, the others will fail as well

Verify that it works

[root@master system]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}
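Optionally, kubectl cluster-info gives a one-line confirmation of which API server kubectl is talking to (a sanity check only, not required by the installation):

kubectl cluster-info        # prints the API server endpoint from the kubectl config generated above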

#Create a ClusterRoleBinding
Because kubelet uses TLS Bootstrapping, under the RBAC policy the kubelet-bootstrap user it runs as has no permission to access the API at all.
So we must pre-create a ClusterRoleBinding in the cluster that grants it the system:node-bootstrapper ClusterRole.

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Delete command ------ for reference only, do not run it!
kubectl delete  clusterrolebinding kubelet-bootstrap

2.8 Installing the Node Components on the Master

The node components can also be installed on the master.

On the master, install the node components kube-proxy and kubelet.

######Kubelet configuration

cat >/etc/kubernetes/kubelet <<EOF
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.60.24"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=master"

# location of the api-server
# KUBELET_API_SERVER=""

# Add your own!
KUBELET_ARGS="--cgroup-driver=cgroupfs \
              --cluster-dns=10.254.0.2 \
              --resolv-conf=/etc/resolv.conf \
              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-domain=cluster.local. \
              --hairpin-mode promiscuous-bridge \
              --serialize-image-pulls=false \
              --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
EOF

Change the IP address and hostname to the master's; nothing else needs modification.

Create the service unit
###kubelet.service unit###
File name: kubelet.service

vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Create the working directory

If the /var/lib/kubelet directory does not exist, we need to create it manually
mkdir /var/lib/kubelet -p

#kube-proxy configuration

cat >/etc/kubernetes/proxy <<EOF
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.60.24 \
                 --hostname-override=master \
                 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                 --cluster-cidr=10.254.0.0/16"
EOF

#master ip && name

kube-proxy service unit

vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start kubelet and kube-proxy

systemctl daemon-reload
systemctl restart kube-proxy
systemctl restart kubelet

Once they are up, the kubelet log shows entries like the one below, telling us the certificate request has been created but still needs to be approved.
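Since kubelet runs as a systemd unit here, its log can be followed with journalctl (a sketch, assuming the default journald logging):

journalctl -u kubelet -f                                          # follow the kubelet log
journalctl -u kubelet --since "10 min ago" | grep -i bootstrap    # look for the bootstrap messages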


Check it with kubectl get csr


3. Kubernetes Node Install

Node configuration

3.1 Install Docker

Without further ado:
Docker and K8S package download, password: 1zov

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm

yum install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
yum install docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm -y

systemctl enable docker 
systemctl start docker 

sed -i '/ExecStart=\/usr\/bin\/dockerd/i\ExecStartPost=\/sbin/iptables -I FORWARD -s 0.0.0.0\/0 -d 0.0.0.0\/0 -j ACCEPT' /usr/lib/systemd/system/docker.service
sed -i '/dockerd/s/$/ \-\-storage\-driver\=overlay2/g' /usr/lib/systemd/system/docker.service

systemctl daemon-reload 
systemctl restart docker

3.2 Distribute Certificates

We need to distribute the kubernetes and etcd certificates from the Master to the Node.
Although the Node does not run Etcd itself, network components such as calico or flannel need to reach Etcd, so they use the Etcd certificates.

Copy hyperkube, kubelet, kubectl, and kube-proxy from the Master node to the node. These copy steps are all performed on the master.

for i in hyperkube kubelet kubectl kube-proxy;do
scp ./kubernetes/server/bin/$i 192.168.60.25:/usr/bin/
ssh 192.168.60.25 chmod 755 /usr/bin/$i
done

##The IP here is the node's IP
Run this one level above the K8s binary directory; if the for loop is not obvious to you, you should not be playing with K8s yet

Distribute the K8s certificates
cd into the K8s certificate directory

cd /root/kubernets_ssl/
for IP in 192.168.60.25;do
    ssh $IP mkdir -p /etc/kubernetes/ssl
    scp *.pem $IP:/etc/kubernetes/ssl
    scp *.kubeconfig token.csv audit-policy.yaml $IP:/etc/kubernetes
    ssh $IP useradd -s /sbin/nologin -M kube
    ssh $IP chown -R kube:kube /etc/kubernetes/ssl
done

#run on the master

Distribute the ETCD certificates

for IP in 192.168.60.25;do
    cd /root/etcd_ssl
    ssh $IP mkdir -p /etc/etcd/ssl
    scp *.pem $IP:/etc/etcd/ssl
    ssh $IP chmod -R 644 /etc/etcd/ssl/*
    ssh $IP chmod 755 /etc/etcd/ssl
done

#run on the master

Set file permissions for the Node

ssh root@192.168.60.25 mkdir -p /var/log/kube-audit /usr/libexec/kubernetes &&
ssh root@192.168.60.25 chown -R kube:kube /var/log/kube-audit /usr/libexec/kubernetes &&
ssh root@192.168.60.25 chmod -R 755 /var/log/kube-audit /usr/libexec/kubernetes

#run on the master

3.3 Node Configuration

On the node, the configuration files also live in the /etc/kubernetes directory.
The node only needs the config, kubelet, and proxy configuration files modified, as follows.

#config: common configuration

Note: none of the config files (including kubelet and proxy below) define the API Server address, because kubelet and kube-proxy are started with the --require-kubeconfig option, which makes them read the API Server address from the *.kubeconfig files and ignore whatever is set in the config file;
so an address set in the config file would have no effect anyway

cat > /etc/kubernetes/config <<EOF
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
# KUBE_MASTER="--master=http://127.0.0.1:8080"
EOF

# kubelet configuration

cat >/etc/kubernetes/kubelet <<EOF
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.60.25"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node"

# location of the api-server
# KUBELET_API_SERVER=""

# Add your own!
KUBELET_ARGS="--cgroup-driver=cgroupfs \
              --cluster-dns=10.254.0.2 \
              --resolv-conf=/etc/resolv.conf \
              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-domain=cluster.local. \
              --hairpin-mode promiscuous-bridge \
              --serialize-image-pulls=false \
              --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
EOF

#The IP address and hostname here are the node's

Copy the service unit

vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

mkdir /var/lib/kubelet -p
The working directory is /var/lib/kubelet and must be created manually

Start kubelet

sed -i 's#127.0.0.1#192.168.60.24#g' /etc/kubernetes/bootstrap.kubeconfig
#This address is the master's
#This verifies that kubelet can reach the master; nginx is started later to make the master highly available

systemctl daemon-reload
systemctl restart kubelet
systemctl enable kubelet

#Modify the kube-proxy configuration

cat >/etc/kubernetes/proxy <<EOF
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.60.25 \
                 --hostname-override=node \
                 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                 --cluster-cidr=10.254.0.0/16"
EOF

#Replace with the node's IP
--bind-address= the node's IP address
--hostname-override= the node's hostname

kube-proxy service unit

vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

3.4 Create the nginx Proxy

At this point every node should connect to its local nginx proxy, and nginx load-balances across all the api servers; the nginx proxy configuration follows below.
We can also do without the nginx proxy; in that case just change the API Server address in bootstrap.kubeconfig and kube-proxy.kubeconfig.

Note: for the kubelet started on the master node itself, no nginx load balancer is needed; you can skip this step and set the apiserver address in kubelet.kubeconfig and kube-proxy.kubeconfig to the current master IP and port 6443.

# Create the configuration directory

mkdir -p /etc/nginx

# Write the proxy configuration

cat > /etc/nginx/nginx.conf <<EOF
error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

stream {
    upstream kube_apiserver {
        least_conn;
        server 192.168.60.24:6443 weight=20 max_fails=1 fail_timeout=10s;
        #the server entry proxies the master's IP
    }

    server {
        listen        0.0.0.0:6443;
        proxy_pass    kube_apiserver;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
}
EOF

##The address proxied in the server block should be the master's apiserver address and port

# Update permissions

chmod +r /etc/nginx/nginx.conf

#Start the nginx docker container to do the forwarding

docker run -it -d -p 127.0.0.1:6443:6443 -v /etc/nginx:/etc/nginx  --name nginx-proxy --net=host --restart=on-failure:5 --memory=512M  nginx:1.13.5-alpine

Tip: you can pull the nginx image in advance
docker pull daocloud.io/library/nginx:1.13.5-alpine

To keep nginx reliable while staying convenient, nginx on the node runs in a docker container and is supervised by systemd; the systemd configuration is as follows

cat >/etc/systemd/system/nginx-proxy.service <<EOF 
[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker start nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s

[Install]
WantedBy=multi-user.target
EOF

➜ systemctl daemon-reload
➜ systemctl start nginx-proxy
➜ systemctl enable nginx-proxy

Make sure port 6443 is listening before starting kubelet

sed -i 's#192.168.60.24#127.0.0.1#g' /etc/kubernetes/bootstrap.kubeconfig

Check port 6443

[root@node kubernetes]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      2042/kube-proxy     
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      1925/nginx: master  
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      966/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1050/master         
tcp6       0      0 :::10256                :::*                    LISTEN      2042/kube-proxy     
tcp6       0      0 :::22                   :::*                    LISTEN      966/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1050/master         
udp        0      0 127.0.0.1:323           0.0.0.0:*                           717/chronyd         
udp6       0      0 ::1:323                 :::*                                717/chronyd  

[root@node kubernetes]# lsof -i:6443
lsof: no pwd entry for UID 100
lsof: no pwd entry for UID 100
COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
kubelet 1765     root    3u  IPv4  27573      0t0  TCP node1:39246->master:sun-sr-https (ESTABLISHED)
nginx   1925     root    4u  IPv4  29028      0t0  TCP *:sun-sr-https (LISTEN)
lsof: no pwd entry for UID 100
nginx   1934      100    4u  IPv4  29028      0t0  TCP *:sun-sr-https (LISTEN)
lsof: no pwd entry for UID 100
nginx   1935      100    4u  IPv4  29028      0t0  TCP *:sun-sr-https (LISTEN)

Start kubelet and kube-proxy
Before starting kubelet it is best to restart kube-proxy first

systemctl restart kube-proxy
systemctl enable kubelet

systemctl daemon-reload
systemctl restart kubelet
systemctl enable kubelet

Remember to check the kubelet status!

3.5 Certificate Approval

Because TLS Bootstrapping is used, kubelet does not join the cluster immediately after starting; it first requests a certificate, and you can see output like the following in the log

Jul 24 13:55:50 master kubelet[1671]: I0724 13:55:50.877027    1671 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file

Now we just need to approve its certificate request on the master
# Check the csr

➜  kubectl get csr
NAME        AGE       REQUESTOR           CONDITION
csr-l9d25   2m        kubelet-bootstrap   Pending

If kubelet has been configured and started on both machines, two entries show up here: one for the master and one for the node

# Approve the certificate

kubectl certificate approve csr-l9d25  
#csr-l9d25 is the name of the CSR

Or run: kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve

# Check the nodes
After the certificates have been approved

[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     <none>    40m       v1.11.0
node      Ready     <none>    39m       v1.11.0

After approval, the kubelet kubeconfig file and key pair are generated automatically:

$ ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2280 Nov  7 10:26 /etc/kubernetes/kubelet.kubeconfig
$ ls -l /etc/kubernetes/ssl/kubelet*
-rw-r--r-- 1 root root 1046 Nov  7 10:26 /etc/kubernetes/ssl/kubelet-client.crt
-rw------- 1 root root  227 Nov  7 10:22 /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r-- 1 root root 1115 Nov  7 10:16 /etc/kubernetes/ssl/kubelet.crt
-rw------- 1 root root 1675 Nov  7 10:16 /etc/kubernetes/ssl/kubelet.key

#Note:
If the apiserver is not running, nothing after this point can be done
The IP addresses configured in kubelet are always the local machine's (on the master you configure the master's own node components)
On the Node, start nginx-proxy before kube-proxy. The 127.0.0.1:6443 address configured in kube-proxy is effectively master:6443


4. K8s Component Installation

4.1 About Calico


calico is an interesting virtual-network solution: it builds the network purely from routing rules and announces routes over the BGP protocol.
Its advantage is that the network formed by the endpoints is a plain layer-3 network; packet flow is controlled entirely by routing rules, with no overlay or other extra overhead.
calico endpoints can move between hosts, and ACLs are implemented.
Its drawbacks: the number of routes equals the number of containers, which can easily exceed the capacity of routers, layer-3 switches, or even the nodes themselves, limiting how far the network can grow.
calico installs a large (huge) number of iptables rules and routes on each node, which makes operations and troubleshooting difficult.
By design calico cannot support VPCs; containers can only get IPs from the ranges calico manages.
calico currently has no traffic-control feature, so a few containers can hog most of a node's bandwidth.
The scale of a calico network is limited by the scale of the BGP network.

Terminology

endpoint: a network interface attached to the calico network
AS: an autonomous system, which exchanges routing information with other ASes via BGP
ibgp: a BGP Speaker inside an AS; it exchanges routes with ibgp and ebgp speakers in the same AS.
ebgp: a BGP Speaker at the edge of an AS; it exchanges routes with ibgp speakers in the same AS and with the ebgp speakers of other ASes.

workloadEndpoint: an endpoint used by a virtual machine or container
hostEndpoints: the address of a physical machine (node)

Networking principle

The core of calico networking is IP routing; each container or virtual machine is assigned a workload-endpoint (wl).

When container A on nodeA accesses container B on nodeB:

The key question is: how does nodeA learn the next-hop address? The answer is that nodes exchange routing information with each other over BGP.

Each node runs the software router bird, configured as a BGP Speaker, and exchanges routing information with the other nodes via BGP.

You can think of it as every node announcing to the others:

"I am X.X.X.X; a certain IP or subnet lives on me, and its next hop is me."
Through these announcements every node learns the next-hop address of every workload-endpoint.

Calico components:

Felix: the Calico agent; it runs on every node and sets up the network for containers: IPs, routing rules, iptables rules, and so on
etcd: calico's backend store
BIRD: the BGP Client, responsible for advertising the routes that Felix sets up on each node to the rest of the Calico network (via BGP).
BGP Route Reflector: hierarchical route distribution for large clusters.
calico: the calico command-line management tool

calico-node: the calico service process; it sets up Pod network resources and keeps Pod networking reachable between Nodes. It must run in HostNetwork mode, using the host's network directly.

install-cni: installs the CNI binaries into /opt/cni/bin and the corresponding network configuration files into /etc/cni/net.d on each Node.

Calico is a virtual-network tool aimed at data centers. Built on BGP, routing tables, and iptables, it implements a layer-3 network with no packet encapsulation and is easy to debug. It still has some small gaps, for example the stable version cannot yet support private networks, but hopefully later versions will be even stronger.


References:
https://blog.csdn.net/ptmozhu/article/details/70159919

http://www.lijiaocn.com/%E9%A1%B9%E7%9B%AE/2017/04/11/calico-usage.html

4.2 Calico Installation and Configuration

Calico is fairly simple to deploy nowadays; you only need to create the yml files below

# Fetch the Calico yaml; we use version 3.1, since lower versions have bugs

wget http://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml
wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml

If you have network problems, look further down for my Baidu Cloud link

#Replace the Etcd address; the IP here (the master) is the etcd address

sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://192.168.60.24:2379\"@gi' calico.yaml

# Replace the Etcd certificates
Modify the Etcd-related configuration; the main changes are listed below (the etcd certificate contents must be base64-encoded)

export ETCD_CERT=`cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n'`
export ETCD_KEY=`cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n'`
export ETCD_CA=`cat /etc/etcd/ssl/etcd-root-ca.pem | base64 | tr -d '\n'`

sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml
sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml
sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml

sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml

# Set calico's address pool; make sure it does not overlap with the cluster IP range or the host network segment

sed -i s/192.168.0.0/172.16.0.0/g calico.yaml

Modify the kubelet configuration

The official Calico documentation requires kubelet to start with the cni network plugin, --network-plugin=cni, and kube-proxy
must not be started with --masquerade-all (it conflicts with Calico policy), so every kubelet and proxy configuration file needs to be modified

#Modify the kubelet configuration on all machines (both master & node) and add the following parameter to its run arguments

vim /etc/kubernetes/kubelet
              --network-plugin=cni 

#At this step it is best to restart the kubelet and docker services to avoid errors caused by stale configuration
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet

systemctl start kube-proxy.service
systemctl enable kube-proxy.service

Run the deployment. Note that with RBAC enabled, the ClusterRole and ClusterRoleBinding need to be created separately
https://www.kubernetes.org.cn/1879.html
RoleBinding and ClusterRoleBinding
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding

##Some of the images have to be pulled from Docker Hub; here we can import them instead
Image download link, password: ibyt
Import the images (both master and node need them)
pause.tar

If you do not import the images, the pulls will time out

Events:
  Type     Reason                  Age               From               Message
  ----     ------                  ----              ----               -------
  Normal   Scheduled               51s               default-scheduler  Successfully assigned default/nginx-deployment-7c5b578d88-lckk2 to node
  Warning  FailedCreatePodSandBox  5s (x3 over 43s)  kubelet, node      Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection timed out

Tip: the calico images are hosted abroad, so I have exported them; import them with docker load < calico.tar
calico image and yaml bundle, password: wxi1

It is recommended that the master and node use the same calico images

[root@node ~]# docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
nginx                       1.13.2-alpine       2d92198f40ec        12 months ago       15.5 MB
daocloud.io/library/nginx   1.13.2-alpine       2d92198f40ec        12 months ago       15.5 MB
[root@node ~]# 
[root@node ~]# 
[root@node ~]# docker load < calico-node.tar
cd7100a72410: Loading layer [==================================================>] 4.403 MB/4.403 MB
ddc4cb8dae60: Loading layer [==================================================>]  7.84 MB/7.84 MB
77087b8943a2: Loading layer [==================================================>] 249.3 kB/249.3 kB
c7227c83afaf: Loading layer [==================================================>] 4.801 MB/4.801 MB
2e0e333a66b6: Loading layer [==================================================>] 231.8 MB/231.8 MB
Loaded image: quay.io/calico/node:v3.1.3

The master has the following images
[root@master ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
quay.io/calico/node                    v3.1.3              7eca10056c8e        7 weeks ago         248 MB
quay.io/calico/kube-controllers        v3.1.3              240a82836573        7 weeks ago         55 MB
quay.io/calico/cni                     v3.1.3              9f355e076ea7        7 weeks ago         68.8 MB
gcr.io/google_containers/pause-amd64   3.0                 99e59f495ffa        2 years ago         747 kB
[root@master ~]#

@@@@@@@@@@@@@@@@@@@@@@@@@

The Node has the following images
[root@node ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
quay.io/calico/node                    v3.1.3              7eca10056c8e        7 weeks ago         248 MB
quay.io/calico/cni                     v3.1.3              9f355e076ea7        7 weeks ago         68.8 MB
nginx                                  1.13.5-alpine       ea7bef82810a        9 months ago        15.5 MB
gcr.io/google_containers/pause-amd64   3.0                 99e59f495ffa        2 years ago         747 kB

Create the pods and rbac

kubectl apply -f rbac.yaml 
kubectl create -f calico.yaml

After startup, check the pods

[root@master ~]# kubectl get pod -o wide --namespace=kube-system
NAME                                        READY     STATUS    RESTARTS   AGE       IP              NODE
calico-node-8977h                           2/2       Running   0          2m        192.168.60.25   node
calico-node-bl9mf                           2/2       Running   0          2m        192.168.60.24   master
calico-policy-controller-79bc74b848-7l6zb   1/1       Running   0          2m        192.168.60.24   master

Pod yaml reference: https://mritd.me/2017/07/31/calico-yml-bug/

calicoctl
Since calicoctl 1.0, everything calicoctl manages is a resource; the ip pools, profiles, policies and so on of earlier versions are all resources. Resources are defined in yaml or json format, created and applied with calicoctl create or apply, and inspected with calicoctl get (see the example after the datastore configuration below)

Download calicoctl

wget https://github.com/projectcalico/calicoctl/releases/download/v3.1.3/calicoctl
chmod +x calicoctl 
mv calicoctl /usr/bin/

#If the download does not work, scroll up; I have uploaded it to Baidu Cloud

Check that calicoctl installed successfully

[root@master yaml]# calicoctl version
Version:      v1.3.0
Build date:   
Git commit:   d2babb6

Configure the calicoctl datastore

[root@master ~]# mkdir -p /etc/calico/

#Edit the calicoctl configuration file

The default download is 3.1; to use 2.6, change the version in the URL
The 2.6 configuration is as follows

cat > /etc/calico/calicoctl.cfg<<EOF
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "https://192.168.60.24:2379"
  etcdKeyFile: "/etc/etcd/ssl/etcd-key.pem"
  etcdCertFile: "/etc/etcd/ssl/etcd.pem"
  etcdCACertFile: "/etc/etcd/ssl/etcd-root-ca.pem"
EOF
#It needs to connect to ETCD; the address here is etcd's (on the Master)

For 3.1 you only need to adjust it accordingly

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "etcdv3"
  etcdEndpoints: "https://192.168.60.24:2379"
  etcdKeyFile: "/etc/etcd/ssl/etcd-key.pem"
  etcdCertFile: "/etc/etcd/ssl/etcd.pem"
  etcdCACertFile: "/etc/etcd/ssl/etcd-root-ca.pem"

Official documentation: https://docs.projectcalico.org/v3.1/usage/calicoctl/configure/

Different versions have different configurations; refer to the official documentation
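Following the resource model described earlier, a couple of read-only queries are an easy way to confirm the datastore configuration works (resource names as in the calicoctl docs; the output depends on what calico.yaml created):

calicoctl get ippool -o yaml     # the address pool created by calico.yaml
calicoctl get node               # the calico node resources registered by calico-node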

#Check the calico status

[root@master calico]# calicoctl node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+---------------+-------------------+-------+----------+-------------+
| 192.168.60.25 | node-to-node mesh | up    | 06:13:41 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

Check the deployments

[root@master ~]# kubectl get deployment  --namespace=kube-system 
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico-kube-controllers    1         1         1            1           4h
calico-policy-controller   0         0         0            0           4h

[root@master ~]# kubectl get pods --namespace=kube-system  -o wide
NAME                                      READY     STATUS    RESTARTS   AGE       IP              NODE
calico-kube-controllers-b785696ff-b7kjv   1/1       Running   0          4h        192.168.60.25   node
calico-node-szl6m                         2/2       Running   0          4h        192.168.60.25   node
calico-node-tl4xc                         2/2       Running   0          4h        192.168.60.24   master

Check what has been created

[root@master ~]# kubectl get pod,svc -n kube-system
NAME                                          READY     STATUS    RESTARTS   AGE
pod/calico-kube-controllers-b785696ff-b7kjv   1/1       Running   0          4h
pod/calico-node-szl6m                         2/2       Running   0          4h
pod/calico-node-tl4xc                         2/2       Running   0          4h
pod/kube-dns-66544b5b44-vg8lw                 2/3       Running   5          4m

Test
After creating calico we need to test that it works

cat > test.service.yaml << EOF
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31000
  type: NodePort
EOF

  ##The exposed port is 31000

Edit the deploy file

cat > test.deploy.yaml << EOF
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.0-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
EOF

Create the resources from the yaml files

[root@master k8s_yaml]# kubectl create -f test.service.yaml
service/nginx-service created
[root@master k8s_yaml]# kubectl create -f test.deploy.yaml
deployment.apps/nginx-deployment created

Once the pods are running we can continue checking
[root@master k8s_yaml]# kubectl get pod
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-5ffbbc5c94-9zvh9   1/1       Running   0          45s
nginx-deployment-5ffbbc5c94-jc8zw   1/1       Running   0          45s
nginx-deployment-5ffbbc5c94-lcrlt   1/1       Running   0          45s

Now the service can be accessed at the node IP on port 31000


4.3 DNS


Kubernetes provides the Service concept so that the services offered by pods can be reached through a VIP, but there is still one problem in practice: how do we learn a given application's VIP? Say we have two applications, app and db, each managed by an rc and exposed through a service. app needs to connect to db; we only know db's name, not its VIP address.

The simplest approach is to query the API that Kubernetes provides. But that is a bad idea: first, every application would have to implement lookup logic for its dependencies at startup, which is repetitive and adds complexity; second, the application would then depend on Kubernetes and could no longer be deployed and run on its own (this can be worked around with extra configuration options, but again at the cost of complexity).

Initially Kubernetes adopted the approach Docker had used: environment variables. When each pod starts, the IPs and ports of all services are injected as environment variables, and the applications in the pod read them to find their dependencies. The mapping between services and environment variables follows a fixed convention and is fairly simple to use, but there is a big problem: a dependency must exist before the pod starts, otherwise it will not appear in the environment variables.

The better solution is for an application to use a service's name directly, without caring about its actual IP address, with the name-to-IP translation done automatically in between. Translating names to IPs is exactly what a DNS system does, so Kubernetes also provides DNS to solve this problem.

Install Kube-DNS
DNS yaml download, password: 8nzg

kube-dns download:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/kube-dns/kube-dns.yaml.in

Download it manually and rename the file

##The new version adds a lot of things; if you are worried about getting the edits wrong, download my package directly. It already contains the settings that match kubelet, e.g. 10.254.0.2 and cluster.local

##Using the yaml I provide is recommended

sed -i 's/$DNS_DOMAIN/cluster.local/gi' kube-dns.yaml
sed -i 's/$DNS_SERVER_IP/10.254.0.2/gi' kube-dns.yaml

Import the image

docker load -i kube-dns.tar

##You can skip importing; by default the images are pulled from the locations given in the yaml. If you use imported images, make sure the yaml references exactly the same ones!

Create the Pod

kubectl create -f kube-dns.yaml

#The image addresses in the yaml must be modified to match the local images

Check the pods

[root@master ~]# kubectl get pods --namespace=kube-system 
NAME                                      READY     STATUS    RESTARTS   AGE
calico-kube-controllers-b49d9b875-8bwz4   1/1       Running   0          3h
calico-node-5vnsh                         2/2       Running   0          3h
calico-node-d8gqr                         2/2       Running   0          3h
kube-dns-864b8bdc77-swfw5                 3/3       Running   0          2h

Verify

#Create a set of pods and a Service to check that pod-to-pod networking works

[root@master test]# cat demo.deploy.yml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: daocloud.io/library/tomcat:6.0-jre7
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

Also verify internal and external connectivity along the way
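Besides the deployment above, DNS resolution itself can be spot-checked from a throwaway pod (a sketch; busybox:1.28 is used because its nslookup behaves well):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
# should resolve to the kubernetes service's cluster IP (10.254.0.1 in this setup)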


4.4 Deploy DNS Horizontal Autoscaling

Download from GitHub
GitHub: https://github.com/kubernetes/kubernetes/tree/release-1.8/cluster/addons/dns-horizontal-autoscaler

The dns-horizontal-autoscaler-rbac.yaml file explained:
it actually creates three resources: ServiceAccount, ClusterRole, and ClusterRoleBinding; it creates the account, creates the role, grants the permissions, and binds the account to the role.


Import the images, otherwise pulling is far too slow

### needed on both node and master

root@node ~]# docker load -i gcr.io_google_containers_cluster-proportional-autoscaler-amd64_1.1.2-r2.tar 
3fb66f713c9f: Loading layer 4.221 MB/4.221 MB
a6851b15f08c: Loading layer 45.68 MB/45.68 MB
Loaded image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2-r2

Check the image
[root@master ~]# docker images|grep cluster
gcr.io/google_containers/cluster-proportional-autoscaler-amd64   1.1.2-r2            7d892ca550df        13 months ago       49.6 MB

Make sure the image matches the yaml

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml

You also need to download an rbac file
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/kube-dns/kube-dns.yaml.in

 kubectl create -f dns-horizontal-autoscaler-rbac.yaml  
 kubectl create -f dns-horizontal-autoscaler.yaml 
## If you download it directly, the configuration needs modification

Autoscaler yaml file

[root@master calico]# cat dns-horizontal-autoscaler.yaml
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

kind: ServiceAccount
apiVersion: v1
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-dns-autoscaler
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
  - apiGroups: [""]
    resources: ["replicationcontrollers/scale"]
    verbs: ["get", "update"]
  - apiGroups: ["extensions"]
    resources: ["deployments/scale", "replicasets/scale"]
    verbs: ["get", "update"]
# Remove the configmaps rule once below issue is fixed:
# kubernetes-incubator/cluster-proportional-autoscaler#16
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-dns-autoscaler
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
  - kind: ServiceAccount
    name: kube-dns-autoscaler
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:kube-dns-autoscaler
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
  labels:
    k8s-app: kube-dns-autoscaler
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-dns-autoscaler
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      containers:
      - name: autoscaler
        image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2-r2
        resources:
            requests:
                cpu: "20m"
                memory: "10Mi"
        command:
          - /cluster-proportional-autoscaler
          - --namespace=kube-system
          - --configmap=kube-dns-autoscaler
          # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base
          - --target=Deployment/kube-dns
          # When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
          # If using small nodes, "nodesPerReplica" should dominate.
          - --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true}}
          - --logtostderr=true
          - --v=2
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      serviceAccountName: kube-dns-autoscaler
[root@master calico]#

Demo

For details, see Autoscale the DNS Service in a Cluster
