A Journey Through the Container Ecosystem, Chapter 1: Starting from Zero

This is a book!

A book about what I have learned in the container ecosystem!

 

Chapter 1  Starting from Zero

 

Key points covered:

1. CentOS 7.6 installation and tuning

2. Kubernetes 1.15.1 high-availability deployment

3. Network plugin: calico

4. dashboard add-on

5. metrics-server add-on

6. kube-state-metrics add-on

Original note link: http://note.youdao.com/noteshare?id=c9f647765493d11099a939d7e5e102c9&sub=A837AA253CA54660AABADEF435A40714

1.1 Preface

I have long wanted to write something to record my journey down the IT road. The idea lingered for a long time, and finally took shape on July 21, 2019.

I treat what I have learned, done, and heard along the IT road as the sights, sounds, and feelings of a journey, and record them one by one.

IT is a road of no return; above every expert there is another. I simply hope to trade moves with ever-stronger predecessors.

I have set my IT direction to "container development": "container" is short for the container ecosystem, and "development" is short for Go development.

In my view, the trend in operations is containerized operations, and the trend in development is containerized development, so container development is the road I take.

This year is a relatively quiet one, so I can settle down and do two big things: 1. my journey through the container ecosystem; 2. a journey in Go from beginner to expert.

I hope to reach an entry level in container development within six months; since I already have some foundation, it should be doable.

I hope to have both largely finished by May 2020.

I can do it because I'm young !

Pen down, heart set. Let's see.

1.2 Contents

  • Container engine: docker
  • Container orchestration: kubernetes
  • Container storage: ceph
  • Container monitoring: prometheus
  • Log analysis: elk
  • Service mesh: istio

1.3 Resources

Software download link: https://pan.baidu.com/s/1IvUG_hdqDvReDJS9O1k9OA     extraction code: 7wfh

Sources: official docs, blog posts, and others

1.3.1 Physical machine

Hardware specs

(The hardware screenshot is not included here.) The host has 24.0 GB of RAM, which is enough to run a good number of VMs at once and model a realistic production environment.

1.3.2 Virtualization tool

VMware Workstation Pro 14

VMware Workstation is a powerful desktop virtualization product that lets a user run several different operating systems on a single desktop and develop, test, and deploy new applications there. It can model a complete network environment on one physical machine, and its portable virtual machines, flexibility, and technology compare well with other desktop virtualization software. For enterprise IT developers and system administrators, features such as virtual networking, live snapshots, drag-and-drop shared folders, and PXE support make it an indispensable tool.

VMware Workstation runs an operating system (OS) and its applications inside a virtual machine: a discrete environment independent of the host OS. In VMware Workstation you load a virtual machine in a window, where it runs its own OS and applications. You can switch between several VMs on the desktop, share VMs over a network (for example a corporate LAN), and suspend, resume, and shut down VMs, all without affecting the host or any other running OS or application.

1.3.3 遠程連接工具

Xshell是一個強大的安全終端模擬軟件,它支持SSH1, SSH2, 以及Microsoft Windows 平臺的TELNET 協議。Xshell 經過互聯網到遠程主機的安全鏈接以及它創新性的設計和特點幫助用戶在複雜的網絡環境中享受他們的工做。

Xshell能夠在Windows界面下用來訪問遠端不一樣系統下的服務器,從而比較好的達到遠程控制終端的目的。除此以外,其還有豐富的外觀配色方案以及樣式選擇。

1.4 Virtual machines

1.4.1 CentOS 7.6 installation

1.4.2 Template VM tuning

Check the OS version and kernel

[root@mobanji ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@mobanji ~]# uname -r
3.10.0-957.el7.x86_64

Alias setup

#shortcut for editing the NIC config file
[root@mobanji ~]# yum install -y vim
[root@mobanji ~]# alias vimn="vim /etc/sysconfig/network-scripts/ifcfg-eth0"
[root@mobanji ~]# vim ~/.bashrc
alias vimn="vim /etc/sysconfig/network-scripts/ifcfg-eth0"

Network settings

[root@mobanji ~]# vimn
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=20.0.0.5
PREFIX=24
GATEWAY=20.0.0.2
DNS1=223.5.5.5
DNS2=8.8.8.8
DNS3=119.29.29.29
DNS4=114.114.114.114

Update the yum repos and install essential packages

[root@mobanji ~]# yum install -y wget
[root@mobanji ~]# cp -r /etc/yum.repos.d /etc/yum.repos.d.bak
[root@mobanji ~]# rm -f /etc/yum.repos.d/*.repo
[root@mobanji ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo \
 && wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@mobanji ~]# yum clean all && yum makecache
[root@mobanji ~]# yum install -y bash-completion lrzsz nmap nc tree htop iftop net-tools ntpdate lsof screen tcpdump conntrack ntp ipvsadm ipset jq sysstat libseccomp nmon iptraf mlocate strace nethogs bridge-utils bind-utils nfs-utils rpcbind dnsmasq python python-devel telnet git sshpass

Time configuration

#set up time sync
[root@mobanji ~]# ntpdate -u pool.ntp.org
[root@mobanji ~]# crontab -e
#periodic time sync
*/15 * * * * /usr/sbin/ntpdate -u pool.ntp.org >/dev/null 2>&1

#set the system timezone
[root@mobanji ~]# timedatectl set-timezone Asia/Shanghai

#keep the hardware clock in UTC
[root@mobanji ~]# timedatectl set-local-rtc 0

#restart services that depend on the system time
[root@mobanji ~]# systemctl restart rsyslog
[root@mobanji ~]# systemctl restart crond

SSH tuning

[root@mobanji ~]# sed  -i  '79s@GSSAPIAuthentication yes@GSSAPIAuthentication no@;115s@#UseDNS yes@UseDNS no@' /etc/ssh/sshd_config
[root@mobanji ~]# systemctl restart sshd
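The sed above pins line numbers 79 and 115, which shift between openssh builds; a pattern-anchored variant of the same two changes is safer. Here is my sketch of it, dry-run on a two-line sample instead of the real sshd_config:

```shell
# same two changes, anchored on the option names instead of line numbers;
# demonstrated as a dry run on a sample snippet
printf '%s\n' 'GSSAPIAuthentication yes' '#UseDNS yes' |
sed -e 's/^GSSAPIAuthentication yes/GSSAPIAuthentication no/' -e 's/^#UseDNS yes/UseDNS no/'
```

Against the real file, the same expressions can be used with `sed -i ... /etc/ssh/sshd_config`.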

Disable firewalld and SELinux

#stop the firewall, flush its rules, and set the default forward policy to ACCEPT
[root@mobanji ~]# systemctl stop firewalld
[root@mobanji ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@mobanji ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
[root@mobanji ~]# iptables -P FORWARD ACCEPT
[root@mobanji ~]#  firewall-cmd --state
not running
#Disable SELinux; otherwise K8S may later fail to mount directories with
#"setenforce 0 ... Permission denied"
[root@mobanji ~]# setenforce 0
[root@mobanji ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Stop unneeded services

[root@mobanji ~]# systemctl list-unit-files |grep "enabled"
[root@mobanji ~]#  systemctl status postfix &&  systemctl stop postfix && systemctl disable postfix

Configure limits.conf

[root@mobanji ~]# cat >> /etc/security/limits.conf <<EOF
# End of file
* soft nofile 65525
* hard nofile 65525
* soft nproc 65525
* hard nproc 65525
EOF
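limits.conf only affects sessions opened after the change. A quick way to check, from a fresh login (my sketch; both commands are plain bash builtins):

```shell
# after re-logging in, both should reflect the values set above
ulimit -n   # soft open-file limit for the current shell
ulimit -u   # soft max user processes
```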

Upgrade the kernel

The stock 3.10.x kernel in CentOS 7.x has bugs that make Docker and Kubernetes unstable, for example:

-> Recent docker versions (1.13 and later) enable the kernel memory accounting feature that the 3.10 kernel only supports experimentally (and it cannot be turned off); under pressure, e.g. frequent container starts and stops, this causes cgroup memory leaks;

-> A network-device reference-count leak that produces errors like "kernel:unregister_netdevice: waiting for eth0 to become free. Usage count = 1";

     

Possible fixes:

-> Upgrade the kernel to 4.4.x or later;

-> Or build the kernel by hand with the CONFIG_MEMCG_KMEM feature disabled;

-> Or install Docker 18.09.1 or later, where the issue is fixed. Since kubelet also sets kmem (it vendors runc), kubelet then needs to be rebuilt with GOFLAGS="-tags=nokmem";

[root@mobanji ~]# uname -r
3.10.0-957.el7.x86_64
[root@mobanji ~]# yum update -y
[root@mobanji ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@mobanji ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
[root@mobanji ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
kernel-lt.x86_64                               4.4.185-1.el7.elrepo              elrepo-kernel     <--- long-term maintenance branch
......
kernel-ml.x86_64                               5.2.1-1.el7.elrepo                elrepo-kernel     <--- latest mainline
......
#install the new kernel and its devel package
[root@mobanji ~]# yum --enablerepo=elrepo-kernel install kernel-lt-devel kernel-lt -y

To make the newly installed kernel the default boot entry,
edit /etc/default/grub and set GRUB_DEFAULT=0,
meaning the first kernel on the GRUB menu becomes the default.

#list the boot entries in order
[root@mobanji ~]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.4.185-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-957.21.3.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-b4c601a613824f9f827cb9787b605efb) 7 (Core)

The output shows the new kernel (4.4.185) at position 0 and the old one (3.10.0) at position 1, so to boot the new kernel the default entry must be set to 0.
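The awk one-liner above splits each menuentry line on single quotes and prints the quoted title. A dry run on one sample line (my sketch) shows how the fields fall out:

```shell
# field 1 is "menuentry " and field 2 is the quoted title
printf "menuentry 'CentOS Linux (4.4.185-1.el7.elrepo.x86_64) 7 (Core)' --class centos\n" |
awk -F\' '$1=="menuentry " {print $2}'
```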

#edit /etc/default/grub
[root@mobanji ~]# vim /etc/default/grub
GRUB_DEFAULT=0   <--- change "saved" to 0
#regenerate the kernel boot configuration
[root@mobanji ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
#reboot
[root@mobanji ~]# reboot

Disable NUMA

[root@mobanji ~]# cp /etc/default/grub{,.bak}
[root@mobanji ~]# vim /etc/default/grub   
.........
GRUB_CMDLINE_LINUX="...... numa=off"      # i.e. append "numa=off"
     
Regenerate the grub2 config file:
# cp /boot/grub2/grub.cfg{,.bak}
# grub2-mkconfig -o /boot/grub2/grub.cfg

Configure rsyslogd and systemd journald

systemd's journald is the default logging tool on CentOS 7; it records all system, kernel, and service-unit logs. Compared with rsyslogd, journald has these advantages:

-> It can log to memory or to the filesystem (by default to memory, under /run/log/journal);

-> It can cap disk usage and guarantee free disk space;

-> It can limit log file size and retention time;

-> By default journald also forwards logs to rsyslog, so everything is written twice: /var/log/messages fills with irrelevant entries, later reading becomes harder, and performance suffers. The configuration below therefore turns forwarding off.

[root@mobanji ~]# mkdir /var/log/journal     <--- directory for persistent logs
[root@mobanji ~]# mkdir /etc/systemd/journald.conf.d
[root@mobanji ~]# cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
> [Journal]
> # persist logs to disk
> Storage=persistent
>      
> # compress rotated logs
> Compress=yes
>      
> SyncIntervalSec=5m
> RateLimitInterval=30s
> RateLimitBurst=1000
>      
> # cap total disk usage at 10G
> SystemMaxUse=10G
>      
> # cap a single log file at 200M
> SystemMaxFileSize=200M
>      
> # keep logs for 2 weeks
> MaxRetentionSec=2week
>      
> # do not forward logs to syslog
> ForwardToSyslog=no
> EOF
[root@mobanji ~]# systemctl restart systemd-journald
[root@mobanji ~]# systemctl status systemd-journald

Load kernel modules

[root@mobanji ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> modprobe -- br_netfilter
> EOF
[root@mobanji ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
[root@mobanji ~]# lsmod  | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter

Tune kernel parameters

[root@mobanji ~]# cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0  #tcp_tw_recycle conflicts with kubernetes NAT and must stay off, or services become unreachable
vm.swappiness=0            #avoid swap; it is used only when the system is close to OOM
vm.overcommit_memory=1     #do not check whether physical memory is sufficient
vm.panic_on_oom=0          #do not panic on OOM
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1  #disable the unused IPv6 stack to avoid triggering a docker bug
net.netfilter.nf_conntrack_max=2310720
EOF
[root@mobanji ~]# sysctl -p /etc/sysctl.d/k8s.conf

Note:

tcp_tw_recycle must stay off; it conflicts with NAT and makes services unreachable;

IPv6 is disabled to avoid triggering a docker bug;
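One caveat I would add: the heredoc above writes the trailing # comments into /etc/sysctl.d/k8s.conf itself, and some sysctl builds reject values with trailing text. If `sysctl -p` complains, strip the comments first with `sed -i 's/[[:space:]]*#.*$//' /etc/sysctl.d/k8s.conf`. A dry run of that sed on two sample lines (my sketch):

```shell
# trailing comments removed, settings preserved
printf '%s\n' 'vm.swappiness=0            #avoid swap' 'fs.nr_open=52706963' |
sed 's/[[:space:]]*#.*$//'
```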

Personal vim configuration

https://blog.csdn.net/zisefeizhu/article/details/89407487

[root@mobanji ~]# cat ~/.vimrc
set nocompatible
filetype on
set paste
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
 
 
" 這裏根據本身須要的插件來設置,如下是個人配置 "
"
" YouCompleteMe:語句補全插件
set runtimepath+=~/.vim/bundle/YouCompleteMe
autocmd InsertLeave * if pumvisible() == 0|pclose|endif "離開插入模式後自動關閉預覽窗口"
let g:ycm_collect_identifiers_from_tags_files = 1           " 開啓 YCM基於標籤引擎
let g:ycm_collect_identifiers_from_comments_and_strings = 1 " 註釋與字符串中的內容也用於補全
let g:syntastic_ignore_files=[".*\.py$"]
let g:ycm_seed_identifiers_with_syntax = 1                  " 語法關鍵字補全
let g:ycm_complete_in_comments = 1
let g:ycm_confirm_extra_conf = 0                            " 關閉加載.ycm_extra_conf.py提示
let g:ycm_key_list_select_completion = ['<c-n>', '<Down>']  " 映射按鍵,沒有這個會攔截掉tab, 致使其餘插件的tab不能用.
let g:ycm_key_list_previous_completion = ['<c-p>', '<Up>']
let g:ycm_complete_in_comments = 1                          " 在註釋輸入中也能補全
let g:ycm_complete_in_strings = 1                           " 在字符串輸入中也能補全
let g:ycm_collect_identifiers_from_comments_and_strings = 1 " 註釋和字符串中的文字也會被收入補全
let g:ycm_global_ycm_extra_conf='~/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/ycm/.ycm_extra_conf.py'
let g:ycm_show_diagnostics_ui = 0                           " 禁用語法檢查
inoremap <expr> <CR> pumvisible() ? "\<C-y>" : "\<CR>"             " 回車即選中當前項
nnoremap <c-j> :YcmCompleter GoToDefinitionElseDeclaration<CR>     " 跳轉到定義處
let g:ycm_min_num_of_chars_for_completion=2                 " 從第2個鍵入字符就開始羅列匹配項
"
 
 
 
" github 倉庫中的插件 "
Plugin 'VundleVim/Vundle.vim'
 
 
Plugin 'vim-airline/vim-airline'
"vim-airline配置:優化vim界面"
"let g:airline#extensions#tabline#enabled = 1
" airline設置
" 顯示顏色
set t_Co=256
set laststatus=2
" 使用powerline打過補丁的字體
let g:airline_powerline_fonts = 1
" 開啓tabline
let g:airline#extensions#tabline#enabled = 1
" tabline中當前buffer兩端的分隔字符
let g:airline#extensions#tabline#left_sep = ' '
" tabline中未激活buffer兩端的分隔字符
let g:airline#extensions#tabline#left_alt_sep = ' '
" tabline中buffer顯示編號
let g:airline#extensions#tabline#buffer_nr_show = 1
" 映射切換buffer的鍵位
nnoremap [b :bp<CR>
nnoremap ]b :bn<CR>
" 映射<leader>num到num buffer
map <leader>1 :b 1<CR>
map <leader>2 :b 2<CR>
map <leader>3 :b 3<CR>
map <leader>4 :b 4<CR>
map <leader>5 :b 5<CR>
map <leader>6 :b 6<CR>
map <leader>7 :b 7<CR>
map <leader>8 :b 8<CR>
map <leader>9 :b 9<CR>
 
 
 
" vim-scripts 中的插件 "
Plugin 'taglist.vim'
"ctags 配置:F3快捷鍵顯示程序中的各類tags,包括變量和函數等。
map <F3> :TlistToggle<CR>
let Tlist_Use_Right_Window=1
let Tlist_Show_One_File=1
let Tlist_Exit_OnlyWindow=1
let Tlist_WinWidt=25
 
Plugin 'The-NERD-tree'
"NERDTree 配置:F2快捷鍵顯示當前目錄樹
map <F2> :NERDTreeToggle<CR>
let NERDTreeWinSize=25
 
Plugin 'indentLine.vim'
Plugin 'delimitMate.vim'
 
" 非 github 倉庫的插件"
" Plugin 'git://git.wincent.com/command-t.git'
" 本地倉庫的插件 "
 
call vundle#end()
 
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
"""""新文件標題
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
"新建.c,.h,.sh,.java文件,自動插入文件頭
autocmd BufNewFile *.sh,*.yaml exec ":call SetTitle()"
""定義函數SetTitle,自動插入文件頭
func SetTitle()
"若是文件類型爲.sh文件
if &filetype == 'sh'
call setline(1, "##########################################################################")
        call setline(2,"#Author:                     zisefeizhu")
        call setline(3,"#QQ:                         2********0")
        call setline(4,"#Date:                       ".strftime("%Y-%m-%d"))
        call setline(5,"#FileName:                   ".expand("%"))
        call setline(6,"#URL:                        https://www.cnblogs.com/zisefeizhu/")
        call setline(7,"#Description:                The test script")                         
        call setline(8,"#Copyright (C):              ".strftime("%Y")." All rights reserved")
call setline(9, "##########################################################################")
call setline(10, "#!/bin/bash")
call setline(11,"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin")
call setline(12, "export $PATH")
call setline(13, "")
endif
if &filetype == 'yaml'
call setline(1, "##########################################################################")
        call setline(2,"#Author:                     zisefeizhu")
        call setline(3,"#QQ:                         2********0")
        call setline(4,"#Date:                       ".strftime("%Y-%m-%d"))
        call setline(5,"#FileName:                   ".expand("%"))
        call setline(6,"#URL:                        https://www.cnblogs.com/zisefeizhu/")
        call setline(7,"#Description:                The test script")                                                 
        call setline(8,"#Copyright (C):              ".strftime("%Y")." All rights reserved")
call setline(9, "###########################################################################")
call setline(10, "")
endif
"新建文件後,自動定位到文件末尾
autocmd BufNewFile * normal G
endfunc
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
"鍵盤命令
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
 
nmap <leader>w :w!<cr>
nmap <leader>f :find<cr>
 
" 映射全選+複製 ctrl+a
map <C-A> ggVGY
map! <C-A> <Esc>ggVGY
map <F12> gg=G
" 選中狀態下 Ctrl+c 複製
vmap <C-c> "+y
 
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
""實用設置
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
" 設置當文件被改動時自動載入
set autoread
" quickfix模式
autocmd FileType c,cpp map <buffer> <leader><space> :w<cr>:make<cr>
"代碼補全
set completeopt=preview,menu
"容許插件  
filetype plugin on
"共享剪貼板  
set clipboard=unnamed
"從不備份  
set nobackup
"make 運行
:set makeprg=g++\ -Wall\ \ %
"自動保存
set autowrite
set ruler                   " 打開狀態欄標尺
set cursorline              " 突出顯示當前行
set magic                   " 設置魔術
set guioptions-=T           " 隱藏工具欄
set guioptions-=m           " 隱藏菜單欄
"set statusline=\ %<%F[%1*%M%*%n%R%H]%=\ %y\ %0(%{&fileformat}\ %{&encoding}\ %c:%l/%L%)\
" 設置在狀態行顯示的信息
set foldcolumn=0
set foldmethod=indent
set foldlevel=3
set foldenable              " 開始摺疊
" 不要使用vi的鍵盤模式,而是vim本身的
set nocompatible
" 語法高亮
set syntax=on
" 去掉輸入錯誤的提示聲音
set noeb
" 在處理未保存或只讀文件的時候,彈出確認
set confirm
" 自動縮進
set autoindent
set cindent
" Tab鍵的寬度
set tabstop=2
" 統一縮進爲2
set softtabstop=2
set shiftwidth=2
" 不要用空格代替製表符
set noexpandtab
" 在行和段開始處使用製表符
set smarttab
" 顯示行號
" set number
" 歷史記錄數
set history=1000
"禁止生成臨時文件
set nobackup
set noswapfile
"搜索忽略大小寫
set ignorecase
"搜索逐字符高亮
set hlsearch
set incsearch
"行內替換
set gdefault
"編碼設置
set enc=utf-8
set fencs=utf-8,ucs-bom,shift-jis,gb18030,gbk,gb2312,cp936
"語言設置
set langmenu=zh_CN.UTF-8
set helplang=cn
" 個人狀態行顯示的內容(包括文件類型和解碼)
set statusline=%F%m%r%h%w\ [FORMAT=%{&ff}]\ [TYPE=%Y]\ [POS=%l,%v][%p%%]\ %{strftime(\"%d/%m/%y\ -\ %H:%M\")}
set statusline=[%F]%y%r%m%*%=[Line:%l/%L,Column:%c][%p%%]
" 老是顯示狀態行
set laststatus=2
" 命令行(在狀態行下)的高度,默認爲1,這裏是2
set cmdheight=2
" 偵測文件類型
filetype on
" 載入文件類型插件
filetype plugin on
" 爲特定文件類型載入相關縮進文件
filetype indent on
" 保存全局變量
set viminfo+=!
" 帶有以下符號的單詞不要被換行分割
set iskeyword+=_,$,@,%,#,-
" 字符間插入的像素行數目
set linespace=0
" 加強模式中的命令行自動完成操做
set wildmenu
" 使回格鍵(backspace)正常處理indent, eol, start等
set backspace=2
" 容許backspace和光標鍵跨越行邊界
set whichwrap+=<,>,h,l
" 能夠在buffer的任何地方使用鼠標(相似office中在工做區雙擊鼠標定位)
set mouse=a
set selection=exclusive
set selectmode=mouse,key
" 經過使用: commands命令,告訴咱們文件的哪一行被改變過
set report=0
" 在被分割的窗口間顯示空白,便於閱讀
set fillchars=vert:\ ,stl:\ ,stlnc:\
" 高亮顯示匹配的括號
set showmatch
" 匹配括號高亮的時間(單位是十分之一秒)
set matchtime=1
" 光標移動到buffer的頂部和底部時保持3行距離
set scrolloff=3
" 爲C程序提供自動縮進
set smartindent
" 高亮顯示普通txt文件(須要txt.vim腳本)
 au BufRead,BufNewFile *  setfiletype txt
"自動補全
:inoremap ( ()<ESC>i
:inoremap ) <c-r>=ClosePair(')')<CR>
":inoremap { {<CR>}<ESC>O
":inoremap } <c-r>=ClosePair('}')<CR>
:inoremap [ []<ESC>i
:inoremap ] <c-r>=ClosePair(']')<CR>
:inoremap " ""<ESC>i
:inoremap ' ''<ESC>i
function! ClosePair(char)
if getline('.')[col('.') - 1] == a:char
return "\<Right>"
else
return a:char
endif
endfunction
filetype plugin indent on
"打開文件類型檢測, 加了這句才能夠用智能補全
set completeopt=longest,menu
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""

Configure sysctl.conf

[root@mobanji ~]# [ ! -e "/etc/sysctl.conf_bk" ] && /bin/mv /etc/sysctl.conf{,_bk} \
> && cat > /etc/sysctl.conf << EOF
> fs.file-max=1000000
> fs.nr_open=20480000
> net.ipv4.tcp_max_tw_buckets = 180000
> net.ipv4.tcp_sack = 1
> net.ipv4.tcp_window_scaling = 1
> net.ipv4.tcp_rmem = 4096 87380 4194304
> net.ipv4.tcp_wmem = 4096 16384 4194304
> net.ipv4.tcp_max_syn_backlog = 16384
> net.core.netdev_max_backlog = 32768
> net.core.somaxconn = 32768
> net.core.wmem_default = 8388608
> net.core.rmem_default = 8388608
> net.core.rmem_max = 16777216
> net.core.wmem_max = 16777216
> net.ipv4.tcp_timestamps = 0
> net.ipv4.tcp_fin_timeout = 20
> net.ipv4.tcp_synack_retries = 2
> net.ipv4.tcp_syn_retries = 2
> net.ipv4.tcp_syncookies = 1
> #net.ipv4.tcp_tw_len = 1
> net.ipv4.tcp_tw_reuse = 1
> net.ipv4.tcp_mem = 94500000 915000000 927000000
> net.ipv4.tcp_max_orphans = 3276800
> net.ipv4.ip_local_port_range = 1024 65000
> #net.nf_conntrack_max = 6553500
> #net.netfilter.nf_conntrack_max = 6553500
> #net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
> #net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
> #net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
> #net.netfilter.nf_conntrack_tcp_timeout_established = 3600
> EOF
[root@mobanji ~]# sysctl -p

Script directory

#directory for scripts
[root@mobanji ~]# mkdir -p /service/scripts

Template VM tuning is now complete.

1.4.3 VM preparation

Node name     IP                             Software installed                        Role
jumpserver    20.0.0.200                     jumpserver                                bastion host
k8s-master01  20.0.0.201                     kubeadm, kubelet, kubectl, docker, etcd   master nodes
k8s-master02  20.0.0.202                     ceph
k8s-master03  20.0.0.203
k8s-node01    20.0.0.204                     kubeadm, kubelet, kubectl, docker         worker nodes
k8s-node02    20.0.0.205
k8s-node03    20.0.0.206
k8s-ha01      20.0.0.207                     haproxy, keepalived, ceph                 VIP / load balancer
k8s-ha02      20.0.0.208 (VIP: 20.0.0.250)
k8s-ceph      20.0.0.209                     ceph                                      storage node

Using k8s-master01 as the example

#set the hostname
[root@mobanji ~]# hostnamectl set-hostname k8s-master01
[root@mobanji ~]# bash
[root@k8s-master01 ~]#
#set the IP
[root@k8s-master01 ~]# vimn
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=20.0.0.201
PREFIX=24
GATEWAY=20.0.0.2
DNS1=223.5.5.5
[root@k8s-master01 ~]# systemctl restart network
[root@k8s-master01 ~]# ping www.baidu.com
PING www.baidu.com (61.135.169.121) 56(84) bytes of data.
64 bytes from 61.135.169.121 (61.135.169.121): icmp_seq=1 ttl=128 time=43.3 ms
^C
--- www.baidu.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 43.348/43.348/43.348/0.000 ms
[root@k8s-master01 ~]# hostname -I
20.0.0.201
Note:
shut down (init 0) and take a snapshot

VM preparation is now complete.

1.5 The cluster

1.5.1 Deploy the highly available load balancer

Using k8s-ha01 as the example

1.5.1.1 Install packages
#k8s-ha01 and k8s-ha02
[root@k8s-ha01 ~]# yum install -y keepalived haproxy

1.5.1.2 Deploy keepalived

#k8s-ha01 and k8s-ha02
[root@k8s-ha01 ~]# cp /etc/keepalived/keepalived.conf{,.bak}
[root@k8s-ha01 ~]# > /etc/keepalived/keepalived.conf
#k8s-ha01
[root@k8s-ha01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        lhmp@zisefeizhu.cn
        devops@zisefeizhu.cn
    }
    notification_email_from lhmp@zisefeizhi.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id master-node
}
vrrp_script chk_haproxy_port {
    script "/service/scripts/chk_hapro.sh"
    interval 2
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    unicast_src_ip 20.0.0.207
    unicast_peer {
        20.0.0.208
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        20.0.0.250 dev eth0 label eth0:1
    }
    track_script {
        chk_haproxy_port
    }
}
[root@k8s-ha01 ~]# scp /etc/keepalived/keepalived.conf 20.0.0.208:/etc/keepalived/keepalived.conf
#k8s-ha02
[root@k8s-ha02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        lhmp@zisefeizhu.cn
        devops@zisefeizhu.cn
    }
    notification_email_from lhmp@zisefeizhi.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id master-node
}
vrrp_script chk_http_port {
    script "/service/scripts/chk_hapro.sh"
    interval 3
    weight -2
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    unicast_src_ip 20.0.0.208
    unicast_peer {
        20.0.0.207
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        20.0.0.250 dev eth0 label eth0:1
    }
    track_script {
        chk_http_port
    }
}

1.5.1.3 Deploy haproxy

#k8s-ha01 and k8s-ha02
[root@k8s-ha01 ~]# cp /etc/haproxy/haproxy.cfg{,.bak}
[root@k8s-ha01 ~]# > /etc/haproxy/haproxy.cfg
#k8s-ha01
[root@k8s-ha01 ~]# vim /etc/haproxy/haproxy.cfg
[root@k8s-ha01 ~]# cat /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    maxconn 100000
    #chroot /var/haproxy/lib/haproxy
    #stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
    uid 99
    gid 99
    daemon
    nbproc 2
    cpu-map 1 0
    cpu-map 2 1
    #pidfile /var/haproxy/run/haproxy.pid
    log 127.0.0.1 local3 info

defaults
    option http-keep-alive
    option forwardfor
    maxconn 100000
    mode http
    timeout connect 300000ms
    timeout client 300000ms
    timeout server 300000ms

listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth admin:zisefeizhu

#K8S-API-Server
frontend K8S_API
    bind *:8443
    mode tcp
    default_backend k8s_api_nodes_6443

backend k8s_api_nodes_6443
    mode tcp
    balance leastconn
    server 20.0.0.201 20.0.0.201:6443 check inter 2000 fall 3 rise 5
    server 20.0.0.202 20.0.0.202:6443 check inter 2000 fall 3 rise 5
    server 20.0.0.203 20.0.0.203:6443 check inter 2000 fall 3 rise 5

#k8s-ha02
[root@k8s-ha01 ~]# scp /etc/haproxy/haproxy.cfg 20.0.0.208:/etc/haproxy/haproxy.cfg

1.5.1.4 Set the service start order and dependencies

#k8s-ha01 and k8s-ha02
[root@k8s-ha01 ~]# vim /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target haproxy.service
Requires=haproxy.service
......

1.5.1.5 Health-check script

[root@k8s-ha01 ~]# vim /service/scripts/chk_hapro.sh
[root@k8s-ha01 ~]# cat /service/scripts/chk_hapro.sh
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2019-07-26
#FileName:                   /service/scripts/chk_hapro.sh
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2019 All rights reserved
##########################################################################
#!/bin/bash
counts=$(ps -ef|grep -w "haproxy"|grep -v grep|wc -l)
if [ "${counts}" = "0" ]; then
    systemctl restart keepalived.service
    sleep 2
    counts=$(ps -ef|grep -w "haproxy"|grep -v grep|wc -l)
    if [ "${counts}" = "0" ]; then
        systemctl stop keepalived.service
    fi
fi

1.5.1.6 Start the services

[root@k8s-ha01 ~]# systemctl enable keepalived && systemctl start keepalived \
&& systemctl enable haproxy && systemctl start haproxy \
&& systemctl status keepalived && systemctl status haproxy

1.5.1.7 Failover test

[root@k8s-ha01 ~]# systemctl stop keepalived
#refresh the browser; the stats page should still answer on the VIP, now from k8s-ha02
[root@k8s-ha01 ~]# systemctl start keepalived
[root@k8s-ha01 ~]# systemctl stop haproxy
#refresh the browser; the check script should bring haproxy back up

1.5.2 Deploy the kubernetes cluster

1.5.2.1 VM initialization

Using k8s-master01 as the example

Add host entries on every VM

[root@k8s-master01 ~]# cat >> /etc/hosts << EOF
> 20.0.0.201  k8s-master01
> 20.0.0.202  k8s-master02
> 20.0.0.203  k8s-master03
> 20.0.0.204  k8s-node01
> 20.0.0.205  k8s-node02
> 20.0.0.206  k8s-node03
> EOF

Passwordless SSH login

[root@k8s-master01 ~]# vim /service/scripts/ssh-copy.sh
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2019-07-27
#FileName:                   /service/scripts/ssh-copy.sh
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2019 All rights reserved
##########################################################################
#!/bin/bash
#target host list
IP="
20.0.0.201
k8s-master01
20.0.0.202
k8s-master02
20.0.0.203
k8s-master03
20.0.0.204
k8s-node01
20.0.0.205
k8s-node02
20.0.0.206
k8s-node03
"
for node in ${IP};do
  sshpass -p 1 ssh-copy-id  ${node}  -o StrictHostKeyChecking=no
  if [ $? -eq 0 ];then
    echo "${node} 祕鑰copy完成"
  else
    echo "${node} 祕鑰copy失敗"
  fi
done
[root@k8s-master01 ~]# ssh-keygen -t rsa
[root@k8s-master01 ~]# sh /service/scripts/ssh-copy.sh
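Note that the IP list in ssh-copy.sh mixes addresses and hostnames, so each machine receives the key twice (once per IP, once per name), which is harmless. The loop structure itself, as a dry run with echo standing in for the sshpass/ssh-copy-id call (my sketch):

```shell
# dry run of the distribution loop; echo stands in for the real ssh-copy-id
IP="20.0.0.201 20.0.0.202 20.0.0.203"
for node in ${IP}; do
  echo "would copy key to ${node}"
done
```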

Disable swap

[root@k8s-master01 ~]# swapoff -a
[root@k8s-master01 ~]# yes | cp /etc/fstab /etc/fstab_bak
[root@k8s-master01 ~]# cat /etc/fstab_bak |grep -v swap > /etc/fstab
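The `grep -v swap` filter drops the swap mount from fstab so swap stays off across reboots. A dry run on a two-line sample fstab (my sketch):

```shell
# only the swap line is removed
printf '%s\n' '/dev/mapper/centos-root /    xfs  defaults 0 0' '/dev/mapper/centos-swap swap swap defaults 0 0' |
grep -v swap
```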

Add the kubernetes yum repo

[root@k8s-master01 ~]# cat << EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

1.5.2.2 Install docker

Using k8s-master01 as the example

Install some required system tools

[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

Install docker

[root@k8s-master01 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master01 ~]# yum list docker-ce --showduplicates | sort -r
[root@k8s-master01 ~]# yum -y install docker-ce-18.06.3.ce-3.el7

Configure daemon.json

#get a registry mirror
Aliyun:
    open https://cr.console.aliyun.com/#/accelerator
    register, log in, and set a password
    the page then shows your accelerator address, something like https://123abc.mirror.aliyuncs.com
Tencent Cloud (only usable from Tencent Cloud hosts):
    accelerator address: https://mirror.ccs.tencentyun.com
 
#configure (daemon.json is strict JSON, so keep comments out of the file;
#"live-restore": true keeps containers running while the docker daemon restarts)
[root@k8s-master01 ~]# mkdir -p /etc/docker/ \
&& cat > /etc/docker/daemon.json << EOF
{
    "registry-mirrors":[
        "https://c6ai9izk.mirror.aliyuncs.com"
    ],
    "max-concurrent-downloads":3,
    "data-root":"/data/docker",
    "log-driver":"json-file",
    "log-opts":{
        "max-size":"100m",
        "max-file":"1"
    },
    "max-concurrent-uploads":5,
    "storage-driver":"overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ],
    "live-restore": true,
    "exec-opts": [
        "native.cgroupdriver=systemd"
    ]
}
EOF
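daemon.json must parse as strict JSON (no comments, no trailing commas), or docker refuses to start. Before restarting the daemon I like to sanity-check the syntax; a sketch, assuming python3 is installed (the path here is the demo's own temp copy, not the real file):

```shell
# write a minimal sample config and check that it parses as JSON
cat > /tmp/daemon-check.json <<'EOF'
{
    "registry-mirrors": ["https://c6ai9izk.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "live-restore": true
}
EOF
python3 -m json.tool /tmp/daemon-check.json >/dev/null && echo "daemon.json OK"
```

For the real file, the same check is `python3 -m json.tool /etc/docker/daemon.json`.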

Start and check docker

[root@k8s-master01 ~]# systemctl enable docker \
> && systemctl restart docker \
> && systemctl status docker

Note: daemon.json option reference

{
    "authorization-plugins": [],   //authorization plugins
    "data-root": "",   //root directory for docker's persisted data
    "dns": [],   //DNS servers
    "dns-opts": [],   //DNS options, e.g. port
    "dns-search": [],   //DNS search domains
    "exec-opts": [],   //execution options
    "exec-root": "",   //root directory for execution state files
    "experimental": false,   //enable experimental features
    "storage-driver": "",   //storage driver
    "storage-opts": [],   //storage options
    "labels": [],   //key=value labels for the daemon's metadata
    "live-restore": true,   //keep containers alive while dockerd is down (containers survive a daemon failure or restart)
    "log-driver": "",   //container log driver
    "log-opts": {},   //container log options
    "mtu": 0,   //MTU (maximum transmission unit) for container networks
    "pidfile": "",   //location of the daemon PID file
    "cluster-store": "",   //URL of the cluster store
    "cluster-store-opts": {},   //cluster store options
    "cluster-advertise": "",   //externally advertised address
    "max-concurrent-downloads": 3,   //max concurrency per pull
    "max-concurrent-uploads": 5,   //max concurrency per push
    "default-shm-size": "64M",   //default shared-memory size
    "shutdown-timeout": 15,   //shutdown timeout in seconds
    "debug": true,   //enable debug mode
    "hosts": [],   //daemon listen addresses
    "log-level": "",   //log level
    "tls": true,   //enable TLS
    "tlsverify": true,   //enable TLS and verify the remote
    "tlscacert": "",   //path to the CA certificate
    "tlscert": "",   //path to the TLS certificate
    "tlskey": "",   //path to the TLS key
    "swarm-default-advertise-addr": "",   //swarm advertise address
    "api-cors-header": "",   //CORS (cross-origin resource sharing) headers
    "selinux-enabled": false,   //enable selinux (mandatory access control for users, processes, apps, files)
    "userns-remap": "",   //user/group for user-namespace remapping
    "group": "",   //group owning the docker socket
    "cgroup-parent": "",   //parent cgroup for all containers
    "default-ulimits": {},   //default ulimits for all containers
    "init": false,   //run an init inside containers to forward signals and reap processes
    "init-path": "/usr/libexec/docker-init",   //path to docker-init
    "ipv6": false,   //enable IPv6 networking
    "iptables": false,   //enable iptables rules
    "ip-forward": false,   //enable net.ipv4.ip_forward
    "ip-masq": false,   //enable IP masquerading (rewriting source or destination IPs as packets pass a router or firewall)
    "userland-proxy": false,   //use the userland proxy
    "userland-proxy-path": "/usr/libexec/docker-proxy",   //userland proxy path
    "ip": "0.0.0.0",   //default bind IP
    "bridge": "",   //bridge to attach containers to
    "bip": "",   //bridge IP
    "fixed-cidr": "",   //IPv4 subnet for containers, i.e. constrain the address range to control which network segment containers land on (and thus reachability between containers on the same or different hosts)
    "fixed-cidr-v6": "",   //IPv6 subnet
    "default-gateway": "",   //default gateway
    "default-gateway-v6": "",   //default IPv6 gateway
    "icc": false,   //inter-container communication
    "raw-logs": false,   //raw logs (no colors, full timestamps)
    "allow-nondistributable-artifacts": [],   //registries to which non-distributable artifacts may be pushed
    "registry-mirrors": [],   //registry mirrors
    "seccomp-profile": "",   //seccomp profile
    "insecure-registries": [],   //non-HTTPS registry addresses
    "no-new-privileges": false,   //prevent container processes from gaining new privileges
    "default-runtime": "runc",   //default OCI (Open Container Initiative) runtime
    "oom-score-adjust": -500,   //OOM-kill priority (-1000 to 1000)
    "node-generic-resources": ["NVIDIA-GPU=UUID1", "NVIDIA-GPU=UUID2"],   //advertised generic node resources
    "runtimes": {   //available runtimes
        "cc-runtime": {
            "path": "/usr/bin/cc-runtime"
        },
        "custom": {
            "path": "/usr/local/bin/my-runc-replacement",
            "runtimeArgs": [
                "--debug"
            ]
        }
    }
}

1.5.2.3 Deploy kubernetes with kubeadm

Using k8s-master01 as the example

Install the required software

[root@k8s-master01 ~]# yum list  kubelet kubeadm kubectl --showduplicates | sort -r
[root@k8s-master01 ~]# yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1 ipvsadm ipset
 
##enable kubelet at boot. Note: do not run systemctl start kubelet here; it errors until initialization succeeds, after which kubelet comes up on its own
[root@k8s-master01 ~]# systemctl enable kubelet
 
#kubectl command completion
[root@k8s-master01 ~]# source /usr/share/bash-completion/bash_completion
[root@k8s-master01 ~]# source <(kubectl completion bash)
[root@k8s-master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

修改初始化配置

使用kubeadm config print init-defaults > kubeadm-init.yaml 打印出默認配置,而後在根據本身的環境修改配置
注意
須要修改advertiseAddress、controlPlaneEndpoint、imageRepository、serviceSubnet、kubernetesVersion
advertiseAddress爲master01的ip
controlPlaneEndpoint爲VIP+8443端口
imageRepository修改成阿里的源
serviceSubnet找網絡組要一段沒有使用的IP段
kubernetesVersion和上一步的版本一致
[root@k8s-master01 ~]# cd /data/
[root@k8s-master01 data]# ll
[root@k8s-master01 data]# mkdir tmp
[root@k8s-master01 data]# cd tmp
[root@k8s-master01 tmp]# kubeadm config print init-defaults > kubeadm-init.yaml
[root@k8s-master01 tmp]# cp kubeadm-init.yaml{,.bak}
[root@k8s-master01 tmp]# vim kubeadm-init.yaml
[root@k8s-master01 tmp]# diff kubeadm-init.yaml{,.bak}
12c12
<   advertiseAddress: 20.0.0.201
---
>   advertiseAddress: 1.2.3.4
26d25
< controlPlaneEndpoint: "20.0.0.250:8443"
33c32
< imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
---
> imageRepository: k8s.gcr.io
35c34
< kubernetesVersion: v1.15.1
---
> kubernetesVersion: v1.14.0
38c37
<   serviceSubnet: 10.0.0.0/16
---
>   serviceSubnet: 10.96.0.0/12
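For reference, after these edits the key fields of kubeadm-init.yaml look roughly like the sketch below. Only the modified fields are shown; the field placement follows the kubeadm.k8s.io/v1beta2 layout that `kubeadm config print init-defaults` emits in 1.15, and the rest of the file keeps its defaults:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 20.0.0.201            # IP of master01
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "20.0.0.250:8443"   # VIP + port 8443
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.15.1
networking:
  serviceSubnet: 10.0.0.0/16
```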

Download the images

#List the required image versions
[root@k8s-master01 tmp]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
#Pull the required images
[root@k8s-master01 tmp]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1

Initialize the cluster

[root@k8s-master01 tmp]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.0.0.1 20.0.0.201 20.0.0.250]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [20.0.0.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [20.0.0.201 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 57.514816 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
 
  kubeadm join 20.0.0.250:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cdfa555306ee75391e03eef75b8fa16ba121f5a9effe85e81874f6207b610c9f \
    --control-plane   
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 20.0.0.250:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cdfa555306ee75391e03eef75b8fa16ba121f5a9effe85e81874f6207b610c9f

Note: kubeadm init mainly performs the following steps

[init]: initialization with the specified version
[preflight]: pre-flight checks and pulling the required Docker images
[kubelet-start]: generates the kubelet configuration file "/var/lib/kubelet/config.yaml"; kubelet cannot start without it, which is why kubelet failed to start before initialization.
[certificates]: generates the certificates used by Kubernetes and stores them in /etc/kubernetes/pki.
[kubeconfig]: generates the kubeconfig files and stores them in /etc/kubernetes; the components need them to communicate with each other.
[control-plane]: installs the master components from the YAML files under /etc/kubernetes/manifests.
[etcd]: installs the etcd service from /etc/kubernetes/manifests/etcd.yaml.
[wait-control-plane]: waits for the master components deployed by control-plane to start.
[apiclient]: checks the health of the master components.
[uploadconfig]: uploads the configuration.
[kubelet]: configures kubelet via a ConfigMap.
[patchnode]: records CNI information on the node in the form of annotations.
[mark-control-plane]: labels the current node with the master role and the NoSchedule taint, so that by default the master node is not used to run Pods.
[bootstrap-token]: generates the bootstrap token; record it, as it is needed later when adding nodes with kubeadm join.
[addons]: installs the CoreDNS and kube-proxy add-ons.

Prepare the kubeconfig file for kubectl

#kubectl looks for a config file in the .kube directory under the executing user's home directory by default. Here we copy the admin.conf generated during the [kubeconfig] initialization step to .kube/config
[root@k8s-master01 tmp]# mkdir -p $HOME/.kube
[root@k8s-master01 tmp]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 tmp]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check component status

[root@k8s-master01 tmp]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@k8s-master01 tmp]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   6m23s   v1.15.1
 
There is currently only one node, with the master role, in NotReady state; it is NotReady because no network plugin has been installed yet

Deploy the other masters

On k8s-master01, copy the certificate files to the k8s-master02 and k8s-master03 nodes
Run on k8s-master01
#Copy the certificates to the k8s-master02 and k8s-master03 nodes
[root@k8s-master01 ~]# vim /service/scripts/k8s-master-zhengshu-master02.sh
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2019-07-27
#FileName:                   /service/scripts/k8s-master-zhengshu-master02.sh
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2019 All rights reserved
##########################################################################
#!/bin/bash
USER=root
CONTROL_PLANE_IPS="k8s-master02 k8s-master03"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
[root@k8s-master01 ~]# sh -x /service/scripts/k8s-master-zhengshu-master02.sh
 
#Run on k8s-master02; note the --experimental-control-plane flag
[root@k8s-master02 ~]# kubeadm join 20.0.0.250:8443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:cdfa555306ee75391e03eef75b8fa16ba121f5a9effe85e81874f6207b610c9f  \
>    --experimental-control-plane
Flag --experimental-control-plane has been deprecated, use --control-plane instead
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master02 localhost] and IPs [20.0.0.202 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master02 localhost] and IPs [20.0.0.202 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.0.0.1 20.0.0.202 20.0.0.250]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
 
This node has joined the cluster and a new control plane instance was created:
 
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
 
To start administering your cluster from this node, you need to run the following as a regular user:
 
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Run 'kubectl get nodes' to see this node join the cluster.
[root@k8s-master02 ~]# mkdir -p $HOME/.kube
[root@k8s-master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
#Run on k8s-master03; note the --experimental-control-plane flag
[root@k8s-master03 ~]# kubeadm join 20.0.0.250:8443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:cdfa555306ee75391e03eef75b8fa16ba121f5a9effe85e81874f6207b610c9f  \
>    --experimental-control-plane
Flag --experimental-control-plane has been deprecated, use --control-plane instead
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master03 localhost] and IPs [20.0.0.203 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master03 localhost] and IPs [20.0.0.203 127.0.0.1 ::1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master03 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.0.0.1 20.0.0.203 20.0.0.250]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master03 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master03 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
 
This node has joined the cluster and a new control plane instance was created:
 
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
 
To start administering your cluster from this node, you need to run the following as a regular user:
 
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Run 'kubectl get nodes' to see this node join the cluster.
 
[root@k8s-master03 ~]# mkdir -p $HOME/.kube
[root@k8s-master03 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master03 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
[root@k8s-master03 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   38m     v1.15.1
k8s-master02   NotReady   master   4m52s   v1.15.1
k8s-master03   NotReady   master   84s     v1.15.1

Deploy the worker nodes

Run on k8s-node01, k8s-node02 and k8s-node03; note the absence of the --experimental-control-plane flag

Note: tokens have a limited lifetime. If the old token has expired, run kubeadm token create --print-join-command on a master node to generate a new one.
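The --discovery-token-ca-cert-hash value can also be recomputed at any time from the cluster CA certificate (this is the procedure described in the kubeadm documentation). A sketch, with a throwaway self-signed certificate standing in for the real /etc/kubernetes/pki/ca.crt:

```shell
# Recompute the sha256 hash of the CA's public key, as consumed by
# `kubeadm join --discovery-token-ca-cert-hash sha256:<hash>`.
# A throwaway self-signed cert stands in for /etc/kubernetes/pki/ca.crt here.
cadir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
    -keyout "$cadir/ca.key" -out "$cadir/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -in "$cadir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
rm -rf "$cadir"
```

On a master node, replacing the temp certificate with /etc/kubernetes/pki/ca.crt reproduces the hash printed by kubeadm init above.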

Run the following command on each worker node

Using k8s-node01 as the example

[root@k8s-node01 ~]# kubeadm join 20.0.0.250:8443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:cdfa555306ee75391e03eef75b8fa16ba121f5a9effe85e81874f6207b610c9f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
 
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
 
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
 
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   45m     v1.15.1
k8s-master02   NotReady   master   12m     v1.15.1
k8s-master03   NotReady   master   8m49s   v1.15.1
k8s-node01     NotReady   <none>   3m46s   v1.15.1
k8s-node02     NotReady   <none>   3m42s   v1.15.1
k8s-node03     NotReady   <none>   24s     v1.15.1

Network plugin: calico

#Download the calico.yaml file
[root@k8s-master01 tmp]# wget -c https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
 
#Edit calico.yaml and change the value under CALICO_IPV4POOL_CIDR. Here it is set to the serviceSubnet value chosen earlier; note that in general the pod CIDR should be a range that does not overlap the service subnet
[root@k8s-master01 tmp]# cp calico.yaml{,.bak}
[root@k8s-master01 tmp]# vim calico.yaml
[root@k8s-master01 tmp]# diff calico.yaml{,.bak}
598c598
<               value: "10.0.0.0/16"
---
>               value: "192.168.0.0/16"
 
#Install
[root@k8s-master01 tmp]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Check node status

[root@k8s-master01 tmp]# kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   Ready      master   59m   v1.15.1
k8s-master02   Ready      master   25m   v1.15.1
k8s-master03   Ready      master   22m   v1.15.1
k8s-node01     NotReady   <none>   17m   v1.15.1
k8s-node02     NotReady   <none>   17m   v1.15.1
k8s-node03     NotReady   <none>   14m   v1.15.1

Enable ipvs for kube-proxy

#Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":
[root@k8s-master01 ~]# kubectl edit cm kube-proxy -n kube-system
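kubectl edit is interactive; for scripting, the same change can be made by piping the ConfigMap through sed and back into kubectl apply. A minimal sketch of just the sed substitution, run on a local stand-in file (the kubectl calls are shown only in the comment):

```shell
# Hypothetical sketch: flip kube-proxy's mode to "ipvs" non-interactively.
# On a real cluster the full pipeline would be:
#   kubectl -n kube-system get cm kube-proxy -o yaml \
#     | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -
cm=$(mktemp)
printf '    mode: ""\n' > "$cm"          # stand-in for the config.conf fragment
sed -i 's/mode: ""/mode: "ipvs"/' "$cm"
mode_line=$(cat "$cm")
echo "$mode_line"
rm -f "$cm"
```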
 
#Restart the kube-proxy pods
[root@k8s-master01 ~]#  kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-4skt5" deleted
pod "kube-proxy-fxjl5" deleted
pod "kube-proxy-k5q6x" deleted
pod "kube-proxy-q47jk" deleted
pod "kube-proxy-rc6pg" deleted
pod "kube-proxy-wwm49" deleted
 
#Check the kube-proxy pod status
[root@k8s-master01 ~]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-7vg6s                           1/1     Running   0          82s
kube-proxy-dtpvd                           1/1     Running   0          2m2s
kube-proxy-hd8sk                           1/1     Running   0          114s
kube-proxy-lscgw                           1/1     Running   0          97s
kube-proxy-ssv94                           1/1     Running   0          106s
kube-proxy-vdlx7                           1/1     Running   0          79s
 
#Verify that ipvs is enabled
[root@k8s-master01 ~]# kubectl logs kube-proxy-ssv94 -n kube-system
I0727 02:23:52.411755       1 server_others.go:170] Using ipvs Proxier.
W0727 02:23:52.412270       1 proxier.go:395] clusterCIDR not specified, unable to distinguish between internal and external traffic
W0727 02:23:52.412302       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0727 02:23:52.412480       1 server.go:534] Version: v1.15.1
I0727 02:23:52.427788       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0727 02:23:52.428163       1 config.go:187] Starting service config controller
I0727 02:23:52.428199       1 config.go:96] Starting endpoints config controller
I0727 02:23:52.428221       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0727 02:23:52.428233       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0727 02:23:52.628536       1 controller_utils.go:1036] Caches are synced for service config controller
I0727 02:23:52.628636       1 controller_utils.go:1036] Caches are synced for endpoints config controller
[root@k8s-master01 ~]# kubectl logs kube-proxy-ssv94 -n kube-system  | grep "ipvs"
I0727 02:23:52.411755       1 server_others.go:170] Using ipvs Proxier.

Check ipvs status

[root@k8s-master01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.1:443 rr
  -> 20.0.0.201:6443              Masq    1      0          0         
  -> 20.0.0.202:6443              Masq    1      0          0         
  -> 20.0.0.203:6443              Masq    1      0          0         
TCP  10.0.0.10:53 rr
  -> 10.0.122.129:53              Masq    1      0          0         
  -> 10.0.195.0:53                Masq    1      0          0         
TCP  10.0.0.10:9153 rr
  -> 10.0.122.129:9153            Masq    1      0          0         
  -> 10.0.195.0:9153              Masq    1      0          0         
UDP  10.0.0.10:53 rr
  -> 10.0.122.129:53              Masq    1      0          0         
  -> 10.0.195.0:53                Masq    1      0          0       

Check cluster status

[root@k8s-master01 ~]# kubectl get all -n kube-system
NAME                                           READY   STATUS    RESTARTS   AGE
pod/calico-kube-controllers-7c4d64d599-w24xk   1/1     Running   0          23m
pod/calico-node-9hzdk                          1/1     Running   0          23m
pod/calico-node-c7xbq                          1/1     Running   0          23m
pod/calico-node-gz967                          1/1     Running   0          23m
pod/calico-node-hkcjr                          1/1     Running   0          23m
pod/calico-node-pb9h4                          1/1     Running   0          23m
pod/calico-node-w75b8                          1/1     Running   0          23m
pod/coredns-6967fb4995-wv2j5                   1/1     Running   0          77m
pod/coredns-6967fb4995-ztrlt                   1/1     Running   1          77m
pod/etcd-k8s-master01                          1/1     Running   0          76m
pod/etcd-k8s-master02                          1/1     Running   0          44m
pod/etcd-k8s-master03                          1/1     Running   0          40m
pod/kube-apiserver-k8s-master01                1/1     Running   0          76m
pod/kube-apiserver-k8s-master02                1/1     Running   0          44m
pod/kube-apiserver-k8s-master03                1/1     Running   0          39m
pod/kube-controller-manager-k8s-master01       1/1     Running   4          76m
pod/kube-controller-manager-k8s-master02       1/1     Running   1          44m
pod/kube-controller-manager-k8s-master03       1/1     Running   2          39m
pod/kube-proxy-7vg6s                           1/1     Running   0          13m
pod/kube-proxy-dtpvd                           1/1     Running   0          14m
pod/kube-proxy-hd8sk                           1/1     Running   0          13m
pod/kube-proxy-lscgw                           1/1     Running   0          13m
pod/kube-proxy-ssv94                           1/1     Running   0          13m
pod/kube-proxy-vdlx7                           1/1     Running   0          13m
pod/kube-scheduler-k8s-master01                1/1     Running   4          76m
pod/kube-scheduler-k8s-master02                1/1     Running   1          44m
pod/kube-scheduler-k8s-master03                1/1     Running   1          39m
 
 
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP,9153/TCP   77m
 
NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
daemonset.apps/calico-node   6         6         6       6            6           beta.kubernetes.io/os=linux   23m
daemonset.apps/kube-proxy    6         6         6       6            6           beta.kubernetes.io/os=linux   77m
 
NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/calico-kube-controllers   1/1     1            1           23m
deployment.apps/coredns                   2/2     2            2           77m
 
NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/calico-kube-controllers-7c4d64d599   1         1         1       23m
replicaset.apps/coredns-6967fb4995                   2         2         2       77m  

1.5.2.4 Testing

#Run an nginx pod
[root@k8s-master01 ~]# mkdir /data/yaml
[root@k8s-master01 ~]# cd /data/yaml
[root@k8s-master01 yaml]# vim nginx.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2019-07-27
#FileName:                   nginx.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2019 All rights reserved
###########################################################################
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.14
        ports:
        - containerPort: 80
[root@k8s-master01 yaml]# kubectl apply -f nginx.yaml
deployment.extensions/my-nginx created
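A side note on the manifest above: the extensions/v1beta1 Deployment API is deprecated and was removed in Kubernetes 1.16, so on newer clusters the same Deployment would be written against apps/v1, which also makes an explicit selector mandatory:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-nginx        # must match the template labels below
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.14
        ports:
        - containerPort: 80
```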
 
#Check the nginx pods
[root@k8s-master01 yaml]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
my-nginx-6b8796c8f4-2bgjg   1/1     Running   0          49s   10.0.135.130   k8s-node03   <none>           <none>
my-nginx-6b8796c8f4-t2hk6   1/1     Running   0          49s   10.0.58.194    k8s-node02   <none>           <none>
my-nginx-6b8796c8f4-t56rp   1/1     Running   0          49s   10.0.85.194    k8s-node01   <none>           <none>
 
#Test with the curl command
[root@k8s-master01 yaml]# curl 10.0.135.130
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
 
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
 
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
 
#Expose the Deployment as the my-nginx service
[root@k8s-master01 yaml]# kubectl expose deployment my-nginx
service/my-nginx exposed
[root@k8s-master01 yaml]# kubectl get service --all-namespaces | grep "my-nginx"
default       my-nginx     ClusterIP   10.0.225.139   <none>        80/TCP                   22s
Note: seeing the Welcome to nginx page means the pod is serving correctly, which also indirectly confirms the cluster is usable.
#Test DNS resolution
[root@k8s-master01 yaml]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-6bf6db5c4f-9mpcs:/ ]$ nslookup kubernetes.default
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
 
Name:      kubernetes.default
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
#Reboot (init 6) --> kubectl get pods --all-namespaces --> power off (init 0) --> take a snapshot

1.5.3 dashboard

Images blocked in mainland China can be downloaded through the free gcr.io proxy provided by Microsoft China (http://mirror.azure.cn/help/gcr-proxy-cache.html):

docker pull gcr.azk8s.cn/google_containers/<imagename>:<version>
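The rewrite from a blocked registry name to the mirror namespace is purely mechanical, so it can be scripted. A minimal sketch; the `mirror_ref` helper is a hypothetical convenience, not part of any official tool:

```shell
# Rewrite a blocked gcr.io / k8s.gcr.io image reference to the Azure China
# mirror namespace used throughout this chapter.
mirror_ref() {
    echo "$1" | sed -e 's#^k8s\.gcr\.io/#gcr.azk8s.cn/google_containers/#' \
                    -e 's#^gcr\.io/google_containers/#gcr.azk8s.cn/google_containers/#'
}

mirror_ref "k8s.gcr.io/metrics-server-amd64:v0.3.3"
# prints gcr.azk8s.cn/google_containers/metrics-server-amd64:v0.3.3
# on a node you would then run: docker pull "$(mirror_ref <imagename>:<version>)"
```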

Download the files

Download the three manifests from https://github.com/gjmzj/kubeasz/tree/master/manifests/dashboard

[root@k8s-master01 ~]# mkdir /data/tmp/dashboard
[root@k8s-master01 ~]# cd /data/tmp/dashboard
[root@k8s-master01 dashboard]# ll
總用量 16
-rw-r--r-- 1 root root  844 7月  27 16:10 admin-user-sa-rbac.yaml
-rw-r--r-- 1 root root 5198 7月  27 16:13 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root 2710 7月  27 16:12 read-user-sa-rbac.yaml

Deploy the main dashboard YAML manifest

#Change the image pull address to the mirror
image: gcr.azk8s.cn/google_containers/kubernetes-dashboard-amd64:v1.10.1
[root@k8s-master01 dashboard]# kubectl apply -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

Create a read-write admin Service Account

[root@k8s-master01 dashboard]# kubectl apply -f admin-user-sa-rbac.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Create a read-only Service Account

[root@k8s-master01 dashboard]# kubectl apply -f read-user-sa-rbac.yaml
serviceaccount/dashboard-read-user created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-read-binding created
clusterrole.rbac.authorization.k8s.io/dashboard-read-clusterrole created

Verify

#Check the pod status
[root@k8s-master01 dashboard]# kubectl get pod -n kube-system | grep dashboard
kubernetes-dashboard-fcfb4cbc-xrbkx        1/1     Running   0          2m38s
 
#Check the dashboard service
[root@k8s-master01 dashboard]# kubectl get svc -n kube-system|grep dashboard
kubernetes-dashboard   NodePort    10.0.71.179   <none>        443:31021/TCP            2m47s
 
#Check the cluster services
[root@k8s-master01 dashboard]# kubectl cluster-info
Kubernetes master is running at https://20.0.0.250:8443
KubeDNS is running at https://20.0.0.250:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://20.0.0.250:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
 
#Check the pod logs
[root@k8s-master01 dashboard]# kubectl logs kubernetes-dashboard-fcfb4cbc-xrbkx -n kube-system

Generate a client certificate

For use by the local Chrome browser.
#Extract client-certificate-data
[root@k8s-master01 dashboard]# grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
 
#Extract client-key-data
[root@k8s-master01 dashboard]# grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
 
#Generate the p12 bundle
[root@k8s-master01 dashboard]# openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"
Enter Export Password:   1
Verifying - Enter Export Password:     1
[root@k8s-master01 dashboard]# ll
總用量 28
-rw-r--r-- 1 root root  844 7月  27 16:10 admin-user-sa-rbac.yaml
-rw-r--r-- 1 root root 1082 7月  27 16:23 kubecfg.crt
-rw-r--r-- 1 root root 1679 7月  27 16:23 kubecfg.key
-rw-r--r-- 1 root root 2464 7月  27 16:23 kubecfg.p12
-rw-r--r-- 1 root root 5198 7月  27 16:13 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root 2710 7月  27 16:12 read-user-sa-rbac.yaml
[root@k8s-master01 dashboard]# sz kubecfg.p12
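The pkcs12 export above prompts for a password interactively; for scripted runs, openssl's `-passout` flag supplies it inline. A sketch on a throwaway self-signed pair so it runs anywhere; on the master you would substitute the real kubecfg.key/kubecfg.crt extracted above:

```shell
# Throwaway key/cert standing in for kubecfg.key / kubecfg.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes-client" \
    -keyout demo.key -out demo.crt -days 1 2>/dev/null

# Non-interactive p12 export: -passout replaces the interactive prompt.
openssl pkcs12 -export -clcerts -inkey demo.key -in demo.crt \
    -out demo.p12 -name "kubernetes-client" -passout pass:changeit

ls demo.p12
```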

Import the certificate into Chrome:
Note: after importing the kubecfg.p12 file from the previous step, restart the browser.

Export the token

[root@k8s-master01 dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-ggxf6
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: a4cc757e-e710-49ea-8321-d4642d38bbf5
 
Type:  kubernetes.io/service-account-token
 
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWdneGY2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhNGNjNzU3ZS1lNzEwLTQ5ZWEtODMyMS1kNDY0MmQzOGJiZjUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.of88fLrICJ6o2SsnvWdCGfkpTJhaaI8aY0-G5VUcafuBQabLSYrdPsGpVSw4HKuAV1OkX3gMP63lx5I7FbLNjuxXGJqNFk9A83IqMwD2HISMNeDMsJZdtxYp_veFAFAJErr_F30pJKX4ad4FryV-LLjaxLt_xTPbZRK-8FERIUnBCa7-1-ds4WI-9qnZq4nIw5i6ws06F-J73KTGq9rYNkL91uPeGRaZEj_9Sc2XGDb6qk8XODghVYvmIIyBBJeRpYgN4384QqHIlE2GmoE8p8gRaC4K0zRrh8_PywL-bJI9NexfdH_78bJWsJBX2TmUjmnicitQGjqzg43Im3AJwQ
 
#Save the token into the kubeconfig
[root@k8s-master01 dashboard]# vim /root/.kube/config   # append the token line below
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWdneGY2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhNGNjNzU3ZS1lNzEwLTQ5ZWEtODMyMS1kNDY0MmQzOGJiZjUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.of88fLrICJ6o2SsnvWdCGfkpTJhaaI8aY0-G5VUcafuBQabLSYrdPsGpVSw4HKuAV1OkX3gMP63lx5I7FbLNjuxXGJqNFk9A83IqMwD2HISMNeDMsJZdtxYp_veFAFAJErr_F30pJKX4ad4FryV-LLjaxLt_xTPbZRK-8FERIUnBCa7-1-ds4WI-9qnZq4nIw5i6ws06F-J73KTGq9rYNkL91uPeGRaZEj_9Sc2XGDb6qk8XODghVYvmIIyBBJeRpYgN4384QqHIlE2GmoE8p8gRaC4K0zRrh8_PywL-bJI9NexfdH_78bJWsJBX2TmUjmnicitQGjqzg43Im3AJwQ
 
[root@k8s-master01 dashboard]# cp /root/.kube/config /data/tmp/admin.kubeconfig
[root@k8s-master01 dashboard]# sz /data/tmp/admin.kubeconfig

Access from a browser

1.5.4 metrics-server

metrics-server discovers all nodes through the kube-apiserver, then calls the kubelet APIs (over HTTPS) to collect CPU, memory, and other resource usage for each Node and Pod. Starting with Kubernetes 1.12 the installation scripts dropped Heapster, and since 1.13 Heapster support has been removed entirely; Heapster is no longer maintained. The replacements are:
-> CPU/memory HPA metrics for autoscaling: metrics-server;
-> general monitoring: a third-party system that can consume Prometheus-format metrics, such as Prometheus Operator;
-> event transport: third-party tools that ship and archive kubernetes events;

Since Kubernetes 1.8, resource usage metrics (such as container CPU and memory usage) have been available through the Metrics API; metrics-server replaced Heapster. Metrics Server implements the Resource Metrics API and aggregates resource usage data cluster-wide, collecting its metrics from the Summary API exposed by the kubelet on every node.

Before looking at Metrics Server, it helps to understand the Metrics API. Compared with the earlier collection approach (Heapster), the Metrics API is a new way of thinking: upstream wants core-metric monitoring to be stable and version-controlled, directly accessible to users (for example via the kubectl top command) and usable by in-cluster controllers (such as the HPA), just like any other Kubernetes API. Deprecating Heapster made core resource monitoring a first-class citizen, reached directly through the api-server or a client the way pods and services are, instead of being aggregated by, and managed inside, a separately installed Heapster.

Suppose we collect 10 metrics for each pod and node. Since 1.6, Kubernetes supports 5000 nodes with 30 pods each; at a collection interval of one minute, that is 10 x 5000 x 30 / 60 = 25,000 samples per second on average. Because the api-server persists all its data to etcd, Kubernetes itself clearly cannot handle collection at this frequency, and the monitoring data changes quickly and is ephemeral anyway, so a separate component is needed to handle it, keeping only part of the data in memory; hence the metrics-server concept was born. Heapster did expose an API, but users and other Kubernetes components could only reach it through the master proxy, and its interface, unlike the api-server's, lacked complete authentication and client integration.
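The collection-rate estimate above is easy to sanity-check with shell arithmetic:

```shell
# 10 metrics per pod/node, 5000 nodes x 30 pods each, scraped once a minute:
metrics=10; nodes=5000; pods_per_node=30; interval_s=60
echo $(( metrics * nodes * pods_per_node / interval_s ))   # samples per second
```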

With the Metrics Server component in place, the data is collected and an API is exposed, but the API must be unified: how do requests to the api-server's /apis/metrics path get forwarded to Metrics Server?
The solution is kube-aggregator, completed in Kubernetes 1.7; Metrics Server's release was in fact held up waiting for exactly this step. kube-aggregator (the aggregation API) mainly provides:
-> Provide an API for registering API servers;
-> Summarize discovery information from all the servers;
-> Proxy client requests to individual servers;

Using the Metrics API:
-> the Metrics API serves only current readings and keeps no history
-> the Metrics API URI is /apis/metrics.k8s.io/, maintained in k8s.io/metrics
-> metrics-server must be deployed for the API to work; it obtains its data by calling the Kubelet Summary API

Metrics server periodically scrapes metrics from the kubelet Summary API (similar to /api/v1/nodes/&lt;nodename&gt;/stats/summary), aggregates the data in memory, and exposes it in Metrics API form. It reuses the api-server libraries for its own functionality, such as authentication and versioning; to hold data in memory, it removes the default etcd storage and plugs in an in-memory store (an implementation of the Storage interface). Because everything lives in memory, monitoring data is not persisted, though third-party storage can be added as an extension, just as with Heapster.

Kubernetes Dashboard does not yet support metrics-server: replacing Heapster with metrics-server means the dashboard can no longer graph Pod memory and CPU, and a monitoring stack such as Prometheus with Grafana is needed to fill the gap. The manifest yaml files bundled with kubernetes use the gcr.io docker registry, which is blocked in mainland China, so the image addresses have to be replaced manually with another registry, as is done below; the free gcr.io proxy from Microsoft China can serve the blocked images. All deployment commands below are executed on the k8s-master01 node.

Monitoring architecture

Install metrics-server

#Clone the source from GitHub:
[root@k8s-master01 tmp]# mkdir metrics
[root@k8s-master01 tmp]# cd metrics/
[root@k8s-master01 metrics]# git clone https://github.com/kubernetes-incubator/metrics-server.git
[root@k8s-master01 metrics]# cd metrics-server/deploy/1.8+/
[root@k8s-master01 1.8+]# ls
aggregated-metrics-reader.yaml  metrics-apiservice.yaml         resource-reader.yaml
auth-delegator.yaml             metrics-server-deployment.yaml
auth-reader.yaml                metrics-server-service.yaml
[root@k8s-master01 1.8+]# cp metrics-server-deployment.yaml  metrics-server-deployment.yaml.bak
[root@k8s-master01 1.8+]# vim metrics-server-deployment.yaml
[root@k8s-master01 1.8+]# diff  metrics-server-deployment.yaml  metrics-server-deployment.yaml.bak
32,38c32,33
<         image: gcr.azk8s.cn/google_containers/metrics-server-amd64:v0.3.3
<         imagePullPolicy: IfNotPresent
<         command:
<         - /metrics-server
<         - --metric-resolution=30s
<         - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
<         - --kubelet-insecure-tls
---
>         image: k8s.gcr.io/metrics-server-amd64:v0.3.3
>         imagePullPolicy: Always

Note the following:

--metric-resolution=30s: the interval at which metrics are scraped from the kubelets;
--kubelet-preferred-address-types: prefer InternalIP when connecting to a kubelet, which avoids failed kubelet API calls by node name when the name has no DNS record (the default behavior when unset);
the image pull policy in metrics-server-deployment.yaml is changed to "IfNotPresent";
the image source is switched to the mirror

Deploy metrics-server

[root@k8s-master01 1.8+]# kubectl create -f .
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

Check that it is running

[root@k8s-master01 1.8+]#  kubectl -n kube-system get pods -l k8s-app=metrics-server
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-6c49c8b6cc-6flx6   1/1     Running   0          59s
 
[root@k8s-master01 1.8+]# kubectl get svc -n kube-system  metrics-server
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
metrics-server   ClusterIP   10.0.94.102   <none>        443/TCP   79s

metrics-server command-line flags (run the command below on any node)

[root@k8s-node01 ~]# docker run -it --rm gcr.azk8s.cn/google_containers/metrics-server-amd64:v0.3.3 --help
Launch metrics-server
 
Usage:
   [flags]
 
Flags:
      --alsologtostderr                                         log to standard error as well as files
      --authentication-kubeconfig string                        kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenaccessreviews.authentication.k8s.io.
      --authentication-skip-lookup                              If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
      --authentication-token-webhook-cache-ttl duration         The duration to cache responses from the webhook token authenticator. (default 10s)
      --authentication-tolerate-lookup-failure                  If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
      --authorization-always-allow-paths strings                A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
      --authorization-kubeconfig string                         kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io.
      --authorization-webhook-cache-authorized-ttl duration     The duration to cache 'authorized' responses from the webhook authorizer. (default 10s)
      --authorization-webhook-cache-unauthorized-ttl duration   The duration to cache 'unauthorized' responses from the webhook authorizer. (default 10s)
      --bind-address ip                                         The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
      --cert-dir string                                         The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "apiserver.local.config/certificates")
      --client-ca-file string                                   If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
      --contention-profiling                                    Enable lock contention profiling, if profiling is enabled
  -h, --help                                                    help for this command
      --http2-max-streams-per-connection int                    The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
      --kubeconfig string                                       The path to the kubeconfig used to connect to the Kubernetes API server and the Kubelets (defaults to in-cluster config)
      --kubelet-certificate-authority string                    Path to the CA to use to validate the Kubelet's serving certificates.
      --kubelet-insecure-tls                                    Do not verify CA of serving certificates presented by Kubelets.  For testing purposes only.
      --kubelet-port int                                        The port to use to connect to Kubelets. (default 10250)
      --kubelet-preferred-address-types strings                 The priority of node address types to use when determining which address to use to connect to a particular node (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
      --log-flush-frequency duration                            Maximum number of seconds between log flushes (default 5s)
      --log_backtrace_at traceLocation                          when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                                          If non-empty, write log files in this directory
      --log_file string                                         If non-empty, use this log file
      --logtostderr                                             log to standard error instead of files (default true)
      --metric-resolution duration                              The resolution at which metrics-server will retain metrics. (default 1m0s)
      --profiling                                               Enable profiling via web interface host:port/debug/pprof/ (default true)
      --requestheader-allowed-names strings                     List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
      --requestheader-client-ca-file string                     Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
      --requestheader-extra-headers-prefix strings              List of request header prefixes to inspect. X-Remote-Extra- is suggested. (default [x-remote-extra-])
      --requestheader-group-headers strings                     List of request headers to inspect for groups. X-Remote-Group is suggested. (default [x-remote-group])
      --requestheader-username-headers strings                  List of request headers to inspect for usernames. X-Remote-User is common. (default [x-remote-user])
      --secure-port int                                         The port on which to serve HTTPS with authentication and authorization.If 0, don't serve HTTPS at all. (default 443)
      --skip_headers                                            If true, avoid header prefixes in the log messages
      --stderrthreshold severity                                logs at or above this threshold go to stderr
      --tls-cert-file string                                    File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
      --tls-cipher-suites strings                               Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be use.  Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12
      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.
      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
  -v, --v Level                                                 number for the log level verbosity
      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging

Verify it works

[root@k8s-master01 1.8+]# kubectl top node
error: metrics not available yet
Metrics are not available yet; wait a bit and retry
[root@k8s-master01 1.8+]# kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   292m         7%     1209Mi          64%       
k8s-master02   291m         7%     1066Mi          56%       
k8s-master03   336m         8%     1212Mi          64%       
k8s-node01     125m         3%     448Mi           23%       
k8s-node02     117m         2%     425Mi           22%       
k8s-node03     118m         2%     464Mi           24%    
[root@k8s-master01 1.8+]# kubectl top pods -n kube-system
NAME                                       CPU(cores)   MEMORY(bytes)   
calico-kube-controllers-7c4d64d599-w24xk   2m           14Mi            
calico-node-9hzdk                          48m          40Mi            
calico-node-w75b8                          43m          71Mi            
coredns-6967fb4995-ztrlt                   5m           14Mi            
etcd-k8s-master01                          81m          109Mi           
etcd-k8s-master03                          54m          90Mi            
kube-apiserver-k8s-master01                53m          368Mi           
kube-apiserver-k8s-master03                42m          331Mi           
kube-controller-manager-k8s-master01       32m          65Mi            
kube-controller-manager-k8s-master03       1m           16Mi            
kube-proxy-dtpvd                           1m           32Mi            
kube-proxy-lscgw                           2m           20Mi            
kube-scheduler-k8s-master01                2m           28Mi            
kube-scheduler-k8s-master03                3m           18Mi   

Access from a browser

[root@k8s-master01 1.8+]# kubectl cluster-info
Kubernetes master is running at https://20.0.0.250:8443
KubeDNS is running at https://20.0.0.250:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://20.0.0.250:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://20.0.0.250:8443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
1.5.5 The kube-state-metrics plugin

metrics-server, deployed above, can collect most of the metrics a container produces at runtime, but it is powerless to collect metrics like these:
-> How many replicas were scheduled? How many are currently available?
-> How many Pods are in running/stopped/terminated states?
-> How many times has a Pod restarted?
-> How many jobs are currently running?

This is what kube-state-metrics provides. It is an add-on service for Kubernetes, built on client-go, that polls the Kubernetes API and converts Kubernetes' structured object information into metrics. kube-state-metrics can collect data about most built-in k8s resources, such as pods, deployments, and services. It also exports metrics about itself, mainly counts of resources collected and of collection errors.

kube-state-metrics metric families include:

CronJob Metrics
DaemonSet Metrics
Deployment Metrics
Job Metrics
LimitRange Metrics
Node Metrics
PersistentVolume Metrics
PersistentVolumeClaim Metrics
Pod Metrics
Pod Disruption Budget Metrics
ReplicaSet Metrics
ReplicationController Metrics
ResourceQuota Metrics
Service Metrics
StatefulSet Metrics
Namespace Metrics
Horizontal Pod Autoscaler Metrics
Endpoint Metrics
Secret Metrics
ConfigMap Metrics

Taking pods as an example, the metrics include:

kube_pod_info
kube_pod_owner
kube_pod_status_running
kube_pod_status_ready
kube_pod_status_scheduled
kube_pod_container_status_waiting
kube_pod_container_status_terminated_reason
..............
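Since these are plain Prometheus text-format metrics, the pod family can be filtered with grep. Sketched here against a tiny recorded sample (fabricated values) so it runs offline; against a live cluster, pipe in curl of the NodePort instead:

```shell
# Three sample lines in Prometheus exposition format (fabricated values).
sample='kube_pod_info{namespace="default",pod="my-nginx-6b8796c8f4-2bgjg"} 1
kube_node_info{node="k8s-node01"} 1
kube_pod_status_ready{namespace="default",pod="my-nginx-6b8796c8f4-2bgjg",condition="true"} 1'

# Count kube_pod_* series; live equivalent:
#   curl -s http://<node>:<nodeport>/metrics | grep -c '^kube_pod_'
echo "$sample" | grep -c '^kube_pod_'
```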

kube-state-metrics compared with metrics-server (or Heapster):
1) metrics-server obtains usage metrics such as CPU and memory from the api-server and sends them to a storage backend such as influxdb or a cloud vendor; its core role today is supplying decision metrics to components such as the HPA.
2) kube-state-metrics focuses on the latest state of the various k8s resources, such as deployments and daemonsets. It is not folded into metrics-server because the two have fundamentally different concerns: metrics-server merely fetches and formats existing data and writes it to a specific store (in essence, a monitoring system), whereas kube-state-metrics takes an in-memory snapshot of the cluster's running state and derives new metrics from it, but has no ability to export those metrics itself.
3) Put another way, kube-state-metrics could itself serve as a data source for metrics-server, although it is not used that way today.
4) Also, a monitoring system like Prometheus does not consume metrics-server's data: it does all its own metric collection and integration (Prometheus subsumes metrics-server's capability). Prometheus can, however, monitor the metrics-server component itself and raise timely alerts, and that monitoring can be implemented through kube-state-metrics, for example the running state of the metrics-server pod.

kube-state-metrics essentially polls the api-server continuously, and its performance has been tuned accordingly.
Earlier versions of kube-state-metrics exposed two problems:
1) the /metrics endpoint responded slowly (10-20s)
2) memory consumption was too high, causing the pod to exceed its limit and be killed
The fix for the first problem was a local cache built on client-go's cache tool, with the structure: var cache = map[uuid][]byte{}
The fix for the second: time-series strings contain many repeated characters (such as namespace prefixes), which can be shared via pointers or structured deduplication.

kube-state-metrics optimizations and open questions:
1) Since kube-state-metrics listens for add, delete, and update events on resources, is data about resources that were already running before it was deployed lost? No: using client-go, kube-state-metrics initializes from every resource object that already exists, so nothing is missed;
2) it currently does not output metadata information (such as help and description);
3) the cache is implemented as a golang map, with concurrent reads currently protected by a simple mutex, which should be adequate; golang's thread-safe sync.Map is being considered as a follow-up;
4) it compares resource versions to guarantee event ordering;
5) it does not guarantee that every resource is included;

Manifests

https://github.com/kubernetes/kube-state-metrics

[root@k8s-master01 kubernetes]# ll
總用量 20
-rw-r--r-- 1 root root  362 7月  27 08:59 kube-state-metrics-cluster-role-binding.yaml
-rw-r--r-- 1 root root 1269 7月  27 08:59 kube-state-metrics-cluster-role.yaml
-rw-r--r-- 1 root root  800 7月  27 20:12 kube-state-metrics-deployment.yaml
-rw-r--r-- 1 root root   98 7月  27 08:59 kube-state-metrics-service-account.yaml
-rw-r--r-- 1 root root  421 7月  27 20:13 kube-state-metrics-service.yaml

Update the configuration

[root@k8s-master01 kubernetes]# fgrep -R "image" ./*
./kube-state-metrics-deployment.yaml:        image: quay.io/coreos/kube-state-metrics:v1.7.1

 
[root@k8s-master01 kubernetes]# cat kube-state-metrics-deployment.yaml
......
        image: gcr.azk8s.cn/google_containers/kube-state-metrics:v1.7.1
        imagePullPolicy: IfNotPresent


[root@k8s-master01 kubernetes]# cat kube-state-metrics-service.yaml
......
  type: NodePort
  selector:
    k8s-app: kube-state-metrics

Apply and verify

#Apply the manifests
[root@k8s-master01 kubernetes]# kubectl create -f .
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
service/kube-state-metrics created
#Check
[root@k8s-master01 kubernetes]# kubectl get pod -n kube-system|grep kube-state-metrics
kube-state-metrics-7d5dfb9596-lds7l        1/1     Running   0          45s
[root@k8s-master01 kubernetes]# kubectl get svc -n kube-system|grep kube-state-metrics
kube-state-metrics     NodePort    10.0.112.70   <none>        8080:32672/TCP,8081:31505/TCP   75s
[root@k8s-master01 kubernetes]# kubectl get pod,svc -n kube-system|grep kube-state-metrics
 
pod/kube-state-metrics-7d5dfb9596-lds7l        1/1     Running   0          82s
service/kube-state-metrics     NodePort    10.0.112.70   <none>        8080:32672/TCP,8081:31505/TCP   83s
[root@k8s-master01 kubernetes]# curl http://20.0.0.201:32672/metrics|head -10
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0# HELP kube_certificatesigningrequest_labels Kubernetes labels converted to Prometheus labels.
# TYPE kube_certificatesigningrequest_labels gauge
# HELP kube_certificatesigningrequest_created Unix creation timestamp
# TYPE kube_certificatesigningrequest_created gauge
# HELP kube_certificatesigningrequest_condition The number of each certificatesigningrequest condition
# TYPE kube_certificatesigningrequest_condition gauge
# HELP kube_certificatesigningrequest_cert_length Length of the issued cert
# TYPE kube_certificatesigningrequest_cert_length gauge
# HELP kube_certificatesigningrequest_annotations Kubernetes annotations converted to Prometheus labels.
# TYPE kube_certificatesigningrequest_annotations gauge
100 13516    0 13516    0     0   793k      0 --:--:-- --:--:-- --:--:--  879k
curl: (23) Failed writing body (1535 != 2048)
Note: the curl (23) error only means head closed the pipe after 10 lines; the metrics endpoint is serving normally.
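Rather than eyeballing the `kubectl get svc` output for the NodePort, it can be parsed out with sed. Shown here on the recorded line from above so the snippet runs offline:

```shell
# Recorded `kubectl get svc` line; on a live cluster replace with:
#   kubectl get svc -n kube-system kube-state-metrics --no-headers
svc_line='kube-state-metrics     NodePort    10.0.112.70   <none>        8080:32672/TCP,8081:31505/TCP   75s'

# Extract the NodePort mapped to container port 8080 (the metrics port).
nodeport=$(echo "$svc_line" | sed -n 's#.*8080:\([0-9]*\)/TCP.*#\1#p')
echo "$nodeport"   # → 32672
# then: curl -s "http://20.0.0.201:${nodeport}/metrics" | head -10
```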

Access from a browser

This completes the Kubernetes cluster deployment.

Recommendation: reboot (init 6) --> verify --> take a snapshot

1.5.6 Reflections

I went back and forth over this cluster-deployment part; last time I wrote 114 pages and threw them all away. This is a beginning, and it shapes the study that follows. I hope I can keep writing better!
