Building My Private Cloud with the K8S Stack (Series: K8S Cluster Setup)


Note: this article first appeared on my WeChat public account CodeSheep (程序羊).



Work has been grinding me down lately, so I haven't had time to post; this piece took many evenings to write. More to come!


[Series index: Building My Private Cloud with the K8S Stack]


Environment Overview

A cluster needs a few machines as nodes, of course! I don't own any spare high-powered hardware, so I dug through the closets at home and unearthed a few battered old laptops. Let's give them a try; better to take them out for a spin than to prop up a table leg...

The overall environment is arranged as shown in the figure below:

(Figure: overall cluster architecture layout)

A quick rundown of each part:

Master node (a Hedy laptop bought in 2008, CentOS 7.3 64-bit)

  • docker
  • etcd
  • flannel
  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

Slave node (a second-hand Thinkpad T420s, CentOS 7.3 64-bit)


  • docker
  • flannel
  • kubelet
  • kube-proxy

Client node (a 2012 Sony Vaio SVS13, Win7 Ultimate)

  • As the client it plays the customer, so nothing needs installing; an ssh client that can reach the master and slave nodes is enough

Docker image registry

  • A company would normally run its own docker registry as an internal image repository. I simply use Docker Hub as the registry instead of hosting one myself (mainly because I'm out of machines!)

Wireless router (a Xiaomi Mi Router 3)

  • It had better punch through walls, because the router sits in the living room while my experiments happen in the bedroom!

Everything is interconnected over wifi; I'm personally not fond of a big tangle of cables.


Environment Preparation

  1. Set the hostnames on the master node and all slave nodes

On the master, run:

hostnamectl --static set-hostname  k8s-master

On the slave, run:

hostnamectl --static set-hostname  k8s-node-1
  2. Update the hosts file on master and slave

Add the following to /etc/hosts on both master and slave:

192.168.31.166   k8s-master
192.168.31.166   etcd
192.168.31.166   registry
192.168.31.199   k8s-node-1
  3. Disable the firewall on master and slave
systemctl disable firewalld.service
systemctl stop firewalld.service
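The hosts entries from step 2 can also be applied idempotently, which is handy if you re-run provisioning. A minimal sketch; `HOSTS_FILE` defaults to a hypothetical scratch file here so the snippet can be dry-run safely, and should point at /etc/hosts when applied for real:

```shell
# Idempotently append the cluster host entries (step 2 above).
# HOSTS_FILE defaults to a scratch file for a safe dry run.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.k8s-test}"
touch "$HOSTS_FILE"
while read -r entry; do
  # only append lines that are not already present
  grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done <<'EOF'
192.168.31.166   k8s-master
192.168.31.166   etcd
192.168.31.166   registry
192.168.31.199   k8s-node-1
EOF
grep -c '^192\.168\.31\.' "$HOSTS_FILE"   # 4 entries on a clean dry run
```

Running it a second time leaves the file unchanged, so it is safe in a provisioning script.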

Deploying the Master Node

The master node needs the following components installed:

  • etcd
  • flannel
  • docker
  • kubernetes

Taken in order:

1. Installing etcd

  • Install command: yum install etcd -y
  • Edit etcd's default configuration file, /etc/etcd/etcd.conf
# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#ETCD_ENABLE_V2="true"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#
#[profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[auth]
#ETCD_AUTH_TOKEN="simple"
  • Start etcd and verify it

First, start the etcd service:

systemctl start etcd    # start the etcd service

Then check the cluster's health:

etcdctl -C http://etcd:2379 cluster-health
etcdctl -C http://etcd:4001 cluster-health

(Figure: etcd cluster health output)
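If you want a provisioning script to act on this check rather than eyeball it, the health output can be tested mechanically. A small sketch; `etcd_healthy` is a hypothetical helper, and the `command -v` guard keeps the snippet harmless on a box without etcdctl:

```shell
# Hypothetical helper: succeed only if the etcdctl output reports a healthy cluster.
etcd_healthy() {
  case "$1" in
    *"cluster is healthy"*) return 0 ;;
    *)                      return 1 ;;
  esac
}

# Guarded call: only runs where etcdctl is actually installed.
if command -v etcdctl >/dev/null 2>&1; then
  if etcd_healthy "$(etcdctl -C http://etcd:2379 cluster-health 2>/dev/null)"; then
    echo "etcd OK"
  else
    echo "etcd NOT healthy" >&2
  fi
fi
```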

2. Installing flannel

  • Install command: yum install flannel
  • Configure flannel: /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  • Write flannel's network configuration key into etcd
etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'


  • Start flannel and enable it at boot
systemctl start flanneld.service
systemctl enable flanneld.service
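To confirm the key flannel reads is actually in place, fetch it back and pull out the Network value. A sketch; `extract_network` is a hypothetical sed helper that assumes the one-line JSON written above, and the guard makes the snippet a no-op where etcdctl is absent:

```shell
# Hypothetical helper: extract the "Network" CIDR from flannel's etcd config JSON.
extract_network() {
  sed -n 's/.*"Network"[^"]*"\([^"]*\)".*/\1/p'
}

# Guarded check: read the key back and print the CIDR flannel will carve up.
if command -v etcdctl >/dev/null 2>&1; then
  etcdctl get /atomic.io/network/config | extract_network   # expect 10.0.0.0/16
fi
```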

3. Installing docker

This part is covered by plenty of tutorials online; the main steps are:

  • Install command: yum install docker -y
  • Start the docker service: service docker start
  • Enable docker at boot: chkconfig docker on

4. Installing kubernetes

Installing k8s itself is simple; just run:

yum install kubernetes

But k8s needs quite a bit of configuration. As mentioned in the "Environment Overview" section, the master must run the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

In detail:

  • Configure /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
# KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
  • Configure /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
  • Start the k8s components
systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service
  • Enable the k8s components at boot
systemctl enable kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl enable kube-scheduler.service
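Since the three services are started and enabled in lockstep, a loop keeps the two lists from drifting apart. A sketch; `k8s_master_services` is a hypothetical helper that only prints the commands, so you can review them and pipe to `sh` to actually execute:

```shell
# Hypothetical helper: emit the start/enable commands for the master components.
k8s_master_services() {
  for SVC in kube-apiserver kube-controller-manager kube-scheduler; do
    echo "systemctl start $SVC.service"
    echo "systemctl enable $SVC.service"
  done
}

k8s_master_services          # review the six commands first
# k8s_master_services | sh   # uncomment to run them for real
```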

Deploying the Slave Node

The slave node needs the following components installed:

  • flannel
  • docker
  • kubernetes

Taken in order:

1. Installing flannel

  • Install command: yum install flannel
  • Configure flannel: /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  • Start flannel and enable it at boot
systemctl start flanneld.service
systemctl enable flanneld.service

2. Installing docker

See the docker deployment on the master node above; the steps are identical.

3. Installing kubernetes

Install command: yum install kubernetes

Unlike the master, the slave node runs the following kubernetes components:

  • kubelet
  • kube-proxy

What needs configuring, in detail:

  • Configure /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
  • Configure /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
  • Start the kube services
systemctl start kubelet.service
systemctl start kube-proxy.service
  • Enable the k8s components at boot
systemctl enable kubelet.service
systemctl enable kube-proxy.service

At this point the k8s cluster setup is complete. Let's verify that the cluster actually came up.

Verifying Cluster Status

  • View endpoint info: kubectl get endpoints


  • View cluster info: kubectl cluster-info

(Figure: cluster info)

  • Get the status of the cluster's nodes: kubectl get nodes

(Figure: node status in the cluster)

OK, the node is Ready; we can now run experiments on it!
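For scripting, the `kubectl get nodes` check can be reduced to a Ready-node count. A sketch; `count_ready` is a hypothetical awk helper that assumes the STATUS value sits in the second column of this kubectl version's output, and the guard makes it a no-op where kubectl is missing:

```shell
# Hypothetical helper: count nodes whose STATUS column reads "Ready".
count_ready() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# Guarded check: for this single-slave cluster the expected count is 1.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes | count_ready
fi
```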

