The Linux Ops and Architecture Road: Kubernetes Cluster Deployment

1. Kubernetes Introduction

       Kubernetes, K8s for short, is a leading distributed-architecture solution built on container technology. Kubernetes (k8s) is Google's open-source container cluster management system (known inside Google as Borg). Building on Docker, it provides containerized applications with a complete set of features: deployment and execution, resource scheduling, service discovery, and dynamic scaling, making large container clusters far easier to manage.

       Kubernetes is a complete platform for distributed systems, with full cluster management capabilities: multi-level security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in intelligent load balancer, strong failure detection and self-healing, rolling upgrades and online scaling, a pluggable automatic resource scheduler, and fine-grained resource quota management. Kubernetes also ships with management tooling that covers development, deployment, testing, and operations monitoring.

2. Kubernetes Architecture and Components

3. K8s Features

       Kubernetes is a container scheduling and management system designed for production use, with native support for the features a container cloud platform needs: load balancing, service discovery, high availability, rolling upgrades, and auto-scaling.
      A K8s cluster consists of distributed storage (etcd), service nodes (Minion, nowadays called Node), and a control node (Master). All cluster state is kept in etcd, while the Master runs the cluster's management and control components. Nodes are the hosts that actually run application containers; each Node runs a kubelet agent, which manages the containers, images, and storage volumes on that node.
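
As a concrete sketch of how these roles map onto daemons (assuming the stock CentOS 7 RPMs installed later in this guide), each class of host runs a fixed set of systemd services:

# On the Master (which also hosts etcd in this guide):
systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler
# On every Node:
systemctl status kubelet kube-proxy docker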

Kubernetes capabilities:
1. Automated container deployment and replication
2. Scale the number of containers up or down at any time
3. Organize containers into groups and load-balance across them
4. Easily roll out new versions of an application's containers
5. Container resilience: if a container fails, replace it; and so on

Problems Kubernetes solves:
1. Scheduling: which machine a container should run on
2. Lifecycle and health: keeping containers running without errors
3. Service discovery: where a container is and how to reach it
4. Monitoring: whether a container is running properly
5. Authentication: who is allowed to access a container
6. Container aggregation: how to combine multiple containers into one application

4. Kubernetes Environment Planning

1. Environment

[root@k8s-master ~]# cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core) 
[root@k8s-master ~]# uname -r
3.10.0-327.el7.x86_64
[root@k8s-master ~]# systemctl status firewalld.service 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
[root@k8s-master ~]# getenforce 
Disabled
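
The output above assumes firewalld and SELinux are already off. If they are still enabled on your hosts, a common simplification for a test cluster (not something to do in production) is:

systemctl stop firewalld && systemctl disable firewalld
setenforce 0                                                          # immediate, until reboot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persistent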

2. Server planning

Node & role                 Hostname        IP
Master, etcd, registry      k8s-master      10.0.0.148
Node1                       k8s-node-1      10.0.0.149
Node2                       k8s-node-2      10.0.0.150

3. Unify /etc/hosts across all hosts

echo '
10.0.0.148    k8s-master
10.0.0.148    etcd
10.0.0.148    registry
10.0.0.149    k8s-node-1
10.0.0.150    k8s-node-2' >> /etc/hosts
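
If name resolution is in doubt, a quick check on each host (getent ships with glibc on CentOS 7):

getent hosts k8s-master etcd registry k8s-node-1 k8s-node-2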

4. Kubernetes cluster deployment architecture diagram

5. Kubernetes Cluster Deployment

1. Master installation

① Deploy etcd, the datastore K8s depends on

yum install etcd -y

Edit the configuration file with vim /etc/etcd/etcd.conf; the uncommented lines below are the active settings:

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME=master
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"

Start etcd and verify the cluster health

[root@k8s-master ~]# systemctl start etcd.service 
[root@k8s-master ~]# systemctl enable etcd.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@k8s-master ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
[root@k8s-master ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
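
Beyond cluster-health, a simple write/read round-trip against the v2 API (the key name here is arbitrary) confirms etcd is usable by Kubernetes and Flannel:

etcdctl -C http://etcd:2379 set /test/hello world
etcdctl -C http://etcd:2379 get /test/hello
etcdctl -C http://etcd:2379 rm /test/hello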

② Install Docker

Add the yum repository

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
sed -i 's#download.docker.com#mirrors.ustc.edu.cn/docker-ce#g' /etc/yum.repos.d/docker-ce.repo

Install docker

yum install docker-ce -y

Edit the Docker configuration file so the daemon may pull images from the local registry without TLS

[root@k8s-master ~]# vim /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
# Appended: trust the local registry on port 5000 without TLS. Note that this
# reassigns OPTIONS and overrides the line above; merge the flags onto one
# line if you want to keep both sets.
OPTIONS='--insecure-registry registry:5000'

Start docker and enable it at boot

[root@k8s-master ~]# systemctl start docker.service
[root@k8s-master ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
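
Once docker is running, you can confirm the daemon accepted the flag (exact wording varies across Docker versions):

docker info 2>/dev/null | grep -i -A 3 insecure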

③ Install kubernetes

yum install kubernetes -y
# Installing kubernetes pulls in docker as a dependency; if docker-ce was
# installed first, the two packages conflict. In that case remove docker-ce
# and install kubernetes again:
yum list installed | grep docker
yum remove -y docker-ce.x86_64
yum install kubernetes -y
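
To see what the kubernetes meta-package pulled in (exact versions depend on your mirror):

rpm -qa | grep -E 'kubernetes|docker|etcd'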

④ Configure and start kubernetes

Edit /etc/kubernetes/apiserver:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""

Edit /etc/kubernetes/config:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

Start the K8s services and enable them at boot; kube-apiserver must be running before the controller-manager and scheduler can do anything useful:
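
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-apiserver.service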

[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master ~]# systemctl start kube-scheduler.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
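
With the control plane up, a quick health check (componentstatuses is deprecated in modern Kubernetes, but works on the version these RPMs ship):

[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get componentstatuses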

2. Node deployment

① Install kubernetes and docker (installing kubernetes pulls in docker as a dependency, as on the master)

yum install kubernetes -y

Start docker and enable it at boot

[root@k8s-node-1 ~]# systemctl start docker.service 
[root@k8s-node-1 ~]# systemctl enable docker.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

② Configure and start kubernetes

Edit /etc/kubernetes/config (identical to the master's):

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

Edit /etc/kubernetes/kubelet (on k8s-node-2, set --hostname-override=k8s-node-2 instead):

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Start the node services and enable them at boot

systemctl start kubelet.service
systemctl enable kubelet.service
systemctl start kube-proxy.service
systemctl enable kube-proxy.service

3. From the master, list the cluster nodes and their status

[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     1m
k8s-node-2   Ready     1m
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     3m
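
For more detail on a node's capacity, conditions, and the pods it runs:

kubectl describe node k8s-node-1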

4. Create the overlay network: Flannel

① Install Flannel on both the master and the nodes

yum install flannel -y

Configure Flannel on the master and the nodes:
vi /etc/sysconfig/flanneld

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

② Set the Flannel network in etcd

etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
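
Read the key back to confirm; its path must equal FLANNEL_ETCD_PREFIX plus /config:

etcdctl get /atomic.io/network/config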

Start Flannel, then restart the docker and kubernetes services:

Master:
systemctl start flanneld.service 
systemctl enable flanneld.service 
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
Node:
systemctl start flanneld.service 
systemctl enable flanneld.service 
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
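
Once everything has restarted, flanneld records the subnet leased to each host in /run/flannel/subnet.env, and docker0 should be re-addressed inside that range (with the default udp backend the tunnel device is named flannel0). A quick check on any host:

cat /run/flannel/subnet.env
ip addr show flannel0
ip addr show docker0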