K8S Cluster Setup


Abstract

This writeup is based on several online articles combined with my own understanding. Components and steps that those articles treated redundantly have been removed, with the goal of keeping the deployment as simple as possible.

K8S Component Overview

Kubernetes has two node roles: master nodes and minion nodes.

  • The master node exposes a set of APIs for managing the cluster, and manages the cluster by interacting with the minion nodes.
  • Minion nodes are where the Docker containers actually run; they interact with the Docker daemon on the node and also provide proxying.

Master node components

apiserver: the entry point through which users interact with the kubernetes cluster. It wraps the create/read/update/delete operations on the core objects, exposes a RESTful API, and uses etcd to persist objects and keep them consistent.

scheduler: handles scheduling and management of cluster resources. For example, when a pod exits abnormally and must be re-placed, the scheduler runs its scheduling algorithm to find the most suitable node.

controller-manager: chiefly ensures that the number of pods actually running matches the replica count defined by a replicationController, and also keeps the service-to-pod mapping up to date.

etcd: a key-value store database that holds kubernetes' state.

Minion node components

kubelet: runs on the minion nodes and talks to the local Docker daemon, e.g. to start and stop containers and to monitor their running state.

proxy: runs on the minion nodes and provides the proxying for pods. It periodically fetches service information from etcd and, based on it, rewrites iptables rules to forward traffic to the node hosting the target pod (the earliest versions forwarded traffic through the process itself, which was less efficient).
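
Once the cluster is running, the effect of this can be observed directly: kube-proxy programs its rules into the nat table. A quick way to inspect them (the exact chain names depend on the proxy mode used by this kubernetes version):

iptables -t nat -nL | grep -i kube   # list the KUBE-* chains and rules kube-proxy maintains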

flannel: an overlay-network tool the CoreOS team designed for Kubernetes; it must be downloaded and deployed separately. When Docker starts, it sets up an IP address for talking to its containers; left unmanaged, this address may well be the same on every machine and only supports local communication, so Docker containers on other machines cannot be reached. Flannel re-plans how IP addresses are assigned across all nodes in the cluster, so containers on different nodes receive non-conflicting addresses within one internal network and can talk to each other directly over those internal IPs.
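
Once flanneld is running on a minion (see the Startup section below), the subnet it leased for that node can be checked locally. The paths below are the standard flannel locations, so treat this as a sketch:

cat /run/flannel/subnet.env   # FLANNEL_SUBNET is this node's container subnet
ip addr show docker0          # after docker restarts, docker0 should sit inside FLANNEL_SUBNET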

Architecture

[Figure: K8S architecture diagram]

Prerequisites

OS: CentOS 7.5.1804

IP      hostname  services
44.201  node1     kube-apiserver,kube-scheduler,kube-controller-manager,etcd
44.202  node2     kubelet,kube-proxy,flanneld,docker
44.203  node3     kubelet,kube-proxy,flanneld,docker

Installation

Master node

yum install kubernetes-master etcd -y

Minion nodes

yum install kubernetes-node flannel -y

Configuration

Master node

Edit /etc/etcd/etcd.conf
[root@node1 ~]# vim /etc/etcd/etcd.conf 

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://192.168.44.201:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.44.201:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.44.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.44.201:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="default=http://192.168.44.201:2380"
....
A brief explanation of these URL settings:
[member]
ETCD_NAME: the name of this etcd node
ETCD_DATA_DIR: the etcd data directory
ETCD_SNAPSHOT_COUNT: the number of committed transactions that trigger a snapshot
ETCD_HEARTBEAT_INTERVAL: the interval between heartbeats among etcd nodes, in milliseconds
ETCD_ELECTION_TIMEOUT: the longest this node waits for a heartbeat before starting an election, in milliseconds
ETCD_LISTEN_PEER_URLS: the addresses this node listens on for traffic from other nodes, comma-separated, each in the form scheme://IP:PORT, where scheme can be http or https
ETCD_LISTEN_CLIENT_URLS: the addresses this node listens on for client traffic
[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS: the peer addresses this member advertises to the cluster; cluster data is transferred over these addresses, so every member must be able to reach them
ETCD_INITIAL_CLUSTER: all members of the cluster, each in the form ETCD_NAME=ETCD_INITIAL_ADVERTISE_PEER_URLS, comma-separated when there is more than one
ETCD_ADVERTISE_CLIENT_URLS: the client addresses this member advertises to the rest of the cluster

Note: in k8s only the master node needs etcd configured, because all other nodes share this single etcd service.
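
Once etcd is started (see the Startup section below), it can be sanity-checked from any node with the v2 etcdctl. The endpoint below matches the ETCD_ADVERTISE_CLIENT_URLS configured above:

etcdctl --endpoints=http://192.168.44.201:2379 cluster-health   # the single member should report healthy
etcdctl --endpoints=http://192.168.44.201:2379 member list      # should list the one "default" member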

Edit /etc/kubernetes/apiserver
[root@node1 ~]# vim /etc/kubernetes/apiserver 

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--address=0.0.0.0"  ## by default the API server binds only to 127.0.0.1, which acts like a whitelist; changing it to "--address=0.0.0.0" allows all nodes to access it

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.44.201:2379"  # the etcd service

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.1.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/serviceaccount.key"
Edit /etc/kubernetes/controller-manager
[root@node1 ~]# vim /etc/kubernetes/controller-manager

###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/serviceaccount.key"
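
Both files above point at /etc/kubernetes/serviceaccount.key, which the packages do not create. Any RSA private key will do; one way to generate it (the 2048-bit size is an assumption):

openssl genrsa -out /etc/kubernetes/serviceaccount.key 2048
chmod 600 /etc/kubernetes/serviceaccount.key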

The following file was not modified during this procedure, but it is still fairly important:

/usr/lib/systemd/system/etcd.service
[root@node1 kubernetes]# cat /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@node1 kubernetes]#

Minion nodes

Edit /etc/kubernetes/config
[root@node2 kubernetes]# vim /etc/kubernetes/config 

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.44.201:8080"
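
Before continuing, it is worth confirming that each minion can actually reach the apiserver address configured above; a minimal check:

curl http://192.168.44.201:8080/version   # should return the apiserver's version info as JSON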
Edit /etc/kubernetes/kubelet
[root@node2 kubernetes]# vim /etc/kubernetes/kubelet 

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.44.202"  ## change this IP

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.44.202" ## change this IP

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.44.201:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Note: adjust the IP addresses to match each node.
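
On node3 the same file is used with node3's own address. One way to derive it from the node2 copy (a sketch; verify the result before starting kubelet):

sed -i 's/192.168.44.202/192.168.44.203/g' /etc/kubernetes/kubelet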

Edit /etc/sysconfig/flanneld

[root@node2 kubernetes]# vim /etc/sysconfig/flanneld 

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.44.201:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Note: the configuration on node3 is identical to node2.
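
One step that is easy to miss: flanneld reads its address range from the etcd key named by FLANNEL_ETCD_PREFIX, so that key must be written (on the master, using the v2 etcdctl) before flanneld is started on the minions. The 10.254.0.0/16 range below is an assumption; any range that collides with neither the service range 10.1.0.0/16 nor the host network will do:

etcdctl --endpoints=http://192.168.44.201:2379 set /coreos.com/network/config '{"Network":"10.254.0.0/16","SubnetLen":24}'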
The following files were not modified during this procedure, but they are also fairly important:

/usr/lib/systemd/system/flanneld.service
[root@node2 kubernetes]# cat /usr/lib/systemd/system/flanneld.service 
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
#ExecStart=/usr/bin/flanneld --iface ens33 $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service
/usr/lib/systemd/system/docker.service
[root@node2 kubernetes]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target flanneld.service
Wants=docker-storage-setup.service
Requires=flanneld.service  # this line can probably be dropped; what matters is that docker starts after flanneld

[Service]
Type=notify
NotifyAccess=main
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --init-path=/usr/libexec/docker/docker-init-current \
          --seccomp-profile=/etc/docker/seccomp.json \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
      $REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process

[Install]
WantedBy=multi-user.target

Startup

The flanneld service on the minion nodes must not be started until the etcd service on the master is up.

Master node

systemctl start etcd
systemctl enable etcd
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

Minion nodes

systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl start flanneld
systemctl enable flanneld
systemctl start docker 
systemctl enable docker

Note: if docker was started earlier and the docker0 bridge exists (check with ip addr), stop docker first (systemctl stop docker) and delete docker0 (ip link delete docker0) before starting flanneld.
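
Once everything is up, verify from the master that both minions have registered; 192.168.44.202 and 192.168.44.203 should both report Ready:

kubectl get nodes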

Caveats

1. The firewall and selinux must be turned off on every node.
  • Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
  • Disable selinux
Temporarily:
setenforce 0

Permanently:
[root@node2 kubernetes]# vim /etc/selinux/config 


# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
2. Disabling the firewall alone is not enough; pods still cannot ping each other until the iptables rules are flushed (this one is a real trap, and it took a lot of digging to track down):
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
iptables -L -n

Managing kubernetes through the web UI

1. Download kubernetes-dashboard-amd64

docker search  kubernetes-dashboard-amd64

docker images

2. Push kubernetes-dashboard-amd64 to the local private registry

docker tag siriuszg/kubernetes-dashboard-amd64:latest 192.168.44.201:5000/kubernetes-dashboard-amd64:latest
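
The tag alone moves nothing over the network; presumably a pull precedes it and a push follows. A sketch, assuming the siriuszg image found by the search above and a registry already listening on 192.168.44.201:5000 (with docker configured to trust it as an insecure registry):

docker pull siriuszg/kubernetes-dashboard-amd64:latest
docker push 192.168.44.201:5000/kubernetes-dashboard-amd64:latest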

(Setting up the private registry itself is covered in a separate guide.)

3. Create the kubernetes-dashboard.yaml file on the master server

[root@node1 ~]# vim  kubernetes-dashboard.yaml 

      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.44.201:5000/kubernetes-dashboard-amd64
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host  
          # If not specified, Dashboard will attempt to auto discover the API server and connect  
          # to it. Uncomment only if the default does not work.  
          - --apiserver-host=192.168.44.201:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard

4. Create the kubernetes-dashboard resources

[root@node1 ~]# kubectl create -f kubernetes-dashboard.yaml
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists

If the errors above appear, all existing kubernetes-dashboard resources must be deleted first:

[root@node1 ~]# kubectl delete -f kubernetes-dashboard.yaml

5. View pod details
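
The pod's details can be checked from the master; the dashboard runs in the kube-system namespace:

kubectl get pods --namespace=kube-system -o wide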


The dashboard can now be reached at 192.168.44.201:8080/ui

6. View service details
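
Likewise for the service; the NodePort that kubernetes assigned to port 80 is what makes the dashboard reachable on the minions' addresses:

kubectl get svc kubernetes-dashboard --namespace=kube-system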


It can also be reached via 192.168.44.203:8080/ui

7. Web UI
