Single-node k8s (Kubernetes) deployment on CentOS 7

1. Environment overview and preparation

1.1 Host machine operating system

[root@k8s-node-1 ~]# uname -a
Linux k8s-node-1 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Hostname     IP
k8s-master   172.18.227.106

1.2 Set the machine's hostname

[root@k8s-node-1 ~]# hostnamectl --static set-hostname k8s-node-1

1.3 Configure /etc/hosts

echo '172.18.227.106 k8s-master etcd registry k8s-node-1' >> /etc/hosts

1.4 Disable the firewall

systemctl disable firewalld.service
systemctl stop firewalld.service

2. Install etcd

2.1 etcd provides shared configuration and service discovery. Kubernetes depends on etcd at runtime, so etcd must be deployed first. This article installs it via yum:

yum install etcd -y

2.2 Edit the etcd configuration file, changing:
ETCD_NAME="master"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"

[root@localhost ~]# vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME="master" 
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
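The three edits above can also be scripted with sed rather than made interactively in vi. A minimal sketch, run here against a scratch copy seeded with typical defaults (the /tmp path and seed content are illustrative; point CONF at /etc/etcd/etcd.conf on a real host):

```shell
# Scratch copy standing in for /etc/etcd/etcd.conf (hypothetical path)
CONF=/tmp/etcd.conf.demo
cat > "$CONF" <<'EOF'
ETCD_NAME="default"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
EOF

# Apply the three changes from section 2.2 in place
sed -i \
  -e 's|^ETCD_NAME=.*|ETCD_NAME="master"|' \
  -e 's|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"|' \
  -e 's|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"|' \
  "$CONF"

cat "$CONF"
```

The `|` delimiter avoids having to escape the slashes inside the URLs.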

2.3 Start etcd and verify its status

[root@k8s-node-1 etcd]# systemctl start etcd
[root@k8s-node-1 etcd]# etcdctl set testdir/testkey0 0
0
[root@k8s-node-1 etcd]# etcdctl get testdir/testkey0 
0
[root@k8s-node-1 etcd]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
[root@k8s-node-1 etcd]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy

3. Deploy the master

3.1 Install Docker

[root@k8s-node-1 etcd]# yum install docker -y

3.2 Edit the Docker configuration file so that images can be pulled from the registry:
OPTIONS='--insecure-registry registry:5000'

[root@k8s-master ~]# vim /etc/sysconfig/docker

# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
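Note that the stock file already defines OPTIONS, and a second OPTIONS= assignment later in the file would silently override the first one. A hedged sketch of appending the flag to the existing line with sed instead, shown on a scratch copy (the /tmp path is illustrative; use /etc/sysconfig/docker on a real host):

```shell
# Scratch copy standing in for /etc/sysconfig/docker (hypothetical path)
F=/tmp/docker.sysconfig.demo
echo "OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'" > "$F"

# Append --insecure-registry inside the existing single-quoted value,
# keeping exactly one OPTIONS= assignment in the file
sed -i "s|^OPTIONS='\(.*\)'|OPTIONS='\1 --insecure-registry registry:5000'|" "$F"

cat "$F"
```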

3.3 Start Docker

[root@k8s-node-1 etcd]# systemctl start docker

Error scenarios

  • Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
    If startup fails, run systemctl status docker -l to find the cause:
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since 五 2018-11-30 16:22:13 CST; 2min 12s ago
     Docs: http://docs.docker.com
  Process: 32245 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES (code=exited, status=1/FAILURE)
 Main PID: 32245 (code=exited, status=1/FAILURE)

11月 30 16:22:11 k8s-node-1 dockerd-current[32245]: time="2018-11-30T16:22:11.990748588+08:00" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
11月 30 16:22:11 k8s-node-1 dockerd-current[32245]: time="2018-11-30T16:22:11.992907763+08:00" level=info msg="libcontainerd: new containerd process, pid: 32251"
11月 30 16:22:13 k8s-node-1 dockerd-current[32245]: time="2018-11-30T16:22:13.008685863+08:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
11月 30 16:22:13 k8s-node-1 dockerd-current[32245]: time="2018-11-30T16:22:13.009911216+08:00" level=info msg="Loading containers: start."
11月 30 16:22:13 k8s-node-1 dockerd-current[32245]: time="2018-11-30T16:22:13.026236097+08:00" level=info msg="Firewalld running: false"
11月 30 16:22:13 k8s-node-1 dockerd-current[32245]: Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain: Iptables not found
11月 30 16:22:13 k8s-node-1 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
11月 30 16:22:13 k8s-node-1 systemd[1]: Failed to start Docker Application Container Engine.
11月 30 16:22:13 k8s-node-1 systemd[1]: Unit docker.service entered failed state.
11月 30 16:22:13 k8s-node-1 systemd[1]: docker.service failed.
  • Error initializing network controller: error obtaining controller instance: failed to create NAT chain: Iptables not found
    This means iptables is not installed; install it:
[root@k8s-node-1 etcd]# yum install -y iptables-services iptables-devel.x86_64 iptables.x86_64 ## install
[root@k8s-node-1 etcd]# systemctl start iptables ## start iptables
[root@k8s-node-1 etcd]# systemctl status iptables ## check iptables status
  • 'overlay2' is not supported over overlayfs
docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2018-12-03 02:20:45 UTC; 8s ago
     Docs: http://docs.docker.com
  Process: 439 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES (code=exited, status=1/FAILURE)
 Main PID: 439 (code=exited, status=1/FAILURE)
   CGroup: /docker/785d11cd4339074446bbecac27eac786fb65f9b5f12a9fe5310fa9cb76ae5a64/system.slice/docker.service

Dec 03 02:20:44 785d11cd4339 systemd[1]: Starting Docker Application Container Engine...
Dec 03 02:20:44 785d11cd4339 dockerd-current[439]: time="2018-12-03T02:20:44.971453600Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
Dec 03 02:20:44 785d11cd4339 dockerd-current[439]: time="2018-12-03T02:20:44.973718100Z" level=info msg="libcontainerd: new containerd process, pid: 445"
Dec 03 02:20:45 785d11cd4339 dockerd-current[439]: time="2018-12-03T02:20:45.992735400Z" level=error msg="'overlay2' is not supported over overlayfs"
Dec 03 02:20:45 785d11cd4339 dockerd-current[439]: Error starting daemon: error initializing graphdriver: backing file system is unsupported for this graph driver
Dec 03 02:20:45 785d11cd4339 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 03 02:20:45 785d11cd4339 systemd[1]: Failed to start Docker Application Container Engine.
Dec 03 02:20:46 785d11cd4339 systemd[1]: Unit docker.service entered failed state.
Dec 03 02:20:46 785d11cd4339 systemd[1]: docker.service failed.

In this case, edit the /etc/sysconfig/docker-storage file,
change DOCKER_STORAGE_OPTIONS to "--storage-driver devicemapper",
then save and restart Docker.
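The same fix can be scripted. A minimal sketch on a scratch copy (/tmp/docker-storage.demo is a hypothetical stand-in for /etc/sysconfig/docker-storage):

```shell
# Scratch copy standing in for /etc/sysconfig/docker-storage (hypothetical path)
F=/tmp/docker-storage.demo
echo 'DOCKER_STORAGE_OPTIONS=""' > "$F"

# Switch the storage driver to devicemapper, which works when overlay2
# is rejected because the backing filesystem is itself overlayfs
sed -i 's|^DOCKER_STORAGE_OPTIONS=.*|DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper"|' "$F"

cat "$F"
```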


3.4 Check Docker's status after it starts successfully

[root@k8s-node-1 etcd]# systemctl status docker -l

Docker is running normally.
3.5 Install and deploy Kubernetes

[root@k8s-node-1 etcd]# yum install kubernetes -y ## install kubernetes

Edit the configuration files.
The following components run on the Kubernetes master:

Kubernetes API Server

Kubernetes Controller Manager

Kubernetes Scheduler

Each has a corresponding configuration file that must be modified.

3.5.1 In the /etc/kubernetes/apiserver configuration file, set:
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

### 
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
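For repeatable setups, the four values above can be applied with sed instead of hand-editing. A sketch against a scratch copy seeded with the stock defaults (the /tmp path and seed content are illustrative; use /etc/kubernetes/apiserver on a real host):

```shell
# Scratch copy standing in for /etc/kubernetes/apiserver (hypothetical path)
F=/tmp/apiserver.demo
cat > "$F" <<'EOF'
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
# KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
EOF

# Bind on all interfaces, uncomment the port, point at etcd,
# and drop ServiceAccount from the admission controllers
sed -i \
  -e 's|^KUBE_API_ADDRESS=.*|KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"|' \
  -e 's|^# *KUBE_API_PORT=.*|KUBE_API_PORT="--port=8080"|' \
  -e 's|^KUBE_ETCD_SERVERS=.*|KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"|' \
  -e 's|^KUBE_ADMISSION_CONTROL=.*|KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"|' \
  "$F"

cat "$F"
```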

3.5.2 In the /etc/kubernetes/config configuration file, set:
KUBE_MASTER="--master=http://k8s-master:8080"

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

3.5.3 Start the services one by one

[root@k8s-node-1 etcd]# systemctl restart kube-apiserver
[root@k8s-node-1 etcd]# systemctl restart kube-controller-manager
[root@k8s-node-1 etcd]# systemctl restart kube-scheduler

4. Deploy the node components

The following components run on a Kubernetes node:
    Kubelet
    Kubernetes Proxy
4.1 In the /etc/kubernetes/config configuration file, set:
KUBE_MASTER="--master=http://k8s-master:8080"

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

4.2 In the /etc/kubernetes/kubelet configuration file, set:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
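As with the apiserver file, the kubelet edits can be scripted. A sketch on a scratch copy (the /tmp path and seed content are illustrative; use /etc/kubernetes/kubelet on a real host):

```shell
# Scratch copy standing in for /etc/kubernetes/kubelet (hypothetical path)
F=/tmp/kubelet.demo
cat > "$F" <<'EOF'
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
EOF

# Listen on all interfaces, report as k8s-node-1, register with the master
sed -i \
  -e 's|^KUBELET_ADDRESS=.*|KUBELET_ADDRESS="--address=0.0.0.0"|' \
  -e 's|^KUBELET_HOSTNAME=.*|KUBELET_HOSTNAME="--hostname-override=k8s-node-1"|' \
  -e 's|^KUBELET_API_SERVER=.*|KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"|' \
  "$F"

cat "$F"
```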

4.3 Start the services

[root@k8s-node-1 etcd]# systemctl restart kubelet
[root@k8s-node-1 etcd]# systemctl restart kube-proxy


4.4 View the node and its status

[root@k8s-node-1 etcd]# kubectl get node
NAME         STATUS    AGE
k8s-node-1   Ready     22h

Startup succeeded. At this point, a single-node Kubernetes environment is up and running.
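The readiness check can also be scripted rather than read off the table. A hedged sketch: when kubectl is unavailable it falls back to canned sample output matching the table above, so the parsing logic can be tried anywhere (the /tmp paths are illustrative):

```shell
# Capture node status; fall back to sample data if kubectl is missing
kubectl get node --no-headers > /tmp/nodes.demo 2>/dev/null \
  || printf 'k8s-node-1 Ready 22h\n' > /tmp/nodes.demo

# Column 2 of `kubectl get node` is STATUS; flag anything not Ready
awk '$2 != "Ready" {bad=1} END {print (bad ? "some nodes NOT Ready" : "all nodes Ready")}' \
  /tmp/nodes.demo > /tmp/node-check.demo

cat /tmp/node-check.demo
```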