docker k8s 1.3.8 + flannel


Kubernetes is Google's open-source Docker cluster management solution.

Project homepage:
http://kubernetes.io/


Test environment:

node-1: 10.6.0.140
node-2: 10.6.0.187
node-3: 10.6.0.188


A Kubernetes cluster consists of a master node and worker (node) machines.

Set the hostname on each machine:

hostnamectl --static set-hostname <hostname>

Host roles:

10.6.0.140 - k8s-master
10.6.0.187 - k8s-node-1
10.6.0.188 - k8s-node-2


Deployment:

1. First we need to install etcd; etcd is the foundational component of a k8s cluster.

Install etcd on every node:

yum -y install etcd

 

Edit the config file /etc/etcd/etcd.conf; the following parameters need to be changed:

ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.140:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.140:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.140:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.6.0.140:2380,etcd2=http://10.6.0.187:2380,etcd3=http://10.6.0.188:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.140:2379"

 

On the other etcd nodes, ETCD_NAME and the IP addresses must be changed accordingly.
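
For example, on node-2 (10.6.0.187) the per-node values would look like this, while the ETCD_INITIAL_CLUSTER* lines stay identical on all three machines:

ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.187:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.187:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.187:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.187:2379"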


Modify the etcd systemd unit file /usr/lib/systemd/system/etcd.service:

 

sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service
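
After running the sed, eyeball the unit file to confirm the extra flags were appended to the ExecStart line:

grep ExecStart /usr/lib/systemd/system/etcd.service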

 

Start and enable the etcd service on all nodes (run daemon-reload first, since the unit file was just edited):

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Check that it came up:
systemctl status etcd


Check the etcd cluster health:

etcdctl cluster-health

If it prints cluster is healthy, the cluster is up.


List the etcd cluster members:

etcdctl member list
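
On a healthy three-node cluster the output looks roughly like this (the member IDs below are illustrative; yours will differ):

6e3bd23ae5f1eae0: name=etcd1 peerURLs=http://10.6.0.140:2380 clientURLs=http://10.6.0.140:2379
8e9e05c52164694d: name=etcd2 peerURLs=http://10.6.0.187:2380 clientURLs=http://10.6.0.187:2379
a8266ecf031671f3: name=etcd3 peerURLs=http://10.6.0.188:2380 clientURLs=http://10.6.0.188:2379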


2. Deploy flannel, the k8s network

Edit /etc/hosts on every machine so the hosts can reach each other by hostname.

vi /etc/hosts and add:

10.6.0.140 k8s-master
10.6.0.187 k8s-node-1
10.6.0.188 k8s-node-2


Install flannel:

yum -y install flannel


Clean up any leftover Docker networks (docker0, flannel0, etc.):

ifconfig

If any exist, delete them to avoid unnecessary, hard-to-diagnose errors:

ip link delete docker0
....


Set the IP range flannel will use:

 

etcdctl --endpoint http://10.6.0.140:2379 set /flannel/network/config '{"Network":"10.10.0.0/16","SubnetLen":25,"Backend":{"Type":"vxlan","VNI":1}}'
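
Verify that the key was written:

etcdctl --endpoint http://10.6.0.140:2379 get /flannel/network/config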


Next, edit the flannel config file:

vim /etc/sysconfig/flanneld

 

FLANNEL_ETCD="http://10.6.0.140:2379,http://10.6.0.187:2379,http://10.6.0.188:2379"   # set to the etcd cluster endpoints
FLANNEL_ETCD_KEY="/flannel/network"               # the etcd prefix; flanneld appends /config itself, so do not include the trailing /config here
FLANNEL_OPTIONS="--iface=em1"                     # the name of this host's physical NIC
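
If you are unsure of the physical NIC's name, list the interfaces first:

ip -o link show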

 


Start flannel:

systemctl enable flanneld
systemctl start flanneld
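
Once flanneld is running it writes the subnet it leased to a file; check that the subnet falls inside the 10.10.0.0/16 range configured above:

cat /run/flannel/subnet.env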

 

We also need to modify the Docker unit file /usr/lib/systemd/system/docker.service.

Append $DOCKER_NETWORK_OPTIONS to the dockerd ExecStart line:

ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
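
On CentOS the flannel package normally ships a docker drop-in that populates $DOCKER_NETWORK_OPTIONS from /run/flannel/docker (generated by mk-docker-opts.sh when flanneld starts). If your docker.service has no such drop-in, add the environment file yourself (this path is an assumption about the stock packaging):

EnvironmentFile=-/run/flannel/docker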


Reload the unit files and start Docker:
systemctl daemon-reload
systemctl start docker

 

Check that flannel has taken over the Docker network:

ifconfig

You should see that docker0 and flannel.1 now have addresses inside the IP range we configured, which means it worked.


3. Install Kubernetes

 

Install the master side first.

Download a community-built rpm package:

http://upyun.mritd.me/kubernetes/kubernetes-1.3.8-1.x86_64.rpm

rpm -ivh kubernetes-1.3.8-1.x86_64.rpm


Since Google (gcr.io) is blocked in mainland China, the required images have already been mirrored to Docker Hub; we can pull them from there directly:

docker pull chasontang/kube-proxy-amd64:v1.4.0
docker pull chasontang/kube-discovery-amd64:1.0
docker pull chasontang/kubedns-amd64:1.7
docker pull chasontang/kube-scheduler-amd64:v1.4.0
docker pull chasontang/kube-controller-manager-amd64:v1.4.0
docker pull chasontang/kube-apiserver-amd64:v1.4.0
docker pull chasontang/etcd-amd64:2.2.5
docker pull chasontang/kube-dnsmasq-amd64:1.3
docker pull chasontang/exechealthz-amd64:1.1
docker pull chasontang/pause-amd64:3.0


After pulling, use docker tag to alias the images back under gcr.io/google_containers:


docker tag chasontang/kube-proxy-amd64:v1.4.0  gcr.io/google_containers/kube-proxy-amd64:v1.4.0
docker tag chasontang/kube-discovery-amd64:1.0 gcr.io/google_containers/kube-discovery-amd64:1.0
docker tag chasontang/kubedns-amd64:1.7  gcr.io/google_containers/kubedns-amd64:1.7
docker tag chasontang/kube-scheduler-amd64:v1.4.0  gcr.io/google_containers/kube-scheduler-amd64:v1.4.0
docker tag chasontang/kube-controller-manager-amd64:v1.4.0  gcr.io/google_containers/kube-controller-manager-amd64:v1.4.0
docker tag chasontang/kube-apiserver-amd64:v1.4.0  gcr.io/google_containers/kube-apiserver-amd64:v1.4.0
docker tag chasontang/etcd-amd64:2.2.5  gcr.io/google_containers/etcd-amd64:2.2.5
docker tag chasontang/kube-dnsmasq-amd64:1.3  gcr.io/google_containers/kube-dnsmasq-amd64:1.3
docker tag chasontang/exechealthz-amd64:1.1  gcr.io/google_containers/exechealthz-amd64:1.1
docker tag chasontang/pause-amd64:3.0  gcr.io/google_containers/pause-amd64:3.0



Remove the original chasontang/* tags:

docker rmi chasontang/kube-proxy-amd64:v1.4.0
docker rmi chasontang/kube-discovery-amd64:1.0
docker rmi chasontang/kubedns-amd64:1.7
docker rmi chasontang/kube-scheduler-amd64:v1.4.0
docker rmi chasontang/kube-controller-manager-amd64:v1.4.0
docker rmi chasontang/kube-apiserver-amd64:v1.4.0
docker rmi chasontang/etcd-amd64:2.2.5
docker rmi chasontang/kube-dnsmasq-amd64:1.3
docker rmi chasontang/exechealthz-amd64:1.1
docker rmi chasontang/pause-amd64:3.0
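
The pull / tag / rmi sequence above is mechanical, so the three lists can equally be run as a single shell loop over the image names:

for i in kube-proxy-amd64:v1.4.0 kube-discovery-amd64:1.0 kubedns-amd64:1.7 \
         kube-scheduler-amd64:v1.4.0 kube-controller-manager-amd64:v1.4.0 \
         kube-apiserver-amd64:v1.4.0 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.3 \
         exechealthz-amd64:1.1 pause-amd64:3.0; do
    docker pull chasontang/$i                           # pull from the Docker Hub mirror
    docker tag chasontang/$i gcr.io/google_containers/$i  # re-tag under gcr.io
    docker rmi chasontang/$i                            # drop the original tag
done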


With kubernetes installed, configure the apiserver.

Edit the config file:

vim /etc/kubernetes/apiserver

 

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=10.6.0.140"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://10.6.0.140:2379,http://10.6.0.187:2379,http://10.6.0.188:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""


Once configured, start and enable all master services:

systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
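
A quick sanity check that the master components and etcd are healthy:

kubectl --server="http://10.6.0.140:8080" get componentstatuses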

 


Next, install on the node side:

wget http://upyun.mritd.me/kubernetes/kubernetes-1.3.8-1.x86_64.rpm

rpm -ivh kubernetes-1.3.8-1.x86_64.rpm


Configure the kubelet.

Edit the config file:

vim /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=10.6.0.187"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://10.6.0.140:8080"

# Add your own!
KUBELET_ARGS="--pod-infra-container-image=docker.io/kubernetes/pause:latest"


Note: KUBELET_HOSTNAME holds the node's hostname, which is mainly what distinguishes nodes in cluster listings. The name must resolve (it must be pingable), which is why we added the entries to /etc/hosts earlier.
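
A quick way to confirm resolution works from the master:

ping -c 1 k8s-node-1
ping -c 1 k8s-node-2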


Next, modify the shared kubernetes config file:

vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.6.0.140:8080"


Finally, start and enable all node services:


systemctl start kubelet
systemctl start kube-proxy
systemctl enable kubelet
systemctl enable kube-proxy
systemctl status kubelet
systemctl status kube-proxy

 

On the master, check that the nodes have registered correctly:

 

[root@k8s-master ~]# kubectl --server="http://10.6.0.140:8080" get node
NAME         STATUS     AGE
k8s-master   NotReady   50m
k8s-node-1   Ready      1m
k8s-node-2   Ready      57s



4. Mutual TLS authentication

Kubernetes supports several authentication mechanisms: one-way TLS with a token or username/password, and mutual TLS based on CA certificates. Here we set up the latter.


1. Create an OpenSSL CA on the master

mkdir /etc/kubernetes/cert

Ownership must be granted to the kube user:
chown kube:kube -R /etc/kubernetes/cert

cd /etc/kubernetes/cert


Generate the CA private key:

openssl genrsa -out k8sca-key.pem 2048

Then issue the self-signed CA certificate (valid for 10000 days):

openssl req -x509 -new -nodes -key k8sca-key.pem -days 10000 -out k8sca.pem -subj "/CN=kube-ca"
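
Confirm the CA looks right:

openssl x509 -in k8sca.pem -noout -subject -dates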

 


Configure the apiserver certificate.

Copy the system openssl config into the cert directory:

cp /etc/pki/tls/openssl.cnf .

Edit it so certificates can carry IP subjectAltNames:
vim openssl.cnf

In the [ req ] section, add req_extensions = v3_req above distinguished_name:

[ req ]
.....
req_extensions          = v3_req
distinguished_name      = req_distinguished_name
.....


Under [ v3_req ], add subjectAltName = @alt_names:

[ v3_req ]

basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names


Then append an [ alt_names ] section containing all of the following:

[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = ${K8S_SERVICE_IP}  # the kubernetes service IP (the first IP of the service range, here 10.254.0.1)
IP.2 = ${MASTER_HOST}     # the master IP (if everything runs on one machine, a single entry is enough)



Then sign the apiserver certificate:

openssl genrsa -out apiserver-key.pem 2048

openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf

openssl x509 -req -in apiserver.csr -CA k8sca.pem -CAkey k8sca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
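
Check that the subjectAltName entries made it into the signed certificate:

openssl x509 -in apiserver.pem -noout -text | grep -A 1 "Subject Alternative Name"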


After the apiserver certificate, each node needs its own certificate as well. Generate them on the master, then copy them out to the nodes.

cp openssl.cnf worker-openssl.cnf

Edit the copy:
vim worker-openssl.cnf

The main change is the [ alt_names ] section; add one IP entry per node:

[ alt_names ]
IP.1 = 10.6.0.187
IP.2 = 10.6.0.188


Generate the k8s-node-1 key and certificate:
openssl genrsa -out k8s-node-1-worker-key.pem 2048

openssl req -new -key k8s-node-1-worker-key.pem -out k8s-node-1-worker.csr -subj "/CN=k8s-node-1" -config worker-openssl.cnf

openssl x509 -req -in k8s-node-1-worker.csr -CA k8sca.pem -CAkey k8sca-key.pem -CAcreateserial -out k8s-node-1-worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf




Generate the k8s-node-2 key and certificate:
openssl genrsa -out k8s-node-2-worker-key.pem 2048

openssl req -new -key k8s-node-2-worker-key.pem -out k8s-node-2-worker.csr -subj "/CN=k8s-node-2" -config worker-openssl.cnf

openssl x509 -req -in k8s-node-2-worker.csr -CA k8sca.pem -CAkey k8sca-key.pem -CAcreateserial -out k8s-node-2-worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf





Once all the node certificates exist, we also need a cluster-admin certificate:

openssl genrsa -out admin-key.pem 2048

openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"

openssl x509 -req -in admin.csr -CA k8sca.pem -CAkey k8sca-key.pem -CAcreateserial -out admin.pem -days 365
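
At this point every leaf certificate should chain back to the CA; verify them in one go:

openssl verify -CAfile k8sca.pem apiserver.pem k8s-node-1-worker.pem k8s-node-2-worker.pem admin.pem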


Configure the master.

Edit the master apiserver config:

vim /etc/kubernetes/apiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=10.6.0.140 --insecure-bind-address=127.0.0.1"

# The port on the local server to listen on.
KUBE_API_PORT="--secure-port=6443 --insecure-port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://10.6.0.140:2379,http://10.6.0.187:2379,http://10.6.0.188:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--tls-cert-file=/etc/kubernetes/cert/apiserver.pem --tls-private-key-file=/etc/kubernetes/cert/apiserver-key.pem --client-ca-file=/etc/kubernetes/cert/k8sca.pem --service-account-key-file=/etc/kubernetes/cert/apiserver-key.pem"


Next, edit the controller-manager config:

vim /etc/kubernetes/controller-manager

###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/kubernetes/cert/apiserver-key.pem  --root-ca-file=/etc/kubernetes/cert/k8sca.pem --master=http://127.0.0.1:8080"



Then restart all the master services:

systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler

systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler

 


Configure the nodes.

First copy the certificate files out to each node.


On k8s-node-1:

mkdir /etc/kubernetes/cert/

Ownership must be granted:
chown kube:kube -R /etc/kubernetes/cert

Copy these three files into /etc/kubernetes/cert/:
k8s-node-1-worker-key.pem
k8s-node-1-worker.pem
k8sca.pem

 

On k8s-node-2:

mkdir /etc/kubernetes/cert/

Ownership must be granted:
chown kube:kube -R /etc/kubernetes/cert

Copy these three files into /etc/kubernetes/cert/:
k8s-node-2-worker-key.pem
k8s-node-2-worker.pem
k8sca.pem
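
A minimal way to push the files from the master, assuming root SSH access to the nodes:

scp k8s-node-1-worker-key.pem k8s-node-1-worker.pem k8sca.pem root@10.6.0.187:/etc/kubernetes/cert/
scp k8s-node-2-worker-key.pem k8s-node-2-worker.pem k8sca.pem root@10.6.0.188:/etc/kubernetes/cert/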

 

The following configuration is required on every node. The example below is for k8s-node-2; substitute each node's own address, hostname, and certificate files.

Modify the kubelet config:

vim /etc/kubernetes/kubelet

 

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=10.6.0.188"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-2"

# location of the api-server
KUBELET_API_SERVER="--api-servers=https://10.6.0.140:6443"

# Add your own!
KUBELET_ARGS="--tls-cert-file=/etc/kubernetes/cert/k8s-node-1-worker.pem --tls-private-key-file=/etc/kubernetes/cert/k8s-node-1-worker-key.pem --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml --pod-infra-container-image=docker.io/kubernetes/pause:latest"


Modify the shared config file:

vim /etc/kubernetes/config

 

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=https://10.6.0.140:6443"


Create the kubeconfig used by kubelet and kube-proxy; again, point it at this node's own certificate (the example is for k8s-node-2):

vim /etc/kubernetes/worker-kubeconfig.yaml

Contents:

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/cert/k8sca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/cert/k8s-node-2-worker.pem
    client-key: /etc/kubernetes/cert/k8s-node-2-worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
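
You can sanity-check the resulting file with:

kubectl config view --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml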


Configure kube-proxy to use the certificates as well:

vim /etc/kubernetes/proxy

###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--master=https://10.6.0.140:6443 --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml"

 


Restart and test:

systemctl restart kubelet
systemctl restart kube-proxy

systemctl status kubelet
systemctl status kube-proxy

 

At this point the whole cluster is built; all that remains is testing pods.
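
As a final check, talk to the secure port from the master using the admin certificate generated earlier:

kubectl --server="https://10.6.0.140:6443" --certificate-authority=/etc/kubernetes/cert/k8sca.pem --client-certificate=/etc/kubernetes/cert/admin.pem --client-key=/etc/kubernetes/cert/admin-key.pem get nodes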
