Manually Deploying Kubernetes on an Ubuntu 16.04 Cluster

Introduction

The kube-up script that Kubernetes currently provides for Ubuntu does not support 15.10 or 16.04, the two releases that use systemd as their init system.

This article walks through installing and deploying Kubernetes on an Ubuntu 16.04 cluster by hand, running the components directly on the hosts rather than in Docker.

The manual steps can easily be turned into an automated deployment script. Walking through the whole process is also a good way to gain a deeper understanding of the Kubernetes architecture and its components.

Environment

Versions

Component    Version
etcd         2.3.1
Flannel      0.5.5
Kubernetes   1.3.4

Hosts

Host         IP               OS
k8s-master   172.16.203.133   Ubuntu 16.04
k8s-node01   172.16.203.134   Ubuntu 16.04
k8s-node02   172.16.203.135   Ubuntu 16.04

Install Docker

Install the latest Docker Engine (currently 1.12) on every host, following https://docs.docker.com/engine/installation/linux/ubuntulinux/
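
If you prefer a one-step install, Docker's convenience script is one option (a hedged shortcut; the apt-based instructions at the URL above remain the authoritative path):

curl -fsSL https://get.docker.com/ | sh
# optional: let the current user run docker without sudo
sudo usermod -aG docker $USER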

Deploy the etcd Cluster

We will install and run an etcd cluster across the three hosts.

Download etcd

Download etcd on the deployment machine:

ETCD_VERSION=${ETCD_VERSION:-"2.3.1"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz

tar xzf etcd.tar.gz -C /tmp
cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64

for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h 'mkdir -p $HOME/kube' && scp -r etcd* user@$h:~/kube; done
for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h 'sudo mkdir -p /opt/bin && sudo mv $HOME/kube/* /opt/bin && rm -rf $HOME/kube/*'; done
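
As an optional sanity check, confirm the binaries landed in /opt/bin on every host:

for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h '/opt/bin/etcd --version'; done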

Configure the etcd Service

On each host, create /opt/config/etcd.conf and /lib/systemd/system/etcd.service as shown below. Note: change the listen/advertise IP addresses to each host's own address (the master's address is used as the example), and keep the member names in ETCD_INITIAL_CLUSTER matching each host's ETCD_NAME, which is the hostname here.

/opt/config/etcd.conf

sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
cat <<EOF | sudo tee /opt/config/etcd.conf
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=$(hostname)
ETCD_INITIAL_CLUSTER=k8s-master=http://172.16.203.133:2380,k8s-node01=http://172.16.203.134:2380,k8s-node02=http://172.16.203.135:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://172.16.203.133:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.16.203.133:2380
ETCD_ADVERTISE_CLIENT_URLS=http://172.16.203.133:2379
ETCD_LISTEN_CLIENT_URLS=http://172.16.203.133:2379
GOMAXPROCS=$(nproc)
EOF

/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

Then run the following on each host:

sudo systemctl daemon-reload 
sudo systemctl enable etcd
sudo systemctl start etcd
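
Once etcd is up on all three hosts, the cluster can be verified from any of them:

/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" cluster-health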

Download Flannel

FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
curl -L https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf flannel.tar.gz -C /tmp

Build Kubernetes

Build Kubernetes on the deployment machine. Docker Engine (1.12) and Go (1.6.2) must be installed there:

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release-skip-tests
tar xzf _output/release-stage/full/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /tmp

Note

Besides linux/amd64, the build cross-compiles for other platforms by default. To shorten the build, edit hack/lib/golang.sh and comment out every platform other than linux/amd64 in KUBE_SERVER_PLATFORMS, KUBE_CLIENT_PLATFORMS, and KUBE_TEST_PLATFORMS, as sketched below.
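
For example, the trimmed server platform list in hack/lib/golang.sh would look roughly like this (a sketch; the exact entries vary by Kubernetes version):

readonly KUBE_SERVER_PLATFORMS=(
  linux/amd64
  # linux/arm      # commented out to skip cross-compilation
)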

Deploy the K8s Master

Copy the Binaries

cd /tmp
scp kubernetes/server/bin/kube-apiserver \
     kubernetes/server/bin/kube-controller-manager \
     kubernetes/server/bin/kube-scheduler kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@172.16.203.133:~/kube
scp flannel-${FLANNEL_VERSION}/flanneld user@172.16.203.133:~/kube
ssh -t user@172.16.203.133 'sudo mv ~/kube/* /opt/bin/'

Create Certificates

On the master host, run the following commands as root to create the certificates:

mkdir -p /srv/kubernetes/

cd /srv/kubernetes
export MASTER_IP=172.16.203.133
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000
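
As an optional check, confirm the server certificate was issued correctly:

openssl x509 -in /srv/kubernetes/server.crt -noout -subject -issuer -dates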

Configure the kube-apiserver Service

We use the following Service cluster IP range and Flannel network:

SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16

FLANNEL_NET=192.168.0.0/16

On the master host, create /lib/systemd/system/kube-apiserver.service with the following content:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --etcd-servers=http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379 \
  --logtostderr=true \
  --allow-privileged=false \
  --service-cluster-ip-range=172.18.0.0/16 \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
  --service-node-port-range=30000-32767 \
  --advertise-address=172.16.203.133 \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-controller-manager Service

On the master host, create /lib/systemd/system/kube-controller-manager.service with the following content:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
  --master=127.0.0.1:8080 \
  --root-ca-file=/srv/kubernetes/ca.crt \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --logtostderr=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-scheduler Service

On the master host, create /lib/systemd/system/kube-scheduler.service with the following content:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
  --logtostderr=true \
  --master=127.0.0.1:8080
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the flanneld Service

On the master host, create /lib/systemd/system/flanneld.service with the following content:

[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
User=root
ExecStart=/opt/bin/flanneld \
  --etcd-endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" \
  --iface=172.16.203.133 \
  --ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the Services

First, store the Flannel network configuration in etcd:

/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" \
  mk /coreos.com/network/config '{"Network":"192.168.0.0/16", "Backend": {"Type": "vxlan"}}'

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl enable kube-controller-manager
sudo systemctl enable kube-scheduler
sudo systemctl enable flanneld
sudo systemctl start kube-apiserver
sudo systemctl start kube-controller-manager
sudo systemctl start kube-scheduler
sudo systemctl start flanneld
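
With everything running, the control plane can be checked from the master. This assumes kubectl (also found in kubernetes/server/bin of the build output) has been copied to a directory on the PATH:

kubectl -s http://172.16.203.133:8080 get componentstatuses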

Modify the Docker Service

source /run/flannel/subnet.env

sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service

rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
  # remove the old docker0 bridge so Docker recreates it on the Flannel subnet
  sudo ip link set dev docker0 down
  sudo ip link delete docker0
fi


sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker
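
After the restart, docker0 should sit on the Flannel subnet assigned to this host; a quick way to confirm:

source /run/flannel/subnet.env
echo "expected: ${FLANNEL_SUBNET}"
ip -4 addr show docker0 | grep inet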

Deploy the K8s Nodes

Copy the Binaries

cd /tmp
for h in k8s-master k8s-node01 k8s-node02; do scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@$h:~/kube; done
for h in k8s-master k8s-node01 k8s-node02; do scp flannel-${FLANNEL_VERSION}/flanneld user@$h:~/kube;done
for h in k8s-master k8s-node01 k8s-node02; do ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/'; done

Configure Flanneld and Modify the Docker Service

Follow the corresponding steps from the Master section: configure the flanneld service, start it, and modify the Docker service. Remember to change the --iface address to each node's own IP.

Configure the kubelet Service

Create /lib/systemd/system/kubelet.service (change the IP addresses for each node):

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/bin/kubelet \
  --hostname-override=172.16.203.133 \
  --api-servers=http://172.16.203.133:8080 \
  --logtostderr=true
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start the Service

sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
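
Once the kubelet is running on all nodes, they should register with the API server; verify from the master:

kubectl -s http://172.16.203.133:8080 get nodes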

Configure the kube-proxy Service

Create /lib/systemd/system/kube-proxy.service (change the IP addresses for each node):

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/bin/kube-proxy  \
  --hostname-override=172.16.203.133 \
  --master=http://172.16.203.133:8080 \
  --logtostderr=true
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the Service

sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
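
kube-proxy programs Service rules into iptables (the default proxy mode in this release), so their presence is a quick health signal:

sudo iptables-save | grep KUBE- | head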

Deploy Add-on Components

Deploy DNS

DNS_SERVER_IP="172.18.8.8"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
KUBE_APISERVER_URL=http://172.16.203.133:8080

cat <<EOF > skydns.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v17.1
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v17.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: $DNS_REPLICAS
  selector:
    k8s-app: kube-dns
    version: v17.1
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v17.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.5
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=$DNS_DOMAIN.
        - --dns-port=10053
        - --kube-master-url=$KUBE_APISERVER_URL
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.$DNS_DOMAIN 127.0.0.1 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: $DNS_SERVER_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
EOF

kubectl create -f skydns.yml

Then modify kubelet.service on each node, adding --cluster-dns=172.18.8.8 and --cluster-domain=cluster.local as shown below, and restart the kubelet.
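
The modified ExecStart on a node would then look like this (IP addresses per node, as before):

ExecStart=/opt/bin/kubelet \
  --hostname-override=172.16.203.133 \
  --api-servers=http://172.16.203.133:8080 \
  --cluster-dns=172.18.8.8 \
  --cluster-domain=cluster.local \
  --logtostderr=true

sudo systemctl daemon-reload
sudo systemctl restart kubelet

After the kubelets restart, DNS can be verified from a throwaway busybox pod (this assumes the nodes can pull the busybox image):

kubectl run busybox --image=busybox --restart=Never -- sleep 3600
kubectl exec busybox -- nslookup kubernetes.default
kubectl delete pod busybox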

Deploy the Dashboard

cat <<'EOF' > kube-dashboard.yml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: v1.1.0
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://172.16.203.133:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
EOF

kubectl create -f kube-dashboard.yml
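
Once the pod is running, the dashboard should be reachable through the API server's insecure endpoint; check the pod, then open http://172.16.203.133:8080/ui in a browser:

kubectl --namespace=kube-system get pods -l app=kubernetes-dashboard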