Deploying a Kubernetes Cluster on Alibaba Cloud

    There is no shortage of articles online about deploying a k8s cluster, especially in the Kubernetes Chinese community, where every platform has detailed deployment instructions. But following that guide turned out to be one pitfall after another: the very first package source returned a 404 Not Found, to say nothing of the stilted translation and version-mismatch problems.

     So I might as well write my own.

  1. Provision the VMs

      System environment: CentOS 7.2. No further detail needed here.

  2. Set up /etc/hosts

      In short, record both the master and minion hosts by hostname in the hosts file on every machine.
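For example, the entries on each host look like the following (the hostnames k8s-master / k8s-slave match the rest of this article; replace the x.x.x.x placeholders with your machines' private IPs):

```
x.x.x.x    k8s-master
x.x.x.x    k8s-slave
```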

  3. Install kubernetes and etcd on all cluster hosts

       Install directly with yum; installing kubernetes pulls in docker and the other dependencies along the way. At the time of writing, the kubernetes version in Alibaba Cloud's repository was 1.5.2.

      Note here: flannel's network configuration must be written into etcd with etcdctl, otherwise flannel will not start properly.

[root@k8s-master home]# etcdctl set /flannel/network/config '{ "Network": "172.16.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'
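To confirm the key was written, it can be read back (this assumes etcd is already running on this host):

```
[root@k8s-master home]# etcdctl get /flannel/network/config
```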

  4. Configure the apiserver

      Configured as the Chinese community guide instructs (though that configuration was not used in the end, as you will see below):

      Edit /etc/kubernetes/apiserver

      Edit /etc/kubernetes/config

     Since this route turned out to be a dead end, the exact changes are omitted here.

  5. Master start script

#!/bin/bash
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

   At this point, according to the Chinese community guide, everything should simply come up. In fact it does not!

   The following errors appeared:

Sep 29 17:06:15 debug010000002015 kube-apiserver: W0929 17:06:15.881473   21259 handlers.go:50] Authentication is disabled
Sep 29 17:06:15 debug010000002015 kube-apiserver: [restful] 2018/09/29 17:06:15 log.go:30: [restful/swagger] listing is available at https://172.16.7.93:6443/swaggerapi/
Sep 29 17:06:15 debug010000002015 kube-apiserver: [restful] 2018/09/29 17:06:15 log.go:30: [restful/swagger] https://172.16.7.93:6443/swaggerui/ is mapped to folder /swagger-ui/
Sep 29 17:06:15 debug010000002015 kube-apiserver: E0929 17:06:15.984071   21259 reflector.go:199] k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: Get http://127.0.0.1:18080/api/v1/resourcequotas?resourceVersion=0: dial tcp 127.0.0.1:18080: getsockopt: connection refused
Sep 29 17:06:15 debug010000002015 kube-apiserver: E0929 17:06:15.984217   21259 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.Namespace: Get http://127.0.0.1:18080/api/v1/namespaces?resourceVersion=0: dial tcp 127.0.0.1:18080: getsockopt: connection refused
Sep 29 17:06:15 debug010000002015 kube-apiserver: E0929 17:06:15.987986   21259 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.LimitRange: Get http://127.0.0.1:18080/api/v1/limitranges?resourceVersion=0: dial tcp 127.0.0.1:18080: getsockopt: connection refused
Sep 29 17:06:16 debug010000002015 kube-apiserver: F0929 17:06:16.058072   21259 genericapiserver.go:189] unable to load server certificate: open /var/run/kubernetes/apiserver.key: permission denied
Sep 29 17:06:16 debug010000002015 systemd: kube-apiserver.service: main process exited, code=exited, status=255/n/a
Sep 29 17:06:16 debug010000002015 systemd: Failed to start Kubernetes API Server.
Sep 29 17:06:16 debug010000002015 systemd: Unit kube-apiserver.service entered failed state.
Sep 29 17:06:16 debug010000002015 systemd: kube-apiserver.service failed.
Sep 29 17:06:16 debug010000002015 systemd: kube-apiserver.service holdoff time over, scheduling restart.

 Google and Baidu both turned up nothing.

 However, testing showed that starting kube-apiserver directly from the command line did work, so the only option left was to edit the systemd service files directly.

 Edit the kube-apiserver.service start script, located at /lib/systemd/system/kube-apiserver.service

 Contents as follows:

[root@k8s-master home]# vi /lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
#ExecStart=/usr/bin/kube-apiserver \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBE_ETCD_SERVERS \
#           $KUBE_API_ADDRESS \
#           $KUBE_API_PORT \
#           $KUBELET_PORT \
#           $KUBE_ALLOW_PRIV \
#           $KUBE_SERVICE_ADDRESSES \
#           $KUBE_ADMISSION_CONTROL \
#           $KUBE_API_ARGS
ExecStart=/usr/bin/kube-apiserver --allow_privileged=true --logtostderr=false --v=6 \
            --log-dir=/var/log/k8s/kube-apiserver \
            --insecure-bind-address=0.0.0.0 --insecure-port=8080 \
            --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota,ServiceAccount,AlwaysPullImages,SecurityContextDeny \
            --etcd_servers=http://x.x.x.x:2379 \
            --master-service-namespace=master \
            --secure-port=6443 --bind-address=0.0.0.0 \
            --service-cluster-ip-range=10.0.0.0/16 \
            --max-requests-inflight=1000 \
            --storage-backend=etcd3 \
            --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
            --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
            --client-ca-file=/etc/kubernetes/pki/ca.pem \
            --service-account-key-file=/etc/kubernetes/pki/ca-key.pem
KillMode=control-group
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

Here --etcd_servers=http://x.x.x.x:2379 must be replaced with this machine's eth0 IP.

             The SSL files referenced above need to be generated yourself with openssl, or you can run in insecure mode instead.
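A minimal sketch of generating a self-signed CA plus apiserver certificate with openssl (the CN values and 2048-bit key size are illustrative assumptions; a production certificate should also carry the apiserver's IPs and hostnames as subjectAltName entries). Copy the resulting files into /etc/kubernetes/pki/ afterwards:

```shell
# Generate a self-signed CA (CN is illustrative)
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -subj "/CN=k8s-ca" -days 3650 -out ca.pem

# Generate the apiserver key and a CSR, then sign the CSR with the CA
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -subj "/CN=kube-apiserver" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -days 3650 -out apiserver.pem
```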

In the etcd configuration file, /etc/etcd/etcd.conf, change the following entries from listening on the local loopback to listening on 0.0.0.0:

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

Edit the flanneld.service file

[root@k8s-master home]# vi /lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld -etcd-endpoints=http://x.x.x.x:2379 -etcd-prefix=/flannel/network -iface=eth0
#ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
#ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service

Here -etcd-endpoints=http://x.x.x.x:2379 must be replaced with this machine's eth0 IP.

Edit the kube-controller-manager.service file

[root@k8s-master home]# vi /lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
#ExecStart=/usr/bin/kube-controller-manager \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBE_MASTER \
#           $KUBE_CONTROLLER_MANAGER_ARGS
ExecStart=/usr/bin/kube-controller-manager --logtostderr=false --v=6 \
            --log-dir=/var/log/k8s/kube-controller-manager \
            --namespace-sync-period=5m0s \
            --node-monitor-grace-period=40s \
            --node-monitor-period=5s \
            --node-startup-grace-period=1m0s \
            --node-sync-period=10s \
            --pod-eviction-timeout=5m0s \
            --pvclaimbinder-sync-period=10s \
            --register-retry-count=20 \
            --kubeconfig=/etc/kubernetes/controller-manager.conf \
            --cluster-name=kubernetes \
            --service-cluster-ip-range=10.0.0.0/16 \
            --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
            --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
            --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \
            --root-ca-file=/etc/kubernetes/pki/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Edit the kube-scheduler.service file

[root@k8s-master home]# vi /lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
#ExecStart=/usr/bin/kube-scheduler \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBE_MASTER \
#           $KUBE_SCHEDULER_ARGS
ExecStart=/usr/bin/kube-scheduler --logtostderr=false --v=6 \
            --log-dir=/var/log/k8s/kube-scheduler \
            --algorithm-provider=DefaultProvider \
            --kubeconfig=/etc/kubernetes/scheduler.conf
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Edit the kube-proxy.service file

[root@k8s-master home]# vi /lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
#ExecStart=/usr/bin/kube-proxy \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBE_MASTER \
#           $KUBE_PROXY_ARGS
ExecStart=/usr/bin/kube-proxy --master=http://x.x.x.x:8080 --hostname-override=k8s-master \
            --proxy-mode=iptables -v=6 --logtostderr=false --log-dir=/var/log/k8s/kube-proxy
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Here x.x.x.x:8080 must be replaced with this machine's eth0 IP.

Edit the kubelet.service file

[root@k8s-master home]# vi /lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
#ExecStart=/usr/bin/kubelet \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBELET_API_SERVER \
#           $KUBELET_ADDRESS \
#           $KUBELET_PORT \
#           $KUBELET_HOSTNAME \
#           $KUBE_ALLOW_PRIV \
#           $KUBELET_POD_INFRA_CONTAINER \
#           $KUBELET_ARGS
ExecStart=/usr/bin/kubelet --allow-privileged=true \
        --logtostderr=false \
        --v=6 \
        --log-dir=/var/log/k8s/kubelet \
        --address=x.x.x.x \
        --cluster-dns=10.0.1.10 \
        --hostname-override=k8s-master \
        --cluster-domain=cluster.local \
        --kubeconfig=/etc/kubernetes/kubelet.conf \
        --pod-manifest-path=/etc/kubernetes/manifest \
        --authorization-mode=AlwaysAllow \
        --fail-swap-on=false \
        --cgroup-driver=systemd \
        --pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0
Restart=on-failure

[Install]
WantedBy=multi-user.target

Here x.x.x.x must be replaced with this machine's eth0 IP.

The image registry.aliyuncs.com/archon/pause-amd64:3.0 comes from https://segmentfault.com/q/1010000008763165/a-1020000008824481

  Once this is done, run the start script again; this time every component starts normally.
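As a quick sanity check after the script runs, each unit's state can be queried (a sketch that assumes systemd and the service names used above):

```shell
for s in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl is-active "$s"   # prints "active" for a running unit
done
```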

 6. Minion configuration files

     The paths are /etc/kubernetes/kubelet and /etc/kubernetes/config

   The config file contents are as follows:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://k8s-master:4001"

 The kubelet file is as follows:

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-slave"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Verify the service status:

[root@k8s-slave home]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

  7. Minion start scripts

Edit the kube-proxy.service file

[root@k8s-slave home]# vi /lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
#ExecStart=/usr/bin/kube-proxy \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBE_MASTER \
#           $KUBE_PROXY_ARGS
ExecStart=/usr/bin/kube-proxy --master=http://x.x.x.x:8080 --hostname-override=k8s-slave \
            --proxy-mode=iptables -v=6 --logtostderr=false --log-dir=/var/log/k8s/kube-proxy
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Here x.x.x.x must be replaced with the master's eth0 IP.

Edit the kubelet.service file

[root@k8s-slave home]# vi /lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
#ExecStart=/usr/bin/kubelet \
#           $KUBE_LOGTOSTDERR \
#           $KUBE_LOG_LEVEL \
#           $KUBELET_API_SERVER \
#           $KUBELET_ADDRESS \
#           $KUBELET_PORT \
#           $KUBELET_HOSTNAME \
#           $KUBE_ALLOW_PRIV \
#           $KUBELET_POD_INFRA_CONTAINER \
#           $KUBELET_ARGS
ExecStart=/usr/bin/kubelet --allow-privileged=true \
        --logtostderr=false \
        --v=6 \
        --log-dir=/var/log/k8s/kubelet \
        --address=0.0.0.0 \
        --cluster-dns=10.0.1.10 \
        --hostname-override=k8s-slave \
        --cluster-domain=cluster.local \
        --kubeconfig=/etc/kubernetes/kubelet.conf \
        --pod-manifest-path=/etc/kubernetes/manifest \
        --authorization-mode=AlwaysAllow \
        --fail-swap-on=false \
        --cgroup-driver=systemd \
        --pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0
Restart=on-failure

[Install]
WantedBy=multi-user.target

Append the following to the end of /etc/profile:

export KUBERNETES_MASTER=http://x.x.x.x:8080

Here x.x.x.x must be replaced with the master's eth0 IP.

for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

   Run the start script; all services now start normally.

   Verify the service status:

[root@k8s-slave home]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     <none>    3h        v1.9.0
k8s-slave    Ready     <none>    2h        v1.9.0

   With that, the cluster deployment is complete.
