Kubernetes Installation Test Notes


date: 2019-05-21
author: yangxiaoyi
---

Installing single-node Kubernetes

OS: CentOS 7.6
Docker: 1.13.1

Install etcd and Kubernetes with yum (this pulls in Docker automatically)

yum install etcd kubernetes -y

Edit the configuration files

  1. In the Docker configuration file /etc/sysconfig/docker, set OPTIONS='--selinux-enabled=false --insecure-registry gcr.io'
  2. In the Kubernetes apiserver configuration file /etc/kubernetes/apiserver, remove ServiceAccount from the --admission-control parameter

Start the services

systemctl start etcd
systemctl start docker
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
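The services above must come up in dependency order: etcd before kube-apiserver, and Docker before the kubelet. A minimal sketch that starts them in one pass (the function name is my own, not part of the original setup):

```shell
# start_k8s_services: start the single-node stack in dependency order.
# etcd must be running before kube-apiserver; the node daemons come last.
start_k8s_services() {
  for svc in etcd docker kube-apiserver kube-controller-manager \
             kube-scheduler kubelet kube-proxy; do
    systemctl start "$svc" || { echo "failed to start $svc" >&2; return 1; }
  done
}
```

Call `start_k8s_services` once after editing the config files; add a `systemctl enable "$svc"` line inside the loop if the services should also survive a reboot.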

Installing a Kubernetes cluster

OS: CentOS 7
Docker: 1.13.1
etcd: 3.3.11
Kubernetes: v1.5.2
apiserver: v1beta1

Test environment

2 hosts as the master cluster, etcd cluster, and registry cluster
2 hosts as nodes
192.168.181.146 master1
192.168.181.150 master2
192.168.181.149 node1
192.168.181.147 node2

Pre-installation preparation

  1. Edit the /etc/hosts file
    192.168.181.146 master1 etcd1 keep1 k8s-n1
    192.168.181.149 node1 k8s-n2
    192.168.181.147 node2 k8s-n3
    192.168.181.150 master2 etcd2 keep2 k8s-n4
  2. Synchronize time across all nodes
  3. Stop the firewall service and make sure it will not start on boot

Installing the etcd cluster

  1. Install etcd on master1 and master2 (via yum)
    yum install etcd -y
  2. Edit the configuration file
    vi /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.181.146:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.181.146:2379,http://127.0.0.1:2379"
ETCD_NAME="etcd1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.181.146:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.181.146:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.181.146:2380,etcd2=http://192.168.181.150:2380"

On the etcd2 host, change etcd1 to etcd2 in the configuration file:

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.181.150:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.181.150:2379,http://127.0.0.1:2379"
ETCD_NAME="etcd2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.181.150:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.181.150:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.181.146:2380,etcd2=http://192.168.181.150:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
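The two files differ only in ETCD_NAME and the member's own IP; everything else, including the cluster list, is identical. A small generator sketch (the function name is assumed, not part of the original setup):

```shell
# gen_etcd_conf NAME IP -- print an etcd.conf for one cluster member.
# Only the member name and its own IP vary; the cluster list is fixed.
gen_etcd_conf() {
  name="$1"; ip="$2"
  cat <<EOF
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="http://$ip:2379,http://127.0.0.1:2379"
ETCD_NAME="$name"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$ip:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://$ip:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.181.146:2380,etcd2=http://192.168.181.150:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}
# e.g. on master2:  gen_etcd_conf etcd2 192.168.181.150 > /etc/etcd/etcd.conf
```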
  3. Start the service
    systemctl start etcd
  4. Test the service
    etcdctl member list

Setting up a private registry with Harbor and Cephfs

Reference: https://cloud.tencent.com/developer/article/1433266

  1. About Harbor & Cephfs
    Harbor is an enterprise-grade Docker Registry management project open-sourced by VMware. It provides role-based access control (RBAC), LDAP integration, audit logging, a management UI, self-registration, image replication, and Chinese-language support, which covers our needs for a private image registry. Cephfs is the file-storage layer of the Ceph distributed storage system; it is reliable, easy to manage, and highly scalable, comfortably handling data at PB and EB scale. We can use Cephfs as Harbor's underlying distributed storage to improve the availability of the Harbor cluster.
  2. Install a simple registry as the private repository
    yum install -y docker-distribution
    systemctl start docker-distribution
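docker-distribution serves the Docker Registry v2 API on port 5000, so a quick way to confirm it is running is to query the catalog endpoint. A sketch (the helper name is mine):

```shell
# registry_catalog HOST -- list the repositories the registry holds.
# A freshly started, empty registry returns {"repositories":[]}.
registry_catalog() {
  curl -s "http://$1:5000/v2/_catalog"
}
# e.g. registry_catalog master1
```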

Installing the K8s cluster

  1. Install the master components on master1 and master2
    yum install -y kubernetes-master
  2. Edit the configuration files
    vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.181.146:2379,http://192.168.181.150:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""

vi /etc/kubernetes/config

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.181.146:8080"
  3. Start the services
    systemctl start kube-apiserver.service
    systemctl start kube-controller-manager.service
    systemctl start kube-scheduler.service
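Once the three master components are up, the insecure port configured above should answer health probes. A check sketch (the function name is assumed):

```shell
# apiserver_healthy HOST -- succeed only if /healthz answers "ok" on 8080.
apiserver_healthy() {
  [ "$(curl -s "http://$1:8080/healthz")" = "ok" ]
}
# e.g. apiserver_healthy 192.168.181.146 && echo "master1 OK"
# kubectl -s http://192.168.181.146:8080 get componentstatuses also shows
# the scheduler, controller-manager, and etcd members in one view.
```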

Deploying a highly available LB with nginx and keepalived

  1. Install nginx and keepalived
    yum -y install keepalived nginx
  2. Add the load-balancing configuration
    vi /etc/nginx/conf.d/kube.conf
upstream kube-master {
  ip_hash;
  server master1:8080 weight=3; 
  server master2:8080 weight=2;
}
server {
  listen 8001;
  server_name _;
  location / {
  proxy_pass http://kube-master;
  }
}
  3. Edit the keepalived configuration
    vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id node1
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance ka {
    state MASTER
    interface ens37
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.181.151
    }
}

virtual_server 192.168.181.151 8001 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.181.146 8001 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.181.150 8001 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
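With keepalived running on both masters, exactly one of them should hold the VIP at any time, and requests to the VIP should reach a live apiserver through nginx. A verification sketch (the helper name is mine; 192.168.181.151 is the VIP from the config above):

```shell
# holds_vip ADDR -- true if this host currently owns the given address.
holds_vip() {
  ip -o addr show | grep -q " $1/"
}
# On each master:
#   holds_vip 192.168.181.151 && echo "this node is the keepalived MASTER"
# From any machine (proxied by nginx to an apiserver, so "ok" is expected):
#   curl -s http://192.168.181.151:8001/healthz
```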

Deploying the nodes

  1. Install Docker on the node servers
    yum -y install docker
    systemctl start docker.service
  2. Install kubelet and the Kubernetes network proxy
    yum install -y kubernetes-node
  3. Edit the kubelet configuration file
    vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=node1"
KUBELET_API_SERVER="--api-servers=http://master1:8080,http://master2:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
  4. Start the services and test
    systemctl start kubelet kube-proxy

    Verify from a master:

    kubectl get nodes

  5. Point Docker at the local image registry
    vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
ADD_REGISTRY='--add-registry master1:5000'
INSECURE_REGISTRY='--insecure-registry master1:5000'
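With --add-registry and --insecure-registry in place, the nodes can pull rhel7/pod-infrastructure from master1:5000 once the image has actually been pushed there. A retag helper sketch (the function name is mine; the mirror image source is whatever you pulled per the Problems section below):

```shell
# local_tag IMAGE -- rewrite an image reference to point at master1:5000,
# keeping the repository path after the original registry host.
local_tag() {
  echo "master1:5000/${1#*/}"
}
# Example usage (after pulling a pod-infrastructure mirror from docker.io):
#   docker tag <mirror-image> \
#     "$(local_tag registry.access.redhat.com/rhel7/pod-infrastructure:latest)"
#   docker push master1:5000/rhel7/pod-infrastructure:latest
```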

Testing

Create a pod:

kubectl run -i -t bbox --image=busybox

# view the deployments
kubectl get deployments
# view the pods
kubectl get pods

Problems

  1. registry.access.redhat.com/rhel7/pod-infrastructure:latest cannot be pulled, or authentication fails
    Cause: the file /etc/rhsm/ca/redhat-uep.pem is missing
    Fix: search for pod-infrastructure, pull it from docker.io, then push it to the local registry
  2. A newly created pod sits in CrashLoopBackOff state?
    Cause: the container never stays up; like `docker run busybox` without a long-running command, it exits as soon as it starts, so Kubernetes restarts it in a loop
    Fix: give the container a foreground command; if the image is built from a Dockerfile, add e.g. CMD ["httpd","-f"] or start an HTTP service

Deploying flannel

1. Install flannel
yum install flannel
2. Configure flannel on the masters and the nodes
vi /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://master1:2379,http://master2:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"

  3. In etcd, configure the address pool flanneld will use
    etcdctl mk /atomic.io/network/config '{"Network":"10.99.0.0/16"}'

  4. Start the flanneld service on the masters
    systemctl start flanneld.service

  5. Start flanneld on the nodes, then restart Docker to complete the configuration
    systemctl start flanneld.service
    systemctl restart docker.service

  6. Test
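A concrete way to run that test: flanneld writes its lease to /run/flannel/subnet.env, and after the Docker restart docker0 must sit inside that subnet. Sketch (the function name and file parameter are mine, for illustration):

```shell
# flannel_subnet [FILE] -- print the subnet flanneld leased to this host.
flannel_subnet() {
  . "${1:-/run/flannel/subnet.env}"
  echo "$FLANNEL_SUBNET"
}
# e.g. flannel_subnet               # prints something like 10.99.34.1/24
#      ip addr show docker0         # should be inside that subnet
#      ping <pod IP on the other node>   # cross-node traffic check
```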

Installing the Kubernetes Dashboard

1. Download the yaml file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.5.1/src/deploy/kubernetes-dashboard.yaml
2. Edit the file contents

image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
# Note: use an IP address here, never a domain name
- --apiserver-host=http://192.168.181.146:8080
  3. Launch
    kubectl create -f kubernetes-dashboard.yaml
  4. Access
    http://192.168.181.146:8080/ui

Deploying K8s monitoring

  1. Heapster and Grafana