The main steps follow this guide: https://www.kubernetes.org.cn/5551.html
This example installs k8s 1.15 with kubeadm; the OS is CentOS 7.
Prerequisite: a way to reach Google's servers from mainland China (a proxy/VPN), since the install pulls from packages.cloud.google.com and k8s.gcr.io.
Record of the main steps:
1. Install Docker; the original skips this step, but a minimal sketch is shown below.
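Since the original omits the Docker install, the exact commands here are an assumption, not the author's: a minimal sketch for CentOS 7 using Docker's official yum repo.

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce    # install Docker CE from the repo added above
systemctl enable docker && systemctl start docker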
2. Load the IPVS kernel modules; kube-proxy will use them (see step 10):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
3. Configure the k8s yum repo -> this step needs the proxy, since it accesses https://packages.cloud.google.com (or use the Aliyun mirror instead)
# Google's k8s repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Aliyun's k8s repo (alternative; both write the same file, so pick one)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
4. Install kubeadm and kubelet
yum makecache fast
yum install -y kubelet kubeadm kubectl
# start kubelet
systemctl start kubelet
5. Pull the images in advance to speed up the install
# If k8s.gcr.io is reachable, pull the images this way
kubeadm config images pull
# If k8s.gcr.io is not reachable, pull the images needed by 1.15 from the Azure China mirror
docker pull gcr.azk8s.cn/google-containers/kube-apiserver:v1.15.1
docker pull gcr.azk8s.cn/google-containers/kube-controller-manager:v1.15.1
docker pull gcr.azk8s.cn/google-containers/kube-scheduler:v1.15.1
docker pull gcr.azk8s.cn/google-containers/kube-proxy:v1.15.1
docker pull gcr.azk8s.cn/google-containers/pause:3.1
docker pull gcr.azk8s.cn/google-containers/etcd:3.3.10
docker pull gcr.azk8s.cn/google-containers/coredns:1.3.1
# retag them so kubeadm finds them under the expected k8s.gcr.io names
docker tag gcr.azk8s.cn/google-containers/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag gcr.azk8s.cn/google-containers/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag gcr.azk8s.cn/google-containers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag gcr.azk8s.cn/google-containers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag gcr.azk8s.cn/google-containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag gcr.azk8s.cn/google-containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag gcr.azk8s.cn/google-containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
6. Run kubeadm init (only on the master node) to initialize the install; a sketch is shown below.
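The original does not show the init command itself. A minimal sketch, assuming the master's address is 10.10.5.63 (taken from the join command in step 8); other flags may be needed in your environment:

kubeadm init \
  --kubernetes-version v1.15.1 \
  --apiserver-advertise-address 10.10.5.63

Then, per the init output, run the following commands so that kubectl works for the current user: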
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
7. Install the container network; this example uses Weave Net:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Note: at this point a single-node k8s cluster is installed; kubectl get pods -n kube-system should show every pod in the Running state.
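As an extra sanity check (not in the original), the master should also report Ready once the network plugin is up:

kubectl get nodes    # the master's STATUS should be Ready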
8. Join the other nodes to the k8s cluster (start kubelet on each node first)
# start kubelet
systemctl start kubelet
kubeadm join 10.10.5.63:6443 --token nuscgj.s7lveu88id4l5dq3 \
    --discovery-token-ca-cert-hash sha256:7fd76b241a139d72f5011a8b94f38d0b884495b37f63935ed52aff03e924a8ba
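The token and hash above come from this cluster's kubeadm init output. If the token has expired (bootstrap tokens last 24 hours by default), a fresh join command can be printed on the master:

kubeadm token create --print-join-command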
9. By default the master node carries the node-role.kubernetes.io/master taint and therefore cannot schedule ordinary pods; remove this taint:
kubectl taint node master node-role.kubernetes.io/master-
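Here "master" is the node's hostname; adjust it to yours. To confirm the taint is gone:

kubectl describe node master | grep Taints    # should print: Taints: <none>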
10. Change the kube-proxy mode to ipvs; the default is iptables
# set mode to "ipvs" in the ConfigMap
kubectl edit cm kube-proxy -n kube-system
# delete the kube-proxy pods so the new config takes effect
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
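To verify the switch (my addition, not in the original): the recreated kube-proxy pods should log that the ipvs proxier is in use, and ipvsadm (install it first; it is not part of the steps above) should list virtual servers:

kubectl logs -n kube-system $(kubectl get pod -n kube-system -l k8s-app=kube-proxy -o name | head -1) | grep -i ipvs
yum install -y ipvsadm && ipvsadm -Ln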
11. Install the dashboard, Rook Ceph, and other auxiliary tooling to make experiments easier
# install the dashboard, see: https://github.com/kubernetes/dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

# install Rook Ceph, see: https://rook.io/docs/rook/v1.0/ceph-quickstart.html
git clone https://github.com/rook/rook.git    # repo URL per the quickstart above; the original left it blank
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml
kubectl create -f cluster-test.yaml

# check that the Ceph cluster is healthy, from inside the toolbox pod (the pod name differs per cluster):
kubectl exec -it rook-ceph-tools-8fd8977f-9csxd bash -n rook-ceph
[root@hadoop002 /]# ceph status
  cluster:
    id:     36b6bd6c-9651-413b-b2a9-e60126e4beda
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum a (age 11m)
    mgr: a(active, since 10m)
    osd: 3 osds: 3 up (since 9m), 3 in (since 9m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   36 GiB used, 42 GiB / 78 GiB avail
    pgs:
Note: seeing health: HEALTH_OK means the Ceph cluster is working properly.
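One thing the original leaves out: logging in to the dashboard needs a bearer token. A common lab-only sketch (the dashboard-admin account name is my choice, and binding it to cluster-admin is far too permissive for anything beyond experiments):

kubectl -n kube-system create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# print the ServiceAccount's token secret; paste its token field into the dashboard login page
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}')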