k8s notes - deploying k8s with kubeadm

Reference: https://blog.csdn.net/networken/article/details/84991940

# k8s tooling deployment plan

# 1. Cluster planning

| **Server** | **Requirement** |
| ------------ | ---------------------------------------- |
| **Quantity** | >1 (allocate modules according to the servers actually provided) |
| **Configuration** | 16 cores / 32 GB memory / 300 GB disk / 50 Mbps bandwidth |
| **Operating system** | CentOS Linux 7.2; the master node requires Internet access |
| **File system** | the 300 GB disk is mounted under the /data directory |
| **Other requirements** | the master node must have Internet access |

| Node | Hostname | IP address | OS |
| -------- | ------------- | ----------- | ---------- |
| master | centos01 | 192.168.0.1 | CentOS 7.2 |
| node1 | centos02 | 192.168.0.2 | CentOS 7.2 |
| node2 | centos03 | 192.168.0.3 | CentOS 7.2 |

# 2. Base environment configuration

## 2.1 hostname configuration (optional)

**1) Change the hostname**

**Run as root on 192.168.0.1:**

hostnamectl set-hostname VM_0_1_centos

**Run as root on 192.168.0.2:**

hostnamectl set-hostname VM_0_2_centos

**Run as root on 192.168.0.3:**

hostnamectl set-hostname VM_0_3_centos

**2) Add host mappings**

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

vim /etc/hosts

192.168.0.1 VM_0_1_centos

192.168.0.2 VM_0_2_centos

192.168.0.3 VM_0_3_centos

## 2.2 Disable SELinux (optional)

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

sed -i '/^SELINUX/s/=.*/=disabled/' /etc/selinux/config

setenforce 0

## 2.3 Increase the maximum number of open files

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

vim /etc/security/limits.conf

* soft nofile 65536

* hard nofile 65536

## 2.4 Disable the firewall (optional)

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

systemctl disable firewalld.service

systemctl stop firewalld.service

systemctl status firewalld.service

## 2.5 Software environment initialization

**1) Initialize the servers**

groupadd -g 6000 apps
useradd -s /bin/sh -g apps -d /home/app app
passwd app
yum -y install gcc gcc-c++ make openssl-devel supervisor gmp-devel mpfr-devel libmpc-devel libaio numactl autoconf automake libtool libffi-devel

**2) Configure sudo**

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

vim /etc/sudoers.d/app

app ALL=(ALL) ALL

app ALL=(ALL) NOPASSWD: ALL

Defaults !env_reset
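
Before relying on the sudo configuration, it can be syntax-checked; this is a minimal sketch using visudo's check mode plus a passwordless-sudo probe:

# Check the drop-in file for syntax errors (a broken sudoers file can lock sudo out entirely)
visudo -cf /etc/sudoers.d/app

# Confirm that the app user actually gets passwordless sudo
su - app -c "sudo -n true && echo sudo OK"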

**3) Configure passwordless SSH login**

**a. Run as the app user on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

su app

ssh-keygen -t rsa

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

chmod 600 ~/.ssh/authorized_keys

**b. Merge the id_rsa.pub files**

**Run as the app user on 192.168.0.1**

scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh

Enter the app user's password when prompted

**Run as the app user on 192.168.0.2**

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

scp ~/.ssh/authorized_keys app@192.168.0.3:/home/app/.ssh

**Run as the app user on 192.168.0.3**

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

scp ~/.ssh/authorized_keys app@192.168.0.1:/home/app/.ssh

scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh

**c. Run an SSH test as the app user on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

ssh app@192.168.0.1

ssh app@192.168.0.2

ssh app@192.168.0.3
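
The same test can be run as a loop from any of the three machines; a small sketch (BatchMode makes a failure visible instead of falling back to a password prompt):

for host in 192.168.0.1 192.168.0.2 192.168.0.3; do
  ssh -o BatchMode=yes app@${host} hostname || echo "passwordless login to ${host} failed"
done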

## 2.6 sysctl parameters

**Run as root on 192.168.0.1, 192.168.0.2 and 192.168.0.3**

**vim /etc/sysctl.conf**

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

vm.swappiness=0

vm.overcommit_memory=1

vm.panic_on_oom=0

fs.inotify.max_user_watches=89100

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

**# Apply the settings**

sysctl -p
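
An alternative sketch, if you prefer not to edit /etc/sysctl.conf directly: place the same settings in a drop-in file and apply everything with sysctl --system. The file name k8s.conf is arbitrary, and the net.bridge.* parameters assume the br_netfilter module is loaded:

# Load the module required by the net.bridge.* parameters
modprobe br_netfilter

# Example drop-in with a subset of the settings above
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF

# Apply all sysctl configuration files
sysctl --system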

## 2.7 ntpd configuration

**1) Server configuration**

**Run as root on 192.168.0.1**

yum install -y ntp ntpdate

**Edit /etc/ntp.conf**

**Comment out all existing server and restrict lines**

**Add:**

server 0.cn.pool.ntp.org

server 0.asia.pool.ntp.org

server 3.asia.pool.ntp.org

 

restrict 0.cn.pool.ntp.org nomodify notrap noquery

restrict 0.asia.pool.ntp.org nomodify notrap noquery

restrict 3.asia.pool.ntp.org nomodify notrap noquery

 

server 127.127.1.0 # local clock

fudge 127.127.1.0 stratum 10

 

systemctl enable ntpd

systemctl disable chronyd

systemctl restart ntpd

**Check the NTP servers on the network**

ntpq -p

**2) Client configuration**

**Run as root on 192.168.0.2 and 192.168.0.3**

yum install -y ntp ntpdate

**Add to /etc/ntp.conf**

server 192.168.0.1 prefer

 

systemctl enable ntpd

systemctl disable chronyd

systemctl restart ntpd

**Synchronize**

ntpdate -u 192.168.0.1

Run hwclock --systohc to write the system time back to the hardware clock (BIOS)
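
If ntpd on a client ever stops, a periodic ntpdate run keeps the clock roughly aligned; the cron entry below is only a sketch and the 30-minute interval is an arbitrary choice (note that ntpdate cannot run while ntpd holds the NTP port):

(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate -u 192.168.0.1 && /sbin/hwclock --systohc") | crontab -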


# 3. Configure a CentOS repository

**Run as root on 192.168.0.1; Internet access is required**

**1) Install the plugins**

yum install -y yum-plugin-downloadonly createrepo rsync

**2) Create the directory**

mkdir -p /data/mirrors/centos

**3) Download or upload packages**

yum install nginx -y --downloadonly --downloaddir=/data/mirrors/centos
You can also download rpm packages to /data/mirrors/centos yourself.

**4) Create the repo**

createrepo /data/mirrors/centos

**5) Install nginx**

yum -y install nginx
cd /etc/nginx/conf.d

**vim mirrors.conf**

server {
    listen 88;
    server_name localhost;
    root /data/mirrors/;
    location / {
        autoindex on;
        autoindex_exact_size off;
        autoindex_localtime on;
    }
}
**Start the service**
nginx
nginx -t
nginx -s reload
systemctl enable nginx
systemctl start nginx
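
A quick check, from any of the three machines, that the repository is actually being served:

curl -s http://192.168.0.1:88/centos/ | head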

**6) Configure the repo (run as root on 192.168.0.1, 192.168.0.2 and 192.168.0.3)**

**vim /etc/yum.repos.d/mirrors.repo**
[yumbase]
name=yum-local-repository
baseurl=http://192.168.0.1:88/centos/
enabled=1
gpgcheck=0
# Verify
yum clean all && yum makecache
yum repoinfo yumbase

**7) Verify (on any machine)**

yum -y install <package-name>-<version>

**8) Sync from the Tsinghua University mirror (run as root on 192.168.0.1)**
#!/bin/bash
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/centosplus/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/extras/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/os/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/updates/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/epel/7Server/x86_64/Packages/ /data/mirrors/centos
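
If the script above is saved to a file, it can be scheduled so the mirror stays current and the repo metadata is refreshed after each sync; /data/mirrors/sync.sh below is only an assumed location for that script:

(crontab -l 2>/dev/null; echo "0 2 * * * /bin/bash /data/mirrors/sync.sh && /usr/bin/createrepo --update /data/mirrors/centos") | crontab -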

**9) Sync the Kubernetes packages from the Aliyun mirror (run as root on 192.168.0.1)**

A manual download is required: fetch the packages from https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages and copy them into /data/mirrors/centos.

# 4. Install Docker

**Run as root on 192.168.0.1, 192.168.0.2 and 192.168.0.3**

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-18.09.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.09.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

rpm -ivh containerd.io-1.2.0-3.el7.x86_64.rpm

rpm -ivh docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

rpm -ivh docker-ce-cli-18.09.0-3.el7.x86_64.rpm

rpm -ivh docker-ce-18.09.0-3.el7.x86_64.rpm

systemctl enable docker

usermod -aG docker app

systemctl start docker
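
A quick sanity check after installation (the app user must log in again before the new docker group membership takes effect):

systemctl status docker
docker info
id app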

# 5. Private registry configuration

**1) Set up the private registry**

**Run as the app user on 192.168.0.1 (Internet access required)**

docker pull registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1

docker tag registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1 k8s.gcr.io/pause:3.1

docker pull registry

docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

**Run as root on 192.168.0.1, 192.168.0.2 and 192.168.0.3**

Edit /etc/docker/daemon.json and add:

{

    "registry-mirrors": ["https://njrds9qc.mirror.aliyuncs.com"],

    "insecure-registries": ["192.168.0.1:5000"]

}

systemctl daemon-reload

systemctl restart docker

docker login 192.168.0.1:5000 and enter the username and password: wb / 123

cat ~/.docker/config.json to view the stored credentials

**Create the secret**

/data/projects/common/kubernetes/bin/kubectl create secret docker-registry dockercfg-192 --docker-server=192.168.0.1:5000 --docker-username=wb --docker-password=123

**View the created dockercfg-192 secret**

/data/projects/common/kubernetes/bin/kubectl get secret |grep dockercfg-192
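
The secret can be referenced per Pod through imagePullSecrets, or attached to the default service account so every Pod in the namespace uses it; the latter is sketched below with the secret created above:

/data/projects/common/kubernetes/bin/kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "dockercfg-192"}]}'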

**2) Push images to the private registry**

**Run as the app user on 192.168.0.1**

**a. Retag**

docker tag f32a97de94e1 192.168.0.1:5000/registry:latest

docker tag k8s.gcr.io/pause:3.1 192.168.0.1:5000/k8s.gcr.io/pause:3.1

**b. Push**

docker push 192.168.0.1:5000/registry:latest

docker push 192.168.0.1:5000/k8s.gcr.io/pause:3.1

**c. Pull**

**Run as the app user on 192.168.0.2 and 192.168.0.3**

docker pull 192.168.0.1:5000/registry:latest

docker tag f32a97de94e1 registry:latest

docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

docker pull 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker tag 192.168.0.1:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1

# 6. Install the k8s management tools

**Install as root on 192.168.0.1, 192.168.0.2 and 192.168.0.3**

yum -y install kubelet-1.16.1 kubeadm-1.16.1 kubectl-1.16.1 --disableexcludes=kubernetes

systemctl daemon-reload

systemctl enable kubelet

# 7. Deploy the k8s components

**1) List the required images (run as root on the master node 192.168.0.1)**

#kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.16.1
k8s.gcr.io/kube-controller-manager:v1.16.1
k8s.gcr.io/kube-scheduler:v1.16.1
k8s.gcr.io/kube-proxy:v1.16.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

**2) Download the images (run as root on the master node 192.168.0.1)**

**cat kubeadm.sh**

#!/bin/bash

set -e

KUBE_VERSION=v1.16.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
docker pull $ALIYUN_URL/$imageName
docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
docker rmi $ALIYUN_URL/$imageName
done

**Run the script**

bash kubeadm.sh

**3) Initialize (on the master node)**

kubeadm init \
--apiserver-advertise-address 192.168.0.1 \
--kubernetes-version=v1.16.1 \
--apiserver-bind-port 8080 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16

**If output like the following is returned, initialization succeeded**

kubeadm join 192.168.0.1:8080 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b

**# Join the nodes (run on all nodes)**

kubeadm join 192.168.0.1:8080 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b --ignore-preflight-errors=all

**# Allow pods to be scheduled on the master node (remove the master taint)**

kubectl taint nodes --all node-role.kubernetes.io/master-

**# Export the kubeconfig (8080)**

Add the following to /etc/profile, then run source /etc/profile

export KUBECONFIG=/etc/kubernetes/kubelet.conf

export KUBECONFIG=/etc/kubernetes/admin.conf

**4) Install the flannel plugin (on all nodes)**

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

**# Download the images**

**vim flanneld.sh**

#!/bin/bash

set -e

FLANNEL_VERSION=v0.11.0

QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos

images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)

for imageName in ${images[@]} ; do
docker pull $QINIU_URL/$imageName
docker tag $QINIU_URL/$imageName $QUAY_URL/$imageName
docker rmi $QINIU_URL/$imageName
done

**Run the script**

bash flanneld.sh

**# Create**

git clone https://github.com/coreos/flannel.git

cd flannel/Documentation

kubectl apply -f kube-flannel.yml

**# Verify the node installation status**

kubectl get componentstatus

kubectl get node

**Configure the image registry for the k8s components**

**# Run on the master node**

docker tag k8s.gcr.io/kube-proxy:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker tag k8s.gcr.io/kube-controller-manager:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker tag k8s.gcr.io/kube-apiserver:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker tag k8s.gcr.io/kube-scheduler:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker tag k8s.gcr.io/coredns:1.3.1 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker tag k8s.gcr.io/etcd:3.3.10 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker tag k8s.gcr.io/pause:3.1 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker tag quay.io/coreos/flannel:v0.11.0-s390x 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker tag quay.io/coreos/flannel:v0.11.0-ppc64le 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker tag quay.io/coreos/flannel:v0.11.0-arm64 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker tag quay.io/coreos/flannel:v0.11.0-arm 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker tag quay.io/coreos/flannel:v0.11.0-amd64 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64

docker push 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker push 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker push 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64
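
The tag-and-push sequence above can also be written as a loop; an equivalent sketch over the same image list (only the amd64 flannel image is shown, add the other architectures if they are needed):

REGISTRY=192.168.0.1:5000
for img in k8s.gcr.io/kube-proxy:v1.16.1 k8s.gcr.io/kube-controller-manager:v1.16.1 k8s.gcr.io/kube-apiserver:v1.16.1 k8s.gcr.io/kube-scheduler:v1.16.1 k8s.gcr.io/coredns:1.3.1 k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/pause:3.1 quay.io/coreos/flannel:v0.11.0-amd64; do
  docker tag ${img} ${REGISTRY}/${img}
  docker push ${REGISTRY}/${img}
done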

**Run on the worker nodes**

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker pull 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker pull 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64

docker tag 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1 k8s.gcr.io/kube-proxy:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1 k8s.gcr.io/kube-controller-manager:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1 k8s.gcr.io/kube-apiserver:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1 k8s.gcr.io/kube-scheduler:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag 192.168.0.1:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x quay.io/coreos/flannel:v0.11.0-s390x
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le quay.io/coreos/flannel:v0.11.0-ppc64le
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64 quay.io/coreos/flannel:v0.11.0-arm64
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm quay.io/coreos/flannel:v0.11.0-arm
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
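
The node-side pull-and-retag can be expressed with the same kind of loop; a sketch over the same image list (again only the amd64 flannel image):

REGISTRY=192.168.0.1:5000
for img in k8s.gcr.io/kube-proxy:v1.16.1 k8s.gcr.io/kube-controller-manager:v1.16.1 k8s.gcr.io/kube-apiserver:v1.16.1 k8s.gcr.io/kube-scheduler:v1.16.1 k8s.gcr.io/coredns:1.3.1 k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/pause:3.1 quay.io/coreos/flannel:v0.11.0-amd64; do
  docker pull ${REGISTRY}/${img}
  docker tag ${REGISTRY}/${img} ${img}
done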

# 8. Install helm

**Install on all nodes**

wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz

tar xvf helm-v2.14.3-linux-amd64.tar.gz

sudo cp linux-amd64/helm linux-amd64/tiller /usr/local/bin

sudo yum install -y socat

sudo yum install -y *rhsm*

sudo yum -y install bridge*

sudo nohup /usr/local/bin/tiller &

sudo sed -i '$a\export HELM_HOST=localhost:44134' /etc/profile

source /etc/profile

helm version
