.NET Core on K8S (1): Cluster Setup

1. Preface

  I used to build clusters with an nginx reverse proxy, but now we have a better option: K8S. I am not going to start with the K8S concepts themselves, because there are quite a lot of them; instead I will start with building a K8S cluster, since it was while setting one up that I became familiar with many of those concepts myself, and I hope this helps you too. Building a K8S cluster is of moderate difficulty, and there are plenty of tutorials online, but I ran into a few problems along the way, so here is the write-up I ended up with. This article covers installing K8S from the binary packages.

2. Cluster components

Node / IP / Components
master (192.168.8.201)

etcd: stores the cluster and node information

kubectl: the command-line tool for managing cluster components; the cluster is controlled through kubectl

kube-controller-manager: monitors whether nodes are healthy and automatically repairs them back to a healthy state

kube-scheduler: selects a suitable node for the pods created by kube-controller-manager and writes the node information into etcd

node (192.168.8.202)

kube-proxy: handles communication between services and pods

kubelet: once kube-scheduler has written the scheduling data into etcd, kubelet picks it up and creates the pods according to the spec

docker: the container runtime
3. Install etcd

yum install etcd -y
vi /etc/etcd/etcd.conf

Modify the following setting in etcd.conf:

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

Start etcd and enable it at boot:

systemctl start etcd
systemctl enable etcd
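Optionally, confirm that etcd is answering on the address configured above; etcdctl ships with the same yum package (a quick sanity check using the v2-style commands the packaged etcdctl defaults to):

etcdctl --endpoints=http://192.168.8.201:2379 cluster-health
etcdctl --endpoints=http://192.168.8.201:2379 member list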

4. Download the k8s packages

Open the Kubernetes repository on GitHub and pick a release to install.

Click CHANGELOG-1.13.md to find the download links: install the server package on the master node and the node package on the node.
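The binaries linked from CHANGELOG-1.13.md follow a fixed URL pattern, so they can also be fetched directly with wget; v1.13.0 below is only an example, substitute whichever release you picked:

wget https://dl.k8s.io/v1.13.0/kubernetes-server-linux-amd64.tar.gz    # on the master
wget https://dl.k8s.io/v1.13.0/kubernetes-node-linux-amd64.tar.gz      # on the node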

5. Install the server components on the master

tar zxvf kubernetes-server-linux-amd64.tar.gz      # extract the archive
mkdir -p /opt/kubernetes/{bin,cfg}            # create the working directories
mv kubernetes/server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager,kubectl} /opt/kubernetes/bin    # move the binaries into the directory created above
chmod +x /opt/kubernetes/bin/*
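A quick way to confirm the binaries are in place and executable (this only prints version information; nothing is configured yet at this point):

/opt/kubernetes/bin/kube-apiserver --version
/opt/kubernetes/bin/kubectl version --client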
5.1 Configure kube-apiserver
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=http://192.168.8.201:2379 \\
--insecure-bind-address=0.0.0.0 \\
--insecure-port=8080 \\
--advertise-address=192.168.8.201 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--service-node-port-range=30000-50000 \\
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota"

EOF
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
5.2 Configure kube-controller-manager
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager


KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect=true \\
--address=127.0.0.1"

EOF
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
5.3 Configure kube-scheduler
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect"

EOF
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
5.4 Run kube-apiserver, kube-controller-manager, and kube-scheduler
vim ku.sh    # create a script with the following content
#!/bin/bash


systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

systemctl enable kube-scheduler
systemctl restart kube-scheduler

Run the script:

chmod +x *.sh    # make the script executable

./ku.sh    # run it
5.5 Add kubectl to the PATH for convenience
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile
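With the new PATH loaded, kubectl should now reach the local apiserver; with no kubeconfig present it falls back to http://localhost:8080, which matches the insecure port opened in 5.1 (that default is an assumption that holds for this non-TLS setup):

kubectl version
kubectl cluster-info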

At this point the server components are installed; check that the related processes started successfully:

ps -ef |grep kube

If a service fails to start, inspect it with:

journalctl -u kube-apiserver
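The insecure port can also be probed directly over HTTP, which is often quicker than reading logs (a minimal check against the 8080 port configured in 5.1):

curl http://127.0.0.1:8080/healthz     # should print: ok
curl http://127.0.0.1:8080/version     # prints the apiserver build information as JSON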

6. Install the node

6.1 Install Docker
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast
sudo yum -y install docker-ce
sudo systemctl start docker
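Optionally, enable Docker at boot and confirm it is running before continuing; the kubelet unit created below depends on docker.service:

sudo systemctl enable docker
sudo docker info    # should print server details without errors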
6.2 Extract the node package
tar zxvf kubernetes-node-linux-amd64.tar.gz

mkdir -p /opt/kubernetes/{bin,cfg}

mv kubernetes/node/bin/{kubelet,kube-proxy} /opt/kubernetes/bin/

 chmod +x /opt/kubernetes/bin/*

6.3 Create the configuration files
vim /opt/kubernetes/cfg/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.8.201:8080
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
  name: default-context
current-context: default-context
vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.8.201:8080
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
  name: default-context
current-context: default-context
cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=192.168.8.202 \\
--hostname-override=192.168.8.202 \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--allow-privileged=true \\
--cluster-dns=10.10.10.2 \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=192.168.8.202 \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
6.4 Start kubelet and kube-proxy
vim ku.sh
#!/bin/bash

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

systemctl enable kube-proxy
systemctl restart kube-proxy
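As in section 5.4, make the script executable, run it, and check that both daemons came up:

chmod +x ku.sh
./ku.sh
ps -ef |grep kube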

At this point the node installation is complete.

If a service fails to start, check its logs:

journalctl -u kubelet

7. Verify the node from the master

Check the cluster health and confirm that the node has registered, as sketched below.
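A minimal set of checks from the master, using the kubectl configured in 5.5 (the exact output will vary; the node registers under the IP set in its kubelet config, 192.168.8.202):

kubectl get nodes               # the node should be listed with STATUS Ready
kubectl get componentstatuses   # scheduler, controller-manager and etcd should report Healthy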

 

At this point, both the master and the node are installed successfully.

8. Run an nginx example

kubectl run nginx --image=nginx --replicas=3
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort

Verify that the pods are running and that nginx is reachable from a browser on the node's assigned port, as sketched below.
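A rough verification sequence; the NodePort is assigned by the apiserver from the 30000-50000 range configured in 5.1, so substitute the value you actually see:

kubectl get pods                          # the three nginx replicas should be Running
kubectl get svc nginx                     # note the NodePort mapped to port 88
curl http://192.168.8.202:<NodePort>      # or open this URL in a browser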

9. Install the dashboard

vim kube.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.7.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
           - --apiserver-host=http://192.168.8.201:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

---

kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard

Create it:

kubectl create -f kube.yaml

Check the pod, find the NodePort assigned to the service, and then open the dashboard in a browser, as sketched below.
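A rough verification; the dashboard lives in the kube-system namespace defined in the manifest above, and the NodePort is again assigned automatically:

kubectl get pods -n kube-system                        # the kubernetes-dashboard pod should be Running
kubectl get svc kubernetes-dashboard -n kube-system    # note the NodePort mapped to port 80

Then browse to http://192.168.8.202:<NodePort> to open the dashboard.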

 

At this point the cluster setup is complete.
