Tags (space-separated): kubernetes series
[toc]
OS: CentOS 7.8 x64

cat /etc/hosts
-----
192.168.100.11 node01.flyfish.cn
192.168.100.12 node02.flyfish.cn
192.168.100.13 node03.flyfish.cn
192.168.100.14 node04.flyfish.cn
192.168.100.15 node05.flyfish.cn
192.168.100.16 node06.flyfish.cn
192.168.100.17 node07.flyfish.cn
192.168.100.18 node08.flyfish.cn
-----
This walkthrough deploys Kubernetes on the first three nodes.
Install the basic tools:
yum install -y wget && yum install -y vim && yum install -y lsof && yum install -y net-tools
Disable the firewall (or, on Alibaba Cloud, open the required ports in the security group instead):
systemctl stop firewalld.service
systemctl status firewalld.service      # confirm it is stopped
systemctl disable firewalld.service     # keep it disabled across reboots
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
cat /etc/selinux/config
Disable swap:
swapoff -a                              # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab     # permanent
free -l -h
Pass bridged IPv4 traffic to the iptables chains. If /etc/sysctl.conf does not exist yet, simply append the settings:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
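The bridge-related settings only take effect after the br_netfilter module is loaded and the file is re-read; a minimal follow-up, assuming br_netfilter is available (it ships with CentOS 7):
modprobe br_netfilter
sysctl -p /etc/sysctl.conf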
Download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
Perform the following on all nodes. A binary install is used here; a yum install works just as well. Install on node01.flyfish, node02.flyfish and node03.flyfish.
3.1 Unpack the binary package
tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin
3.2 Manage Docker with systemd (the heredoc delimiter is quoted so that $MAINPID is not expanded by the current shell):
cat > /usr/lib/systemd/system/docker.service << 'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
3.3 Create the daemon configuration file
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
registry-mirrors points at the Alibaba Cloud registry mirror (image pull accelerator).
3.4 Start Docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
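A quick sanity check that the daemon is up and the mirror was picked up:
docker version
docker info | grep -A1 "Registry Mirrors"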
Install kubeadm, kubelet and kubectl (all nodes). Configure the Kubernetes yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm and kubectl:
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
systemctl enable kubelet && systemctl start kubelet
Pre-pull the control-plane images on all nodes. Image download script:
vim image.sh
----
#!/bin/bash
images=(
  kube-apiserver:v1.17.3
  kube-proxy:v1.17.3
  kube-controller-manager:v1.17.3
  kube-scheduler:v1.17.3
  coredns:1.6.5
  etcd:3.4.3-0
  pause:3.1
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
-----
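Run the script on every node and confirm the images are present locally:
chmod +x image.sh && ./image.sh
docker images | grep google_containers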
Initialize the master node. Note: run this command on the master node (node01.flyfish.cn) only.
kubeadm init \
--apiserver-advertise-address=192.168.100.11 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.17.3 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config
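At this point the master should register itself (it stays NotReady until the network plugin below is installed):
kubectl get nodes
kubectl get pods -n kube-system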
Deploy the network plugin (Calico):
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Join the other nodes, using the token printed by kubeadm init:
kubeadm join 192.168.100.11:6443 --token y28jw9.gxstbcar3m4n5p1a \
--discovery-token-ca-cert-hash sha256:769528577607a4024ead671ae01b694744dba16e0806e57ed1b099eb6c6c9350
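If the token has expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:
kubeadm token create --print-join-command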
Set up an NFS server (master node):
yum install -y nfs-utils
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
mkdir -p /nfs/data
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r
Test mounting NFS directly in a Pod (master node). Create nginx.yaml under /opt:
vim nginx.yaml
----
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs
  namespace: default
spec:
  volumes:
  - name: html
    nfs:
      path: /nfs/data        # 1000G
      server: 192.168.100.11 # your own NFS server address
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
----
kubectl apply -f nginx.yaml
cd /nfs/data/
echo "11111" >> index.html
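To confirm the mount works, look up the Pod's IP and request the page just written into the NFS export (replace <pod-ip> with the address shown by kubectl):
kubectl get pod vol-nfs -o wide
curl http://<pod-ip>/index.html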
Install the NFS client tools on the worker nodes. On node02.flyfish.cn, check the export:
showmount -e 192.168.100.11
Create a local mount point for syncing:
mkdir /root/nfsmount
Mount the server's /nfs/data onto the client's /root/nfsmount so the two stay in sync (worker node):
mount -t nfs 192.168.100.11:/nfs/data/ /root/nfsmount
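A quick check that the mount is live, by writing a file on one side and reading it on the other:
df -h | grep nfsmount
echo "hello from node02" > /root/nfsmount/test.txt    # the file should then appear under /nfs/data/ on the server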
Deploy the NFS client provisioner for dynamic provisioning (master node):
vim nfs-rbac.yaml
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: storage.pri/nfs
            - name: NFS_SERVER
              value: 192.168.100.11
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.11
            path: /nfs/data
----
kubectl apply -f nfs-rbac.yaml
kubectl get pod
Create the StorageClass:
vi storageclass-nfs.yaml
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
provisioner: storage.pri/nfs
reclaimPolicy: Delete
----
kubectl apply -f storageclass-nfs.yaml
# Note: there are three reclaim policies: Retain, Recycle and Delete.
Retain   Keeps the PV released by a PVC together with its data; the PV moves to the "Released" state and will not be bound by another PVC. The cluster administrator reclaims the storage manually: delete the PV (the backend volume, e.g. an AWS EBS, GCE PD, Azure Disk or Cinder volume, still exists), wipe the data on the backend volume, then delete the volume or reuse it by creating a new PV for it.
Delete   Deletes the PV released by a PVC together with its backend volume. For dynamically provisioned PVs the reclaim policy is inherited from the StorageClass and defaults to Delete. The administrator should set the StorageClass's reclaim policy to whatever users expect; otherwise the policy has to be edited on each dynamically created PV afterwards.
Recycle  Keeps the PV but scrubs the data on it (deprecated).
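For example, an existing PV's policy can be switched to Retain with a patch (replace <pv-name> with the name shown by kubectl get pv):
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'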
kubectl get storageclass
Make it the default StorageClass:
https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/#%e4%b8%ba%e4%bb%80%e4%b9%88%e8%a6%81%e6%94%b9%e5%8f%98%e9%bb%98%e8%ae%a4-storage-class
kubectl patch storageclass storage-nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Verify NFS dynamic provisioning. Create a PVC:
vim pvc.yaml
-----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim-01
  #annotations:
  #  volume.beta.kubernetes.io/storage-class: "storage-nfs"
spec:
  storageClassName: storage-nfs   # must match the StorageClass name
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
-----
kubectl apply -f pvc.yaml
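The claim should become Bound almost immediately, with a PV created for it by the provisioner:
kubectl get pvc pvc-claim-01
kubectl get pv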
Use the PVC:
vi testpod.yaml
----
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: pvc-claim-01
----
kubectl apply -f testpod.yaml
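Once the Pod has completed, the SUCCESS file should appear in the PVC's directory on the NFS server (the provisioner names that directory after the namespace, PVC and PV, so the exact path varies):
kubectl get pod test-pod
ls /nfs/data/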
1. First install metrics-server (the YAML below already has the image and flags adjusted, so it can be applied directly). This makes Pod and node resource usage visible (by default only CPU and memory metrics; a full Prometheus stack comes later).
vim 2222.yaml
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
----
kubectl apply -f 2222.yaml
Verify that metrics are being collected:
kubectl top nodes
Install KubeSphere 3.0 on the existing cluster (minimal install):
https://kubesphere.com.cn/docs/quick-start/minimal-kubesphere-on-k8s/
wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml
vim cluster-configuration.yaml
----
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  persistence:
    storageClass: ""            # If there is not a default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""               # Keep the jwtSecret consistent with the host cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the host cluster.
  etcd:
    monitoring: true            # Whether to enable etcd monitoring dashboard installation. You have to create a secret for etcd before you enable it.
    endpointIps: 192.168.100.11 # etcd cluster EndpointIps, it can be a bunch of IPs here.
    port: 2379                  # etcd port
    tlsEnable: true
  common:
    mysqlVolumeSize: 20Gi       # MySQL PVC size.
    minioVolumeSize: 20Gi       # Minio PVC size.
    etcdVolumeSize: 20Gi        # etcd PVC size.
    openldapVolumeSize: 2Gi     # openldap PVC size.
    redisVolumSize: 2Gi         # Redis PVC size.
    es:                         # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1    # total number of master nodes, it's not allowed to use even number
      # elasticsearchDataReplicas: 1      # total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi  # Volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi   # Volume size of Elasticsearch data nodes.
      logMaxAge: 7                        # Log retention time in built-in Elasticsearch, it is 7 days by default.
      elkPrefix: logstash                 # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
  console:
    enableMultiLogin: true      # enable/disable multiple sign on, it allows an account can be used by different users at the same time.
    port: 30880
  alerting:                     # (CPU: 0.3 Core, Memory: 300 MiB) Whether to install KubeSphere alerting system. It enables Users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true
  auditing:                     # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records, recording the sequence of activities happened in platform, initiated by different tenants.
    enabled: true
  devops:                       # (CPU: 0.47 Core, Memory: 8.6 G) Whether to install KubeSphere DevOps System. It provides out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true
    jenkinsMemoryLim: 2Gi       # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi    # Jenkins memory request.
    jenkinsVolumeSize: 8Gi      # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m   # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                       # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:                      # (CPU: 57 m, Memory: 2.76 G) Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true
    logsidecarReplicas: 2
  metrics_server:               # (CPU: 56 m, Memory: 44.35 MiB) Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
    enabled: false
  monitoring:
    # prometheusReplicas: 1          # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well.
    prometheusMemoryRequest: 400Mi   # Prometheus request memory.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1        # AlertManager Replicas.
  multicluster:
    clusterRole: none           # host | member | none  # You can install a solo cluster, or specify it as the role of host or member cluster.
  networkpolicy:                # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods). Make sure that the CNI network plugin used by the cluster supports NetworkPolicy, e.g. Calico, Cilium, Kube-router, Romana or Weave Net.
    enabled: true
  notification:                 # Email Notification support for the legacy alerting system, should be enabled/disabled together with the above alerting option.
    enabled: true
  openpitrix:                   # (2 Core, 3.6 G) Whether to install KubeSphere Application Store. It provides an application store for Helm-based applications, and offer application lifecycle management.
    enabled: true
  servicemesh:                  # (0.3 Core, 300 MiB) Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology.
    enabled: true
----
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
Watch the installation progress:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
kubectl get pod -A
kubesphere-monitoring-system   prometheus-k8s-0   0/3   ContainerCreating   0   7m20s
kubesphere-monitoring-system   prometheus-k8s-1   0/3   ContainerCreating   0   7m20s
The prometheus-k8s Pods stay stuck in the ContainerCreating state.
kubectl describe pod prometheus-k8s-0 -n kubesphere-monitoring-system
The events show that the kube-etcd-client-certs secret cannot be found:
Create the missing etcd client certificate secret from the kubeadm-generated certificates:
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
kubectl get secret -A | grep etcd
kubectl get pod -n kubesphere-monitoring-system
The prometheus-k8s Pods now move to the Running state.
Finally, open the KubeSphere web console at the address shown in the installer log:
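If that log line has scrolled away, the console address can also be recovered from the ks-console Service; for KubeSphere 3.0 the console listens on NodePort 30880 and the default account is admin / P@88w0rd (change the password after the first login):
kubectl get svc -n kubesphere-system ks-console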