Quickly creating Linux virtual machines with Vagrant
Install VirtualBox
https://www.virtualbox.org/wiki/Downloads
Download and install Vagrant
I used three CentOS 7 virtual machines, each with 20G of memory.
Traditional deployment era: Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications on a physical server, which caused resource allocation problems. For example, if multiple applications ran on one physical server, one application might consume most of the resources and degrade the performance of the others. One solution was to run each application on its own physical server, but this did not scale because resources were underutilized, and maintaining many physical servers was expensive for organizations.
Virtualized deployment era: As a solution, virtualization was introduced. It allows you to run multiple virtual machines (VMs) on a single physical server's CPU. Virtualization isolates applications between VMs and provides a level of security, since one application's information cannot be freely accessed by another application.
Because virtualization makes it easy to add or update applications, lowers hardware costs, and so on, it allows better utilization of the resources in a physical server and enables better scalability.
Each VM is a complete machine running all the components, including its own operating system, on top of virtualized hardware.
Container deployment era: Containers are similar to VMs, but they have lightweight isolation properties and share the operating system (OS) among applications; containers are therefore considered lightweight. Like a VM, a container has its own filesystem, CPU, memory, process space, and so on. Because they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
Some of the benefits of containers are listed below:
In the container era, the need for container management became apparent.
Kubernetes provides you with:
K8S architecture
A K8S cluster consists of two kinds of nodes: master nodes and worker (node) nodes.
The master node is responsible for controlling the cluster: scheduling pods, token management, and so on.
Worker nodes do the actual work: starting and managing containers.
Master and worker roles should generally not be deployed on the same machine.
Below, one master and two workers.
Abstracting that diagram, it looks roughly like this:
The master node contains many components.
The application containers in a Pod share the same set of resources (a minimal Pod sketch follows the list below):
(1) PID namespace: applications in a Pod can see each other's process IDs
(2) Network namespace: the containers in a Pod share the same IP and port range
(3) IPC namespace: the containers in a Pod can communicate via System V IPC or POSIX message queues
(4) UTS namespace: the containers in a Pod share one hostname
(5) Volumes (shared storage volumes): the containers in a Pod can access Volumes defined at the Pod level; data produced while the containers run is stored in these Volumes
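As an illustration of the shared Volume and network namespace, here is a minimal Pod sketch (names are hypothetical, not from this article) in which two containers write to and read from the same emptyDir volume:

```yaml
# Minimal sketch: two containers in one Pod share an emptyDir volume
# and the same network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo            # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # can read /data/msg written by the other container
    volumeMounts:
    - name: shared-data
      mountPath: /data
```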
Each node runs Docker; Docker runs containers; the kubelet groups containers into pods and handles their creation, destruction, and lifecycle. Once a pod is exposed, an access address is provided; when that address is accessed, kube-proxy resolves it and routes the request to the target container. kube-proxy also performs load balancing.
Deployment is a concept introduced in Kubernetes v1.2 to better solve the problem of Pod orchestration. Internally, a Deployment uses a Replica Set to achieve this. Whether you look at a Deployment's role and purpose, its YAML definition, or its concrete command-line operations, it can be regarded as an upgrade of the RC; the two are more than 90% similar.
The biggest upgrade of a Deployment over the RC is that we always know the current progress of a Pod "rollout". Creating a Pod, scheduling it, binding it to a node, and starting the corresponding containers on the target node take some time, so the target state of "N Pod replicas running" is really the final state of a continuously changing "deployment process".
Typical use cases for a Deployment include the following.
In Kubernetes, the Pod management objects RC, Deployment, DaemonSet, and Job are all oriented toward stateless services. In reality, however, many services are stateful, especially complex middleware clusters such as MySQL, MongoDB, Kafka, and ZooKeeper clusters. These application clusters share the following characteristics.
Each node has a fixed identity ID through which the members of the cluster discover and communicate with each other.
The cluster size is relatively fixed and cannot be changed arbitrarily.
Every node in the cluster is stateful and usually persists its data to permanent storage.
If a disk is damaged, a node of the cluster cannot run normally and cluster functionality is impaired.
If we used RC/Deployment replica control to implement such a stateful cluster, we would find that the first requirement cannot be met: Pod names are generated randomly and Pod IP addresses are determined only at runtime and may change, so we cannot assign each Pod a unique, stable ID in advance. In addition, to recover a failed node elsewhere, the Pods in such a cluster need to mount some kind of shared storage. To solve these problems, Kubernetes introduced the PetSet resource in v1.4 and renamed it StatefulSet in v1.5. A StatefulSet is essentially a special variant of Deployment/RC with the following properties (a minimal sketch follows the list below).
Every Pod in a StatefulSet has a stable, unique network identity that can be used to discover the other members of the cluster. If the StatefulSet is named kafka, the first Pod is kafka-0, the second kafka-1, and so on.
The start/stop order of the Pod replicas controlled by a StatefulSet is controlled: when the n-th Pod is operated on, the first n-1 Pods are already running and ready.
Pods in a StatefulSet use stable persistent storage volumes implemented via PV/PVC; when a Pod is deleted, the storage volumes associated with the StatefulSet are by default not deleted (to keep the data safe).
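As a hedged illustration (names and image reference are hypothetical), a minimal StatefulSet might look like the sketch below; note the serviceName of a headless Service, which yields the kafka-0/kafka-1 DNS identities, and the volumeClaimTemplates, which give each Pod its own PVC:

```yaml
# Minimal sketch, assuming a headless Service named "kafka" already exists.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka             # headless Service; Pods get stable DNS names kafka-0, kafka-1, ...
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: my-registry/kafka:2.4   # illustrative image reference
        volumeMounts:
        - name: data
          mountPath: /var/lib/kafka
  volumeClaimTemplates:          # each replica gets its own PVC: data-kafka-0, data-kafka-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```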
Service
Defines an access policy for a set of pods
Load balancing for pods: provides a stable access address for one or more pods
A Kubernetes Service defines an entry address for a service; frontend applications use this entry address to reach the cluster of instances formed by the Pod replicas behind it. A Service is "seamlessly" wired to its backend Pod replicas through a Label Selector, while the RC's role is to keep the Service's capacity and quality of service at the expected level.
Controllers group pods, and a Service in turn selects a set of pods, groups them, and load-balances across them. To use, say, the shopping-cart feature, you only need to reach its Service: the Service knows which servers its pods live on, and reaching any one of them is enough. A deployment places pods onto the nodes: the Deployment produces the pods, and the Service groups them.
Label
Label is another core concept in Kubernetes. A Label is a key=value pair in which the key and value are specified by the user. Labels can be attached to all kinds of resource objects, such as Node, Pod, Service, and RC; a resource object can define any number of Labels, and the same Label can be added to any number of resource objects. Labels are usually set when a resource object is defined, but they can also be added or removed dynamically after the object is created.
By binding one or more Labels to a given resource object we get multi-dimensional resource grouping, which makes resource allocation, scheduling, configuration, and deployment flexible and convenient: for example, deploying different versions of an application to different environments, or monitoring and analyzing applications (logging, monitoring, alerting). Some common Label examples follow.
A Label is just the familiar "tag": defining a Label on a resource object is like sticking a tag on it, and you can then use a Label Selector to query and filter the resource objects carrying certain Labels. In this way Kubernetes implements a simple, generic, SQL-like object query mechanism.
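For example (the pod name is taken from the scaling example later in this article), a label can be attached and queried from the command line:

```shell
# Attach a label to a resource, then filter by Label Selector
kubectl label pod tomcat6-5f7ccf4cb9-2bghn env=dev
kubectl get pods -l env=dev
kubectl get pods -l 'env in (dev,test)'   # set-based selector
```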
namespace
Namespaces, used for isolation
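For instance:

```shell
kubectl create namespace dev          # create an isolated namespace
kubectl get pods -n dev               # resources are listed per namespace
kubectl get pods --all-namespaces     # or across all namespaces
```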
kubectl submits a request to create an RC; the request is written into etcd via the API Server
Controller Manager, watching resource changes through the API Server's watch interface, then notices this RC event
It finds that the cluster has no pod instances corresponding to it yet
It generates a pod object from the pod template in the RC and writes it into etcd via the API Server
The Scheduler notices this event and immediately runs a complex scheduling flow to pick a home node for the new pod, then writes the result into etcd via the API Server
The kubelet running on the target node detects the new pod through the API Server and, following its definition, starts the pod and remains responsible for it until the end of its life
Next, kubectl submits a request to create a Service mapped to the pod
Controller Manager finds the associated pod instances via the Label selector, generates the Service's Endpoints information, and writes it into etcd via the API Server
Finally, the kube-proxy processes running on all nodes query and watch the Service object and its Endpoints via the API Server, and build a software load balancer that forwards traffic addressed to the Service to the backend pods
All resource objects in k8s can be defined or described in yaml or JSON files
Cluster installation
kubeadm is a tool for quickly deploying a k8s cluster
Create a master node:
kubeadm init
Join a node to the cluster:
kubeadm join <master node ip and port>
One master, two nodes
Steps:
Disable the firewall and SELinux
Disable swap
Add hostname-to-IP mappings
Pass bridged IPv4 traffic to iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
EOF
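To make the settings take effect without a reboot, reload them:

```shell
sysctl --system
```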
Docker dependencies:
yum install -y yum-utils device-mapper-persistent-data lvm2
Configure Docker's yum repository:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker and the Docker CLI:
yum install -y docker-ce docker-ce-cli containerd.io
Configure a Docker registry mirror:
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
Start on boot:
systemctl enable docker
Add the Alibaba Cloud yum repository:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet, and kubectl:
yum list | grep kube
yum install -y kubelet-1.17.3 kubectl-1.17.3 kubeadm-1.17.3
Start on boot:
systemctl enable kubelet
systemctl start kubelet
./master_images.sh downloads the node images:
#!/bin/bash

images=(
  kube-apiserver:v1.17.3
  kube-proxy:v1.17.3
  kube-controller-manager:v1.17.3
  kube-scheduler:v1.17.3
  coredns:1.6.5
  etcd:3.4.3-0
  pause:3.1
)

for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  # docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
Initialize the master node:
kubeadm init \
  --apiserver-advertise-address=192.168.147.8 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.3 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

The apiserver is a master component, so --apiserver-advertise-address is the master's own address.
The init output prints several commands; use them to complete the following steps:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install a pod network plugin:
kubectl apply -f kube-flannel.yml
--- apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: psp.flannel.unprivileged annotations: seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default spec: privileged: false volumes: - configMap - secret - emptyDir - hostPath allowedHostPaths: - pathPrefix: "/etc/cni/net.d" - pathPrefix: "/etc/kube-flannel" - pathPrefix: "/run/flannel" readOnlyRootFilesystem: false # Users and groups runAsUser: rule: RunAsAny supplementalGroups: rule: RunAsAny fsGroup: rule: RunAsAny # Privilege Escalation allowPrivilegeEscalation: false defaultAllowPrivilegeEscalation: false # Capabilities allowedCapabilities: ['NET_ADMIN'] defaultAddCapabilities: [] requiredDropCapabilities: [] # Host namespaces hostPID: false hostIPC: false hostNetwork: true hostPorts: - min: 0 max: 65535 # SELinux seLinux: # SELinux is unused in CaaSP rule: 'RunAsAny' --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: flannel rules: - apiGroups: ['extensions'] resources: ['podsecuritypolicies'] verbs: ['use'] resourceNames: ['psp.flannel.unprivileged'] - apiGroups: - "" resources: - pods verbs: - get - apiGroups: - "" resources: - nodes verbs: - list - watch - apiGroups: - "" resources: - nodes/status verbs: - patch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: flannel roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: flannel subjects: - kind: ServiceAccount name: flannel namespace: kube-system --- apiVersion: v1 kind: ServiceAccount metadata: name: flannel namespace: kube-system --- kind: ConfigMap apiVersion: v1 metadata: name: kube-flannel-cfg namespace: kube-system labels: tier: node app: flannel data: cni-conf.json: | { "name": "cbr0", "cniVersion": "0.3.1", "plugins": [ { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } }, { "type": "portmap", "capabilities": { "portMappings": true } } ] } net-conf.json: | { "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } } --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-amd64 namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - amd64 hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.11.0-amd64 command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.11.0-amd64 command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: 
/run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-arm64 namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - arm64 hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.11.0-arm64 command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.11.0-arm64 command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-arm namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - arm hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.11.0-arm command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.11.0-arm command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-ppc64le namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 
nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - ppc64le hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.11.0-ppc64le command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.11.0-ppc64le command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-s390x namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - s390x hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.11.0-s390x command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.11.0-s390x command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg
View namespaces:
kubectl get ns
Get the pods of all namespaces:
kubectl get pods --all-namespaces
Wait until kube-flannel-ds-amd64-m4wln reports status Running, then continue with the steps below
Get the nodes:
kubectl get nodes
One node is the master, with status Ready
On node2:
kubeadm join 10.0.2.5:6443 --token qy7chp.d1qvqk6slfpsl284 --discovery-token-ca-cert-hash sha256:9e75c53992ae8803fa727ea5d537c387fa42aec4ddc6ed934c146e665cec5de3
node3 joins the same way
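If the join token has expired by the time a node joins (tokens are valid for 24 hours by default), a fresh join command can be printed on the master:

```shell
kubeadm token create --print-join-command
```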
[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   41d   v1.17.3
k8s-node2   Ready    <none>   41d   v1.17.3
k8s-node3   Ready    <none>   41d   v1.17.3
入門操做k8s集羣
部署一個tomcat
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
View all resources:
kubectl get all
View pod details:
kubectl get pods -o wide
Expose tomcat for access:
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort
The Service's port 80 maps to the container's port 8080; the Service proxies port 80 of the pods
View service details:
[root@k8s-node1 k8s]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 41d <none>
tomcat6 NodePort 10.96.57.43 <none> 80:32311/TCP 41d app=tomcat6
tomcat is exposed on port 32311
It can now be accessed
Scale out dynamically to 3 replicas:
[root@k8s-node1 k8s]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tomcat6-5f7ccf4cb9-2bghn 1/1 Running 10 41d 10.244.1.13 k8s-node2 <none> <none>
[root@k8s-node1 k8s]# kubectl scale --replicas=3 deployment tomcat6
deployment.apps/tomcat6 scaled
[root@k8s-node1 k8s]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tomcat6-5f7ccf4cb9-2bghn 1/1 Running 10 41d 10.244.1.13 k8s-node2 <none> <none>
tomcat6-5f7ccf4cb9-4vmv7 1/1 Running 0 4s 10.244.2.3 k8s-node3 <none> <none>
tomcat6-5f7ccf4cb9-7grgq 1/1 Running 0 4s 10.244.2.4 k8s-node3 <none> <none>
The tomcat can be reached on that port of any node
Scaling in works the same way:
[root@k8s-node1 k8s]# kubectl scale --replicas=1 deployment tomcat6
deployment.apps/tomcat6 scaled
[root@k8s-node1 k8s]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tomcat6-5f7ccf4cb9-2bghn 1/1 Running 10 41d 10.244.1.13 k8s-node2 <none> <none>
tomcat6-5f7ccf4cb9-4vmv7 1/1 Terminating 0 66s 10.244.2.3 k8s-node3 <none> <none>
tomcat6-5f7ccf4cb9-7grgq 1/1 Terminating 0 66s 10.244.2.4 k8s-node3 <none> <none>
View the service, deployment, replicas, and other information, then delete the deployment; the service can also be deleted
[root@k8s-node1 k8s]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-2bghn   1/1     Running   10         41d

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        41d
service/tomcat6      NodePort    10.96.57.43   <none>        80:32311/TCP   41d

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   1/1     1            1           41d

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   1         1         1       41d
[root@k8s-node1 k8s]# kubectl delete deployment.apps/tomcat6
https://kubernetes.io/zh/docs/reference/kubectl/overview/
Kubectl is a command-line interface for running commands against Kubernetes clusters. kubectl looks for a file named config in the $HOME/.kube directory. You can point it at other kubeconfig files by setting the KUBECONFIG environment variable or the --kubeconfig flag.
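For example (the path is illustrative):

```shell
# Point kubectl at an alternate kubeconfig
export KUBECONFIG=$HOME/.kube/other-config
kubectl get nodes

# Or per command:
kubectl --kubeconfig=$HOME/.kube/other-config get nodes
```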
Use the following syntax to run kubectl commands from a terminal window:
kubectl [command] [TYPE] [NAME] [flags]
where command, TYPE, NAME, and flags are:
command: the operation to perform on one or more resources, e.g. create, get, describe, delete.
TYPE: the resource type. Resource types are case-insensitive, and you can use the singular, plural, or abbreviated form. For example, the following commands produce the same output:
```shell
kubectl get pod pod1
kubectl get pods pod1
kubectl get po pod1
```
NAME: the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, e.g. kubectl get pods.
When performing operations on multiple resources, you can specify each resource by type and name, or specify one or more files:
To specify resources by type and name:
To group resources of the same type: TYPE1 name1 name2 name<#>. Example: kubectl get pod example-pod1 example-pod2
To specify multiple resource types individually: TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>. Example: kubectl get pod/example-pod1 replicationcontroller/example-rc1
To specify resources with one or more files: -f file1 -f file2 -f file<#>, e.g. kubectl get pod -f ./pod.yaml
flags: optional flags. For example, you can use the -s or --server flag to specify the address and port of the Kubernetes API server.
Warning: flags specified on the command line override default values and any corresponding environment variables.
If you need help, just run kubectl help from a terminal window.
yml templates
The apply command consumes yml files:
kubectl apply -f example.yaml
The earlier tomcat deployment can be rendered as yml:
[root@k8s-node1 k8s]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}
Save it as tomcat6.yaml
Then apply the file:
kubectl apply -f tomcat6.yaml
What is a Pod?
Pods and Controllers
A controller can create and manage multiple Pods, handling replicas and rollout and providing self-healing across the cluster; for example, if a node fails, the controller automatically replaces a Pod by scheduling an identical replacement on a different node.
Some examples of controllers that contain one or more Pods:
Controllers usually create the Pods they are responsible for from a Pod template that you provide.
Services
An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
Kubernetes Pods have a lifecycle. They can be created, and when destroyed they are not restarted. If you use a Deployment to run your application, it can create and destroy Pods dynamically.
Each Pod gets its own IP address, but within a Deployment the set of Pods running at one moment may differ from the set running that application a moment later.
This leads to a problem: if a set of Pods (call them "backends") provides functionality to other Pods (call them "frontends") inside the cluster, how do the frontends find and track which IP addresses to connect to, so that they can use the backend part of the workload?
A Kubernetes Service defines such an abstraction: a logical set of Pods and a policy by which to access them, often called a microservice. The set of Pods a Service can reach is usually determined by a selector (see below for why you might want a Service without a selector).
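One way to watch this discovery at work (assuming the tomcat6 Service created later in this article exists) is to resolve the Service's DNS name from inside the cluster:

```shell
# busybox:1.28 is used because nslookup is known to work in that image
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup tomcat6
```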
Redeploy the tomcat service, deleting the previous resources first
[root@k8s-node1 k8s]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-2bghn   1/1     Running   10         41d

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        41d
service/tomcat6      NodePort    10.96.57.43   <none>        80:32311/TCP   41d

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   1/1     1            1           41d

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   1         1         1       41d
[root@k8s-node1 k8s]# kubectl delete deployment.apps/tomcat6
deployment.apps "tomcat6" deleted
[root@k8s-node1 k8s]# kubectl delete pod/tomcat6-5f7ccf4cb9-2bghn
Error from server (NotFound): pods "tomcat6-5f7ccf4cb9-2bghn" not found
[root@k8s-node1 k8s]# kubectl delete service/tomcat6
service "tomcat6" deleted
Generate the yaml file and strip the unnecessary fields
[root@k8s-node1 k8s]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6-deployment.yaml
[root@k8s-node1 k8s]# vi tomcat6-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
Apply the Deployment; it creates the Pods
[root@k8s-node1 k8s]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 created
[root@k8s-node1 k8s]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-542xn   1/1     Running   0          12s
pod/tomcat6-5f7ccf4cb9-c7599   1/1     Running   0          12s
pod/tomcat6-5f7ccf4cb9-sffzb   1/1     Running   0          12s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   41d

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   3/3     3            3           12s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   3         3         3       12s
Expose the service:
[root@k8s-node1 k8s]# kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort --dry-run -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort
status:
  loadBalancer: {}
This yaml can be merged into the previous one:
[root@k8s-node1 k8s]# vi tomcat6-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort
Delete the previous deployment
[root@k8s-node1 k8s]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-542xn   1/1     Running   0          5m34s
pod/tomcat6-5f7ccf4cb9-c7599   1/1     Running   0          5m34s
pod/tomcat6-5f7ccf4cb9-sffzb   1/1     Running   0          5m34s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   41d

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   3/3     3            3           5m34s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   3         3         3       5m34s
[root@k8s-node1 k8s]# kubectl delete deployment.apps/tomcat6
deployment.apps "tomcat6" deleted
[root@k8s-node1 k8s]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   41d
Now deploy and expose in one step:
[root@k8s-node1 k8s]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 created
service/tomcat6 created
[root@k8s-node1 k8s]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/tomcat6-5f7ccf4cb9-g74gl 1/1 Running 0 24s
pod/tomcat6-5f7ccf4cb9-hz6md 1/1 Running 0 24s
pod/tomcat6-5f7ccf4cb9-pb2bx 1/1 Running 0 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 41d
service/tomcat6 NodePort 10.96.21.159 <none> 80:31871/TCP 24s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tomcat6 3/3 3 3 24s
NAME DESIRED CURRENT READY AGE
replicaset.apps/tomcat6-5f7ccf4cb9 3 3 3 24s
A Service discovers Pods and associates them; access is by domain name
An Ingress Controller provides load balancing across Pods
It supports layer-4 TCP/UDP and layer-7 HTTP load balancing
Steps:
[root@k8s-node1 k8s]# kubectl apply -f ingress-controller.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
daemonset.apps/nginx-ingress-controller created
service/ingress-nginx created
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE       NAME                                READY   STATUS              RESTARTS   AGE
default         tomcat6-5f7ccf4cb9-g74gl            1/1     Running             0          7m13s
default         tomcat6-5f7ccf4cb9-hz6md            1/1     Running             0          7m13s
default         tomcat6-5f7ccf4cb9-pb2bx            1/1     Running             0          7m13s
ingress-nginx   nginx-ingress-controller-v28sn      0/1     ContainerCreating   0          80s
ingress-nginx   nginx-ingress-controller-w8qng      0/1     ContainerCreating   0          80s
kube-system     coredns-7f9c544f75-lvsrk            1/1     Running             14         41d
kube-system     coredns-7f9c544f75-xlk4v            1/1     Running             14         41d
kube-system     etcd-k8s-node1                      1/1     Running             15         41d
kube-system     kube-apiserver-k8s-node1            1/1     Running             15         41d
kube-system     kube-controller-manager-k8s-node1   1/1     Running             65         41d
kube-system     kube-flannel-ds-amd64-ktbfz         1/1     Running             12         41d
kube-system     kube-flannel-ds-amd64-lh9fl         1/1     Running             10         41d
kube-system     kube-flannel-ds-amd64-m99gh         1/1     Running             16         41d
kube-system     kube-proxy-dwmnm                    1/1     Running             12         41d
kube-system     kube-proxy-kxcpw                    1/1     Running             14         41d
kube-system     kube-proxy-mnj6q                    1/1     Running             10         41d
kube-system     kube-scheduler-k8s-node1            1/1     Running             59         41d
apiVersion: v1 kind: Namespace metadata: name: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx --- kind: ConfigMap apiVersion: v1 metadata: name: nginx-configuration namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx --- kind: ConfigMap apiVersion: v1 metadata: name: tcp-services namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx --- kind: ConfigMap apiVersion: v1 metadata: name: udp-services namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx --- apiVersion: v1 kind: ServiceAccount metadata: name: nginx-ingress-serviceaccount namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: name: nginx-ingress-clusterrole labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx rules: - apiGroups: - "" resources: - configmaps - endpoints - nodes - pods - secrets verbs: - list - watch - apiGroups: - "" resources: - nodes verbs: - get - apiGroups: - "" resources: - services verbs: - get - list - watch - apiGroups: - "extensions" resources: - ingresses verbs: - get - list - watch - apiGroups: - "" resources: - events verbs: - create - patch - apiGroups: - "extensions" resources: - ingresses/status verbs: - update --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: Role metadata: name: nginx-ingress-role namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx rules: - apiGroups: - "" resources: - configmaps - pods - secrets - namespaces verbs: - get - apiGroups: - "" resources: - configmaps resourceNames: # Defaults to "<election-id>-<ingress-class>" # Here: "<ingress-controller-leader>-<nginx>" # This has to be adapted if you change either parameter # when launching the nginx-ingress-controller. 
- "ingress-controller-leader-nginx" verbs: - get - update - apiGroups: - "" resources: - configmaps verbs: - create - apiGroups: - "" resources: - endpoints verbs: - get --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: nginx-ingress-role-nisa-binding namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: nginx-ingress-role subjects: - kind: ServiceAccount name: nginx-ingress-serviceaccount namespace: ingress-nginx --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: nginx-ingress-clusterrole-nisa-binding labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: nginx-ingress-clusterrole subjects: - kind: ServiceAccount name: nginx-ingress-serviceaccount namespace: ingress-nginx --- apiVersion: apps/v1 kind: DaemonSet metadata: name: nginx-ingress-controller namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx spec: selector: matchLabels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx template: metadata: labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx annotations: prometheus.io/port: "10254" prometheus.io/scrape: "true" spec: hostNetwork: true serviceAccountName: nginx-ingress-serviceaccount containers: - name: nginx-ingress-controller image: siriuszg/nginx-ingress-controller:0.20.0 args: - /nginx-ingress-controller - --configmap=$(POD_NAMESPACE)/nginx-configuration - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services - --udp-services-configmap=$(POD_NAMESPACE)/udp-services - --publish-service=$(POD_NAMESPACE)/ingress-nginx - --annotations-prefix=nginx.ingress.kubernetes.io securityContext: allowPrivilegeEscalation: true capabilities: drop: - ALL add: - NET_BIND_SERVICE # www-data -> 33 runAsUser: 33 env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace ports: - name: http containerPort: 80 - name: https containerPort: 443 livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 10 readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 10 --- apiVersion: v1 kind: Service metadata: name: ingress-nginx namespace: ingress-nginx spec: #type: NodePort ports: - name: http port: 80 targetPort: 80 protocol: TCP - name: https port: 443 targetPort: 443 protocol: TCP selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx
vi ingress-tomcat6.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: example.aidata.com
    http:
      paths:
      - backend:
          serviceName: tomcat6
          servicePort: 80
In the Windows hosts file:
192.168.56.101 example.aidata.com
Then access it directly in a browser, no port needed
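To test without editing the hosts file, the Host header can also be sent explicitly (node IP as in the hosts entry above):

```shell
curl -H "Host: example.aidata.com" http://192.168.56.101/
```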
[root@k8s-node1 k8s]# kubectl apply -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
# Copyright 2017 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ------------------- Dashboard Secret ------------------- # apiVersion: v1 kind: Secret metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-certs namespace: kube-system type: Opaque --- # ------------------- Dashboard Service Account ------------------- # apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kube-system --- # ------------------- Dashboard Role & Role Binding ------------------- # kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: kubernetes-dashboard-minimal namespace: kube-system rules: # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret. - apiGroups: [""] resources: ["secrets"] verbs: ["create"] # Allow Dashboard to create 'kubernetes-dashboard-settings' config map. - apiGroups: [""] resources: ["configmaps"] verbs: ["create"] # Allow Dashboard to get, update and delete Dashboard exclusive secrets. - apiGroups: [""] resources: ["secrets"] resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"] verbs: ["get", "update", "delete"] # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map. - apiGroups: [""] resources: ["configmaps"] resourceNames: ["kubernetes-dashboard-settings"] verbs: ["get", "update"] # Allow Dashboard to get metrics from heapster. - apiGroups: [""] resources: ["services"] resourceNames: ["heapster"] verbs: ["proxy"] - apiGroups: [""] resources: ["services/proxy"] resourceNames: ["heapster", "http:heapster:", "https:heapster:"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubernetes-dashboard-minimal namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubernetes-dashboard-minimal subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kube-system --- # ------------------- Dashboard Deployment ------------------- # kind: Deployment apiVersion: apps/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kube-system spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard spec: containers: - name: kubernetes-dashboard image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 ports: - containerPort: 8443 protocol: TCP args: - --auto-generate-certificates # Uncomment the following line to manually specify Kubernetes API server Host # If not specified, Dashboard will attempt to auto discover the API server and connect # to it. Uncomment only if the default does not work. 
# - --apiserver-host=http://my-address:port volumeMounts: - name: kubernetes-dashboard-certs mountPath: /certs # Create on-disk volume to store exec logs - mountPath: /tmp name: tmp-volume livenessProbe: httpGet: scheme: HTTPS path: / port: 8443 initialDelaySeconds: 30 timeoutSeconds: 30 volumes: - name: kubernetes-dashboard-certs secret: secretName: kubernetes-dashboard-certs - name: tmp-volume emptyDir: {} serviceAccountName: kubernetes-dashboard # Comment the following tolerations if Dashboard must not be deployed on master tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule --- # ------------------- Dashboard Service ------------------- # kind: Service apiVersion: v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kube-system spec: ports: - port: 443 targetPort: 8443 selector: k8s-app: kubernetes-dashboard
KubeSphere is more powerful than the dashboard
https://kubesphere.com.cn/docs/zh-CN/installation/prerequisites/
Install helm
[root@k8s-node1 k8s]# curl -L https://git.io/get_helm.sh | bash
get_helm.sh
#!/usr/bin/env bash # Copyright The Helm Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # The install script is based off of the MIT-licensed script from glide, # the package manager for Go: https://github.com/Masterminds/glide.sh/blob/master/get PROJECT_NAME="helm" TILLER_NAME="tiller" : ${USE_SUDO:="true"} : ${HELM_INSTALL_DIR:="/usr/local/bin"} # initArch discovers the architecture for this system. initArch() { ARCH=$(uname -m) case $ARCH in armv5*) ARCH="armv5";; armv6*) ARCH="armv6";; armv7*) ARCH="arm";; aarch64) ARCH="arm64";; x86) ARCH="386";; x86_64) ARCH="amd64";; i686) ARCH="386";; i386) ARCH="386";; esac } # initOS discovers the operating system for this system. initOS() { OS=$(echo `uname`|tr '[:upper:]' '[:lower:]') case "$OS" in # Minimalist GNU for Windows mingw*) OS='windows';; esac } # runs the given command as root (detects if we are root already) runAsRoot() { local CMD="$*" if [ $EUID -ne 0 -a $USE_SUDO = "true" ]; then CMD="sudo $CMD" fi $CMD } # verifySupported checks that the os/arch combination is supported for # binary builds. verifySupported() { local supported="darwin-386\ndarwin-amd64\nlinux-386\nlinux-amd64\nlinux-arm\nlinux-arm64\nlinux-ppc64le\nwindows-386\nwindows-amd64" if ! echo "${supported}" | grep -q "${OS}-${ARCH}"; then echo "No prebuilt binary for ${OS}-${ARCH}." echo "To build from source, go to https://github.com/helm/helm" exit 1 fi if ! type "curl" > /dev/null && ! type "wget" > /dev/null; then echo "Either curl or wget is required" exit 1 fi } # checkDesiredVersion checks if the desired version is available. checkDesiredVersion() { if [ "x$DESIRED_VERSION" == "x" ]; then # Get tag from release URL local release_url="https://github.com/helm/helm/releases" if type "curl" > /dev/null; then TAG=$(curl -Ls $release_url | grep 'href="/helm/helm/releases/tag/v2.' | grep -v no-underline | head -n 1 | cut -d '"' -f 2 | awk '{n=split($NF,a,"/");print a[n]}' | awk 'a !~ $0{print}; {a=$0}') elif type "wget" > /dev/null; then TAG=$(wget $release_url -O - 2>&1 | grep 'href="/helm/helm/releases/tag/v2.' | grep -v no-underline | head -n 1 | cut -d '"' -f 2 | awk '{n=split($NF,a,"/");print a[n]}' | awk 'a !~ $0{print}; {a=$0}') fi else TAG=$DESIRED_VERSION fi } # checkHelmInstalledVersion checks which version of helm is installed and # if it needs to be changed. checkHelmInstalledVersion() { if [[ -f "${HELM_INSTALL_DIR}/${PROJECT_NAME}" ]]; then local version=$("${HELM_INSTALL_DIR}/${PROJECT_NAME}" version -c | grep '^Client' | cut -d'"' -f2) if [[ "$version" == "$TAG" ]]; then echo "Helm ${version} is already ${DESIRED_VERSION:-latest}" return 0 else echo "Helm ${TAG} is available. Changing from version ${version}." return 1 fi else return 1 fi } # downloadFile downloads the latest binary package and also the checksum # for that binary. 
downloadFile() { HELM_DIST="helm-$TAG-$OS-$ARCH.tar.gz" DOWNLOAD_URL="https://get.helm.sh/$HELM_DIST" CHECKSUM_URL="$DOWNLOAD_URL.sha256" HELM_TMP_ROOT="$(mktemp -dt helm-installer-XXXXXX)" HELM_TMP_FILE="$HELM_TMP_ROOT/$HELM_DIST" HELM_SUM_FILE="$HELM_TMP_ROOT/$HELM_DIST.sha256" echo "Downloading $DOWNLOAD_URL" if type "curl" > /dev/null; then curl -SsL "$CHECKSUM_URL" -o "$HELM_SUM_FILE" elif type "wget" > /dev/null; then wget -q -O "$HELM_SUM_FILE" "$CHECKSUM_URL" fi if type "curl" > /dev/null; then curl -SsL "$DOWNLOAD_URL" -o "$HELM_TMP_FILE" elif type "wget" > /dev/null; then wget -q -O "$HELM_TMP_FILE" "$DOWNLOAD_URL" fi } # installFile verifies the SHA256 for the file, then unpacks and # installs it. installFile() { HELM_TMP="$HELM_TMP_ROOT/$PROJECT_NAME" local sum=$(openssl sha1 -sha256 ${HELM_TMP_FILE} | awk '{print $2}') local expected_sum=$(cat ${HELM_SUM_FILE}) if [ "$sum" != "$expected_sum" ]; then echo "SHA sum of ${HELM_TMP_FILE} does not match. Aborting." exit 1 fi mkdir -p "$HELM_TMP" tar xf "$HELM_TMP_FILE" -C "$HELM_TMP" HELM_TMP_BIN="$HELM_TMP/$OS-$ARCH/$PROJECT_NAME" TILLER_TMP_BIN="$HELM_TMP/$OS-$ARCH/$TILLER_NAME" echo "Preparing to install $PROJECT_NAME and $TILLER_NAME into ${HELM_INSTALL_DIR}" runAsRoot cp "$HELM_TMP_BIN" "$HELM_INSTALL_DIR" echo "$PROJECT_NAME installed into $HELM_INSTALL_DIR/$PROJECT_NAME" if [ -x "$TILLER_TMP_BIN" ]; then runAsRoot cp "$TILLER_TMP_BIN" "$HELM_INSTALL_DIR" echo "$TILLER_NAME installed into $HELM_INSTALL_DIR/$TILLER_NAME" else echo "info: $TILLER_NAME binary was not found in this release; skipping $TILLER_NAME installation" fi } # fail_trap is executed if an error occurs. fail_trap() { result=$? if [ "$result" != "0" ]; then if [[ -n "$INPUT_ARGUMENTS" ]]; then echo "Failed to install $PROJECT_NAME with the arguments provided: $INPUT_ARGUMENTS" help else echo "Failed to install $PROJECT_NAME" fi echo -e "\tFor support, go to https://github.com/helm/helm." fi cleanup exit $result } # testVersion tests the installed client to make sure it is working. testVersion() { set +e HELM="$(which $PROJECT_NAME)" if [ "$?" = "1" ]; then echo "$PROJECT_NAME not found. Is $HELM_INSTALL_DIR on your "'$PATH?' exit 1 fi set -e echo "Run '$PROJECT_NAME init' to configure $PROJECT_NAME." } # help provides possible cli installation arguments help () { echo "Accepted cli arguments are:" echo -e "\t[--help|-h ] ->> prints this help" echo -e "\t[--version|-v <desired_version>]" echo -e "\te.g. --version v2.4.0 or -v latest" echo -e "\t[--no-sudo] ->> install without sudo" } # cleanup temporary files to avoid https://github.com/helm/helm/issues/2977 cleanup() { if [[ -d "${HELM_TMP_ROOT:-}" ]]; then rm -rf "$HELM_TMP_ROOT" fi } # Execution #Stop execution on any error trap "fail_trap" EXIT set -e # Parsing input arguments (if any) export INPUT_ARGUMENTS="${@}" set -u while [[ $# -gt 0 ]]; do case $1 in '--version'|-v) shift if [[ $# -ne 0 ]]; then export DESIRED_VERSION="${1}" else echo -e "Please provide the desired version. e.g. --version v2.4.0 or -v latest" exit 0 fi ;; '--no-sudo') USE_SUDO="false" ;; '--help'|-h) help exit 0 ;; *) exit 1 ;; esac shift done set +u initArch initOS verifySupported checkDesiredVersion if ! checkHelmInstalledVersion; then downloadFile installFile fi testVersion cleanup
Configure permissions
Create a helm-rbac.yaml file
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
[root@k8s-node1 k8s]# kubectl apply -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Initialize:
helm init --service-account=tiller --tiller-image=sapcc/tiller:v2.16.3 --history-max 300
--tiller-image specifies the image; otherwise the default image host is blocked by the firewall
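Once the init finishes, the installation can be checked (the tiller pod runs in kube-system):

```shell
helm version                                  # should report both Client and Server versions
kubectl get pods -n kube-system | grep tiller
```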
Install OpenEBS to create the LocalPV storage class: https://kubesphere.com.cn/docs/zh-CN/appendix/install-openebs/
[root@k8s-node1 k8s]# kubectl get node -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-node1   Ready    master   41d   v1.17.3   10.0.2.5      <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.8
k8s-node2   Ready    <none>   41d   v1.17.3   10.0.2.4      <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.8
k8s-node3   Ready    <none>   41d   v1.17.3   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.8
[root@k8s-node1 k8s]# kubectl describe node k8s-node1 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@k8s-node1 k8s]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
node/k8s-node1 untainted
[root@k8s-node1 k8s]# kubectl describe node k8s-node1 | grep Taint
Taints:             <none>
[root@k8s-node1 k8s]# kubectl create ns openebs
namespace/openebs created
[root@k8s-node1 k8s]# helm install --namespace openebs --name openebs stable/openebs --version 1.5.0 NAME: openebs LAST DEPLOYED: Mon Jun 8 10:10:50 2020 NAMESPACE: openebs STATUS: DEPLOYED RESOURCES: ==> v1/ClusterRole NAME AGE openebs 4s ==> v1/ClusterRoleBinding NAME AGE openebs 4s ==> v1/ConfigMap NAME AGE openebs-ndm-config 4s ==> v1/DaemonSet NAME AGE openebs-ndm 4s ==> v1/Deployment NAME AGE openebs-admission-server 4s openebs-apiserver 4s openebs-localpv-provisioner 4s openebs-ndm-operator 4s openebs-provisioner 4s openebs-snapshot-operator 4s ==> v1/Pod(related) NAME AGE openebs-admission-server-5cf6864fbf-ttntv 4s openebs-apiserver-bc55cd99b-8sbh7 4s openebs-localpv-provisioner-85ff89dd44-h9qzh 4s openebs-ndm-cdqc5 4s openebs-ndm-cvgpf 4s openebs-ndm-operator-87df44d9-849cd 3s openebs-ndm-sc779 4s openebs-provisioner-7f86c6bb64-94pxj 4s openebs-snapshot-operator-54b9c886bf-bsj6f 3s ==> v1/Service NAME AGE openebs-apiservice 4s ==> v1/ServiceAccount NAME AGE openebs 4s NOTES: The OpenEBS has been installed. Check its status by running: $ kubectl get pods -n openebs For dynamically creating OpenEBS Volumes, you can either create a new StorageClass or use one of the default storage classes provided by OpenEBS. Use `kubectl get sc` to see the list of installed OpenEBS StorageClasses. A sample PVC spec using `openebs-jiva-default` StorageClass is given below:" --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: demo-vol-claim spec: storageClassName: openebs-jiva-default accessModes: - ReadWriteOnce resources: requests: storage: 5G --- For more information, please visit http://docs.openebs.io/. Please note that, OpenEBS uses iSCSI for connecting applications with the OpenEBS Volumes and your nodes should have the iSCSI initiator installed. 
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE       NAME                                           READY   STATUS             RESTARTS   AGE
default         tomcat6-5f7ccf4cb9-g74gl                       1/1     Running            0          117m
default         tomcat6-5f7ccf4cb9-hz6md                       1/1     Running            0          117m
default         tomcat6-5f7ccf4cb9-pb2bx                       1/1     Running            0          117m
ingress-nginx   nginx-ingress-controller-q5dzl                 1/1     Running            0          3m47s
ingress-nginx   nginx-ingress-controller-v28sn                 1/1     Running            0          111m
ingress-nginx   nginx-ingress-controller-w8qng                 1/1     Running            0          111m
kube-system     coredns-7f9c544f75-lvsrk                       1/1     Running            14         41d
kube-system     coredns-7f9c544f75-xlk4v                       1/1     Running            14         41d
kube-system     etcd-k8s-node1                                 1/1     Running            15         41d
kube-system     kube-apiserver-k8s-node1                       1/1     Running            15         41d
kube-system     kube-controller-manager-k8s-node1              1/1     Running            65         41d
kube-system     kube-flannel-ds-amd64-ktbfz                    1/1     Running            12         41d
kube-system     kube-flannel-ds-amd64-lh9fl                    1/1     Running            10         41d
kube-system     kube-flannel-ds-amd64-m99gh                    1/1     Running            16         41d
kube-system     kube-proxy-dwmnm                               1/1     Running            12         41d
kube-system     kube-proxy-kxcpw                               1/1     Running            14         41d
kube-system     kube-proxy-mnj6q                               1/1     Running            10         41d
kube-system     kube-scheduler-k8s-node1                       1/1     Running            59         41d
kube-system     kubernetes-dashboard-7c54d59f66-48g9z          0/1     ImagePullBackOff   0          75m
kube-system     tiller-deploy-5fdc6844fb-kjps5                 1/1     Running            0          8m8s
openebs         openebs-admission-server-5cf6864fbf-ttntv      1/1     Running            0          2m16s
openebs         openebs-apiserver-bc55cd99b-8sbh7              1/1     Running            3          2m16s
openebs         openebs-localpv-provisioner-85ff89dd44-h9qzh   1/1     Running            0          2m16s
openebs         openebs-ndm-cdqc5                              1/1     Running            0          2m16s
openebs         openebs-ndm-cvgpf                              1/1     Running            0          2m16s
openebs         openebs-ndm-operator-87df44d9-849cd            1/1     Running            1          2m15s
openebs         openebs-ndm-sc779                              1/1     Running            0          2m16s
openebs         openebs-provisioner-7f86c6bb64-94pxj           1/1     Running            0          2m16s
openebs         openebs-snapshot-operator-54b9c886bf-bsj6f     2/2     Running            0          2m15s
[root@k8s-node1 k8s]# kubectl get sc
NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  21s
openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  21s
openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  23s
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  21s
[root@k8s-node1 k8s]# kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/openebs-hostpath patched
[root@k8s-node1 k8s]# kubectl taint nodes k8s-node1 node-role.kubernetes.io=master:NoSchedule
node/k8s-node1 tainted
The KubeSphere prerequisites are now complete; install KubeSphere
[root@k8s-node1 k8s]# vi kubesphere-minimal.yaml
[root@k8s-node1 k8s]# kubectl apply -f kubesphere-minimal.yaml
--- apiVersion: v1 kind: Namespace metadata: name: kubesphere-system --- apiVersion: v1 data: ks-config.yaml: | --- persistence: storageClass: "" etcd: monitoring: False endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9 port: 2379 tlsEnable: True common: mysqlVolumeSize: 20Gi minioVolumeSize: 20Gi etcdVolumeSize: 20Gi openldapVolumeSize: 2Gi redisVolumSize: 2Gi metrics_server: enabled: False console: enableMultiLogin: False # enable/disable multi login port: 30880 monitoring: prometheusReplicas: 1 prometheusMemoryRequest: 400Mi prometheusVolumeSize: 20Gi grafana: enabled: False logging: enabled: False elasticsearchMasterReplicas: 1 elasticsearchDataReplicas: 1 logsidecarReplicas: 2 elasticsearchMasterVolumeSize: 4Gi elasticsearchDataVolumeSize: 20Gi logMaxAge: 7 elkPrefix: logstash containersLogMountedPath: "" kibana: enabled: False openpitrix: enabled: False devops: enabled: False jenkinsMemoryLim: 2Gi jenkinsMemoryReq: 1500Mi jenkinsVolumeSize: 8Gi jenkinsJavaOpts_Xms: 512m jenkinsJavaOpts_Xmx: 512m jenkinsJavaOpts_MaxRAM: 2g sonarqube: enabled: False postgresqlVolumeSize: 8Gi servicemesh: enabled: False notification: enabled: False alerting: enabled: False kind: ConfigMap metadata: name: ks-installer namespace: kubesphere-system --- apiVersion: v1 kind: ServiceAccount metadata: name: ks-installer namespace: kubesphere-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: creationTimestamp: null name: ks-installer rules: - apiGroups: - "" resources: - '*' verbs: - '*' - apiGroups: - apps resources: - '*' verbs: - '*' - apiGroups: - extensions resources: - '*' verbs: - '*' - apiGroups: - batch resources: - '*' verbs: - '*' - apiGroups: - rbac.authorization.k8s.io resources: - '*' verbs: - '*' - apiGroups: - apiregistration.k8s.io resources: - '*' verbs: - '*' - apiGroups: - apiextensions.k8s.io resources: - '*' verbs: - '*' - apiGroups: - tenant.kubesphere.io resources: - '*' verbs: - '*' - apiGroups: - certificates.k8s.io resources: - '*' verbs: - '*' - apiGroups: - devops.kubesphere.io resources: - '*' verbs: - '*' - apiGroups: - monitoring.coreos.com resources: - '*' verbs: - '*' - apiGroups: - logging.kubesphere.io resources: - '*' verbs: - '*' - apiGroups: - jaegertracing.io resources: - '*' verbs: - '*' - apiGroups: - storage.k8s.io resources: - '*' verbs: - '*' - apiGroups: - admissionregistration.k8s.io resources: - '*' verbs: - '*' --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: ks-installer subjects: - kind: ServiceAccount name: ks-installer namespace: kubesphere-system roleRef: kind: ClusterRole name: ks-installer apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: Deployment metadata: name: ks-installer namespace: kubesphere-system labels: app: ks-install spec: replicas: 1 selector: matchLabels: app: ks-install template: metadata: labels: app: ks-install spec: serviceAccountName: ks-installer containers: - name: installer image: kubesphere/ks-installer:v2.1.1 imagePullPolicy: "Always"
Monitor the installation process:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
The following information appears at the end, including the login account and password:
Console:  http://192.168.10.100:30880
Account:  admin
Password: P@88w0rd
kubectl get pods --all-namespaces
Once everything is Running, you can access it from a browser
Adding features
$ kubectl edit cm -n kubesphere-system ks-installer
Edit the ConfigMap as follows:
...
metrics_server:
  enabled: True
devops:
  enabled: True
  jenkinsMemoryLim: 2Gi
  jenkinsMemoryReq: 1500Mi
  jenkinsVolumeSize: 8Gi
  jenkinsJavaOpts_Xms: 512m
  jenkinsJavaOpts_Xmx: 512m
  jenkinsJavaOpts_MaxRAM: 2g
sonarqube:
  enabled: True
  postgresqlVolumeSize: 8Gi
servicemesh:
  enabled: False
notification:
  enabled: True
alerting:
  enabled: True
Monitor the installation progress:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
Multi-tenant management quick start
The platform's resources are organized in three levels: cluster (Cluster), workspace (Workspace), and project (Project) / DevOps Project. The hierarchy is shown in the figure: a cluster can contain multiple workspaces, and each workspace can contain multiple projects and DevOps projects. Clusters, workspaces, projects, and DevOps projects each come with several different built-in roles by default.
Choose Platform Roles
For a custom role, click Create
Grant permissions
Click Account Management
Click Create, fill in the information, and confirm
We can now log in as atguigu-hr
Create accounts
Log out, then log in as ws-manager
Create a workspace
Workspaces can be created and members invited; invite ws-admin into the workspace and make it the administrator
Log in as ws-admin
Manage the workspace
Invite project-admin and assign the workspace-regular role
Invite project-regular and assign the workspace-viewer role
Log in as project-admin
Create a resource project
Invite project-regular and assign the operator role
This is the project's developer role
Create a DevOps project
Invite project-regular and assign the maintainer role
Log in as project-regular
Enter the project
In the Configuration Center, choose Secrets and click Create Secret.
Keep the default type and add data: enter a key and a value.
As above, create a WordPress secret whose Data key-value pair is WORDPRESS_DB_PASSWORD and 123456. Both secrets are now created.
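For reference, the same secret could be created from the command line (the secret name is illustrative; the namespace name matches the one used later in this example):

```shell
kubectl create secret generic wordpress-secret \
  --from-literal=WORDPRESS_DB_PASSWORD=123456 \
  -n gulimall
```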
Create storage volumes
Under Storage Volumes, click Create and fill in the basic information as follows.
Likewise, create mysql-pvc.
Applications
Deploy a new application
Scroll down and add a container image
Enter mysql:5.6 and use the default port
Memory must be greater than 1000M
Scroll down
Reference a config file or secret
Click the check mark
Add storage volumes
Click the check mark
Click the check mark
Continue adding components
Click Create
Wait
External access
Access port 30110 on any node
DevOps
Dev: how to develop
Ops: how to operate
High concurrency: how to handle high concurrency
High availability: how to achieve high availability
CI&CD
Continuous integration
Continuous deployment
Prerequisites
project-admin invites the ordinary project user project-regular to join the DevOps project and grants the maintainer role; if this has not been done yet, see Multi-tenant management quick start - Inviting members.
Pipeline flow:
- Stage 1. Checkout SCM: pull the code from the GitHub repository
- Stage 2. Unit test: run unit tests; the following tasks continue only if the tests pass
- Stage 3. SonarQube analysis: SonarQube code quality analysis
- Stage 4. Build & push snapshot image: build an image from the branches selected in the behavioral strategy and push it to Harbor with the tag SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER (where $BUILD_NUMBER is the run number in the pipeline's activity list)
- Stage 5. Push latest image: tag the master branch as latest and push it to DockerHub
- Stage 6. Deploy to dev: deploy the master branch to the Dev environment; this stage requires approval
- Stage 7. Push with tag: generate a tag, release it to GitHub, and push it to DockerHub
- Stage 8. Deploy to production: deploy the released tag to the Production environment
Log in as project-regular and create credentials
Create a DockerHub credential
1. Click Create to create a credential for logging in to DockerHub;
2. When done, click OK.
Create a GitHub credential
As above, create a credential for GitHub. The credential ID can be named github-id; choose the type Account Credentials, enter your personal GitHub username and password, add a description, and click OK when done.
Note: if the credential information, such as the account or password, contains special characters like @ or $, they may fail to be recognized at runtime and cause errors. In such cases, URL-encode the password when creating the credential (this can be done with a third-party site such as http://tool.chinaz.com/tools/urlencode.aspx), then paste the encoded output into the corresponding credential field.
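Instead of a third-party site, the encoding can also be done locally, for example:

```shell
# URL-encode a password containing special characters (sample value)
python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'p@ss$word'
```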
Create a kubeconfig credential
As above, under Credentials click Create and create a credential of type kubeconfig; the credential ID can be named demo-kubeconfig. Click OK when done.
Note: a kubeconfig credential is used to access a running Kubernetes cluster and will be used in the pipeline's deployment step. The Content field is automatically filled with the kubeconfig of the current KubeSphere installation; if you deploy to the current KubeSphere, no change is needed, but to deploy to another Kubernetes cluster you must paste that cluster's kubeconfig file content into Content.
Create a SonarQube token
Access SonarQube
On the master node:
[root@node01 ~]# kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16h gulimall mysql ClusterIP None <none> 3306/TCP 11h gulimall wordpress NodePort 10.96.31.243 <none> 80:30110/TCP 11h ingress-nginx ingress-nginx ClusterIP 10.96.96.80 <none> 80/TCP,443/TCP 16h kube-system kube-controller-manager-headless ClusterIP None <none> 10252/TCP 15h kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 16h kube-system kube-scheduler-headless ClusterIP None <none> 10251/TCP 15h kube-system kubelet ClusterIP None <none> 10250/TCP 15h kube-system tiller-deploy ClusterIP 10.96.244.191 <none> 44134/TCP 15h kubesphere-alerting-system alerting-client-server ClusterIP 10.96.66.239 <none> 9200/TCP 15h kubesphere-alerting-system alerting-manager-server ClusterIP 10.96.162.207 <none> 9201/TCP,9200/TCP 15h kubesphere-alerting-system notification ClusterIP 10.96.162.106 <none> 9201/TCP,9200/TCP 15h kubesphere-controls-system default-http-backend ClusterIP 10.96.168.235 <none> 80/TCP 15h kubesphere-devops-system ks-jenkins NodePort 10.96.205.17 <none> 80:30180/TCP 15h kubesphere-devops-system ks-jenkins-agent ClusterIP 10.96.101.100 <none> 50000/TCP 15h kubesphere-devops-system ks-sonarqube-postgresql ClusterIP 10.96.150.29 <none> 5432/TCP 15h kubesphere-devops-system ks-sonarqube-sonarqube NodePort 10.96.3.13 <none> 9000:31426/TCP 15h kubesphere-devops-system s2ioperator ClusterIP 10.96.191.165 <none> 443/TCP 15h kubesphere-devops-system s2ioperator-metrics-service ClusterIP 10.96.217.230 <none> 8080/TCP 15h kubesphere-devops-system uc-jenkins-update-center ClusterIP 10.96.96.70 <none> 80/TCP 15h kubesphere-devops-system webhook-server-service ClusterIP 10.96.91.21 <none> 443/TCP 15h kubesphere-monitoring-system kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 15h kubesphere-monitoring-system node-exporter ClusterIP None <none> 9100/TCP 15h kubesphere-monitoring-system prometheus-k8s ClusterIP None <none> 9090/TCP 15h kubesphere-monitoring-system prometheus-k8s-system ClusterIP None <none> 9090/TCP 15h kubesphere-monitoring-system prometheus-operated ClusterIP None <none> 9090/TCP 15h kubesphere-monitoring-system prometheus-operator ClusterIP None <none> 8080/TCP 15h kubesphere-system etcd ClusterIP 10.96.139.180 <none> 2379/TCP 15h kubesphere-system ks-account ClusterIP 10.96.151.87 <none> 80/TCP 15h kubesphere-system ks-apigateway ClusterIP 10.96.164.101 <none> 80/TCP 15h kubesphere-system ks-apiserver ClusterIP 10.96.76.72 <none> 80/TCP 15h kubesphere-system ks-console NodePort 10.96.213.199 <none> 80:30880/TCP 15h kubesphere-system minio ClusterIP 10.96.42.56 <none> 9000/TCP 15h kubesphere-system mysql ClusterIP 10.96.111.25 <none> 3306/TCP 15h kubesphere-system openldap ClusterIP None <none> 389/TCP 15h kubesphere-system redis ClusterIP 10.96.20.135 <none> 6379/TCP 15h openebs admission-server-svc ClusterIP 10.96.208.248 <none> 443/TCP 15h openebs openebs-apiservice ClusterIP 10.96.233.170 <none> 5656/TCP 15h
Visit port 31426 (the NodePort of ks-sonarqube-sonarqube above).
Log in to SonarQube with the default account admin/admin.
Enter a name, then click Generate.
Copy the token and save it.
Click Continue.
Select Java as the Language, select Maven as the build technology, and copy the token. Click Finish this tutorial to finish.
Provide a token gulimall-token: 606a3b04c649631669516c21f6235b81b039d519 The token is used to identify you when an analysis is performed. If it has been compromised, you can revoke it at any point of time in your user account.
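Before wiring the token into a credential, you can confirm it is valid against SonarQube's web API; a quick sketch (node IP and NodePort follow the service listing above, the token is the one generated here):

# The token goes in the username position, with an empty password.
curl -u 606a3b04c649631669516c21f6235b81b039d519: http://192.168.10.100:31426/api/authentication/validate
# expected response: {"valid":true}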
Create a new credential with this token.
Step 1: Fork the project
Log in to GitHub and fork the GitHub repository used in this example, devops-java-sample, to your personal GitHub account.
Step 2: Modify the Jenkinsfile
1. After forking to your personal GitHub account, open Jenkinsfile-online in the repository root.
2. Click the edit icon in the GitHub UI and modify the values of the following environment variables (environment).
Item | Value | Meaning |
---|---|---|
DOCKER_CREDENTIAL_ID | dockerhub-id | The DockerHub credential ID created in the credentials step, used to log in to your DockerHub |
GITHUB_CREDENTIAL_ID | github-id | The GitHub credential ID created in the credentials step, used to push tags to the GitHub repository |
KUBECONFIG_CREDENTIAL_ID | demo-kubeconfig | The kubeconfig credential ID, used to access the running Kubernetes cluster |
REGISTRY | docker.io | Defaults to the docker.io domain, used for pushing images |
DOCKERHUB_NAMESPACE | your-dockerhub-account | Replace with your DockerHub account name (it may also be an Organization under the account) |
GITHUB_ACCOUNT | your-github-account | Replace with your GitHub account name; e.g. for https://github.com/kubesphere/ fill in kubesphere (it may also be an Organization under the account) |
APP_NAME | devops-java-sample | Application name |
SONAR_CREDENTIAL_ID | sonar-token | The SonarQube token credential ID created in the credentials step, used for code quality analysis |
Note: in the master branch Jenkinsfile, the -o flag on the mvn commands enables offline mode. To cope with network interference in some environments and to avoid long dependency downloads, this example has the dependencies downloaded in advance and enables offline mode by default.
Remove the -o flag from the file and modify the environment block:
environment {
    DOCKER_CREDENTIAL_ID = 'dockerhub-id'
    GITHUB_CREDENTIAL_ID = 'github-id'
    KUBECONFIG_CREDENTIAL_ID = 'demo-kubeconfig'
    REGISTRY = 'docker.io'
    DOCKERHUB_NAMESPACE = 'zhao750456695'
    GITHUB_ACCOUNT = 'zhao750456695'
    APP_NAME = 'devops-java-sample'
    SONAR_CREDENTIAL_ID = 'sonar-qube'
}
3. After modifying the environment variables above, click Commit changes to commit the update to the current master branch.
4. To test caching, you must also switch to the dependency branch and make the same changes to its Jenkinsfile-online; otherwise that branch's pipeline will fail to build.
The CI/CD pipeline, driven by the sample project's yaml templates, deploys the sample into two projects (Namespaces): kubesphere-sample-dev and kubesphere-sample-prod. Both projects must be created in the console beforehand; create them with the following steps.
Step 1: Create the first project
Tip: the project administrator account project-admin was created in the Multi-tenant Management Quick Start.
1. Log in to KubeSphere as the project administrator project-admin. In the workspace created earlier (demo-workspace), click Projects → Create, create a resource project to serve as this example's development environment, fill in the project's basic information, and click Next when done. Name the project kubesphere-sample-dev; if you need a different project name, also change the namespace in the yaml template files.
2. This example sets no resource requests or limits, so the defaults under Advanced Settings need no change; click Create and the project is created.
Step 2: Invite members
After the first project is created, the project administrator project-admin must invite the regular project user project-regular into the kubesphere-sample-dev project. Go to Project Settings → Project Members, click Invite Member, select project-regular, and grant the operator role. If you have questions, see Multi-tenant Management Quick Start - Inviting Members.
Step 3: Create the second project
Likewise, follow the two steps above to create a project named kubesphere-sample-prod as the production environment, invite the regular user project-regular into kubesphere-sample-prod, and grant the operator role.
Note: when the CI/CD pipeline later runs successfully, you will see the Deployments and Services created by the pipeline in the kubesphere-sample-dev and kubesphere-sample-prod projects.
Log in as project-regular.
Step 1: Fill in the basic information
1. Enter the DevOps project created earlier, select Pipelines in the left menu, then click Create.
2. In the pop-up window, enter the pipeline's basic information.
Step 2: Add a repository
1. Click Code Repository; adding a GitHub repository is used as the example.
2. Click Get Token in the pop-up.
3. On GitHub's access token page, fill in Token description with a short description of the token, such as DevOps demo, leave Select scopes unchanged, and click Generate token. GitHub generates a token of letters and digits used to access the GitHub repos under the current account.
4. Copy the generated token, enter it in the KubeSphere Token field, then click Save.
5. After validation passes, the right side lists all code repositories of the user associated with this token. Select a repository containing a Jenkinsfile, e.g. the prepared sample repository devops-java-sample, click Select this repository, then click Next.
Step 3: Advanced settings
After the code repository settings are complete, proceed to the advanced settings page. Advanced settings let you customize the pipeline's build records, behavior strategy, periodic scanning, and so on; the options used here are briefly explained below.
1. Under branch settings, check Discard old branches; leave Days to keep branches and Maximum number of branches both at the default -1.
Note:
Days to keep branches and Maximum number of branches apply to branches at the same time: a branch is discarded as soon as it fails either condition. For example, with the values set to 2 and 3, a branch is discarded once it is older than 2 days or once more than 3 branches exist. The default of -1 for both means only branches that have already been deleted are discarded.
Discarding old branches determines when a project's branch records are dropped. Branch records include console output, archived artifacts, and other branch-specific metadata; keeping fewer branches saves the disk space Jenkins uses. Two options determine when old branches are discarded:
- Days to keep branches: a branch is discarded once it reaches a certain age.
- Maximum number of branches: once a certain number of branches exist, the oldest branch is discarded.
2. Under behavior strategy, KubeSphere adds three strategies by default. Since this example does not use the Discover PRs from forked repositories strategy, you can delete that strategy with the delete button on its right.
Note:
Three types of discovery strategy are supported. Note that when a Jenkins pipeline is triggered, a developer-submitted PR (Pull Request) is also treated as a separate branch.
Discover branches:
- Exclude branches that are also filed as PRs: CI will not scan source branches that are to be merged (e.g. origin's master branch)
- Only branches that are also filed as PRs: scan PR branches only
- All branches: all branches in the pulled (origin) repository
Discover PRs from the origin repository:
- The source code version of the PR merged with the target branch: one discovery, creating and running a pipeline on the source code of the PR merged with the target branch
- The PR's own source code version: one discovery, creating and running a pipeline on the PR's own source code
- Two pipelines when a PR is discovered: two discoveries, creating two pipelines; the first runs on the PR's own source code, the second on the PR merged with the target branch
3. Change the default Script Path from Jenkinsfile to Jenkinsfile-online.
Note: the path is the Jenkinsfile's location within the code repository; here it sits in the sample repository's root directory. If the file moves, update the script path accordingly.
4. Under Scan Repo Trigger, check Scan periodically if not otherwise run; set the interval to suit your team's habits, 5 minutes in this example.
Note: periodic scanning makes the pipeline scan the remote repository on a fixed schedule and, per the behavior strategy, check whether the repository has code updates or new PRs.
Webhook push:
A webhook is an efficient way for the pipeline to notice remote repository changes and automatically trigger a new run. GitHub and Git (e.g. GitLab) should trigger Jenkins scans primarily via webhooks, with the periodic scan configured in the previous step as a fallback. In this example the pipeline can be run manually; to scan remote branches and trigger runs automatically, see Set Up Automatic Scan Triggers - GitHub SCM.
After completing the advanced settings, click Create.
Step 4: Run the pipeline
After the pipeline is created, click the browser's refresh button and you will see two run records triggered automatically from remote branches: builds for the master and dependency branches.
1. Click Run on the right. The branches in the code repository are scanned automatically according to the behavior strategy from the previous step. In the pop-up, select the master branch to build; the system loads Jenkinsfile-online from the selected branch (the default is the Jenkinsfile in the root directory).
2. Since TAG_NAME: defaultValue has no default value in the repository's Jenkinsfile-online, enter a tag number for TAG_NAME here, e.g. v0.0.1.
3. Click OK; a new pipeline activity is generated and starts running.
Note: the tag is used to generate a tagged release on GitHub and a tagged image on DockerHub. Caution: when running the pipeline manually to publish a release, TAG_NAME must not duplicate a tag name that already exists in the code repository, or the pipeline run will fail.
At this point, the pipeline is created and running.
Note: click Branches to switch to the branch list and see which branches the pipeline runs on; the branches shown depend on the discovery strategy configured in the previous step.
Step 5: Review the pipeline
For convenience of demonstration, the current account reviews the pipeline here. When the pipeline reaches an input step, its status pauses and you must click Proceed manually for it to continue. Note that Jenkinsfile-online defines three stages to deploy to the Dev environment, push the tag, and deploy to the Production environment; the pipeline therefore requires 3 reviews, one each for the deploy to dev, push with tag, and deploy to production stages. If you do not review, or click Abort, the pipeline will not continue.
Note: in real development and production scenarios, an administrator or operator with higher privileges may be needed to review the pipeline and images and decide whether to allow pushing to the code or image repository and deploying to the development or production environment. The input step in a Jenkinsfile supports designating users to review the pipeline. For example, to designate the user project-admin as reviewer, add a field to the input function in the Jenkinsfile, separating multiple users with commas, as shown below:
input(id: 'release-image-with-tag', message: 'release image with tag?', submitter: 'project-admin,project-admin1')
Cluster forms
Master-slave
Master-slave replication: a synchronization approach
Master-slave scheduling: a control approach
Sharded
Data is stored in shards, with backups between shards
Leader-elected
Electing a leader for disaster recovery
Electing a leader for scheduling
Cluster goals
High availability
Breaking through data volume limits
Data backup and disaster recovery
Load sharing
1.1 Introduction
MMM (Master-Master replication Manager for MySQL) is an open-source project written as Perl scripts, originally hosted on Google Code. MMM is an architecture built as an extension of MySQL Replication, mainly used to monitor MySQL master-master replication and perform failover.
Its principle is to map the IPs of the real database nodes (RIPs) to a set of virtual IPs (VIPs).
A monitor provides multiple virtual IPs (VIPs): one writable VIP and several readable VIPs. Under the monitor's management, these IPs are bound to available MySQL instances; when one MySQL instance goes down, the monitor migrates its VIPs to another instance.
For monitoring to work, authorized users must be added in MySQL so the monitor machine can perform maintenance. These include an mmm_monitor user and an mmm_agent user; to use MMM's backup tools, an mmm_tools user is also required.
MMM is third-party software that supports dual-master failover and day-to-day dual-master management. Written in Perl, MMM manages and monitors dual-master replication; although the architecture is dual-master, the business only allows one node to take writes at any given time.
MMM has two roles, writer and reader, corresponding to the read-write node and the read-only nodes respectively.
With MMM managing dual masters, when the writer node goes down (say master1), the program automatically removes the read-write VIP from that node, switches to master2, sets read_only = 0 on master2, and points all slave nodes at master2.
Besides managing the dual masters, MMM also manages the slave nodes: on downtime, replication lag, or replication errors, MMM removes that node's VIP until the node recovers.
Failover works by IP floating: the master's IP is pointed at a standby node, which becomes the new master. That node may not have received all the data, however, so there are consistency concerns.
1.2 Components
MMM consists of two kinds of programs:
- monitor: monitors the state of the databases in the cluster and issues switch commands when anomalies occur; usually deployed separately from the databases
- agent: an agent process running on every MySQL server; it executes the monitor's commands, performs the probing work, and applies concrete service settings, e.g. setting VIPs and pointing at a new replication source
The architecture is as follows:
1.3 Switching flow
Taking the architecture above as an example, the failover flow when Master1 goes down is roughly:
- set read_only=1 to fence the failed writer's role
- wait for replication to finish catching up using select master_pos_wait()
- set read_only=0 on the new writer
Looking at the whole flow: if the master node fails, MMM switches over automatically without human intervention. But we can also see a problem: after a database dies, MMM only performs the switch; it does not actively backfill the lost data, so MMM carries a risk of data inconsistency.
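For illustration, a rough sketch of the SQL an MMM-style switchover drives through its agents (host names and binlog coordinates are made up; MMM performs these steps itself rather than by hand):

# 1. Fence the failed/old writer so no stray writes land on it:
mysql -h master1 -e "SET GLOBAL read_only = 1;"
# 2. On the surviving master, wait until all received binlog events are applied:
mysql -h master2 -e "SELECT MASTER_POS_WAIT('mysql-bin.000001', 439);"
# 3. Open the new writer for writes; the monitor then moves the writer VIP to it:
mysql -h master2 -e "SET GLOBAL read_only = 0;"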
Introduction
MHA (Master HA) is an open-source MySQL high-availability program that adds automated master failover to MySQL master-slave replication architectures. When MHA detects a master failure, it promotes the slave holding the most recent data to be the new master; during this process, MHA avoids consistency problems by pulling extra information from the other slaves. MHA also offers online master switching, i.e. switching the master/slave roles on demand.
MHA is a mature MySQL high-availability solution developed by yoshinorim (formerly at DeNA, now at Facebook). MHA can complete failover within 30 seconds and preserves data consistency during failover as far as possible. Taobao has also been developing a similar product, TMHA, which currently supports one master and one slave.
MHA services
Service roles
An MHA deployment has two roles, MHA Manager (management node) and MHA Node (data node):
MHA Manager: usually deployed alone on an independent machine, managing multiple master/slave clusters (groups); each master/slave cluster is called an application. It orchestrates the whole cluster.
MHA Node: runs on every MySQL server (master/slave/manager); it speeds up failover through scripts that can parse and clean logs.
It acts mainly as an agent that receives instructions from the management node; an agent must run on every MySQL node. Put simply, node collects the binlogs generated on the slave servers and compares them against the slave that is about to be promoted; if that slave is missing operations, they are sent to it and applied locally before it is promoted to master.
As the figure shows, each replication group and the Manager must be connected by passwordless SSH; only then, when the Master fails, can the Manager connect in and perform the master-slave switch.
Tools provided
MHA ships with a number of utility programs; the common ones are listed below.
Manager node:
- masterha_check_ssh: checks the SSH environment MHA depends on;
- masterha_check_repl: checks the MySQL replication environment;
- masterha_manager: the main MHA service program;
- masterha_check_status: probes MHA's running status;
- masterha_master_monitor: checks the availability of the MySQL master node;
- masterha_master_switch: master node switching tool;
- masterha_conf_host: adds or removes configured nodes;
- masterha_stop: stops the MHA service.
Node tools (usually triggered by MHA Manager scripts; no manual operation is needed):
- save_binary_logs: saves and copies the master's binary logs;
- apply_diff_relay_logs: identifies differing relay log events and applies the differences to other slaves;
- purge_relay_logs: purges relay logs (without blocking the SQL thread).
Custom extensions:
- secondary_check_script: checks the master's availability over multiple network routes;
- master_ip_failover_script: updates the master IP used by the application;
- report_script: sends reports;
- init_conf_load_script: loads initial configuration parameters;
- master_ip_online_change_script: updates the master node's IP address.
How it works
MHA's working principle can be summarized as:
(1) save binary log events (binlog events) from the crashed master;
(2) identify the slave with the most recent updates;
(3) apply the differing relay logs to the other slaves;
(4) apply the binlog events saved from the master;
(5) promote one slave to be the new master;
(6) point the other slaves at the new master and resume replication.
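As a reference for how these pieces are driven in practice, a minimal sketch of the Manager-side commands (the config path /etc/mha/app1.cnf and log path are made-up examples):

# Check passwordless SSH among all hosts declared in the application config:
masterha_check_ssh --conf=/etc/mha/app1.cnf
# Check that the replication topology is healthy:
masterha_check_repl --conf=/etc/mha/app1.cnf
# Start the manager; it watches the master and fails over on a crash:
nohup masterha_manager --conf=/etc/mha/app1.cnf > /var/log/mha/app1.log 2>&1 &
# Query the manager's current state:
masterha_check_status --conf=/etc/mha/app1.cnf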
InnoDB Cluster consists mainly of MySQL Shell, MySQL Router, and a cluster of MySQL servers; the three work together to provide a complete high-availability solution for MySQL. The figure below shows the overall InnoDB Cluster architecture.
InnoDB Cluster architecture
InnoDB Cluster is built on Group Replication: every MySQL server instance in the cluster is a group replication member, which provides the mechanism for replicating data within the InnoDB Cluster and comes with built-in failover. MySQL Shell acts as the console of the InnoDB Cluster; with its AdminAPI, installing, configuring, managing, and maintaining multiple group replication instances becomes much easier, and a few interactive AdminAPI commands complete the group replication configuration automatically. MySQL Router generates its configuration automatically from the cluster's deployment information and transparently connects client applications to the MySQL server instances. If a server instance fails unexpectedly, the cluster reconfigures itself automatically. In the default single-primary mode, InnoDB Cluster has a single read-write primary instance, and multiple secondary instances are replicas of the primary. If the primary fails, a secondary is automatically promoted to primary; MySQL Router detects this and forwards client applications to the new primary automatically.
Pull the MySQL image
[root@localhost ~]# docker pull mysql:5.7
Create and start the master instance
docker run -p 3307:3306 --name mysql-master \
  -v /bigdata/mysql/master/log:/var/log/mysql \
  -v /bigdata/mysql/master/data:/var/lib/mysql \
  -v /bigdata/mysql/master/conf:/etc/mysql \
  -e MYSQL_ROOT_PASSWORD=root \
  -d mysql:5.7
-v mounts host directories into the container
Create and start the slave instance
docker run -p 3317:3306 --name mysql-slaver-01 \
  -v /bigdata/mysql/slaver/log:/var/log/mysql \
  -v /bigdata/mysql/slaver/data:/var/lib/mysql \
  -v /bigdata/mysql/slaver/conf:/etc/mysql \
  -e MYSQL_ROOT_PASSWORD=root \
  -d mysql:5.7
Modify the master's basic configuration
Create /bigdata/mysql/master/conf/my.cnf:
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
init_connect is typically used to run something when a connection comes in, e.g. setting autocommit to 0, or recording the connecting IP and user into a table as a login log.
init_connect can be adjusted dynamically online, which opens up some other uses.
Testing shows that init_connect runs its content after a user logs in to the database but before their first query executes.
If init_connect contains a syntax error and fails, the user cannot execute queries and is disconnected from MySQL.
init_connect has no effect on users with the SUPER privilege.
collation roughly means character ordering. Characters by themselves have no inherent order, so >, =, < comparisons on characters need ordering rules; that is what a collation provides. You can set a collation on a table, or on an individual column. A character set can have several collations: names ending in _ci are case-insensitive, _cs are case-sensitive, and _bin compare by encoded byte value.
skip-character-set-client-handshake ignores the character-set parameters the client sends at startup and uses the server-side character set settings instead.
skip_name_resolve disables DNS resolution.
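To see the case-sensitivity difference concretely, a small sketch you can run against either container (the _ci collation configured above ignores case, while the BINARY operator compares by byte value):

# Default (case-insensitive) comparison vs. binary comparison:
mysql -uroot -proot -e "SELECT 'a' = 'A' AS ci_equal, BINARY 'a' = 'A' AS bin_equal;"
# ci_equal -> 1, bin_equal -> 0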
Likewise, copy the configuration above into /bigdata/mysql/slaver/conf/my.cnf.
Add the master's replication configuration:
server_id=1
log-bin=mysql-bin
read-only=0
binlog-do-db=gulimall_ums
binlog-do-db=gulimall_pms
binlog-do-db=gulimall_oms
binlog-do-db=gulimall_sms
binlog-do-db=gulimall_wms
binlog-do-db=gulimall_admin
replicate-ignore-db=mysql
replicate-ignore-db=sys
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema
slaver:
server_id=2
log-bin=mysql-bin
read-only=1
binlog-do-db=gulimall_ums
binlog-do-db=gulimall_pms
binlog-do-db=gulimall_oms
binlog-do-db=gulimall_sms
binlog-do-db=gulimall_wms
binlog-do-db=gulimall_admin
replicate-ignore-db=mysql
replicate-ignore-db=sys
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema
Restart both MySQL containers
[root@localhost conf]# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED       STATUS       PORTS                               NAMES
f8028c6dbf7d   mysql:5.7   "docker-entrypoint.s…"   3 hours ago   Up 3 hours   33060/tcp, 0.0.0.0:3317->3306/tcp   mysql-slaver-01
00c17fb98652   mysql:5.7   "docker-entrypoint.s…"   3 hours ago   Up 3 hours   33060/tcp, 0.0.0.0:3307->3306/tcp   mysql-master
[root@localhost conf]# docker restart mysql-master mysql-slaver-01
Note: if you cannot connect to the dockerized MySQL over the bridge NIC, check the bridge network settings.
Grant the master a user for replication
Connect to the master
Add the user used for synchronization
Grant a user access to the master node for binlog replication:
mysql> GRANT REPLICATION SLAVE ON *.* TO 'backup'@'%' identified by 'root';
Query OK, 0 rows affected (0.00 sec)
Configure the slave to replicate from the master
Connect to the slave
Tell MySQL which master node to replicate from:
mysql> change master to master_host='192.168.10.100',master_user='backup',master_password='root',master_log_file='mysql-bin.000001',master_log_pos=0,master_port=3307;
Query OK, 0 rows affected (0.03 sec)
mysql> start slave;
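Before loading any data, it is worth confirming the replication threads are healthy; a quick check run against the slave (host port 3317 as mapped above):

# Slave_IO_Running and Slave_SQL_Running should both report "Yes",
# and Seconds_Behind_Master should be at or near 0.
mysql -h127.0.0.1 -P3317 -uroot -proot -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"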
Create the gulimall_oms database on the master and run the SQL file against it:
/*
Navicat MySQL Data Transfer

Source Server         : 192.168.56.10_3306
Source Server Version : 50727
Source Host           : 192.168.56.10:3306
Source Database       : gulimall_oms

Target Server Type    : MYSQL
Target Server Version : 50727
File Encoding         : 65001

Date: 2020-03-11 17:36:38
*/

SET FOREIGN_KEY_CHECKS=0;

-- ----------------------------
-- Table structure for mq_message
-- ----------------------------
DROP TABLE IF EXISTS `mq_message`;
CREATE TABLE `mq_message` (
  `message_id` char(32) NOT NULL,
  `content` text,
  `to_exchane` varchar(255) DEFAULT NULL,
  `routing_key` varchar(255) DEFAULT NULL,
  `class_type` varchar(255) DEFAULT NULL,
  `message_status` int(1) DEFAULT '0' COMMENT '0-新建 1-已發送 2-錯誤抵達 3-已抵達',
  `create_time` datetime DEFAULT NULL,
  `update_time` datetime DEFAULT NULL,
  PRIMARY KEY (`message_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- ----------------------------
-- Records of mq_message
-- ----------------------------

-- ----------------------------
-- Table structure for oms_order
-- ----------------------------
DROP TABLE IF EXISTS `oms_order`;
CREATE TABLE `oms_order` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `member_id` bigint(20) DEFAULT NULL COMMENT 'member_id',
  `order_sn` char(64) DEFAULT NULL COMMENT '訂單號',
  `coupon_id` bigint(20) DEFAULT NULL COMMENT '使用的優惠券',
  `create_time` datetime DEFAULT NULL COMMENT 'create_time',
  `member_username` varchar(200) DEFAULT NULL COMMENT '用戶名',
  `total_amount` decimal(18,4) DEFAULT NULL COMMENT '訂單總額',
  `pay_amount` decimal(18,4) DEFAULT NULL COMMENT '應付總額',
  `freight_amount` decimal(18,4) DEFAULT NULL COMMENT '運費金額',
  `promotion_amount` decimal(18,4) DEFAULT NULL COMMENT '促銷優化金額(促銷價、滿減、階梯價)',
  `integration_amount` decimal(18,4) DEFAULT NULL COMMENT '積分抵扣金額',
  `coupon_amount` decimal(18,4) DEFAULT NULL COMMENT '優惠券抵扣金額',
  `discount_amount` decimal(18,4) DEFAULT NULL COMMENT '後臺調整訂單使用的折扣金額',
  `pay_type` tinyint(4) DEFAULT NULL COMMENT '支付方式【1->支付寶;2->微信;3->銀聯; 4->貨到付款;】',
  `source_type` tinyint(4) DEFAULT NULL COMMENT '訂單來源[0->PC訂單;1->app訂單]',
  `status` tinyint(4) DEFAULT NULL COMMENT '訂單狀態【0->待付款;1->待發貨;2->已發貨;3->已完成;4->已關閉;5->無效訂單】',
  `delivery_company` varchar(64) DEFAULT NULL COMMENT '物流公司(配送方式)',
  `delivery_sn` varchar(64) DEFAULT NULL COMMENT '物流單號',
  `auto_confirm_day` int(11) DEFAULT NULL COMMENT '自動確認時間(天)',
  `integration` int(11) DEFAULT NULL COMMENT '能夠得到的積分',
  `growth` int(11) DEFAULT NULL COMMENT '能夠得到的成長值',
  `bill_type` tinyint(4) DEFAULT NULL COMMENT '發票類型[0->不開發票;1->電子發票;2->紙質發票]',
  `bill_header` varchar(255) DEFAULT NULL COMMENT '發票擡頭',
  `bill_content` varchar(255) DEFAULT NULL COMMENT '發票內容',
  `bill_receiver_phone` varchar(32) DEFAULT NULL COMMENT '收票人電話',
  `bill_receiver_email` varchar(64) DEFAULT NULL COMMENT '收票人郵箱',
  `receiver_name` varchar(100) DEFAULT NULL COMMENT '收貨人姓名',
  `receiver_phone` varchar(32) DEFAULT NULL COMMENT '收貨人電話',
  `receiver_post_code` varchar(32) DEFAULT NULL COMMENT '收貨人郵編',
  `receiver_province` varchar(32) DEFAULT NULL COMMENT '省份/直轄市',
  `receiver_city` varchar(32) DEFAULT NULL COMMENT '城市',
  `receiver_region` varchar(32) DEFAULT NULL COMMENT '區',
  `receiver_detail_address` varchar(200) DEFAULT NULL COMMENT '詳細地址',
  `note` varchar(500) DEFAULT NULL COMMENT '訂單備註',
  `confirm_status` tinyint(4) DEFAULT NULL COMMENT '確認收貨狀態[0->未確認;1->已確認]',
  `delete_status` tinyint(4) DEFAULT NULL COMMENT '刪除狀態【0->未刪除;1->已刪除】',
  `use_integration` int(11) DEFAULT NULL COMMENT '下單時使用的積分',
  `payment_time` datetime DEFAULT NULL COMMENT '支付時間',
  `delivery_time` datetime DEFAULT NULL COMMENT '發貨時間',
  `receive_time` datetime DEFAULT NULL COMMENT '確認收貨時間',
  `comment_time` datetime DEFAULT NULL COMMENT '評價時間',
  `modify_time` datetime DEFAULT NULL COMMENT '修改時間',
  PRIMARY KEY (`id`),
  UNIQUE KEY `order_sn` (`order_sn`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='訂單';

-- ----------------------------
-- Records of oms_order
-- ----------------------------

-- ----------------------------
-- Table structure for oms_order_item
-- ----------------------------
DROP TABLE IF EXISTS `oms_order_item`;
CREATE TABLE `oms_order_item` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `order_id` bigint(20) DEFAULT NULL COMMENT 'order_id',
  `order_sn` char(64) DEFAULT NULL COMMENT 'order_sn',
  `spu_id` bigint(20) DEFAULT NULL COMMENT 'spu_id',
  `spu_name` varchar(255) DEFAULT NULL COMMENT 'spu_name',
  `spu_pic` varchar(500) DEFAULT NULL COMMENT 'spu_pic',
  `spu_brand` varchar(200) DEFAULT NULL COMMENT '品牌',
  `category_id` bigint(20) DEFAULT NULL COMMENT '商品分類id',
  `sku_id` bigint(20) DEFAULT NULL COMMENT '商品sku編號',
  `sku_name` varchar(255) DEFAULT NULL COMMENT '商品sku名字',
  `sku_pic` varchar(500) DEFAULT NULL COMMENT '商品sku圖片',
  `sku_price` decimal(18,4) DEFAULT NULL COMMENT '商品sku價格',
  `sku_quantity` int(11) DEFAULT NULL COMMENT '商品購買的數量',
  `sku_attrs_vals` varchar(500) DEFAULT NULL COMMENT '商品銷售屬性組合(JSON)',
  `promotion_amount` decimal(18,4) DEFAULT NULL COMMENT '商品促銷分解金額',
  `coupon_amount` decimal(18,4) DEFAULT NULL COMMENT '優惠券優惠分解金額',
  `integration_amount` decimal(18,4) DEFAULT NULL COMMENT '積分優惠分解金額',
  `real_amount` decimal(18,4) DEFAULT NULL COMMENT '該商品通過優惠後的分解金額',
  `gift_integration` int(11) DEFAULT NULL COMMENT '贈送積分',
  `gift_growth` int(11) DEFAULT NULL COMMENT '贈送成長值',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='訂單項信息';

-- ----------------------------
-- Records of oms_order_item
-- ----------------------------

-- ----------------------------
-- Table structure for oms_order_operate_history
-- ----------------------------
DROP TABLE IF EXISTS `oms_order_operate_history`;
CREATE TABLE `oms_order_operate_history` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `order_id` bigint(20) DEFAULT NULL COMMENT '訂單id',
  `operate_man` varchar(100) DEFAULT NULL COMMENT '操做人[用戶;系統;後臺管理員]',
  `create_time` datetime DEFAULT NULL COMMENT '操做時間',
  `order_status` tinyint(4) DEFAULT NULL COMMENT '訂單狀態【0->待付款;1->待發貨;2->已發貨;3->已完成;4->已關閉;5->無效訂單】',
  `note` varchar(500) DEFAULT NULL COMMENT '備註',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='訂單操做歷史記錄';

-- ----------------------------
-- Records of oms_order_operate_history
-- ----------------------------

-- ----------------------------
-- Table structure for oms_order_return_apply
-- ----------------------------
DROP TABLE IF EXISTS `oms_order_return_apply`;
CREATE TABLE `oms_order_return_apply` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `order_id` bigint(20) DEFAULT NULL COMMENT 'order_id',
  `sku_id` bigint(20) DEFAULT NULL COMMENT '退貨商品id',
  `order_sn` char(32) DEFAULT NULL COMMENT '訂單編號',
  `create_time` datetime DEFAULT NULL COMMENT '申請時間',
  `member_username` varchar(64) DEFAULT NULL COMMENT '會員用戶名',
  `return_amount` decimal(18,4) DEFAULT NULL COMMENT '退款金額',
  `return_name` varchar(100) DEFAULT NULL COMMENT '退貨人姓名',
  `return_phone` varchar(20) DEFAULT NULL COMMENT '退貨人電話',
  `status` tinyint(1) DEFAULT NULL COMMENT '申請狀態[0->待處理;1->退貨中;2->已完成;3->已拒絕]',
  `handle_time` datetime DEFAULT NULL COMMENT '處理時間',
  `sku_img` varchar(500) DEFAULT NULL COMMENT '商品圖片',
  `sku_name` varchar(200) DEFAULT NULL COMMENT '商品名稱',
  `sku_brand` varchar(200) DEFAULT NULL COMMENT '商品品牌',
  `sku_attrs_vals` varchar(500) DEFAULT NULL COMMENT '商品銷售屬性(JSON)',
  `sku_count` int(11) DEFAULT NULL COMMENT '退貨數量',
  `sku_price` decimal(18,4) DEFAULT NULL COMMENT '商品單價',
  `sku_real_price` decimal(18,4) DEFAULT NULL COMMENT '商品實際支付單價',
  `reason` varchar(200) DEFAULT NULL COMMENT '緣由',
  `description` varchar(500) DEFAULT NULL COMMENT '描述',
  `desc_pics` varchar(2000) DEFAULT NULL COMMENT '憑證圖片,以逗號隔開',
  `handle_note` varchar(500) DEFAULT NULL COMMENT '處理備註',
  `handle_man` varchar(200) DEFAULT NULL COMMENT '處理人員',
  `receive_man` varchar(100) DEFAULT NULL COMMENT '收貨人',
  `receive_time` datetime DEFAULT NULL COMMENT '收貨時間',
  `receive_note` varchar(500) DEFAULT NULL COMMENT '收貨備註',
  `receive_phone` varchar(20) DEFAULT NULL COMMENT '收貨電話',
  `company_address` varchar(500) DEFAULT NULL COMMENT '公司收貨地址',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='訂單退貨申請';

-- ----------------------------
-- Records of oms_order_return_apply
-- ----------------------------

-- ----------------------------
-- Table structure for oms_order_return_reason
-- ----------------------------
DROP TABLE IF EXISTS `oms_order_return_reason`;
CREATE TABLE `oms_order_return_reason` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `name` varchar(200) DEFAULT NULL COMMENT '退貨緣由名',
  `sort` int(11) DEFAULT NULL COMMENT '排序',
  `status` tinyint(1) DEFAULT NULL COMMENT '啓用狀態',
  `create_time` datetime DEFAULT NULL COMMENT 'create_time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='退貨緣由';

-- ----------------------------
-- Records of oms_order_return_reason
-- ----------------------------

-- ----------------------------
-- Table structure for oms_order_setting
-- ----------------------------
DROP TABLE IF EXISTS `oms_order_setting`;
CREATE TABLE `oms_order_setting` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `flash_order_overtime` int(11) DEFAULT NULL COMMENT '秒殺訂單超時關閉時間(分)',
  `normal_order_overtime` int(11) DEFAULT NULL COMMENT '正常訂單超時時間(分)',
  `confirm_overtime` int(11) DEFAULT NULL COMMENT '發貨後自動確認收貨時間(天)',
  `finish_overtime` int(11) DEFAULT NULL COMMENT '自動完成交易時間,不能申請退貨(天)',
  `comment_overtime` int(11) DEFAULT NULL COMMENT '訂單完成後自動好評時間(天)',
  `member_level` tinyint(2) DEFAULT NULL COMMENT '會員等級【0-不限會員等級,所有通用;其餘-對應的其餘會員等級】',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='訂單配置信息';

-- ----------------------------
-- Records of oms_order_setting
-- ----------------------------

-- ----------------------------
-- Table structure for oms_payment_info
-- ----------------------------
DROP TABLE IF EXISTS `oms_payment_info`;
CREATE TABLE `oms_payment_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `order_sn` char(64) DEFAULT NULL COMMENT '訂單號(對外業務號)',
  `order_id` bigint(20) DEFAULT NULL COMMENT '訂單id',
  `alipay_trade_no` varchar(50) DEFAULT NULL COMMENT '支付寶交易流水號',
  `total_amount` decimal(18,4) DEFAULT NULL COMMENT '支付總金額',
  `subject` varchar(200) DEFAULT NULL COMMENT '交易內容',
  `payment_status` varchar(20) DEFAULT NULL COMMENT '支付狀態',
  `create_time` datetime DEFAULT NULL COMMENT '建立時間',
  `confirm_time` datetime DEFAULT NULL COMMENT '確認時間',
  `callback_content` varchar(4000) DEFAULT NULL COMMENT '回調內容',
  `callback_time` datetime DEFAULT NULL COMMENT '回調時間',
  PRIMARY KEY (`id`),
  UNIQUE KEY `order_sn` (`order_sn`) USING BTREE,
  UNIQUE KEY `alipay_trade_no` (`alipay_trade_no`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='支付信息表';

-- ----------------------------
-- Records of oms_payment_info
-- ----------------------------

-- ----------------------------
-- Table structure for oms_refund_info
-- ----------------------------
DROP TABLE IF EXISTS `oms_refund_info`;
CREATE TABLE `oms_refund_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `order_return_id` bigint(20) DEFAULT NULL COMMENT '退款的訂單',
  `refund` decimal(18,4) DEFAULT NULL COMMENT '退款金額',
  `refund_sn` varchar(64) DEFAULT NULL COMMENT '退款交易流水號',
  `refund_status` tinyint(1) DEFAULT NULL COMMENT '退款狀態',
  `refund_channel` tinyint(4) DEFAULT NULL COMMENT '退款渠道[1-支付寶,2-微信,3-銀聯,4-匯款]',
  `refund_content` varchar(5000) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='退款信息';

-- ----------------------------
-- Records of oms_refund_info
-- ----------------------------

-- ----------------------------
-- Table structure for undo_log
-- ----------------------------
DROP TABLE IF EXISTS `undo_log`;
CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of undo_log
-- ----------------------------
At this point you will find the slave also has the database and tables.
ShardingSphere
Sharding-Proxy
Transparent: to applications the proxy behaves like an ordinary MySQL server.
If the backend connects to a MySQL database, download MySQL Connector/J, unpack it, and copy mysql-connector-java-5.1.47.jar into the %SHARDINGSPHERE_PROXY_HOME%\lib directory.
Configure server.yaml in the conf directory of the downloaded distribution.
Configure authentication: a root user and a sharding user are defined; the sharding user can only access sharding_db.
authentication:
users:
root:
password: root
sharding:
password: sharding
authorizedSchemas: sharding_db
#
props:
# max.connections.size.per.query: 1
# acceptor.size: 16 # The default value is available processors count * 2.
executor.size: 16 # Infinite by default.
# proxy.frontend.flush.threshold: 128 # The default value is 128.
# # LOCAL: Proxy will run with LOCAL transaction.
# # XA: Proxy will run with XA transaction.
# # BASE: Proxy will run with B.A.S.E transaction.
# proxy.transaction.type: LOCAL
# proxy.opentracing.enabled: false
# proxy.hint.enabled: false
# query.with.cipher.column: true
sql.show: false
Configure database sharding, table sharding, and read-write splitting in config-sharding.yaml.
The logical database is named sharding_db, mapped to the real databases demo_ds_0 and demo_ds_1; we operate on sharding_db and the data is actually written to the two databases.
demo_ds_0 and demo_ds_1 each contain the tables t_order_0 and t_order_1, and t_order_item_0 and t_order_item_1, all sharded by order_id.
Binding tables ties the two tables together so that correlated queries between them stay on the same shard and run faster.
Databases are sharded by user_id.
schemaName: sharding_db
#
dataSources:
  ds_0:
    url: jdbc:mysql://192.168.10.100:3307/demo_ds_0?serverTimezone=UTC&useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
  ds_1:
    url: jdbc:mysql://192.168.10.100:3307/demo_ds_1?serverTimezone=UTC&useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
#
shardingRule:
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..1}
      tableStrategy:
        inline:
          shardingColumn: order_id                      # shard tables by order id
          algorithmExpression: t_order_${order_id % 2}  # remainder 0 -> t_order_0, otherwise t_order_1
      keyGenerator:
        type: SNOWFLAKE                                 # snowflake id generation
        column: order_id
    t_order_item:
      actualDataNodes: ds_${0..1}.t_order_item_${0..1}
      tableStrategy:
        inline:
          shardingColumn: order_id
          algorithmExpression: t_order_item_${order_id % 2}
      keyGenerator:
        type: SNOWFLAKE
        column: order_item_id
  bindingTables:
    - t_order,t_order_item                              # binding tables
  defaultDatabaseStrategy:
    inline:
      shardingColumn: user_id                           # database sharding key
      algorithmExpression: ds_${user_id % 2}
  defaultTableStrategy:
    none:
Read-write splitting: config-master_slave.yaml
Master data source
Slave data source
With two databases, configure two master-slave rules.
schemaName: sharding_db_1
#
dataSources:
  master_0_ds:
    url: jdbc:mysql://192.168.10.100:3307/demo_ds_0?serverTimezone=UTC&useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
  slave_ds_0:
    url: jdbc:mysql://192.168.10.100:3317/demo_ds_0?serverTimezone=UTC&useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
#
masterSlaveRule:
  name: ms_ds_0
  masterDataSourceName: master_0_ds
  slaveDataSourceNames:
    - slave_ds_0
config-master_slave_2.yaml
schemaName: sharding_db_2
#
dataSources:
  master_1_ds:
    url: jdbc:mysql://192.168.10.100:3307/demo_ds_1?serverTimezone=UTC&useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
  slave_ds_1:
    url: jdbc:mysql://192.168.10.100:3317/demo_ds_1?serverTimezone=UTC&useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
#
masterSlaveRule:
  name: ms_ds_1
  masterDataSourceName: master_1_ds
  slaveDataSourceNames:
    - slave_ds_1
Modify the dockerized MySQL configuration files
Add to both master and slaver:
binlog-do-db=demo_ds_0
binlog-do-db=demo_ds_1
Create the demo_ds_0 and demo_ds_1 databases.
On Windows, be sure to connect with the cmd command line; connecting with Navicat and the like can produce all sorts of errors.
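Once the proxy is started, you can sanity-check routing through the logical schema; a sketch assuming the proxy listens on its default port 3307 on the machine running it (the status column is made up for illustration; order_id is filled in by the SNOWFLAKE generator):

# DDL against the logical table materializes t_order_0/t_order_1 in both real databases.
mysql -h127.0.0.1 -P3307 -uroot -proot sharding_db -e "
CREATE TABLE t_order (order_id BIGINT NOT NULL, user_id INT NOT NULL, status VARCHAR(50), PRIMARY KEY (order_id));
INSERT INTO t_order (user_id, status) VALUES (1, 'INIT'), (2, 'INIT');
SELECT * FROM t_order;"
# user_id 2 routes to ds_0 and user_id 1 to ds_1; the SELECT merges rows from all shards.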
Client-side partitioning
Proxy-based partitioning
redis-cluster
A Redis Cluster consists of multiple Redis instances; the official recommendation is six instances, three masters and three slaves.
All data is divided into 16384 slots. Slots can be assigned to different Redis instances according to machine capability, so each instance stores only part of the data. Slot data can be migrated between instances through a defined protocol.
Slots
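A key's slot is computed as CRC16(key) mod 16384, and any node will tell you where a key maps; a quick sketch (runnable once the cluster below is up):

# Ask a node which of the 16384 slots a key hashes to:
redis-cli -p 7001 cluster keyslot user:1000
# returns an integer in 0..16383; the node owning that slot stores the key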
Create 6 Redis nodes
3 masters, 3 slaves
Script to build the dockerized redis-cluster:
for port in $(seq 7001 7006); \
do \
mkdir -p /bigdata/redis/node-${port}/conf
touch /bigdata/redis/node-${port}/conf/redis.conf
cat << EOF > /bigdata/redis/node-${port}/conf/redis.conf
port ${port}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 192.168.10.101
cluster-announce-port ${port}
cluster-announce-bus-port 1${port}
appendonly yes
EOF
docker run -p ${port}:${port} -p 1${port}:1${port} --name redis-${port} \
-v /bigdata/redis/node-${port}/data:/data \
-v /bigdata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d redis:5.0.7 redis-server /etc/redis/redis.conf; \
done
Build the cluster with redis-cli
Enter one of the containers:
docker exec -it redis-7001 bash
Three master nodes, three slave nodes:
redis-cli --cluster create 192.168.10.101:7001 192.168.10.101:7002 192.168.10.101:7003 192.168.10.101:7004 192.168.10.101:7005 192.168.10.101:7006 --cluster-replicas 1
--cluster-replicas 1 gives every master exactly one replica.
[root@node02 redis]# docker exec -it redis-7001 bash
root@57e92e2ffa4c:/data# redis-cli --cluster create 192.168.10.101:7001 192.168.10.101:7002 192.168.10.101:7003 192.168.10.101:7004 192.168.10.101:7005 192.168.10.101:7006 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.10.101:7005 to 192.168.10.101:7001
Adding replica 192.168.10.101:7006 to 192.168.10.101:7002
Adding replica 192.168.10.101:7004 to 192.168.10.101:7003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: aa624181696765f568bee0e650c4af216ba83c23 192.168.10.101:7001
   slots:[0-5460] (5461 slots) master
M: 825835dcfc9c34bc5d9f6cdc9d6afca45302e4a6 192.168.10.101:7002
   slots:[5461-10922] (5462 slots) master
M: 08b1626502539c77ec469e2032e693718649f69d 192.168.10.101:7003
   slots:[10923-16383] (5461 slots) master
S: 153c705b38fd253594a08989d49425d5ca4ee87f 192.168.10.101:7004
   replicates 825835dcfc9c34bc5d9f6cdc9d6afca45302e4a6
S: 4c30599c2cabbb20eafc8b4eb9d4d93c597b20cc 192.168.10.101:7005
   replicates 08b1626502539c77ec469e2032e693718649f69d
S: e62234ef3c5206770ce29580c0309efb09a98429 192.168.10.101:7006
   replicates aa624181696765f568bee0e650c4af216ba83c23
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 192.168.10.101:7001)
M: aa624181696765f568bee0e650c4af216ba83c23 192.168.10.101:7001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 4c30599c2cabbb20eafc8b4eb9d4d93c597b20cc 192.168.10.101:7005
   slots: (0 slots) slave
   replicates 08b1626502539c77ec469e2032e693718649f69d
M: 08b1626502539c77ec469e2032e693718649f69d 192.168.10.101:7003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 153c705b38fd253594a08989d49425d5ca4ee87f 192.168.10.101:7004
   slots: (0 slots) slave
   replicates 825835dcfc9c34bc5d9f6cdc9d6afca45302e4a6
M: 825835dcfc9c34bc5d9f6cdc9d6afca45302e4a6 192.168.10.101:7002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: e62234ef3c5206770ce29580c0309efb09a98429 192.168.10.101:7006
   slots: (0 slots) slave
   replicates aa624181696765f568bee0e650c4af216ba83c23
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
M marks a master, S a slave.
You will find that keys are automatically stored on different nodes:
root@57e92e2ffa4c:/data# redis-cli -c -h 192.168.10.101 -p 7001
192.168.10.101:7001> set 1 2
-> Redirected to slot [9842] located at 192.168.10.101:7002
OK
192.168.10.101:7002> get 1
"2"
192.168.10.101:7002> set 3 6
-> Redirected to slot [1584] located at 192.168.10.101:7001
OK
192.168.10.101:7001> set u 8
-> Redirected to slot [11826] located at 192.168.10.101:7003
OK
192.168.10.101:7003> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:3
cluster_stats_messages_ping_sent:800
cluster_stats_messages_pong_sent:808
cluster_stats_messages_meet_sent:5
cluster_stats_messages_sent:1613
cluster_stats_messages_ping_received:808
cluster_stats_messages_pong_received:805
cluster_stats_messages_received:1613
192.168.10.101:7003> CLUSTER NODES
e62234ef3c5206770ce29580c0309efb09a98429 192.168.10.101:7006@17006 slave aa624181696765f568bee0e650c4af216ba83c23 0 1592193713000 6 connected
153c705b38fd253594a08989d49425d5ca4ee87f 192.168.10.101:7004@17004 slave 825835dcfc9c34bc5d9f6cdc9d6afca45302e4a6 0 1592193712000 4 connected
aa624181696765f568bee0e650c4af216ba83c23 192.168.10.101:7001@17001 master - 0 1592193712000 1 connected 0-5460
825835dcfc9c34bc5d9f6cdc9d6afca45302e4a6 192.168.10.101:7002@17002 master - 0 1592193714003 2 connected 5461-10922
08b1626502539c77ec469e2032e693718649f69d 192.168.10.101:7003@17003 myself,master - 0 1592193711000 3 connected 10923-16383
4c30599c2cabbb20eafc8b4eb9d4d93c597b20cc 192.168.10.101:7005@17005 slave 08b1626502539c77ec469e2032e693718649f69d 0 1592193712995 5 connected
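To watch failover in action, stop a master and observe its replica take over; a short sketch using the container names from the script above (per the creation output, 7006 replicates 7001):

# Kill the master currently owning slots 0-5460:
docker stop redis-7001
# After cluster-node-timeout expires, 7006 should show up as a master:
docker exec redis-7002 redis-cli -p 7002 cluster nodes
# Restart the old master; it rejoins as a slave of the new one:
docker start redis-7001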
A running ES instance is called a node, and a cluster consists of one or more nodes configured with the same cluster.name; together they share the data and the load. When nodes join or leave the cluster, the cluster redistributes all data evenly.
When a node is elected master, it is responsible for managing all cluster-wide changes, such as adding or deleting indices and adding or removing nodes. The master node does not need to be involved in document-level changes or searches. Any node can become the master.
As users, we can send requests to any node in the cluster, including the master. Every node knows where any document lives and can forward our request directly to the node storing the documents we need.
Cluster health is the most important piece of cluster monitoring information; the status field is green, yellow, or red.
Shards
Create a network
[root@node03 ~]# docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
11642c8df0a1   bridge   bridge   local
f3824fdf854e   host     host     local
3f3b1d92bc78   none     null     local
[root@node03 ~]# docker network create --driver bridge --subnet=172.18.12.0/16 --gateway=172.18.1.1 mynet
60cbe8c50aa1cda30cf36571f8609bc7989d0040701932deba51c64a77796b49
[root@node03 ~]# docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
11642c8df0a1   bridge   bridge   local
f3824fdf854e   host     host     local
60cbe8c50aa1   mynet    bridge   local
3f3b1d92bc78   none     null     local
[root@node03 ~]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "60cbe8c50aa1cda30cf36571f8609bc7989d0040701932deba51c64a77796b49",
        "Created": "2020-06-15T14:13:23.198137749+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.12.0/16",
                    "Gateway": "172.18.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
Increase the virtual memory limit, or Elasticsearch will fail with:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
1. As root, edit /etc/sysctl.conf (vi /etc/sysctl.conf) and add the line:
vm.max_map_count=655360
Then apply it with:
sysctl -p
Create the master nodes
for port in $(seq 1 3); \
do \
mkdir -p /bigdata/elasticsearch/master-${port}/config
mkdir -p /bigdata/elasticsearch/master-${port}/data
chmod -R 777 /bigdata/elasticsearch/master-${port}
touch /bigdata/elasticsearch/master-${port}/config/elasticsearch.yml
cat << EOF > /bigdata/elasticsearch/master-${port}/config/elasticsearch.yml
cluster.name: my-es                 # cluster name, identical across the cluster
node.name: es-master-${port}        # node name
node.master: true                   # this node is master-eligible
node.data: false                    # this node does not store data
network.host: 0.0.0.0
http.host: 0.0.0.0                  # accept http from anywhere
http.port: 920${port}
transport.tcp.port: 930${port}
discovery.zen.ping_timeout: 10s     # ping timeout when discovering other nodes
discovery.seed_hosts: ["172.18.12.21:9301","172.18.12.22:9301","172.18.12.23:9301"]   # initial master list used to discover newly joined nodes (new in ES 7)
cluster.initial_master_nodes: ["172.18.12.21"]   # candidate master nodes when the cluster first forms (new in ES 7)
EOF
docker run -p 920${port}:920${port} -p 930${port}:930${port} --name elasticsearch-node-${port} \
--network=mynet --ip 172.18.12.2${port} \
-e ES_JAVA_OPTS="-Xms300m -Xmx300m" \
-v /bigdata/elasticsearch/master-${port}/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /bigdata/elasticsearch/master-${port}/data:/usr/share/elasticsearch/data \
-v /bigdata/elasticsearch/master-${port}/plugins:/usr/share/elasticsearch/plugins \
-d elasticsearch:7.4.2; \
done
Data nodes
for port in $(seq 4 6); \
do \
mkdir -p /bigdata/elasticsearch/node-${port}/config
mkdir -p /bigdata/elasticsearch/node-${port}/data
chmod -R 777 /bigdata/elasticsearch/node-${port}
touch /bigdata/elasticsearch/node-${port}/config/elasticsearch.yml
cat << EOF > /bigdata/elasticsearch/node-${port}/config/elasticsearch.yml
cluster.name: my-es                 # cluster name, identical across the cluster
node.name: es-node-${port}          # node name
node.master: false                  # this node is not master-eligible
node.data: true                     # this node stores data
network.host: 0.0.0.0
http.host: 0.0.0.0                  # accept http from anywhere
http.port: 920${port}
transport.tcp.port: 930${port}
discovery.zen.ping_timeout: 10s     # ping timeout when discovering other nodes
discovery.seed_hosts: ["172.18.12.21:9301","172.18.12.22:9301","172.18.12.23:9301"]   # initial master list used to discover newly joined nodes (new in ES 7)
cluster.initial_master_nodes: ["172.18.12.21"]   # candidate master nodes when the cluster first forms (new in ES 7)
EOF
docker run -p 920${port}:920${port} -p 930${port}:930${port} --name elasticsearch-node-${port} \
--network=mynet --ip 172.18.12.2${port} \
-e ES_JAVA_OPTS="-Xms300m -Xmx300m" \
-v /bigdata/elasticsearch/node-${port}/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /bigdata/elasticsearch/node-${port}/data:/usr/share/elasticsearch/data \
-v /bigdata/elasticsearch/node-${port}/plugins:/usr/share/elasticsearch/plugins \
-d elasticsearch:7.4.2; \
done
View the nodes: http://192.168.10.102:9206/_cat/nodes
The node marked with an asterisk is the master.
Check cluster health: http://192.168.10.102:9206/_cluster/health?pretty
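To see shards being spread across the data nodes, create a small index and list its shard placement; a sketch (the index name test-index is made up):

# Create an index with 3 primary shards and 1 replica each:
curl -X PUT "http://192.168.10.102:9206/test-index" -H 'Content-Type: application/json' -d '{"settings":{"number_of_shards":3,"number_of_replicas":1}}'
# Show which node each primary (p) and replica (r) shard landed on:
curl "http://192.168.10.102:9206/_cat/shards/test-index?v"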
RabbitMQ is written in Erlang, so clustering is very convenient: Erlang is a natively distributed language. RabbitMQ itself, however, does not provide load balancing.
A cluster contains RAM nodes (in-memory) and Disk nodes (messages persisted); at least one Disk node is required.
Normal mode
Mirrored mode
Cluster setup
mkdir /bigdata/rabbitmq
cd /bigdata/rabbitmq
mkdir rabbitmq01 rabbitmq02 rabbitmq03
docker run -d --hostname rabbitmq01 --name rabbitmq01 -v /bigdata/rabbitmq/rabbitmq01:/var/lib/rabbitmq -p 15673:15672 -p 5673:5672 -e RABBITMQ_ERLANG_COOKIE='atguigu' rabbitmq:management
docker run -d --hostname rabbitmq02 --name rabbitmq02 -v /bigdata/rabbitmq/rabbitmq02:/var/lib/rabbitmq -p 15674:15672 -p 5674:5672 -e RABBITMQ_ERLANG_COOKIE='atguigu' --link rabbitmq01:rabbitmq01 rabbitmq:management
docker run -d --hostname rabbitmq03 --name rabbitmq03 -v /bigdata/rabbitmq/rabbitmq03:/var/lib/rabbitmq -p 15675:15672 -p 5675:5672 -e RABBITMQ_ERLANG_COOKIE='atguigu' --link rabbitmq01:rabbitmq01 --link rabbitmq02:rabbitmq02 rabbitmq:management
First node
[root@node03 rabbitmq]# docker exec -it rabbitmq01 /bin/bash
root@rabbitmq01:/# rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq01 ...
root@rabbitmq01:/# rabbitmqctl reset
Resetting node rabbit@rabbitmq01 ...
root@rabbitmq01:/# rabbitmqctl start_app
Starting node rabbit@rabbitmq01 ...
root@rabbitmq01:/# exit
exit
Second node
[root@node03 rabbitmq]# docker exec -it rabbitmq02 /bin/bash
root@rabbitmq02:/# rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq02 ...
root@rabbitmq02:/# rabbitmqctl reset
Resetting node rabbit@rabbitmq02 ...
root@rabbitmq02:/# rabbitmqctl join_cluster --ram rabbit@rabbitmq01
Clustering node rabbit@rabbitmq02 with rabbit@rabbitmq01
root@rabbitmq02:/# rabbitmqctl start_app
Starting node rabbit@rabbitmq02 ...
root@rabbitmq02:/# exit
exit
Third node
[root@node03 rabbitmq]# docker exec -it rabbitmq03 /bin/bash
root@rabbitmq03:/# rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq03 ...
root@rabbitmq03:/# rabbitmqctl reset
Resetting node rabbit@rabbitmq03 ...
root@rabbitmq03:/# rabbitmqctl join_cluster --ram rabbit@rabbitmq01
Clustering node rabbit@rabbitmq03 with rabbit@rabbitmq01
root@rabbitmq03:/# rabbitmqctl start_app
Starting node rabbit@rabbitmq03 ...
root@rabbitmq03:/# exit
exit
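Before configuring mirroring, you can confirm all three nodes have joined; a quick check from any node:

# Lists the disc and ram members and which nodes are currently running:
docker exec rabbitmq01 rabbitmqctl cluster_status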
Set up the mirrored cluster
Enter any node:
[root@node03 rabbitmq]# docker exec -it rabbitmq01 bash
Set the HA policy:
root@rabbitmq01:/# rabbitmqctl set_policy -p / ha "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
Setting policy "ha" for pattern "^" to "{"ha-mode":"all","ha-sync-mode":"automatic"}" with priority "0" for vhost "/" ...
-p / applies the policy to the default virtual host "/"; the pattern "^" matches every name.
View the policies:
root@rabbitmq01:/# rabbitmqctl list_policies -p /
Listing policies for vhost "/" ...
vhost   name    pattern apply-to        definition      priority
/       ha      ^       all     {"ha-mode":"all","ha-sync-mode":"automatic"}    0
Add a queue on node 01, and you will find node 03 has the same queue.
Log in as project-regular
Enter the project
Go to Configurations and create a configuration (ConfigMap)
Click Next and add the data:
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
server_id=1
log-bin=mysql-bin
read-only=0
binlog-do-db=gulimall_ums
binlog-do-db=gulimall_pms
binlog-do-db=gulimall_oms
binlog-do-db=gulimall_sms
binlog-do-db=gulimall_wms
binlog-do-db=gulimall_admin
replicate-ignore-db=mysql
replicate-ignore-db=sys
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema
Create it.
Storage volumes
Create one (a PVC for the MySQL data).
Under Application Workloads, click Services and create a stateful service.
Spread the pods across nodes and add the container image.
Use the default port.
Click Advanced Settings.
Environment variables: reference the ConfigMap or secret.
Mount the ConfigMap or secret.
Add the storage volume.
Create.
slaver:
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
server_id=2
log-bin=mysql-bin
read-only=1
binlog-do-db=gulimall_ums
binlog-do-db=gulimall_pms
binlog-do-db=gulimall_oms
binlog-do-db=gulimall_sms
binlog-do-db=gulimall_wms
binlog-do-db=gulimall_admin
replicate-ignore-db=mysql
replicate-ignore-db=sys
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema
Enter the master container
Open a terminal
# mysql -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.30-log MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> GRANT REPLICATION SLAVE ON *.* TO 'backup'@'%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show master status;
+------------------+----------+---------------------------------------------------------------------------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB                                                                    | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+---------------------------------------------------------------------------------+------------------+-------------------+
| mysql-bin.000003 |      439 | gulimall_ums,gulimall_pms,gulimall_oms,gulimall_sms,gulimall_wms,gulimall_admin |                  |                   |
+------------------+----------+---------------------------------------------------------------------------------+------------------+-------------------+
1 row in set (0.00 sec)
# mysql -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.30-log MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> change master to master_host='mysql-master.gulimall',master_user='backup',master_password='123456',master_log_file='mysql-bin.000003',master_log_pos=0,master_port=3306;
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)
Key point: master_host uses the stable DNS name of the master's service (mysql-master.gulimall) rather than a pod IP, so the slave can always find the master inside the cluster.
Redis
Create the configuration (ConfigMap)
Create the PVC
Create the stateful service
Use the redis:5.0.7 image
There are no environment variables, but the start command must be changed
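A minimal sketch of the adjusted start command, assuming the ConfigMap is mounted at /etc/redis/redis.conf as in the earlier docker-based setup:

# Command and parameters entered in the container settings:
#   command: redis-server
#   args:    /etc/redis/redis.conf
redis-server /etc/redis/redis.conf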
ElasticSearch
elasticsearch-pvc
Create the stateful service
Create Kibana
RabbitMQ
Storage volumes
Stateful service
Deploy Nacos
Create the PVC
Zipkin
Sentinel