We are pleased to announce the release of Kubernetes 1.18, our first release of 2020! Kubernetes 1.18 consists of 38 enhancements: 15 moving to stable, 11 in beta, and 12 in alpha.
Kubernetes 1.18 is a "fit and finish" release. Significant work has gone into improving beta and stable features to give users a better experience, alongside continued development of new features and exciting capabilities that improve the experience further. Having almost as many enhancements in alpha, beta, and stable shows the community's great effort toward improving the reliability of Kubernetes while continuing to expand its existing functionality.
Major Themes
Kubernetes Topology Manager Moves to Beta - Align Up!
A beta feature in Kubernetes 1.18, the Topology Manager enables NUMA alignment of CPUs and devices (such as SR-IOV VFs), allowing workloads to run in an environment optimized for low latency. Before the Topology Manager was introduced, the CPU and Device Manager made resource allocation decisions independently of each other, which could result in undesirable allocations on multi-socket systems and degraded performance for latency-critical applications.
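The alignment policy the Topology Manager enforces is chosen per node through the kubelet. A minimal sketch of requesting the strictest policy in the kubelet config file (field name per the 1.18 KubeletConfiguration API; `single-numa-node` rejects pods that cannot be NUMA-aligned):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Relies on the TopologyManager feature gate, which is on by default in 1.18 (beta)
topologyManagerPolicy: single-numa-node   # other options: none, best-effort, restricted
```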
Server-side Apply Introduces Beta 2
Server-side Apply was promoted to beta in 1.16, and 1.18 introduces a second beta. This new version tracks and manages changes to the fields of all new Kubernetes objects, letting users know what changed a resource and when.
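The field tracking mentioned above is recorded by the API server in each object's metadata.managedFields. An illustrative excerpt of what that bookkeeping looks like (a sketch only; the manager name, timestamp, and field path are made up):

```yaml
metadata:
  managedFields:
    - manager: kubectl            # hypothetical client that last applied these fields
      operation: Apply
      apiVersion: v1
      time: "2020-04-08T09:36:00Z"
      fieldsType: FieldsV1
      fieldsV1:
        f:data:
          f:key: {}               # records that this manager owns data.key
```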
Extending Ingress with and Replacing a Deprecated Annotation with IngressClass
There are two significant additions to Ingress in Kubernetes 1.18: a new pathType field and a new IngressClass resource. The pathType field allows specifying how paths should be matched. In addition to the default ImplementationSpecific type, there are new Exact and Prefix path types.
The IngressClass resource is used to describe a type of Ingress within a Kubernetes cluster. Ingresses can specify the class they are associated with by using the new ingressClassName field. This new resource and field replace the deprecated kubernetes.io/ingress.class annotation.
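A minimal sketch of the two additions together, using the networking.k8s.io/v1beta1 API that ships with 1.18 (the class, Ingress, and Service names here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: external-lb                 # hypothetical class name
spec:
  controller: example.com/ingress-controller
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: external-lb     # replaces the kubernetes.io/ingress.class annotation
  rules:
    - http:
        paths:
          - path: /app
            pathType: Prefix        # new in 1.18; Exact is also available
            backend:
              serviceName: demo-svc
              servicePort: 80
```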
SIG-CLI Introduces kubectl debug
SIG-CLI had been debating the need for a debug utility for quite some time. With the development of ephemeral containers, it became obvious how a tool built on top of kubectl exec could better support developers. The addition of the kubectl debug command (in alpha) allows developers to easily debug pods inside a cluster, which is invaluable. The command creates a temporary container that runs next to the pod being examined and attaches a console for interactive troubleshooting.
Introducing Windows CSI Support Alpha for Kubernetes
An alpha version of the CSI Proxy for Windows ships with the Kubernetes 1.18 release. The CSI Proxy enables non-privileged (pre-approved) containers to perform privileged storage operations on Windows, allowing CSI drivers to be supported in Windows.
Other Updates
Graduated to Stable:
Taint Based Eviction
kubectl diff
CSI Block storage support
API Server dry run
Pass Pod information in CSI calls
Support Out-of-Tree vSphere Cloud Provider
Support GMSA for Windows workloads
Skip attach for non-attachable CSI volumes
PVC cloning
Moving kubectl package code to staging
RunAsUserName for Windows
AppProtocol for Services and Endpoints
Extending Hugepage Feature
client-go signature refactor to standardize options and context handling
Node-local DNS cache
Major Changes
EndpointSlice API
Moving kubectl package code to staging
CertificateSigningRequest API
Extending Hugepage Feature
client-go signature refactor to standardize options and context handling
Release Logo
User Highlights
Ericsson is using Kubernetes and other cloud native technologies to deliver a highly demanding 5G network, with CI/CD savings of up to 90%.
Zendesk is using Kubernetes to run around 70% of its existing applications. All newly built applications also run on Kubernetes, bringing time savings, greater flexibility, and faster velocity to application development.
LifeMiles has reduced infrastructure spending by 50% since migrating to Kubernetes and doubled its available resource capacity.
Ecosystem Updates
The CNCF published the results of its annual survey showing that Kubernetes usage in production is skyrocketing: 78% of respondents use Kubernetes in production, compared to 58% last year.
The "Introduction to Kubernetes" course hosted by the CNCF surpassed 100,000 registrations.
Environment:
# OS: CentOS 8
# Docker version: 19.03.8
# Kubernetes version: v1.18
# Network plugin: Calico
# kube-proxy mode: ipvs
# Kubernetes package source: Aliyun mirror
# service-cidr: 10.10.0.0/16
# pod-network-cidr: 10.122.0.0/16
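The environment above lists ipvs as the kube-proxy mode, but the steps below never set it explicitly; by default kube-proxy falls back to iptables. One way to request ipvs (a sketch; this KubeProxyConfiguration block would be appended to a config file passed to kubeadm init via --config) is:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Requires the ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, and nf_conntrack
# kernel modules to be loaded on each node
mode: "ipvs"
```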
1. System preparation
Check the OS version
[root@yangwen ~]# cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core)
[root@yangwen ~]# uname -r
4.18.0-147.el8.x86_64
Configure the network
[root@yangwen ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=660cffeb-fd7a-4552-aaea-972ee0559e5c
DEVICE=ens33
ONBOOT=yes
Add the Aliyun repository
[root@yangwen ~]# rm -rfv /etc/yum.repos.d/*
removed '/etc/yum.repos.d/CentOS-AppStream.repo'
removed '/etc/yum.repos.d/CentOS-Base.repo'
removed '/etc/yum.repos.d/CentOS-centosplus.repo'
removed '/etc/yum.repos.d/CentOS-CR.repo'
removed '/etc/yum.repos.d/CentOS-Debuginfo.repo'
removed '/etc/yum.repos.d/CentOS-Extras.repo'
removed '/etc/yum.repos.d/CentOS-fasttrack.repo'
removed '/etc/yum.repos.d/CentOS-HA.repo'
removed '/etc/yum.repos.d/CentOS-Media.repo'
removed '/etc/yum.repos.d/CentOS-PowerTools.repo'
removed '/etc/yum.repos.d/CentOS-Sources.repo'
removed '/etc/yum.repos.d/CentOS-Vault.repo'
[root@yangwen ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2595  100  2595    0     0   7587      0 --:--:-- --:--:-- --:--:-
Configure the hostname
[root@yangwen ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.40.131 yangwen
Turn off swap and comment out the swap entry in /etc/fstab
[root@yangwen ~]# swapoff -a
[root@yangwen ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Apr 17 04:23:58 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root     /         xfs     defaults        0 0
UUID=d49ac8b0-9833-4747-9a80-587d85f15a80 /boot  ext4  defaults  1 2
#/dev/mapper/cl-swap    swap      swap    defaults
Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains
[root@yangwen ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@yangwen ~]# /usr/sbin/sysctl --system
2. Install common packages
[root@yangwen ~]# yum install vim bash-completion net-tools gcc -y
3. Install docker-ce from the Aliyun repository
[root@yangwen ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@yangwen ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@yangwen ~]# yum -y install docker-ce
If docker-ce installation fails with a containerd.io dependency error (the error output is not reproduced here):
Workaround: install a newer containerd.io package manually
[root@yangwen ~]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
[root@yangwen ~]# yum install containerd.io-1.2.6-3.3.el7.x86_64.rpm
Then install docker-ce again and it will succeed:
[root@yangwen ~]# yum -y install docker-ce
Add the Aliyun Docker registry mirror
[root@yangwen ~]# mkdir -p /etc/docker
[root@yangwen ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
[root@yangwen ~]# systemctl daemon-reload
[root@yangwen ~]# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@yangwen ~]# systemctl restart docker
Note: if you hit errors here, run yum update and then reinstall docker.
4. Install kubectl, kubelet, and kubeadm
Add the Aliyun Kubernetes repository
[root@yangwen ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install
[root@yangwen ~]# yum install kubectl kubelet kubeadm
Answer y at each prompt.
[root@yangwen ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
5. Initialize the Kubernetes cluster
[root@yangwen ~]# kubeadm init --apiserver-advertise-address=0.0.0.0 \
  --apiserver-cert-extra-sans=127.0.0.1 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --ignore-preflight-errors=all \
  --kubernetes-version=v1.18.0 \
  --service-cidr=10.10.0.0/16 \
  --pod-network-cidr=10.122.0.0/16
The pod network is 10.122.0.0/16, and the API server address is the master host's own IP.
This step is critical: kubeadm pulls the required images from k8s.gcr.io by default, which is unreachable from mainland China, so --image-repository is used to point at the Aliyun registry mirror instead.
On success, cluster initialization prints output like the following:
W0408 09:36:36.121603   14098 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01.paas.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.122.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01.paas.com localhost] and IPs [192.168.122.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01.paas.com localhost] and IPs [192.168.122.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0408 09:36:43.343191   14098 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0408 09:36:43.344303   14098 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.002541 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01.paas.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01.paas.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: v2r5a4.veazy2xhzetpktfz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.21:6443 --token v2r5a4.veazy2xhzetpktfz \
    --discovery-token-ca-cert-hash sha256:daded8514c8350f7c238204979039ff9884d5b595ca950ba8bbce80724fd65d4
Record the final portion of this output; it is needed when joining other nodes to the Kubernetes cluster.
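The long flag list passed to kubeadm init above can equivalently be kept in a config file and used with `kubeadm init --config kubeadm.yaml`. A sketch, with the values copied from the flags used earlier (kubeadm.k8s.io/v1beta2 is the config API version current in 1.18):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
imageRepository: registry.aliyuncs.com/google_containers   # Aliyun mirror of k8s.gcr.io
apiServer:
  certSANs:
    - 127.0.0.1          # matches --apiserver-cert-extra-sans
networking:
  serviceSubnet: 10.10.0.0/16    # matches --service-cidr
  podSubnet: 10.122.0.0/16       # matches --pod-network-cidr
```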
Set up kubectl as instructed by the output
[root@yangwen ~]# mkdir -p $HOME/.kube
[root@yangwen ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@yangwen ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run the following command to enable kubectl auto-completion
[root@yangwen ~]# source <(kubectl completion bash)
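Completion enabled this way only lasts for the current shell. To keep it across logins, append the same line to the user's ~/.bashrc (a sketch; assumes bash is the login shell):

```shell
# Persist kubectl completion for future bash sessions
echo 'source <(kubectl completion bash)' >> "$HOME/.bashrc"
```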
Check the nodes and pods
[root@yangwen ~]# kubectl get node
NAME      STATUS   ROLES    AGE    VERSION
yangwen   Ready    master   110m   v1.18.2
[root@yangwen ~]# kubectl get pod --all-namespaces
At this point the node shows NotReady, because the CoreDNS pods cannot start until a network add-on pod is installed.
6. Install the Calico network add-on
[root@yangwen ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@yangwen ~]# kubectl get pod --all-namespaces
The cluster is now healthy.
7. Install kubernetes-dashboard
The official dashboard manifest does not expose the service via NodePort, so download the yaml file locally and add a NodePort to the Service section.
[root@yangwen ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
[root@yangwen ~]# vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
[root@yangwen ~]# kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Check the pods and service
[root@yangwen ~]# kubectl get pod -n kubernetes-dashboard
NAME                                        READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-dc6947fbf-869kf   1/1     Running   0          37s
kubernetes-dashboard-5d4dc8b976-sdxxt       1/1     Running   0          37s
[root@yangwen ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.10.25.162    <none>        8000/TCP        44s
kubernetes-dashboard        NodePort    10.10.150.164   <none>        443:30000/TCP   44s
Access the dashboard through a browser; Firefox is recommended.
Log in with a token.
Create the token's service account
[root@yangwen ~]# kubectl create sa dashboard-admin -n kube-system serviceaccount/dashboard-admin created
Grant access permissions to the token
[root@yangwen ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
Get the token secret
[root@yangwen ~]# ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
Extract the token value used for the dashboard login
[root@yangwen ~]# DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
[root@yangwen ~]# echo ${DASHBOARD_LOGIN_TOKEN}
eyJhbGciOiJSUzI1NiIsImtpZCI6ImZRZWdIWENkZUUyZHlzX2duNHhxUjNmVFRIY3hqdlJyUzhnZ050TFY0cW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tMmZrd3ciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZmM5NzQyZDktMTdhZC00ZWI1LWIxYTItZTBhZmFhNmU5OWFlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.hrvBZTSaxVEQY3EQb_n_CerWyz22s3OKwfceh5YxIGTNO5hnw9AWSqmY7hjksrqZ82nsRUxVZeSYQ-G1P0UQs3TEKzUzP5NUppcZ56nW0hu7MHiiQ10uzNaAnhqTpdylKPiy6XgZGu4QVITiZwhUxf3t2dMeYfw3YdIfo5xJR_4OPhQuENlhBeq_5OxBHz_zBhfUazknKp4x1abfTONvWfqgaWBXCKD8EdCn2q4ydfkJR6e4UjCM-u3SAxUI1Cir15WfJOxW9GEXPtVCtlb7HiVQyJJ-McuhQRcMOnOF36tiFmb4vzhWUOsXxdHrIHgjz72x674bKTBn7PyNgb2XwA
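The grep/awk pipeline above simply pulls the second column of the line beginning with `token` out of the `kubectl describe secret` output. The extraction can be seen in isolation on sample output (the sample text and token value below are fabricated for illustration):

```shell
# Simulated `kubectl describe secret` output (token value is made up)
SAMPLE='Name:         dashboard-admin-token-2fkww
Type:         kubernetes.io/service-account-token

ca.crt:       1025 bytes
namespace:    11 bytes
token:        eyJhbGciOiJSUzI1NiJ9.sample.payload'

# Same extraction as used for DASHBOARD_LOGIN_TOKEN above:
# keep only the line starting with "token", print its second field
TOKEN=$(printf '%s\n' "$SAMPLE" | grep -E '^token' | awk '{print $2}')
echo "$TOKEN"   # prints eyJhbGciOiJSUzI1NiJ9.sample.payload
```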
Notes:
# kubeadm tokens are valid for 24 hours by default; once one expires, generate a new token before joining nodes
# List tokens
kubeadm token list
# Create a token
kubeadm token create
# Forgot the node join command printed when the master was initialized?
# Simple method
kubeadm token create --print-join-command
# Second method
token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0
# You can now go on to deploy monitoring, applications, and so on.