Reference:
https://www.cnrancher.com/docs/rancher/v2.x/cn/installation/ha-install/node
option | required | description |
---|---|---|
address | yes | public DNS name or IP address |
user | yes | a user that can run docker commands |
role | yes | list of Kubernetes roles assigned to the node |
internal_address | no | private DNS name or IP address for intra-cluster communication |
ssh_key_path | no | path to the SSH private key used to authenticate to the node (defaults to ~/.ssh/id_rsa) |
In short, the table means: when installing with rke, confirm each server's IP address, use a non-root user, and set up passwordless SSH between all the servers. Here we generate a key pair on the test-kube-master-01 node and distribute the public key to the other master nodes for the installation.
dns
There is no DNS server available, so configure the host entries in /etc/hosts on every node instead:
```
cat >> /etc/hosts << EOF
172.18.1.4 test-kube-master-01
172.18.1.5 test-kube-master-02
172.18.1.9 test-kube-master-03
172.18.1.6 test-kube-node-01
172.18.1.7 test-kube-node-02
172.18.1.8 test-kube-node-03
EOF
```
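If you prefer not to type the heredoc by hand on every node, the same entries can be kept in a single list, previewed, and then appended; a minimal sketch (node names and IPs taken from the table above, the `nodes` variable name is an assumption):

```shell
# Node list for the cluster; edit to match your environment.
nodes="172.18.1.4 test-kube-master-01
172.18.1.5 test-kube-master-02
172.18.1.9 test-kube-master-03
172.18.1.6 test-kube-node-01
172.18.1.7 test-kube-node-02
172.18.1.8 test-kube-node-03"

# Preview the entries first; append to /etc/hosts once they look right.
printf '%s\n' "$nodes"
# printf '%s\n' "$nodes" | sudo tee -a /etc/hosts
```

Keeping the list in one variable makes it easy to reuse the same entries on every node without re-typing them.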
master
For how to allow a regular (non-root) user to run docker commands, see the basic environment preparation in Chapter 1.
```
wangpeng@test-kube-master-01:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/wangpeng/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/wangpeng/.ssh/id_rsa.
Your public key has been saved in /home/wangpeng/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:66NgD0FtJuv8ASIEQcdDoCyOklo+S5TJMgkTC8pDKwE wangpeng@test-kube-master-01
The key's randomart image is:
+---[RSA 2048]----+
|E=+o             |
|O+oo .           |
|O* + +           |
|B=.+ =           |
|O.B + S          |
|oB + o .         |
|. + * . .        |
| . + = o.        |
|  . +...         |
+----[SHA256]-----+
```
The usual way to distribute the key is:

```
ssh-copy-id wangpeng@test-kube-masterxx
```

(run this for every master node, including the local machine)
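The per-node ssh-copy-id calls can also be looped over the host list; a sketch (hostnames assumed from the /etc/hosts entries configured earlier, with `echo` left in as a dry run so the commands are only printed):

```shell
# Copy the public key to every master node, including the local machine.
# Remove the leading "echo" to actually run ssh-copy-id.
for host in test-kube-master-01 test-kube-master-02 test-kube-master-03; do
  echo ssh-copy-id "wangpeng@${host}"
done
```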
rke and rancher-cluster.yml configuration
rke binary installation
In a browser, open the RKE Releases page and download the latest RKE binary for your operating system.
Here we use the Linux (Intel/AMD) build: rke_linux-amd64

```
wget https://github.com/rancher/rke/releases/download/v0.2.4/rke_linux-amd64
```

Run the following command to make the binary executable:

```
chmod +x rke_linux-amd64
```
rke configuration file
There are two simple ways to create cluster.yml:

- start from a sample cluster.yml and update it for the nodes you will use;
- generate the configuration interactively with the `rke config` wizard.

rke config wizard
Here we only need to add the 3 master nodes, since Rancher will be installed on the k8s cluster formed by the masters.
```
./rke_linux-amd64 config --name cluster.yml
```
```
cat cluster.yml
# If you intened to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 172.18.1.4
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: wangpeng
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
- address: 172.18.1.5
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: wangpeng
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
- address: 172.18.1.9
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: wangpeng
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.2.24-rancher1
  alpine: rancher/rke-tools:v0.1.28
  nginx_proxy: rancher/rke-tools:v0.1.28
  cert_downloader: rancher/rke-tools:v0.1.28
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.28
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.0.0
  coredns: rancher/coredns-coredns:1.2.6
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.0.0
  kubernetes: rancher/hyperkube:v1.14.1-rancher1
  flannel: rancher/coreos-flannel:v0.10.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher1
  calico_node: rancher/calico-node:v3.4.0
  calico_cni: rancher/calico-cni:v3.4.0
  calico_controllers: ""
  calico_ctl: rancher/calico-ctl:v2.0.0
  canal_node: rancher/calico-node:v3.4.0
  canal_cni: rancher/calico-cni:v3.4.0
  canal_flannel: rancher/coreos-flannel:v0.10.0
  weave_node: weaveworks/weave-kube:2.5.0
  weave_cni: weaveworks/weave-npc:2.5.0
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:0.21.0-rancher3
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.4-rancher1
  metrics_server: rancher/metrics-server:v0.3.1
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
restore:
  restore: false
  snapshot_name: ""
dns: null
```
Run the RKE command to create the Kubernetes cluster:

```
./rke_linux-amd64 up --config cluster.yml
```

When it finishes, it should print: Finished building Kubernetes cluster successfully.
From one of the running containers based on the rancher/hyperkube:v1.14.1-rancher1 image, copy the hyperkube binary out, rename it to kubectl, and move it to /usr/bin/kubectl.
```
wangpeng@test-kube-master-01:~$ docker ps
CONTAINER ID   IMAGE                                  COMMAND                  CREATED          STATUS          PORTS   NAMES
af4de55e791f   rancher/nginx-ingress-controller       "/entrypoint.sh /ngi…"   14 minutes ago   Up 14 minutes           k8s_nginx-ingress-controller_nginx-ingress-controller-jw7w5_ingress-nginx_da8868d3-972c-11e9-bfa4-0017fa0337ea_0
a572977ce68b   rancher/pause:3.1                      "/pause"                 15 minutes ago   Up 15 minutes           k8s_POD_nginx-ingress-controller-jw7w5_ingress-nginx_da8868d3-972c-11e9-bfa4-0017fa0337ea_0
086c3aad92fe   rancher/coreos-flannel                 "/opt/bin/flanneld -…"   15 minutes ago   Up 15 minutes           k8s_kube-flannel_canal-hncc5_kube-system_b957f9ba-972c-11e9-bfa4-0017fa0337ea_0
8d48b7e7c492   rancher/calico-node                    "start_runit"            15 minutes ago   Up 15 minutes           k8s_calico-node_canal-hncc5_kube-system_b957f9ba-972c-11e9-bfa4-0017fa0337ea_0
8babca986d3a   rancher/pause:3.1                      "/pause"                 16 minutes ago   Up 16 minutes           k8s_POD_canal-hncc5_kube-system_b957f9ba-972c-11e9-bfa4-0017fa0337ea_0
e9cb76b6ee95   rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   16 minutes ago   Up 16 minutes           kube-proxy
0ed5730300bc   rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   16 minutes ago   Up 16 minutes           kubelet
2b75e5aa7802   rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   16 minutes ago   Up 14 minutes           kube-scheduler
46a1002715bd   rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   17 minutes ago   Up 14 minutes           kube-controller-manager
50c736c7b389   rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   17 minutes ago   Up 17 minutes           kube-apiserver
33387faa5469   rancher/rke-tools:v0.1.28              "/opt/rke-tools/rke-…"   18 minutes ago   Up 18 minutes           etcd-rolling-snapshots
3d348dea1e88   rancher/coreos-etcd:v3.2.24-rancher1   "/usr/local/bin/etcd…"   18 minutes ago   Up 18 minutes           etcd
wangpeng@test-kube-master-01:~$ docker cp e9cb76b6ee95:/hyperkube ./
wangpeng@test-kube-master-01:~$ ls
cluster.rkestate  cluster.yml  hyperkube  kube_config_cluster.yml  rke_linux-amd64
wangpeng@test-kube-master-01:~$ sudo mv hyperkube /usr/bin/kubectl
```
```
wangpeng@test-kube-master-01:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
Use the kube_config_cluster.yml file generated when the rke install completed: either set the KUBECONFIG environment variable to its path, or copy it to ~/.kube/config.
```
wangpeng@test-kube-master-01:~$ mkdir -pv ~/.kube
mkdir: created directory '/home/wangpeng/.kube'
wangpeng@test-kube-master-01:~$ cp kube_config_cluster.yml ~/.kube/config
```
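Alternatively, instead of copying the file into ~/.kube/config, kubectl can be pointed at the generated kubeconfig via the KUBECONFIG variable; a sketch (the path assumes you run it from the directory containing the file, and it only affects the current shell session):

```shell
# Point kubectl at the RKE-generated kubeconfig for this shell session.
export KUBECONFIG="$PWD/kube_config_cluster.yml"
echo "$KUBECONFIG"
```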
```
wangpeng@test-kube-master-01:~$ kubectl get nodes
NAME         STATUS   ROLES                      AGE   VERSION
172.18.1.4   Ready    controlplane,etcd,worker   17m   v1.14.1
172.18.1.5   Ready    controlplane,etcd,worker   17m   v1.14.1
172.18.1.9   Ready    controlplane,etcd,worker   17m   v1.14.1
```
```
wangpeng@test-kube-master-01:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-775b55c884-wfdgm     1/1     Running     0          23m
ingress-nginx   nginx-ingress-controller-jw7w5            1/1     Running     0          23m
ingress-nginx   nginx-ingress-controller-sg8gs            1/1     Running     0          23m
ingress-nginx   nginx-ingress-controller-tstp5            1/1     Running     0          23m
kube-system     canal-hncc5                               2/2     Running     0          24m
kube-system     canal-qnxx4                               2/2     Running     0          24m
kube-system     canal-sjpbv                               2/2     Running     0          24m
kube-system     kube-dns-869c7b8d96-rtpmn                 3/3     Running     0          23m
kube-system     kube-dns-autoscaler-78dbfd75b7-twpbm      1/1     Running     0          23m
kube-system     metrics-server-7f6bd4c888-gjwz8           1/1     Running     0          23m
kube-system     rke-ingress-controller-deploy-job-srz6h   0/1     Completed   0          23m
kube-system     rke-kube-dns-addon-deploy-job-lsnm4       0/1     Completed   0          24m
kube-system     rke-metrics-addon-deploy-job-k74sn        0/1     Completed   0          23m
kube-system     rke-network-plugin-deploy-job-t5btw       0/1     Completed   0          24m
```
Save copies of the kube_config_cluster.yml and cluster.yml files; you will need them to maintain and upgrade your Rancher instance.
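A small helper can take those backups; a sketch (the backup directory name is an assumption, and cluster.rkestate is also included since rke needs it for later `rke up` runs against the same cluster):

```shell
# Back up the files needed to maintain and upgrade the cluster later.
backup_dir="$HOME/rke-backup-$(date +%Y%m%d)"
mkdir -p "$backup_dir"
for f in cluster.yml kube_config_cluster.yml cluster.rkestate; do
  # Copy only the files that exist in the current directory.
  [ -f "$f" ] && cp "$f" "$backup_dir/" || true
done
echo "backed up to $backup_dir"
```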