1. Prepare every host with the public key in /root/.ssh/authorized_keys; the private key is stored on the first host at /root/.ssh/id_rsa (see the key-distribution sketch after this list).
2. Make sure each host's private-network IP address is fixed.
3. Configure the DNS server so that openshift.iqyuan.com points to the HAProxy public IP.
4. Configure the DNS server so that *.apps.iqyuan.com points to the HAProxy public IP.
5. Open firewall ports 8443, 80 and 443 to the public network; opening them is handled by the cloud platform.
6. Set each host's hostname in advance, preferably including the domain name, e.g. master1.iqyuan.com
The command is: hostnamectl set-hostname master1.iqyuan.com
The hostname can also be set in advance through the orchestration feature provided by the cloud platform.
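For step 1, a minimal sketch of generating and distributing the key pair from the first host, assuming the hostnames already resolve (otherwise use the private IPs from the /etc/hosts block in the stage 1 script) and that password login as root is still possible at this point:

# Run once on the first host: generate the key pair that Ansible will use.
ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa
for host in \
  haproxy1.iqyuan.com \
  master1.iqyuan.com master2.iqyuan.com master3.iqyuan.com \
  node1.iqyuan.com node2.iqyuan.com node3.iqyuan.com \
  infra-node1.iqyuan.com infra-node2.iqyuan.com infra-node3.iqyuan.com; do
  # Appends the public key to /root/.ssh/authorized_keys on each host and fixes permissions.
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host
done

If the cloud platform already injects a key pair for you, skip the generation and only distribute the existing public key.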
// This tutorial requires operations staff who are proficient in Linux. Make sure you can read and understand the scripts below; any overlooked configuration item may cause the rest of the installation to fail.
Stage 1 script, run on the first host:
# Install Ansible and basic tools, then extract the Red Hat CA certificate that
# the OpenShift packages expect under /etc/rhsm/ca/.
yum install -y epel-release
yum -y install ansible lrzsz telnet wget pyOpenSSL
wget http://mirrors.ustc.edu.cn/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
mkdir -p /etc/rhsm/ca/
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

# Install the private key on this host. The public key must already be in
# /root/.ssh/authorized_keys on every host; the private key must have mode 600.
cat <<EOF > ~/.ssh/id_rsa
-----BEGIN RSA PRIVATE KEY-----
(paste the private key here)
-----END RSA PRIVATE KEY-----
EOF
chmod 600 ~/.ssh/id_rsa

# Disable strict host key checking for outgoing SSH (so scp to new hosts does not
# prompt) and raise Ansible's parallelism to 15 forks.
sed -i 's/GSSAPIAuthentication yes/StrictHostKeyChecking no/g' /etc/ssh/ssh_config
sed -i 's/#forks = 5/forks = 15/g' /etc/ansible/ansible.cfg

cat <<EOF > /etc/ansible/hosts
master1.iqyuan.com
[okd]
haproxy1.iqyuan.com
master2.iqyuan.com
master3.iqyuan.com
node1.iqyuan.com
node2.iqyuan.com
node3.iqyuan.com
infra-node1.iqyuan.com
infra-node2.iqyuan.com
infra-node3.iqyuan.com
EOF

cat <<EOF > /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.250 node1.iqyuan.com
192.168.0.251 node2.iqyuan.com
192.168.0.3   node3.iqyuan.com
192.168.0.1   infra-node1.iqyuan.com
192.168.0.252 infra-node2.iqyuan.com
192.168.0.2   infra-node3.iqyuan.com
192.168.0.249 master1.iqyuan.com
192.168.0.5   master2.iqyuan.com
192.168.0.6   master3.iqyuan.com
192.168.0.4   haproxy1.iqyuan.com openshift.iqyuan.com
EOF

# Distribute /etc/hosts and the RHSM CA directory to every host.
for host in \
  haproxy1.iqyuan.com \
  master1.iqyuan.com \
  master2.iqyuan.com \
  master3.iqyuan.com \
  node1.iqyuan.com \
  node2.iqyuan.com \
  node3.iqyuan.com \
  infra-node1.iqyuan.com \
  infra-node2.iqyuan.com \
  infra-node3.iqyuan.com; \
do scp /etc/hosts $host:/etc/ ; \
done

for host in \
  haproxy1.iqyuan.com \
  master1.iqyuan.com \
  master2.iqyuan.com \
  master3.iqyuan.com \
  node1.iqyuan.com \
  node2.iqyuan.com \
  node3.iqyuan.com \
  infra-node1.iqyuan.com \
  infra-node2.iqyuan.com \
  infra-node3.iqyuan.com; \
do scp -r /etc/rhsm/ $host:/etc/ ; \
done

# Wipe the spare data disks, switch SELinux to enforcing and update every host, then reboot them all.
ansible all -m shell -a "wipefs -a /dev/vdb; wipefs -a /dev/vdc; sed -i 's/SELINUX=disabled/SELINUX=enforcing/g' /etc/selinux/config; yum update -y"
ansible okd -m shell -a "systemctl reboot"
# Pause for 2 seconds
sleep 2
reboot   # finally reboot this first host as well
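Before moving on to stage 2, it is worth confirming that key-based SSH and the Ansible inventory actually work; a quick check once the hosts are back up:

# Every host should answer "pong"; failures usually point at a key, /etc/hosts or hostname problem.
ansible all -m ping
# Confirm hostnames and that SELinux is enforcing after the reboot.
ansible all -m shell -a "hostname; getenforce"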
Stage 2 script:
# Install the base packages required by OpenShift on every host.
ansible all -m shell -a "yum install -y telnet lsof wget zip unzip lrzsz git net-tools bind-utils yum-utils bridge-utils bash-completion kexec-tools sos psacct docker glusterfs-fuse python-passlib httpd-tools java-1.8.0-openjdk-headless"

# Allow containers to use FUSE filesystems and configure a registry mirror for Docker.
ansible all -m shell -a "setsebool -P virt_sandbox_use_fusefs on; setsebool -P virt_use_fusefs on; echo { \\\"registry-mirrors\\\": [\\\"https://bo30b6ic.mirror.aliyuncs.com/\\\"] } > /etc/docker/daemon.json "

# Change the Docker storage location: put it on the dedicated /dev/vdb disk.
cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS="/dev/vdb"
VG="docker-vg"
DATA_SIZE="95%VG"
STORAGE_DRIVER=overlay2
CONTAINER_ROOT_LV_NAME="dockerlv"
CONTAINER_ROOT_LV_MOUNT_PATH="/var/lib/docker"
EOF

for host in \
  haproxy1.iqyuan.com \
  master1.iqyuan.com \
  master2.iqyuan.com \
  master3.iqyuan.com \
  node1.iqyuan.com \
  node2.iqyuan.com \
  node3.iqyuan.com \
  infra-node1.iqyuan.com \
  infra-node2.iqyuan.com \
  infra-node3.iqyuan.com; \
do scp /etc/sysconfig/docker-storage-setup $host:/etc/sysconfig/ ; \
done

ansible all -m shell -a "docker-storage-setup; systemctl enable NetworkManager; systemctl enable docker; systemctl start NetworkManager; systemctl start docker; docker pull cockpit/kubernetes:latest"

# Alibaba Cloud is a special case: its mirror cache is flawed and far too slow,
# so copy this host's CentOS-Base.repo to every host.
for host in \
  haproxy1.iqyuan.com \
  master1.iqyuan.com \
  master2.iqyuan.com \
  master3.iqyuan.com \
  node1.iqyuan.com \
  node2.iqyuan.com \
  node3.iqyuan.com \
  infra-node1.iqyuan.com \
  infra-node2.iqyuan.com \
  infra-node3.iqyuan.com; \
do scp /etc/yum.repos.d/CentOS-Base.repo $host:/etc/yum.repos.d/ ; \
done

# Fetch the openshift-ansible 3.9 release and unpack it as ~/openshift-ansible.
cd
wget https://github.com/openshift/openshift-ansible/archive/openshift-ansible-3.9.40-1.tar.gz
tar -xzf openshift-ansible-3.9.40-1.tar.gz
mv openshift-ansible-openshift-ansible-3.9.40-1 openshift-ansible
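After docker-storage-setup has run, a quick sanity check (a sketch based on the volume names in the docker-storage-setup file above) confirms that Docker is using overlay2 on the dedicated disk:

# The dockerlv logical volume in docker-vg should be mounted at /var/lib/docker.
ansible all -m shell -a "lvs docker-vg; df -h /var/lib/docker"
# Docker should report the overlay2 storage driver.
ansible all -m shell -a "docker info 2>/dev/null | grep -i 'storage driver'"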
Next, upload the playbook parameter (inventory) file:
rz ~/inventory, uploaded from a Windows machine.
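The actual ~/inventory file is not reproduced in this tutorial. Purely as an illustration of its expected shape (the group names are the standard openshift-ansible 3.9 groups, but every variable value below is an assumption to adapt, not the file used here), such an inventory might look roughly like this:

[OSEv3:children]
masters
etcd
nodes
lb

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_release=v3.9
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift.iqyuan.com
openshift_master_cluster_public_hostname=openshift.iqyuan.com
openshift_master_default_subdomain=apps.iqyuan.com

[masters]
master[1:3].iqyuan.com

[etcd]
master[1:3].iqyuan.com

[lb]
haproxy1.iqyuan.com

[nodes]
master[1:3].iqyuan.com
infra-node[1:3].iqyuan.com openshift_node_labels="{'region': 'infra'}"
node[1:3].iqyuan.com openshift_node_labels="{'region': 'primary'}"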
Stage 3 installation script:
ansible-playbook -i ~/inventory ~/openshift-ansible/playbooks/prerequisites.yml

# Point the generated Origin repo at the USTC mirror.
ansible all -m shell -a "sed -i 's/mirror.centos.org/mirrors.ustc.edu.cn/g' /etc/yum.repos.d/CentOS-OpenShift-Origin.repo"

# If the first run of this playbook hits errors, it is recommended to run it in
# separate steps to avoid wasting time.
ansible-playbook -i ~/inventory ~/openshift-ansible/playbooks/deploy_cluster.yml

ansible all -m shell -a "firewall-cmd --zone=public --add-service=http --add-service=https --permanent && firewall-cmd --reload"
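Once deploy_cluster.yml completes, a quick check from master1 can confirm the control plane is healthy before adjusting HAProxy (a sketch; the healthz endpoint is served by the master API behind the existing 8443 frontend):

oc get nodes                                    # all nodes should be Ready
oc get pods --all-namespaces | grep -v Running  # lists anything not yet Running
curl -k https://openshift.iqyuan.com:8443/healthz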
Modify the HAProxy configuration to add the port 80 and 443 mappings.
The modified HAProxy configuration, for reference:
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    maxconn     20000
    log         /dev/log local0 info
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
#    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          300s
    timeout server          300s
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 20000

listen stats
    bind :9000
    mode http
    stats enable
    stats uri /

frontend atomic-openshift-api
    bind *:8443
    default_backend atomic-openshift-api
    mode tcp
    option tcplog

backend atomic-openshift-api
    balance source
    mode tcp
    server master0 192.168.0.249:8443 check
    server master1 192.168.0.5:8443 check
    server master2 192.168.0.6:8443 check

frontend atomic-openshift-80
    bind *:80
    default_backend atomic-openshift-80
    mode tcp
    option tcplog

backend atomic-openshift-80
    balance source
    mode tcp
    server infra-node1 infra-node1.iqyuan.com:80 check
    server infra-node2 infra-node2.iqyuan.com:80 check
    server infra-node3 infra-node3.iqyuan.com:80 check

frontend atomic-openshift-443
    bind *:443
    default_backend atomic-openshift-443
    mode tcp
    option tcplog

backend atomic-openshift-443
    balance source
    mode tcp
    server infra-node1 infra-node1.iqyuan.com:443 check
    server infra-node2 infra-node2.iqyuan.com:443 check
    server infra-node3 infra-node3.iqyuan.com:443 check
After the changes are in place, restart the service: systemctl restart haproxy.service
Add the firewall rules for the proxy service:
firewall-cmd --zone=public --add-service=http --add-service=https --permanent && firewall-cmd --reload
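With the new frontends and firewall rules in place, reachability through HAProxy can be spot-checked from the HAProxy host and from outside; test.apps.iqyuan.com below is just a placeholder name under the wildcard domain:

ss -tlnp | grep haproxy                  # 80, 443 and 8443 should all be listening
curl -kI https://openshift.iqyuan.com:8443/healthz
curl -kI https://test.apps.iqyuan.com/   # the router answers (a 503 is expected until a route exists)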
Continue with the installation of the remaining components:
ansible-playbook -i ~/inventory ~/openshift-ansible/playbooks/openshift-metrics/config.yml -e openshift_metrics_install_metrics=true

ansible-playbook -i ~/inventory ~/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=true
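In 3.9 the metrics components land in the openshift-infra namespace and the logging stack in the logging namespace; a quick status check after both playbooks finish might be:

oc get pods -n openshift-infra   # hawkular, cassandra and heapster pods
oc get pods -n logging           # elasticsearch, fluentd and kibana pods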