Docker + k8s: a shared local image registry


Component              Description
kube-dns               Provides DNS for the whole cluster
Ingress Controller     Provides an external entry point for services
Heapster               Provides resource monitoring
Dashboard              Provides a GUI
Federation             Provides clusters that span availability zones
Fluentd-elasticsearch  Provides cluster-wide log collection, storage, and querying

Configuration

Host         Role
10.0.0.202   master
10.0.0.203   node
10.0.0.204   node

Every host gets the same name mappings (note that > overwrites /etc/hosts; use >> instead if you need to keep the existing localhost entries):

cat > /etc/hosts <<EOF
10.0.0.202 purple
10.0.0.203 yellow
10.0.0.204 blue
EOF
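Since all three hosts need identical entries, a small idempotent sketch can append them safely instead of overwriting the file (HOSTS_FILE points at a temp path here so it can run without root; use /etc/hosts on the real machines):

```shell
# Append the cluster name mappings only when they are missing (idempotent).
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.demo}"   # point at /etc/hosts on the real machines
rm -f "$HOSTS_FILE" && touch "$HOSTS_FILE"    # start fresh for the demo only

add_cluster_hosts() {
    for entry in "10.0.0.202 purple" "10.0.0.203 yellow" "10.0.0.204 blue"; do
        grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
    done
}

add_cluster_hosts
add_cluster_hosts   # a second run adds nothing, so existing entries are never duplicated
```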

Setting up the k8s cluster

202: install etcd on the master node

yum install etcd -y
vim /etc/etcd/etcd.conf
line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.202:2379"
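The two vim edits can also be applied non-interactively with sed; a sketch that operates on a seeded copy of the file (point CONF at /etc/etcd/etcd.conf on the master — the key names match the stock CentOS package, but check your file before trusting the substitutions):

```shell
CONF="${CONF:-/tmp/etcd.conf.demo}"

# Seed a copy with the stock defaults, for demonstration only.
cat > "$CONF" <<'EOF'
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
EOF

# Listen on all interfaces; advertise the master's own address.
sed -i 's|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"|' "$CONF"
sed -i 's|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.202:2379"|' "$CONF"
```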

systemctl start etcd.service
systemctl enable etcd.service

# smoke-test the key-value store
etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0

# check cluster health
etcdctl -C http://10.0.0.202:2379 cluster-health

202: install kubernetes on the master node

yum install kubernetes-master.x86_64 -y

vim /etc/kubernetes/apiserver
line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
line 11: KUBE_API_PORT="--port=8080"
line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.202:2379"
line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"  ## drops ServiceAccount from the stock list

vim /etc/kubernetes/config
line 22: KUBE_MASTER="--master=http://10.0.0.202:8080"
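The same pattern of point edits recurs across /etc/kubernetes/apiserver, /etc/kubernetes/config, and later the node-side files, so a small reusable helper may be handy; a sketch, demonstrated on a seeded temp copy rather than the real file:

```shell
# set_opt FILE KEY VALUE — replace the whole KEY=... line with KEY=VALUE
set_opt() {
    sed -i "s|^$2=.*|$2=$3|" "$1"
}

APISERVER="${APISERVER:-/tmp/apiserver.demo}"   # /etc/kubernetes/apiserver for real
cat > "$APISERVER" <<'EOF'
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
EOF

set_opt "$APISERVER" KUBE_API_ADDRESS '"--insecure-bind-address=0.0.0.0"'
set_opt "$APISERVER" KUBE_ETCD_SERVERS '"--etcd-servers=http://10.0.0.202:2379"'
```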

systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service

Check that the services came up correctly

[root@k8s-master ~]# kubectl get componentstatus 
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

203/204: install kubernetes on the nodes

yum install kubernetes-node.x86_64 -y

vim /etc/kubernetes/config
line 22: KUBE_MASTER="--master=http://10.0.0.202:8080"

vim /etc/kubernetes/kubelet
line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
line 8:  KUBELET_PORT="--port=10250"
line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.203"  ## on node 204, use 10.0.0.204
line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.202:8080"

systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service

202: verify from the master node

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
10.0.0.203   Ready     6m
10.0.0.204   Ready     3s
  • If you instead see an error:
[root@purple ~]# kubectl get nodes
No resources found.
Check that line 23 of /etc/kubernetes/apiserver was changed to the format shown above,
check that the hosts file resolves the node names,
or restart all of the services on every node and try again.
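"Restart everything and retry" is a multi-service dance with a fixed order (etcd and the apiserver before the rest on the master), so a helper may save typing. A sketch; setting DRY_RUN=1 makes it print the commands instead of executing them:

```shell
MASTER_SERVICES="etcd kube-apiserver kube-controller-manager kube-scheduler"
NODE_SERVICES="kubelet kube-proxy"

restart_all() {   # usage: restart_all master|node
    local services s
    if [ "$1" = "master" ]; then services="$MASTER_SERVICES"; else services="$NODE_SERVICES"; fi
    for s in $services; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "systemctl restart $s.service"   # dry run: show what would be restarted
        else
            systemctl restart "$s.service"
        fi
    done
}

DRY_RUN=1 restart_all master
```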

202/203/204: configure the flannel network on all nodes

yum install flannel -y
sed -i 's#http://127.0.0.1:2379#http://10.0.0.202:2379#g' /etc/sysconfig/flanneld

## master node:
etcdctl mk /atomic.io/network/config   '{ "Network": "172.16.0.0/16" }'
yum install docker -y
systemctl enable flanneld.service 
systemctl restart flanneld.service 
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

## nodes:
systemctl enable flanneld.service 
systemctl restart flanneld.service 
systemctl restart docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service

202: configure the master as the image registry

# all nodes
vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.0.202:5000'

systemctl restart docker
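The OPTIONS line above is the CentOS /etc/sysconfig/docker style; on newer Docker releases the same two settings normally live in /etc/docker/daemon.json instead. A sketch (written to a temp path so it can run unprivileged; put it at /etc/docker/daemon.json and restart docker for real):

```shell
DAEMON_JSON="${DAEMON_JSON:-/tmp/daemon.json.demo}"   # /etc/docker/daemon.json for real

# Mirror and insecure-registry settings in daemon.json form.
cat > "$DAEMON_JSON" <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["10.0.0.202:5000"]
}
EOF
```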

# create the registry on the master node
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry  registry

Verify that the registry works

# on 202, tag an image and push it to the registry
[root@purple ~]# docker tag docker.io/busybox:latest 10.0.0.202:5000/docker.io/busybox:latest
[root@purple ~]# docker push 10.0.0.202:5000/docker.io/busybox:latest
The push refers to a repository [10.0.0.202:5000/docker.io/busybox]
1da8e4c8d307: Pushed 
latest: digest: sha256:679b1c1058c1f2dc59a3ee70eed986a88811c0205c8ceea57cec5f22d2c3fbb1 size: 527
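The tag-then-push pair repeats for every image you mirror, so a tiny helper that prints the two commands for a given image can help (REGISTRY is this cluster's registry address; pipe the output to sh to actually run it):

```shell
REGISTRY="${REGISTRY:-10.0.0.202:5000}"

# Print the docker tag/push pair needed to mirror one image into the local registry.
mirror_cmds() {
    echo "docker tag $1 $REGISTRY/$1"
    echo "docker push $REGISTRY/$1"
}

mirror_cmds docker.io/busybox:latest
```

Afterwards, `curl http://10.0.0.202:5000/v2/_catalog` lists the repositories the registry now holds (the `/v2/_catalog` endpoint is part of the registry's standard HTTP API).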
# on 203/204, try pulling the image from the registry
[root@yellow ~]#  docker pull 10.0.0.202:5000/docker.io/busybox:latest
Trying to pull repository 10.0.0.202:5000/docker.io/busybox ... 
latest: Pulling from 10.0.0.202:5000/docker.io/busybox
0f8c40e1270f: Pull complete 
Digest: sha256:679b1c1058c1f2dc59a3ee70eed986a88811c0205c8ceea57cec5f22d2c3fbb1
Status: Downloaded newer image for 10.0.0.202:5000/docker.io/busybox:latest
## the pull succeeded, so the local registry is shared by all three hosts

Cross-host container communication with Docker macvlan

## create a macvlan network on 203
docker network create --driver macvlan --subnet 10.0.0.0/24 --gateway 10.0.0.254 -o parent=eth0 macvlan_1
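With static --ip picks on a subnet shared by several hosts, two hosts can hand out the same address. Docker's macvlan driver accepts an --ip-range flag to carve a per-host pool out of the subnet; a sketch of a helper that prints the create command for host number N (the /28 slicing scheme here is an assumption for illustration, not from the original setup):

```shell
SUBNET="10.0.0.0/24"
GATEWAY="10.0.0.254"

# Print the network-create command giving host N its own /28 pool (16 addresses).
macvlan_create_cmd() {
    echo "docker network create --driver macvlan --subnet $SUBNET --gateway $GATEWAY --ip-range 10.0.0.$(($1 * 16))/28 -o parent=eth0 macvlan_1"
}

macvlan_create_cmd 1   # first host gets the pool 10.0.0.16/28
```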
## put eth0 into promiscuous mode
ip link set eth0 promisc on
## start a container on the macvlan network with a static IP
[root@yellow ~]# docker run -it --network macvlan_1 --ip=10.0.0.5 10.0.0.202:5000/docker.io/busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:0a:00:00:05 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.5/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe00:5/64 scope link 
       valid_lft forever preferred_lft forever
/ # 
The new container came up with the IP we asked for.
Now ping it from 202/204 to check reachability:
[root@purple ~]# ping -c 2 10.0.0.5
PING 10.0.0.5 (10.0.0.5) 56(84) bytes of data.
64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=0.394 ms
64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=0.919 ms
--- 10.0.0.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.394/0.656/0.919/0.263 ms
Now create a container with a static IP on 204 as well and see whether it can reach 10.0.0.5:
[root@blue ~]# docker network create --driver macvlan --subnet 10.0.0.0/24 --gateway 10.0.0.254 -o parent=eth0 macvlan_1
ed9af47d206c7790959ad6f9a560f45fd2e42144ff36763750c129d0ea52a335
[root@blue ~]# ip link set eth0 promisc on
[root@blue ~]# docker run -it --network macvlan_1 --ip=10.0.0.6 10.0.0.202:5000/docker.io/busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:0a:00:00:06 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.6/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe00:6/64 scope link 
       valid_lft forever preferred_lft forever
/ # ping -c 3 10.0.0.5
PING 10.0.0.5 (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: seq=0 ttl=64 time=2.725 ms
64 bytes from 10.0.0.5: seq=1 ttl=64 time=0.435 ms
64 bytes from 10.0.0.5: seq=2 ttl=64 time=0.352 ms

--- 10.0.0.5 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.352/1.170/2.725 ms
/ # 
Containers on two different hosts, placed on the same subnet, can talk to each other directly.