Kubernetes (commonly written "k8s") is a container cluster management system open-sourced by Google. Its design goal is to provide a platform for automated deployment, scaling, and operation of application containers across clusters of hosts. Kubernetes is typically used together with Docker, orchestrating multiple hosts that run Docker containers; it is not limited to Docker, though, and also supports Rocket (rkt), an alternative container technology. **Features:**
The Master node consists of four main components: APIServer, scheduler, controller manager, and etcd.
Each Node runs three main components: kubelet, kube-proxy, and the runtime. The runtime is the container execution environment; Kubernetes currently supports Docker and rkt.
A Pod is the smallest unit of scheduling in k8s. Each Pod runs one or more closely related application containers, which share the IP address and volumes of a special "pause" container; this hard-to-kill pause container serves as the Pod's root container, and its state represents the state of the whole container group. Once created, a Pod is stored in etcd, then scheduled by the Master and bound to a Node, where that Node's kubelet instantiates it. Each Pod is assigned its own Pod IP; Pod IP + containerPort together form an endpoint.
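As a minimal sketch, a Pod manifest might look like the following (the nginx image and all names here are illustrative, not part of the cluster built below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx          # label used later by a Service selector
spec:
  containers:
  - name: nginx
    image: nginx:1.13   # illustrative image
    ports:
    - containerPort: 80 # Pod IP + this port form an endpoint
```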
A Service is what exposes an application. Pods have life cycles and their own IP addresses; as Pods are created and destroyed, each application must be able to track those changes. This is where Services come in: a Service, defined in YAML or JSON, is a logical grouping of Pods selected by some policy. More importantly, the Pods' individual IPs are exposed to the network through the Service.
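A minimal Service sketch that groups Pods by label (all names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx     # selects every Pod carrying this label
  ports:
  - port: 80       # Service port
    targetPort: 80 # container port on the selected Pods
```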
There are several ways to install Kubernetes; this article covers both a binary installation and a kubeadm-based deployment.
| Role | Hostname | IP address | Packages | OS |
| -------- | -------- | -------- | -------- | -------- |
| kubernetes server | master | 172.16.0.67 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler | CentOS 7.3 64-bit |
| kubernetes node1 | node01 | 172.16.0.66 | kubelet, kube-proxy, docker | CentOS 7.3 64-bit |
| kubernetes node2 | node02 | 172.16.0.68 | kubelet, kube-proxy, docker | CentOS 7.3 64-bit |
Software versions
kubernetes: github.com/kubernetes/…
server binaries: dl.k8s.io/v1.8.13/kub…
node binaries: dl.k8s.io/v1.8.13/kub…
Firewall configuration
systemctl stop firewalld
systemctl disable firewalld
systemctl mask firewalld
Modify the hostnames and add hosts entries.
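For example, /etc/hosts on every machine could contain the addresses from the host table above:

```
172.16.0.67 master
172.16.0.66 node01
172.16.0.68 node02
```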
yum install etcd -y
Configure etcd, then start the service and enable it at boot.
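A minimal sketch of /etc/etcd/etcd.conf for the single-node etcd used here (these variable names are the ones shipped with the CentOS etcd RPM; the client URL matches the --etcd-servers flag set below):

```ini
# /etc/etcd/etcd.conf (fragment)
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.0.67:2379"
```

Then enable and start it: systemctl enable etcd && systemctl start etcd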
cd /tmp && wget -c https://dl.k8s.io/v1.8.13/kubernetes-server-linux-amd64.tar.gz
tar -zxf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg}
mv kubernetes/server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager,kubectl} /opt/kubernetes/bin
cat > /opt/kubernetes/cfg/kube-apiserver<<EOF
KUBE_LOGTOSTDERR='--logtostderr=true'
KUBE_LOG_LEVEL="--v=4"
KUBE_ETCD_SERVERS="--etcd-servers=http://172.16.0.67:2379"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBE_ADVERTISE_ADDR="--advertise-address=172.16.0.67"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.10.10.0/24"
EOF
cat >/lib/systemd/system/kube-apiserver.service<<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
#ExecStart=/opt/kubernetes/bin/kube-apiserver ${KUBE_APISERVER_OPTS}
ExecStart=/opt/kubernetes/bin/kube-apiserver \
\${KUBE_LOGTOSTDERR} \
\${KUBE_LOG_LEVEL} \
\${KUBE_ETCD_SERVERS} \
\${KUBE_API_ADDRESS} \
\${KUBE_API_PORT} \
\${KUBE_ADVERTISE_ADDR} \
\${KUBE_ALLOW_PRIV} \
\${KUBE_SERVICE_ADDRESSES}
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
cat >/opt/kubernetes/cfg/kube-scheduler <<EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=172.16.0.67:8080"
KUBE_LEADER_ELECT="--leader-elect"
EOF
cat>/lib/systemd/system/kube-scheduler.service<<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \
\${KUBE_LOGTOSTDERR} \
\${KUBE_LOG_LEVEL} \
\${KUBE_MASTER} \
\${KUBE_LEADER_ELECT}
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
cat > /opt/kubernetes/cfg/kube-controller-manager<<EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=172.16.0.67:8080"
EOF
cat > /lib/systemd/system/kube-controller-manager.service<<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
\${KUBE_LOGTOSTDERR} \
\${KUBE_LOG_LEVEL} \
\${KUBE_MASTER} \
\${KUBE_LEADER_ELECT}
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
The master is now fully configured. If anything went wrong, inspect the failing service's logs with journalctl -u <service-name>. For convenience, add the binaries to PATH:
echo "export PATH=\$PATH:/opt/kubernetes/bin" >> /etc/profile
source /etc/profile
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum install docker-ce -y
cd /tmp && wget https://dl.k8s.io/v1.8.13/kubernetes-node-linux-amd64.tar.gz
tar -zxf kubernetes-node-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg}
mv kubernetes/node/bin/{kubelet,kube-proxy} /opt/kubernetes/bin/
cat > /opt/kubernetes/cfg/kubelet.kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://172.16.0.67:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
EOF
cat> /opt/kubernetes/cfg/kubelet <<EOF
# log to stderr
KUBE_LOGTOSTDERR="--logtostderr=true"
# log level
KUBE_LOG_LEVEL="--v=4"
# kubelet listen address
NODE_ADDRESS="--address=172.16.0.66"
# kubelet port
NODE_PORT="--port=10250"
# override the node name
NODE_HOSTNAME="--hostname-override=172.16.0.66"
# kubeconfig path, pointing at the API server
KUBELET_KUBECONFIG="--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig"
# allow containers to request privileged mode (default false)
KUBE_ALLOW_PRIV="--allow-privileged=false"
# DNS settings
KUBELET_DNS_IP="--cluster-dns=10.10.10.2"
KUBELET_DNS_DOMAIN="--cluster-domain=cluster.local"
# do not fail when swap is enabled
KUBELET_SWAP="--fail-swap-on=false"
EOF
cat>/lib/systemd/system/kubelet.service<<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
\${KUBE_LOGTOSTDERR} \
\${KUBE_LOG_LEVEL} \
\${NODE_ADDRESS} \
\${NODE_PORT} \
\${NODE_HOSTNAME} \
\${KUBELET_KUBECONFIG} \
\${KUBE_ALLOW_PRIV} \
\${KUBELET_DNS_IP} \
\${KUBELET_DNS_DOMAIN} \
\${KUBELET_SWAP}
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
Start the service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
Install kube-proxy on the node
Create the configuration file
cat>/opt/kubernetes/cfg/kube-proxy <<EOF
# log to stderr
KUBE_LOGTOSTDERR="--logtostderr=true"
# log level
KUBE_LOG_LEVEL="--v=4"
# override the node name
NODE_HOSTNAME="--hostname-override=172.16.0.66"
# API server address
KUBE_MASTER="--master=http://172.16.0.67:8080"
EOF
Create the systemd service file
cat > /lib/systemd/system/kube-proxy.service<<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
\${KUBE_LOGTOSTDERR} \
\${KUBE_LOG_LEVEL} \
\${NODE_HOSTNAME} \
\${KUBE_MASTER}
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Start the service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
Additional nodes join the cluster the same way as node01; just change kubelet's --address and --hostname-override options to the local node's IP.
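The per-node edit can be scripted. The sketch below rewrites those two options with sed in a temporary copy of the config, so it is safe to run anywhere; on a real node you would edit /opt/kubernetes/cfg/kubelet in place. Node02's IP, 172.16.0.68, comes from the host table above.

```shell
# Work on a temp copy of the two per-node kubelet options from node01.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
NODE_ADDRESS="--address=172.16.0.66"
NODE_HOSTNAME="--hostname-override=172.16.0.66"
EOF

# Replace node01's IP with the local node's IP (node02 here).
NEW_IP=172.16.0.68
sed -i "s/172\.16\.0\.66/${NEW_IP}/g" "$CFG"

# Show the adapted config.
cat "$CFG"
```

After the edit, restart kubelet (systemctl restart kubelet) on that node.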
yum install -y docker
systemctl enable docker && systemctl start docker
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
systemctl daemon-reload
systemctl restart kubelet
kubeadm init --pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address specifies which of the Master's interfaces is used to communicate with the other cluster nodes. If the Master has multiple interfaces it is best to specify one explicitly; otherwise kubeadm automatically picks the interface with the default gateway. --pod-network-cidr specifies the Pod network range. Kubernetes supports several network add-ons, and each has its own requirements for --pod-network-cidr; we use 10.244.0.0/16 here because we will deploy flannel, which requires this CIDR. When the command completes it prints instructions for joining other nodes to the cluster; record the token value, or the entire join command.
# create a user
useradd xuel
passwd xuel
# switch to the regular user
su - xuel
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# configure the environment variable
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "source <(kubectl completion bash)" >> ~/.bashrc
It is recommended to run kubectl as a regular user.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Any node joining the cluster also needs docker and kubeadm installed and the kubelet service started, just like the master; those steps are omitted here.
kubeadm join 172.16.0.64:6443 --token dt5tet.26peoqdwftx7yafv --discovery-token-ca-cert-hash sha256:5b4030d19662122204ff78a4fd0ac496b739a9945517deca67a9384f0bab2b21
kubectl get nodes
kubectl get pod --all-namespaces
git clone https://github.com/redhatxl/k8s-prometheus-grafana.git
docker pull prom/node-exporter
docker pull prom/prometheus:v2.0.0
docker pull grafana/grafana:4.2.0
kubectl create -f node-exporter.yaml
kubectl create -f k8s-prometheus-grafana/prometheus/rbac-setup.yaml
kubectl create -f k8s-prometheus-grafana/prometheus/configmap.yaml
kubectl create -f k8s-prometheus-grafana/prometheus/prometheus.deploy.yml
kubectl create -f k8s-prometheus-grafana/prometheus/prometheus.svc.yml
kubectl create -f k8s-prometheus-grafana/grafana/grafana-deploy.yaml
kubectl create -f k8s-prometheus-grafana/grafana/grafana-svc.yaml
kubectl create -f k8s-prometheus-grafana/grafana/grafana-ing.yaml
Check node-exporter at http://47.52.166.125:31672/metrics
Prometheus is exposed on NodePort 30003; visiting http://47.52.166.125:30003/targets shows that Prometheus has successfully connected to the k8s apiserver.
Access Grafana through its port; the default username and password are both admin. Add the Prometheus data source, then import the dashboard: enter template ID 315 to import it online, or download the corresponding JSON template (https:///dashboards/315) and import it locally, then view the result.

Some related kubectl commands:
kubectl delete deployment apache
kubectl get pods -o wide
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
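For reference, the kubectl expose command above corresponds to a NodePort Service manifest roughly like this (a sketch; the app: nginx selector is an assumption about the deployment's pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx      # assumed label on the nginx deployment's pods
  ports:
  - port: 88        # service port (--port)
    targetPort: 80  # container port (--target-port)
```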
Configure kubernetes-dashboard.yaml
cat >kubernetes-dashboard.yaml<<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.7.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          - --apiserver-host=http://172.16.0.67:8080  # set to the apiserver address
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
EOF