centos-master: 172.16.100.60
centos-minion: 172.16.100.62
k8s, etcd, docker, etc. were all installed via yum. For deployment I followed "Kubernetes: The Definitive Guide" (k8s權威指南) and a video (the video is in my Baidu netdisk). I've forgotten the exact steps; the install isn't hard, the difficulty is just that it was my first contact with it, and I can't remember which files I changed. I'll write up the installation steps next time.
First, install Heapster. I installed version 1.2.0.
My impression is that only a few YAML files are actually needed; I have no idea what the rest of the stuff in the release is for — I never used it.
```
[root@centos-master influxdb]# pwd
/usr/src/heapster-1.2.0/deploy/kube-config/influxdb
[root@centos-master influxdb]# ls
grafana-deploment.yaml   heapster-deployment.yaml   influxdb-deployment.yaml
grafana-service.yaml     heapster-service.yaml      influxdb-service.yaml
```
```
[root@centos-master influxdb]# cat heapster-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 2
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      containers:
      - name: heapster
        image: docker.io/ist0ne/heapster-amd64:latest
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:http://172.16.100.60:8080?inClusterConfig=false
        - --sink=influxdb:http://10.254.129.95:8086
```
About --source and --sink:
The freshly downloaded file has the values below instead (you can look up exactly what each one denotes). In my edited version above, 172.16.100.60 is the cluster master. As for 10.254.129.95 — I forget; I think it was one of the addresses from `kubectl get svc`, but since I deleted that svc I can't find the original IP anymore.
```
- --source=kubernetes:https://kubernetes.default
- --sink=influxdb:http://monitoring-influxdb:8086
```
```
[root@centos-master influxdb]# cat heapster-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
```
```
[root@centos-master influxdb]# cat grafana-deploment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: docker.io/ist0ne/heapster-grafana-amd64:latest
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GRAFANA_PORT
          value: "3000"
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
      volumes:
      - name: grafana-storage
        emptyDir: {}
```
```
[root@centos-master influxdb]# cat grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    name: influxGrafana
```
```
[root@centos-master influxdb]# cat influxdb-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: docker.io/ist0ne/heapster-influxdb-amd64:v1.1.1
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
```
```
[root@centos-master influxdb]# cat influxdb-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels: null
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 8083
    targetPort: 8083
  - name: api
    port: 8086
    targetPort: 8086
  selector:
    name: influxGrafana
```
Those are all the YAML files Heapster needs.
```
kubectl create -f ../influxdb
```
This creates the corresponding pods and services.
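To confirm everything came up, I'd run something like the following (standard kubectl queries; the exact output depends on your cluster):

```shell
# Are the heapster / influxdb / grafana pods Running?
kubectl get pods --namespace=kube-system

# The services — the influxdb cluster IP listed here is what --sink should point at
kubectl get svc --namespace=kube-system
```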
Next, create the actual application pods.
```
[root@centos-master yaml]# cat php-apache-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: php-apache
spec:
  replicas: 1
  template:
    metadata:
      name: php-apache
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: siriuszg/hpa-example
        resources:
          requests:
            cpu: 200m
        ports:
        - containerPort: 80
```
```
[root@centos-master yaml]# cat php-apache-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: php-apache
spec:
  ports:
  - port: 80
  selector:
    app: php-apache
```
```
[root@centos-master yaml]# cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
```
```
[root@centos-master yaml]# cat hpa-php-apache.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 10
```
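For intuition about what this HPA does with the numbers, here is a rough sketch of the documented scaling rule — desired replicas = ceil(current replicas × current utilization / target utilization), clamped to min/max. This is a simplified model for illustration, not the controller's actual code:

```python
import math

def desired_replicas(current_replicas, current_utilization, target_utilization,
                     min_replicas=1, max_replicas=10):
    """Simplified model of the HPA rule:
    desired = ceil(current * currentUtilization / targetUtilization),
    clamped to [minReplicas, maxReplicas]."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# With targetCPUUtilizationPercentage: 10, as in hpa-php-apache.yaml:
print(desired_replicas(1, 25, 10))  # load drives utilization to 25% -> 3 replicas
print(desired_replicas(3, 0, 10))   # load stops -> back down toward minReplicas (1)
```

So with the 10% target above, even modest load should fan the RC out quickly, which matches what happens in the load test later.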
Note: in a multi-node cluster, php-apache and busybox may end up on different nodes, and access between them will fail. Either schedule them onto the same node, or install flannel so that the nodes' pod networks are connected — you'll have to do that sooner or later anyway.
Run `kubectl create -f` on each of the files above.
Check whether Heapster is working:
```
[root@centos-master yaml]# kubectl top node
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
centos-minion   105m         2%     1368Mi          34%
```
Output like the above means it's working. I don't know why the 127.0.0.1 node doesn't show up here, though.
If you get nothing, read the logs carefully: check whether the nodes are up and have joined the cluster, look at /var/log/messages, run `kubectl describe hpa php-apache`, or check the logs of the php-apache pods.
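Those checks as concrete commands (the heapster pod name is generated, so it's looked up first — the grep/awk pattern is just an example):

```shell
# Is every node registered and Ready?
kubectl get nodes

# The Events section at the bottom usually says why metrics are missing
kubectl describe hpa php-apache

# Heapster's own logs
kubectl logs --namespace=kube-system \
  $(kubectl get pods --namespace=kube-system | awk '/heapster/{print $1; exit}')
```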
One more thing: the guides online all use the kube-system namespace, but whenever I did that, the HPA never picked up metrics. After I dropped it and used the default namespace — which is what the configs above do — detection worked. I don't know why.
Check the HPA:
```
[root@centos-master yaml]# kubectl get hpa
NAME         REFERENCE                          TARGET   CURRENT     MINPODS   MAXPODS   AGE
php-apache   ReplicationController/php-apache   10%      0%          1         10        23h
[root@centos-master yaml]# kubectl get hpa --namespace=kube-system
NAME         REFERENCE                          TARGET   CURRENT     MINPODS   MAXPODS   AGE
php-apache   ReplicationController/php-apache   50%      <waiting>   1         10        20h
```
Notice that the HPA in the default namespace reports a CURRENT value, while the earlier one in kube-system stayed stuck in the <waiting> state.
Go into the busybox pod and run a load test:
```
[root@centos-master ~]# kubectl exec -ti busybox -- sh
/ # while true; do wget -q -O- http://10.254.221.176 > /dev/null; done
```
After ten-odd seconds the pod count goes up, and the CPU CURRENT value rises too. But there's a problem: in theory it should also shrink back automatically, yet it only scales up — when I stop the load test the extra pods are all still there. The replica count never drops as CPU falls. Strange. (One likely explanation: the HPA deliberately waits before scaling down — in these versions the controller's downscale delay defaults to around five minutes — so the shrink may simply take longer than I waited.)
```
[root@centos-master yaml]# kubectl get pods -o wide | grep php-apache
php-apache-5bcgk   1/1   Running   0   44s   10.0.34.2    127.0.0.1
php-apache-b4nv5   1/1   Running   0   44s   10.0.16.4    centos-minion
php-apache-kw1m0   1/1   Running   0   44s   10.0.34.17   127.0.0.1
php-apache-vz2rx   1/1   Running   0   3h    10.0.16.3    centos-minion
[root@centos-master yaml]# kubectl get hpa
NAME         REFERENCE                          TARGET   CURRENT   MINPODS   MAXPODS   AGE
php-apache   ReplicationController/php-apache   10%      25%       1         10        23h
[root@centos-master yaml]#
```