A First Look at Istio Microservice Architecture

Thanks to: http://blog.csdn.net/qq_34463875/article/details/77866072

 

After reading some of the documentation I only half understood it, so a hello-world exercise was still needed. Since Istio requires Kubernetes 1.7, I reinstalled the environment from scratch; see the earlier post for details.

There is still little written about this, and I hit quite a few problems, mostly because my understanding of some pieces was not deep enough; stepping into the pits was part of the learning.

The important things first:

1. kube-apiserver needs the ServiceAccount admission control enabled

2. kube-apiserver needs the ServiceAccount configuration in place

3. The cluster needs DNS configured

 

Architecture

To understand this microservice architecture, one has to mention a concept that is very hot right now: the service mesh.

A Service Mesh is a dedicated infrastructure layer.
It is a lightweight, high-performance network proxy.
It provides secure, fast, and reliable service-to-service communication.
It is deployed alongside the application, but stays transparent to it.

The application, as the initiator of a call, only needs to hand its request to the local service mesh proxy in the simplest possible way; the proxy then takes care of everything else, such as service discovery and load balancing, and finally forwards the request to the target service.

 

First, a Service Mesh architecture diagram:

 

Istio is first of all a service mesh, but it is not only a service mesh: on top of typical service meshes such as Linkerd and Envoy, Istio provides a complete solution, adding behavioral insight and operational control over the entire mesh to meet the diverse needs of microservice applications.

Istio uniformly provides a number of key capabilities across the service mesh (the following comes from the official documentation):

  • Traffic management: control the flow of traffic and API calls between services, making calls more reliable and the network more robust under adverse conditions.

  • Observability: understand the dependencies between services and the nature and flow of traffic between them, providing the ability to identify problems quickly.

  • Policy enforcement: apply organizational policy to the interaction between services, ensuring that access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by modifying application code.

  • Service identity and security: provide services in the mesh with a verifiable identity, and the ability to protect service traffic so it can flow across networks of varying degrees of trust.

Beyond that, Istio is designed for extensibility to meet different deployment needs:

  • Platform support: Istio is designed to run in a variety of environments, including across clouds, on-premises, Kubernetes, Mesos, and so on. The initial focus is on Kubernetes, with support for other environments to follow soon.

  • Integration and customization: the policy enforcement component can be extended and customized to integrate with existing solutions for ACLs, logging, monitoring, quotas, auditing, and more.

These capabilities greatly reduce the coupling between application code, the underlying platform, and policy, making microservices easier to implement.

 

Istio architecture diagram

 

Istio's key features include:

  • Automatic zone-aware load balancing and failover for HTTP/1.1, HTTP/2, gRPC, and TCP traffic.
  • Fine-grained control of traffic behavior through rich routing rules, fault tolerance, and fault injection.
  • A pluggable policy layer and configuration API supporting access control, rate limiting, and quotas.
  • Automatic metrics, logs, and traces for all traffic inside the cluster, including cluster ingress and egress.
  • Secure service-to-service authentication with strong identity between services in the cluster.

 

Installation

Download location: https://github.com/istio/istio/releases

I downloaded version 0.1.6: https://github.com/istio/istio/releases/download/0.1.6/istio-0.1.6-linux.tar.gz

Unpack it, then pull the required images; they are:

  • istio/mixer:0.1.6
  • pilot:0.1.6
  • proxy_debug:0.1.6
  • istio-ca:0.1.6
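
If the images are not already local, they can be pulled ahead of time. A minimal sketch, assuming the docker.io/istio repositories and the 0.1.6 tags listed above:

# pull the core Istio images (tags as listed above)
docker pull docker.io/istio/mixer:0.1.6
docker pull docker.io/istio/pilot:0.1.6
docker pull docker.io/istio/proxy_debug:0.1.6
docker pull docker.io/istio/istio-ca:0.1.6

The local images then look like this:
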
[root@node1 ~]# docker images
REPOSITORY                                                               TAG                  IMAGE ID            CREATED             SIZE
docker.io/tomcat                                                         9.0-jre8             e882239f2a28        2 weeks ago         557.3 MB
docker.io/alpine                                                         latest               053cde6e8953        3 weeks ago         3.962 MB
registry.cn-hangzhou.aliyuncs.com/szss_k8s/k8s-dns-sidecar-amd64         1.14.5               fed89e8b4248        8 weeks ago         41.81 MB
registry.cn-hangzhou.aliyuncs.com/szss_k8s/k8s-dns-kube-dns-amd64        1.14.5               512cd7425a73        8 weeks ago         49.38 MB
registry.cn-hangzhou.aliyuncs.com/szss_k8s/k8s-dns-dnsmasq-nanny-amd64   1.14.5               459944ce8cc4        8 weeks ago         41.42 MB
gcr.io/google_containers/exechealthz                                     1.0                  82a141f5d06d        20 months ago       7.116 MB
gcr.io/google_containers/kube2sky                                        1.14                 a4892326f8cf        21 months ago       27.8 MB
gcr.io/google_containers/etcd-amd64                                      2.2.1                3ae398308ded        22 months ago       28.19 MB
gcr.io/google_containers/skydns                                          2015-10-13-8c72f8c   718809956625        2 years ago         40.55 MB
docker.io/kubernetes/pause                                               latest               f9d5de079539        3 years ago         239.8 kB
docker.io/istio/istio-ca                                                 0.1.6                c25b02aba82d        292 years ago       153.6 MB
docker.io/istio/mixer                                                    0.1.6                1f4a2ce90af6        292 years ago       158.9 MB
docker.io/istio/proxy_debug                                              0.1                  5623de9317ff        292 years ago       825 MB
docker.io/istio/proxy_debug                                              0.1.6                5623de9317ff        292 years ago       825 MB
docker.io/istio/pilot                                                    0.1.6                e0c24bd68c04        292 years ago       144.4 MB
docker.io/istio/init                                                     0.1                  0cbd83e9df59        292 years ago       119.3 MB

 

Edit istio.yaml and change the pullPolicy first:

imagePullPolicy: IfNotPresent
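
Since the images were pulled by hand, every occurrence can be switched in one pass; a small sketch, assuming the file ships with imagePullPolicy: Always (adjust the pattern if yours differs):

# replace the pull policy everywhere in istio.yaml
sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/g' istio.yaml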

 

Then run:

kubectl create -f istio-rbac-beta.yaml

kubectl create -f istio.yaml

I ran into countless problems here, all of them related to the environment not being ready:

1. kube-apiserver needs the ServiceAccount admission control enabled

2. kube-apiserver needs the ServiceAccount configuration in place

3. The cluster needs DNS configured
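
A quick sanity check for the DNS requirement, assuming kube-dns lives in the kube-system namespace and a busybox image can be pulled:

# the kube-dns service should exist and resolve cluster-internal names
kubectl get svc -n kube-system
kubectl run dns-test -it --rm --restart=Never --image=busybox -- nslookup kubernetes.default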

Once everything was running, a look at the services:

[root@k8s-master kubernetes]# kubectl get services
NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
helloworldsvc   10.254.145.112   <none>        8080/TCP                      47m
istio-egress    10.254.164.118   <none>        80/TCP                        14h
istio-ingress   10.254.234.8     <pending>     80:32031/TCP,443:32559/TCP    14h
istio-mixer     10.254.227.198   <none>        9091/TCP,9094/TCP,42422/TCP   14h
istio-pilot     10.254.15.121    <none>        8080/TCP,8081/TCP             14h
kubernetes      10.254.0.1       <none>        443/TCP                       1d
tool            10.254.87.52     <none>        8080/TCP                      44m

The ingress service stayed in the pending state; after a lot of searching this turned out to be related to whether the cluster supports an external load balancer. I left it alone for now.
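
Even with EXTERNAL-IP pending, the ingress is still reachable through the NodePort that Kubernetes allocated (32031 for port 80 above); a sketch, with the node IP left as a placeholder:

# reach istio-ingress directly via its NodePort on any node
curl http://<node-ip>:32031/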

 

Preparing a test application

Create a PV and a PVC. The initial idea is to take a Tomcat image and drop a HelloWorld application onto it.

[root@k8s-master ~]# cat pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
    name: pv0003
spec:
    capacity:
      storage: 1Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Recycle
    hostPath:
      path: /webapps
[root@k8s-master ~]# cat pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: tomcatwebapp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
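
Both are created with kubectl, and the claim should end up bound to the volume:

kubectl create -f pv.yaml
kubectl create -f pvc.yaml
# both should report the Bound status
kubectl get pv,pvc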

The HelloWorld application:

index.jsp

<%@ page language="java" contentType="text/html; charset=utf-8" import="java.net.InetAddress" pageEncoding="utf-8"%>
<html>
 <body> This is a Helloworld test</body>
<%
    System.out.println("this is a session test!");

    InetAddress addr = InetAddress.getLocalHost();
    out.println("HostAddress=" + addr.getHostAddress());
    out.println("HostName=" + addr.getHostName());

    String version = System.getenv("SERVICE_VERSION");
    out.println("SERVICE_VERSION=" + version);
%>
</html>
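
Since the PV is a hostPath volume, the application just has to exist under /webapps on the node where the pod is scheduled; a sketch, assuming the webapp directory is named HelloWorld (matching the /HelloWorld/index.jsp path curled later):

# on the node that will run the pod (hostPath volumes are node-local)
mkdir -p /webapps/HelloWorld
cp index.jsp /webapps/HelloWorld/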

 

Create the first version, rc-v1.yaml:

[root@k8s-master ~]# cat rc-v1.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        tomcat-app: "helloworld"
        version: "1"
    spec:
      containers:
      - name: tomcathelloworld
        image: docker.io/tomcat:9.0-jre8
        volumeMounts:
        - mountPath: "/usr/local/tomcat/webapps"
          name: mypd
        ports:
        - containerPort: 8080
        env:
        - name: "SERVICE_VERSION"
          value: "1"
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: tomcatwebapp

The rc-service file:

[root@k8s-master ~]# cat rc-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: helloworldsvc
  labels:
    tomcat-app: helloworld
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    name: http
  selector:
    tomcat-app: helloworld

 

Then inject the sidecar with istioctl kube-inject:

istioctl kube-inject -f  rc-v1.yaml > rc-v1-istio.yaml

After injection there is an extra sidecar container:

[root@k8s-master ~]# cat rc-v1-istio.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: helloworld-service
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: injected
        alpha.istio.io/version: jenkins@ubuntu-16-04-build-12ac793f80be71-0.1.6-dab2033
        pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"IfNotPresent","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}},{"args":["-c","sysctl
          -w kernel.core_pattern=/tmp/core.%e.%p.%t \u0026\u0026 ulimit -c unlimited"],"command":["/bin/sh"],"image":"alpine","imagePullPolicy":"IfNotPresent","name":"enable-core-dump","securityContext":{"privileged":true}}]'
      creationTimestamp: null
      labels:
        tomcat-app: helloworld
        version: "1"
    spec:
      containers:
      - env:
        - name: SERVICE_VERSION
          value: "1"
        image: docker.io/tomcat:9.0-jre8
        name: tomcathelloworld
        ports:
        - containerPort: 8080
        resources: {}
        volumeMounts:
        - mountPath: /usr/local/tomcat/webapps
          name: mypd
      - args:
        - proxy
        - sidecar
        - -v
        - "2"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: docker.io/istio/proxy_debug:0.1
        imagePullPolicy: IfNotPresent
        name: proxy
        resources: {}
        securityContext:
          runAsUser: 1337
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: tomcatwebapp
status: {}
---

After inject, a few more images need to be downloaded :(

  • docker.io/istio/proxy_debug:0.1
  • docker.io/istio/init:0.1
  • alpine

Also remember to change imagePullPolicy here as well....

Then run:

kubectl create -f rc-v1-istio.yaml

This step had another pile of pitfalls.

1. Privileges: --allow-privileged needs to be enabled in /etc/kubernetes/config, on both the master and the nodes.

[root@k8s-master ~]# cat /etc/kubernetes/config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.44.108:8080"

 

2. As soon as securityContext: runAsUser: 1337 was present, the pod would not start no matter what (removing it at least let it start); it just sat at the desired stage, and with only limited hints in the events this was quite brain-burning. It finally turned out the apiserver configuration had to be changed: remove SecurityContextDeny from --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota,ServiceAccount.

The final kube-apiserver configuration:

[root@k8s-master ~]# cat /etc/kubernetes/apiserver 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=192.168.44.108"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.44.108:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--secure-port=443 --client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key"
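
Neither change takes effect until the affected services are restarted; a sketch, assuming systemd-managed units as in a typical RPM-based install:

# on the master, after editing /etc/kubernetes/apiserver and /etc/kubernetes/config
systemctl restart kube-apiserver
# on every node, so --allow-privileged is picked up
systemctl restart kubelet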

With that done, everything starts up properly.

 

Next, create rc-v2.yaml:

[root@k8s-master ~]# cat rc-v2.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-service-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        tomcat-app: "helloworld"
        version: "2"
    spec:
      containers:
      - name: tomcathelloworld
        image: docker.io/tomcat:9.0-jre8
        volumeMounts:
        - mountPath: "/usr/local/tomcat/webapps"
          name: mypd
        ports:
        - containerPort: 8080
        env:
        - name: "SERVICE_VERSION"
          value: "2"
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: tomcatwebapp

 

tool.yaml is used for testing from inside the service mesh; it is really just a pod that gives you a shell:

[root@k8s-master ~]# cat tool.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tool
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: tool
        version: "1"
    spec:
      containers:
      - name: tool
        image: docker.io/tomcat:9.0-jre8
        volumeMounts:
        - mountPath: "/usr/local/tomcat/webapps"
          name: mypd
        ports:
        - containerPort: 8080
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: tomcatwebapp
---
apiVersion: v1
kind: Service
metadata:
  name: tool
  labels:
    name: tool
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    name: http
  selector:
    name: tool

Both of these also need to go through kube-inject and are then deployed with apply, as sketched below.
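
A sketch of those two steps (the *-istio.yaml output names are just the convention used here):

istioctl kube-inject -f rc-v2.yaml > rc-v2-istio.yaml
istioctl kube-inject -f tool.yaml > tool-istio.yaml
kubectl apply -f rc-v2-istio.yaml
kubectl apply -f tool-istio.yaml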

The end result:

[root@k8s-master ~]# kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
helloworld-service-2437162702-x8w05      2/2       Running   0          1h
helloworld-service-v2-2637126738-s7l4s   2/2       Running   0          1h
istio-egress-2869428605-2ftgl            1/1       Running   2          14h
istio-ingress-1286550044-6g3vj           1/1       Running   2          14h
istio-mixer-765485573-23wc6              1/1       Running   2          14h
istio-pilot-1495912787-g5r9s             2/2       Running   4          14h
tool-185907110-fsr04                     2/2       Running   0          1h

 

Traffic splitting

Create a routing rule:

istioctl create -f default.yaml

[root@k8s-master ~]# cat default.yaml 
type: route-rule
name: helloworld-default
spec:
  destination: helloworldsvc.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: "2"
    weight: 10
  - tags:
      version: "1"
    weight: 90

In other words, when helloworldsvc is accessed, 90% of the traffic goes to the version 1 pod and 10% goes to the version 2 pod.
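
The rule can be read back to confirm it was stored (subcommand as in the 0.1-era istioctl; worth double-checking against your version):

istioctl get route-rules -o yaml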

How do we know helloworldsvc really points at the two backend pods? It can be confirmed with the following command:

[root@k8s-master ~]# kubectl describe service helloworldsvc
Name:            helloworldsvc
Namespace:        default
Labels:            tomcat-app=helloworld
Annotations:        <none>
Selector:        tomcat-app=helloworld
Type:            ClusterIP
IP:            10.254.145.112
Port:            http    8080/TCP
Endpoints:        10.1.40.3:8080,10.1.40.7:8080
Session Affinity:    None
Events:            <none>

This shows the service and deployment configuration is fine.
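
The two endpoint IPs should line up with the pod IPs of the two helloworld deployments:

# the -o wide output shows each pod's IP; compare with the Endpoints above
kubectl get pods -o wide -l tomcat-app=helloworld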

 

Exec into the tool pod:

[root@k8s-master ~]# kubectl exec -it tool-185907110-fsr04 bash
Defaulting container name to tool.
Use 'kubectl describe pod/tool-185907110-fsr04' to see all of the containers in this pod.
root@tool-185907110-fsr04:/usr/local/tomcat# 

Run:

root@tool-185907110-fsr04:/usr/local/tomcat# curl helloworldsvc:8080/HelloWorld/index.jsp
  
<html>
 <body> This is a Helloworld test</body>
HostAddress=10.1.40.3
HostName=helloworld-service-v2-2637126738-s7l4s
SERVICE_VERSION=2

This took a long time to sort out as well. At first every request came back with connection refused: inside the pod, curl to localhost worked but curl to the IP did not. A non-injected tool pod worked fine, but of course without any traffic control. After switching back to the injected pod it suddenly connected. The fix was to re-create the injected deployments and re-create the service as well.

Write a small shell script:

echo "for i in {1..100}
do
curl -s helloworldsvc:8080/HelloWorld/index.jsp | grep SERVICE_VERSION
done" > batch.sh

Then run it and verify the traffic distribution by counting versions with grep:

root@tool-185907110-fsr04:/usr/local/tomcat# ./batch.sh | grep 2 | wc -l
10
root@tool-185907110-fsr04:/usr/local/tomcat# ./batch.sh | grep 1 | wc -l
90

 

Timeout policy

[root@k8s-master ~]# cat delay.yaml 
type: route-rule
name: helloworld-timeout
spec:
  destination: helloworldsvc.default.svc.cluster.local
  precedence: 9
  route:
  - tags:
      version: "1"
  httpReqTimeout:
    simpleTimeout:
      timeout: 2s
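
The rule is created the same way as the default one:

istioctl create -f delay.yaml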

 

This sets a 2-second timeout. Then curl again, this time against delay.jsp, a test page that takes longer than that to respond:

root@tool-185907110-nrn9l:/usr/local/tomcat# curl  -s helloworldsvc:8080/HelloWorld/delay.jsp
upstream request timeout

Note that at first the rule would not take effect no matter what; deleting the tool pod and re-creating it fixed that.

 

Retry policy

The previous timeout rule needs to be removed first.
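
One way is to delete the old rule and then create the retry rule shown below (delete syntax as in the 0.1-era istioctl, worth double-checking):

istioctl delete route-rule helloworld-timeout
istioctl create -f retry.yaml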

[root@k8s-master ~]# cat retry.yaml 
type: route-rule
name: helloworld-timeout
spec:
  destination: helloworldsvc.default.svc.cluster.local
  precedence: 9
  route:
  - tags:
      version: "1"
  httpReqRetries:
    simpleRetry:
      attempts: 2 
      perTryTimeout: 2s

The result:

root@tool-185907110-ms991:/usr/local/tomcat# curl -s helloworldsvc:8080/HelloWorld/delay.jsp
upstream request timeout

root@tool-185907110-ms991:/usr/local/tomcat# curl -s -o /dev/null -w '%{time_connect}:%{time_starttransfer}:%{time_total}\n' 'helloworldsvc:8080/HelloWorld/delay.jsp'
0.004545:6.087113:6.087190

Each try hit the 2-second per-try timeout without getting a response, and the request was retried twice; the roughly 6-second total above (the original try plus two retries at 2 seconds each) matches that.

 

 

 

 

 

To be continued...
