Istio's components are fairly complex. Some options used to be adjusted separately through ConfigMaps and istioctl; with the redesigned Helm chart, installation options are now managed centrally through values.yaml or helm command-line flags.
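For example, the per-component switches that we pass with --set later in this walkthrough could equally be collected in a values file (a minimal sketch; the file name my-values.yaml is our own choice, and the keys shown are the same ones used in the --set flags below):

# my-values.yaml -- hypothetical override file collecting the flags used below
ingress:
  enabled: true
  service:
    type: NodePort
tracing:
  enabled: true
grafana:
  enabled: true
kiali:
  enabled: true

$ helm template install/kubernetes/helm/istio --name istio \
    --namespace istio-system -f my-values.yaml > istio.yaml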
Before installing Istio, make sure your Kubernetes cluster (only v1.9 and later is supported) is deployed and that the local kubectl client is configured against it.
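A quick way to confirm both prerequisites before going further (a sketch):

$ kubectl version --short   # the Server Version line must be v1.9 or later
$ kubectl get nodes         # verifies the local kubectl can reach the cluster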
$ wget https://github.com/istio/istio/releases/download/1.0.2/istio-1.0.2-linux.tar.gz
$ tar zxf istio-1.0.2-linux.tar.gz
$ cp istio-1.0.2/bin/istioctl /usr/local/bin/
$ git clone https://github.com/istio/istio.git
$ cd istio
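Either way, it is worth sanity-checking that the istioctl binary is on the PATH and at the expected version:

$ istioctl version   # should report 1.0.2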
The helm directory inside the installation package contains Istio's chart, and the official documentation offers two installation methods:

1. Render the chart with helm template into istio.yaml, then install the result yourself.
2. Install directly through Tiller.

Clearly there is no essential difference between the two; here we deploy with the first method (a sketch of the Tiller route follows for reference).
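For completeness, the second method boils down to a single command (a sketch, assuming Tiller is already deployed in the cluster; the same --set overrides used below can be appended):

$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system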
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
    --set ingress.enabled=true \
    --set sidecarInjectorWebhook.enabled=true \
    --set ingress.service.type=NodePort \
    --set gateways.istio-ingressgateway.type=NodePort \
    --set gateways.istio-egressgateway.type=NodePort \
    --set tracing.enabled=true \
    --set servicegraph.enabled=true \
    --set prometheus.enabled=true \
    --set tracing.jaeger.enabled=true \
    --set grafana.enabled=true \
    --set kiali.enabled=true > istio.yaml
$ kubectl create namespace istio-system
$ kubectl create -f istio.yaml
The rendered istio.yaml begins like this:

---
# Source: istio/charts/kiali/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: istio-system
  labels:
    app: kiali
type: Opaque
data:
  username: "YWRtaW4="
  passphrase: "YWRtaW4="
---
# Source: istio/charts/galley/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-galley-configuration
  namespace: istio-system
  labels:
    app: istio-galley
    chart: galley-1.0.1
    release: istio
    heritage: Tiller
    istio: mixer
data:
  validatingwebhookconfiguration.yaml: |-
    apiVersion: admissionregistration.k8s.io/v1beta1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: istio-galley
      namespace: istio-system
      labels:
        app: istio-galley
        chart: galley-1.0.1
        release: istio
        heritage: Tiller
    webhooks:
    - name: pilot.validation.istio.io
      clientConfig:
        service:
          name: istio-galley
          namespace: istio-system
          path: "/admitpilot"
        caBundle: ""
      rules:
      - operations:
        - CREATE
        - UPDATE
        apiGroups:
        - config.istio.io
        apiVersions:
        - v1alpha2
        resources:
        - httpapispecs
        - httpapispecbindings
        - quotaspecs
        - quotaspecbindings
# ... (output truncated)

The rendered file goes on to define the mixer validating webhook, the Grafana, Kiali, Prometheus, statsd-prom-bridge, mesh, and sidecar-injector ConfigMaps, ServiceAccounts, all of Istio's CustomResourceDefinitions, ClusterRoles and ClusterRoleBindings, Services, Deployments, and post-install/post-delete Jobs for every enabled component; the full manifest runs to several thousand lines, so only the opening is reproduced here.
istio-egressgateway-service-account containers: - name: istio-proxy image: "192.168.200.10/istio-release/proxyv2:1.0.2" imagePullPolicy: IfNotPresent ports: - containerPort: 80 - containerPort: 443 args: - proxy - router - -v - "2" - --discoveryRefreshDelay - '1s' #discoveryRefreshDelay - --drainDuration - '45s' #drainDuration - --parentShutdownDuration - '1m0s' #parentShutdownDuration - --connectTimeout - '10s' #connectTimeout - --serviceCluster - istio-egressgateway - --zipkinAddress - zipkin:9411 - --statsdUdpAddress - istio-statsd-prom-bridge:9125 - --proxyAdminPort - "15000" - --controlPlaneAuthPolicy - NONE - --discoveryAddress - istio-pilot:8080 resources: requests: cpu: 10m env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: INSTANCE_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: ISTIO_META_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name volumeMounts: - name: istio-certs mountPath: /etc/certs readOnly: true - name: egressgateway-certs mountPath: "/etc/istio/egressgateway-certs" readOnly: true - name: egressgateway-ca-certs mountPath: "/etc/istio/egressgateway-ca-certs" readOnly: true volumes: - name: istio-certs secret: secretName: istio.istio-egressgateway-service-account optional: true - name: egressgateway-certs secret: secretName: "istio-egressgateway-certs" optional: true - name: egressgateway-ca-certs secret: secretName: "istio-egressgateway-ca-certs" optional: true affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - ppc64le - s390x preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - ppc64le - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - s390x --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: istio-ingressgateway namespace: istio-system labels: chart: gateways-1.0.1 release: istio heritage: Tiller app: istio-ingressgateway istio: ingressgateway spec: replicas: 1 template: metadata: labels: app: istio-ingressgateway istio: ingressgateway annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: serviceAccountName: istio-ingressgateway-service-account containers: - name: istio-proxy image: "192.168.200.10/istio-release/proxyv2:1.0.2" imagePullPolicy: IfNotPresent ports: - containerPort: 80 - containerPort: 443 - containerPort: 31400 - containerPort: 15011 - containerPort: 8060 - containerPort: 853 - containerPort: 15030 - containerPort: 15031 args: - proxy - router - -v - "2" - --discoveryRefreshDelay - '1s' #discoveryRefreshDelay - --drainDuration - '45s' #drainDuration - --parentShutdownDuration - '1m0s' #parentShutdownDuration - --connectTimeout - '10s' #connectTimeout - --serviceCluster - istio-ingressgateway - --zipkinAddress - zipkin:9411 - --statsdUdpAddress - istio-statsd-prom-bridge:9125 - --proxyAdminPort - "15000" - --controlPlaneAuthPolicy - NONE - --discoveryAddress - istio-pilot:8080 resources: requests: cpu: 10m env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: 
INSTANCE_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: ISTIO_META_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name volumeMounts: - name: istio-certs mountPath: /etc/certs readOnly: true - name: ingressgateway-certs mountPath: "/etc/istio/ingressgateway-certs" readOnly: true - name: ingressgateway-ca-certs mountPath: "/etc/istio/ingressgateway-ca-certs" readOnly: true volumes: - name: istio-certs secret: secretName: istio.istio-ingressgateway-service-account optional: true - name: ingressgateway-certs secret: secretName: "istio-ingressgateway-certs" optional: true - name: ingressgateway-ca-certs secret: secretName: "istio-ingressgateway-ca-certs" optional: true affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - ppc64le - s390x preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - ppc64le - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - s390x --- --- # Source: istio/charts/grafana/templates/deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: grafana namespace: istio-system labels: app: grafana chart: grafana-1.0.1 release: istio heritage: Tiller spec: replicas: 1 template: metadata: labels: app: grafana annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: containers: - name: grafana image: "192.168.200.10/istio-release/grafana:1.0.2" imagePullPolicy: IfNotPresent ports: - containerPort: 3000 readinessProbe: httpGet: path: /login port: 3000 env: - name: GRAFANA_PORT value: "3000" - name: GF_AUTH_BASIC_ENABLED value: "false" - name: GF_AUTH_ANONYMOUS_ENABLED value: "true" - name: GF_AUTH_ANONYMOUS_ORG_ROLE value: Admin - name: GF_PATHS_DATA value: /data/grafana resources: requests: cpu: 10m volumeMounts: - name: data mountPath: /data/grafana affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - ppc64le - s390x preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - ppc64le - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - s390x volumes: - name: data emptyDir: {} --- # Source: istio/charts/ingress/templates/deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: istio-ingress namespace: istio-system labels: app: ingress chart: ingress-1.0.1 release: istio heritage: Tiller istio: ingress spec: replicas: 1 template: metadata: labels: istio: ingress annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: serviceAccountName: istio-ingress-service-account containers: - name: ingress image: "192.168.200.10/istio-release/proxyv2:1.0.2" imagePullPolicy: IfNotPresent ports: - containerPort: 80 - containerPort: 443 args: - proxy - ingress - -v - "2" - --discoveryRefreshDelay - '1s' #discoveryRefreshDelay - --drainDuration - '45s' #drainDuration - --parentShutdownDuration - '1m0s' #parentShutdownDuration - --connectTimeout - '10s' 
#connectTimeout - --serviceCluster - istio-ingress - --zipkinAddress - zipkin:9411 - --statsdUdpAddress - istio-statsd-prom-bridge:9125 - --proxyAdminPort - "15000" - --controlPlaneAuthPolicy - NONE - --discoveryAddress - istio-pilot:8080 resources: requests: cpu: 10m env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: INSTANCE_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP volumeMounts: - name: istio-certs mountPath: /etc/certs readOnly: true - name: ingress-certs mountPath: /etc/istio/ingress-certs readOnly: true volumes: - name: istio-certs secret: secretName: istio.istio-ingress-service-account optional: true - name: ingress-certs secret: secretName: istio-ingress-certs optional: true affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - ppc64le - s390x preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - ppc64le - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - s390x --- # Source: istio/charts/kiali/templates/deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: kiali namespace: istio-system labels: app: kiali chart: kiali-1.0.1 release: istio heritage: Tiller spec: replicas: 1 selector: matchLabels: app: kiali template: metadata: name: kiali labels: app: kiali annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: serviceAccountName: kiali-service-account containers: - image: "192.168.200.10/istio-release/kiali:istio-release-1.0" name: kiali command: - "/opt/kiali/kiali" - "-config" - "/kiali-configuration/config.yaml" - "-v" - "4" env: - name: ACTIVE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: SERVER_CREDENTIALS_USERNAME valueFrom: secretKeyRef: name: kiali key: username - name: SERVER_CREDENTIALS_PASSWORD valueFrom: secretKeyRef: name: kiali key: passphrase - name: PROMETHEUS_SERVICE_URL value: http://prometheus:9090 - name: GRAFANA_DASHBOARD value: istio-service-dashboard - name: GRAFANA_VAR_SERVICE_SOURCE value: var-service - name: GRAFANA_VAR_SERVICE_DEST value: var-service volumeMounts: - name: kiali-configuration mountPath: "/kiali-configuration" resources: requests: cpu: 10m volumes: - name: kiali-configuration configMap: name: kiali --- # Source: istio/charts/mixer/templates/deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: istio-policy namespace: istio-system labels: chart: mixer-1.0.1 release: istio istio: mixer spec: replicas: 1 template: metadata: labels: app: policy istio: mixer istio-mixer-type: policy annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: serviceAccountName: istio-mixer-service-account volumes: - name: istio-certs secret: secretName: istio.istio-mixer-service-account optional: true - name: uds-socket emptyDir: {} affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - ppc64le - s390x preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - 
key: beta.kubernetes.io/arch operator: In values: - amd64 - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - ppc64le - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - s390x containers: - name: mixer image: "192.168.200.10/istio-release/mixer:1.0.2" imagePullPolicy: IfNotPresent ports: - containerPort: 9093 - containerPort: 42422 args: - --address - unix:///sock/mixer.socket - --configStoreURL=k8s:// - --configDefaultNamespace=istio-system - --trace_zipkin_url=http://zipkin:9411/api/v1/spans resources: requests: cpu: 10m volumeMounts: - name: uds-socket mountPath: /sock livenessProbe: httpGet: path: /version port: 9093 initialDelaySeconds: 5 periodSeconds: 5 - name: istio-proxy image: "192.168.200.10/istio-release/proxyv2:1.0.2" imagePullPolicy: IfNotPresent ports: - containerPort: 9091 - containerPort: 15004 args: - proxy - --serviceCluster - istio-policy - --templateFile - /etc/istio/proxy/envoy_policy.yaml.tmpl - --controlPlaneAuthPolicy - NONE env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: INSTANCE_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP resources: requests: cpu: 10m volumeMounts: - name: istio-certs mountPath: /etc/certs readOnly: true - name: uds-socket mountPath: /sock --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: istio-telemetry namespace: istio-system labels: chart: mixer-1.0.1 release: istio istio: mixer spec: replicas: 1 template: metadata: labels: app: telemetry istio: mixer istio-mixer-type: telemetry annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: serviceAccountName: istio-mixer-service-account volumes: - name: istio-certs secret: secretName: istio.istio-mixer-service-account optional: true - name: uds-socket emptyDir: {} containers: - name: mixer image: "192.168.200.10/istio-release/mixer:1.0.2" imagePullPolicy: IfNotPresent ports: - containerPort: 9093 - containerPort: 42422 args: - --address - unix:///sock/mixer.socket - --configStoreURL=k8s:// - --configDefaultNamespace=istio-system - --trace_zipkin_url=http://zipkin:9411/api/v1/spans resources: requests: cpu: 10m volumeMounts: - name: uds-socket mountPath: /sock livenessProbe: httpGet: path: /version port: 9093 initialDelaySeconds: 5 periodSeconds: 5 - name: istio-proxy image: "192.168.200.10/istio-release/proxyv2:1.0.2" imagePullPolicy: IfNotPresent ports: - containerPort: 9091 - containerPort: 15004 args: - proxy - --serviceCluster - istio-telemetry - --templateFile - /etc/istio/proxy/envoy_telemetry.yaml.tmpl - --controlPlaneAuthPolicy - NONE env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: INSTANCE_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP resources: requests: cpu: 10m volumeMounts: - name: istio-certs mountPath: /etc/certs readOnly: true - name: uds-socket mountPath: /sock --- --- # Source: istio/charts/pilot/templates/deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: istio-pilot namespace: istio-system # TODO: default template doesn't have this, which one is right ? 
labels: app: istio-pilot chart: pilot-1.0.1 release: istio heritage: Tiller istio: pilot annotations: checksum/config-volume: f8da08b6b8c170dde721efd680270b2901e750d4aa186ebb6c22bef5b78a43f9 spec: replicas: 1 template: metadata: labels: istio: pilot app: pilot annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: serviceAccountName: istio-pilot-service-account containers: - name: discovery image: "192.168.200.10/istio-release/pilot:1.0.2" imagePullPolicy: IfNotPresent args: - "discovery" ports: - containerPort: 8080 - containerPort: 15010 readinessProbe: httpGet: path: /ready port: 8080 initialDelaySeconds: 5 periodSeconds: 30 timeoutSeconds: 5 env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: PILOT_CACHE_SQUASH value: "5" - name: GODEBUG value: "gctrace=2" - name: PILOT_PUSH_THROTTLE_COUNT value: "100" - name: PILOT_TRACE_SAMPLING value: "100" resources: requests: cpu: 500m memory: 2048Mi volumeMounts: - name: config-volume mountPath: /etc/istio/config - name: istio-certs mountPath: /etc/certs readOnly: true - name: istio-proxy image: "192.168.200.10/istio-release/proxyv2:1.0.2" imagePullPolicy: IfNotPresent ports: - containerPort: 15003 - containerPort: 15005 - containerPort: 15007 - containerPort: 15011 args: - proxy - --serviceCluster - istio-pilot - --templateFile - /etc/istio/proxy/envoy_pilot.yaml.tmpl - --controlPlaneAuthPolicy - NONE env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: INSTANCE_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP resources: requests: cpu: 10m volumeMounts: - name: istio-certs mountPath: /etc/certs readOnly: true volumes: - name: config-volume configMap: name: istio - name: istio-certs secret: secretName: istio.istio-pilot-service-account optional: true affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - ppc64le - s390x preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - ppc64le - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - s390x --- # Source: istio/charts/prometheus/templates/deployment.yaml # TODO: the original template has service account, roles, etc apiVersion: extensions/v1beta1 kind: Deployment metadata: name: prometheus namespace: istio-system labels: app: prometheus chart: prometheus-1.0.1 release: istio heritage: Tiller spec: replicas: 1 selector: matchLabels: app: prometheus template: metadata: labels: app: prometheus annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: serviceAccountName: prometheus containers: - name: prometheus image: "192.168.200.10/istio-release/prometheus:v2.3.1" imagePullPolicy: IfNotPresent args: - '--storage.tsdb.retention=6h' - '--config.file=/etc/prometheus/prometheus.yml' ports: - containerPort: 9090 name: http livenessProbe: httpGet: path: /-/healthy port: 9090 readinessProbe: httpGet: path: /-/ready port: 9090 resources: requests: cpu: 10m volumeMounts: - name: config-volume mountPath: 
/etc/prometheus volumes: - name: config-volume configMap: name: prometheus affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - ppc64le - s390x preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - ppc64le - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - s390x --- # Source: istio/charts/security/templates/deployment.yaml # istio CA watching all namespaces apiVersion: extensions/v1beta1 kind: Deployment metadata: name: istio-citadel namespace: istio-system labels: app: security chart: security-1.0.1 release: istio heritage: Tiller istio: citadel spec: replicas: 1 template: metadata: labels: istio: citadel annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: serviceAccountName: istio-citadel-service-account containers: - name: citadel image: "192.168.200.10/istio-release/citadel:1.0.2" imagePullPolicy: IfNotPresent args: - --append-dns-names=true - --grpc-port=8060 - --grpc-hostname=citadel - --citadel-storage-namespace=istio-system - --custom-dns-names=istio-pilot-service-account.istio-system:istio-pilot.istio-system,istio-ingressgateway-service-account.istio-system:istio-ingressgateway.istio-system - --self-signed-ca=true resources: requests: cpu: 10m affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - ppc64le - s390x preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - ppc64le - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - s390x --- # Source: istio/charts/servicegraph/templates/deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: servicegraph namespace: istio-system labels: app: servicegraph chart: servicegraph-1.0.1 release: istio heritage: Tiller spec: replicas: 1 template: metadata: labels: app: servicegraph annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: containers: - name: servicegraph image: "192.168.200.10/istio-release/servicegraph:1.0.2" imagePullPolicy: IfNotPresent ports: - containerPort: 8088 args: - --prometheusAddr=http://prometheus:9090 livenessProbe: httpGet: path: /graph port: 8088 readinessProbe: httpGet: path: /graph port: 8088 resources: requests: cpu: 10m affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - ppc64le - s390x preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - ppc64le - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - s390x --- # Source: istio/charts/sidecarInjectorWebhook/templates/deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: 
istio-sidecar-injector namespace: istio-system labels: app: sidecarInjectorWebhook chart: sidecarInjectorWebhook-1.0.1 release: istio heritage: Tiller istio: sidecar-injector spec: replicas: 1 template: metadata: labels: istio: sidecar-injector annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: serviceAccountName: istio-sidecar-injector-service-account containers: - name: sidecar-injector-webhook image: "192.168.200.10/istio-release/sidecar_injector:1.0.2" imagePullPolicy: IfNotPresent args: - --caCertFile=/etc/istio/certs/root-cert.pem - --tlsCertFile=/etc/istio/certs/cert-chain.pem - --tlsKeyFile=/etc/istio/certs/key.pem - --injectConfig=/etc/istio/inject/config - --meshConfig=/etc/istio/config/mesh - --healthCheckInterval=2s - --healthCheckFile=/health volumeMounts: - name: config-volume mountPath: /etc/istio/config readOnly: true - name: certs mountPath: /etc/istio/certs readOnly: true - name: inject-config mountPath: /etc/istio/inject readOnly: true livenessProbe: exec: command: - /usr/local/bin/sidecar-injector - probe - --probe-path=/health - --interval=4s initialDelaySeconds: 4 periodSeconds: 4 readinessProbe: exec: command: - /usr/local/bin/sidecar-injector - probe - --probe-path=/health - --interval=4s initialDelaySeconds: 4 periodSeconds: 4 resources: requests: cpu: 10m volumes: - name: config-volume configMap: name: istio - name: certs secret: secretName: istio.istio-sidecar-injector-service-account - name: inject-config configMap: name: istio-sidecar-injector items: - key: config path: config affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - ppc64le - s390x preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - ppc64le - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - s390x --- # Source: istio/charts/tracing/templates/deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: istio-tracing namespace: istio-system labels: app: istio-tracing chart: tracing-1.0.1 release: istio heritage: Tiller spec: replicas: 1 template: metadata: labels: app: jaeger annotations: sidecar.istio.io/inject: "false" scheduler.alpha.kubernetes.io/critical-pod: "" spec: containers: - name: jaeger image: "docker.io/jaegertracing/all-in-one:1.5" imagePullPolicy: IfNotPresent ports: - containerPort: 9411 - containerPort: 16686 - containerPort: 5775 protocol: UDP - containerPort: 6831 protocol: UDP - containerPort: 6832 protocol: UDP env: - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: COLLECTOR_ZIPKIN_HTTP_PORT value: "9411" - name: MEMORY_MAX_TRACES value: "50000" livenessProbe: httpGet: path: / port: 16686 readinessProbe: httpGet: path: / port: 16686 resources: requests: cpu: 10m affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - ppc64le - s390x preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 - weight: 2 preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - ppc64le - weight: 2 
preference: matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - s390x --- # Source: istio/charts/pilot/templates/gateway.yaml apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-autogenerated-k8s-ingress namespace: istio-system spec: selector: istio: ingress servers: - port: number: 80 protocol: HTTP2 name: http hosts: - "*" --- --- # Source: istio/charts/gateways/templates/autoscale.yaml apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: istio-egressgateway namespace: istio-system spec: maxReplicas: 5 minReplicas: 1 scaleTargetRef: apiVersion: apps/v1beta1 kind: Deployment name: istio-egressgateway metrics: - type: Resource resource: name: cpu targetAverageUtilization: 80 --- apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: istio-ingressgateway namespace: istio-system spec: maxReplicas: 5 minReplicas: 1 scaleTargetRef: apiVersion: apps/v1beta1 kind: Deployment name: istio-ingressgateway metrics: - type: Resource resource: name: cpu targetAverageUtilization: 80 --- --- # Source: istio/charts/ingress/templates/autoscale.yaml apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: istio-ingress namespace: istio-system spec: maxReplicas: 5 minReplicas: 1 scaleTargetRef: apiVersion: apps/v1beta1 kind: Deployment name: istio-ingress metrics: - type: Resource resource: name: cpu targetAverageUtilization: 80 --- # Source: istio/charts/mixer/templates/autoscale.yaml apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: istio-policy namespace: istio-system spec: maxReplicas: 5 minReplicas: 1 scaleTargetRef: apiVersion: apps/v1beta1 kind: Deployment name: istio-policy metrics: - type: Resource resource: name: cpu targetAverageUtilization: 80 --- apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: istio-telemetry namespace: istio-system spec: maxReplicas: 5 minReplicas: 1 scaleTargetRef: apiVersion: apps/v1beta1 kind: Deployment name: istio-telemetry metrics: - type: Resource resource: name: cpu targetAverageUtilization: 80 --- --- # Source: istio/charts/pilot/templates/autoscale.yaml apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: istio-pilot spec: maxReplicas: 5 minReplicas: 1 scaleTargetRef: apiVersion: apps/v1beta1 kind: Deployment name: istio-pilot metrics: - type: Resource resource: name: cpu targetAverageUtilization: 80 --- --- # Source: istio/charts/tracing/templates/service-jaeger.yaml apiVersion: v1 kind: List items: - apiVersion: v1 kind: Service metadata: name: jaeger-query namespace: istio-system annotations: labels: app: jaeger jaeger-infra: jaeger-service chart: tracing-1.0.1 release: istio heritage: Tiller spec: ports: - name: query-http port: 16686 protocol: TCP targetPort: 16686 selector: app: jaeger - apiVersion: v1 kind: Service metadata: name: jaeger-collector namespace: istio-system labels: app: jaeger jaeger-infra: collector-service chart: tracing-1.0.1 release: istio heritage: Tiller spec: ports: - name: jaeger-collector-tchannel port: 14267 protocol: TCP targetPort: 14267 - name: jaeger-collector-http port: 14268 targetPort: 14268 protocol: TCP selector: app: jaeger type: ClusterIP - apiVersion: v1 kind: Service metadata: name: jaeger-agent namespace: istio-system labels: app: jaeger jaeger-infra: agent-service chart: tracing-1.0.1 release: istio heritage: Tiller spec: ports: - name: agent-zipkin-thrift port: 5775 protocol: UDP targetPort: 5775 - name: agent-compact port: 6831 
protocol: UDP targetPort: 6831 - name: agent-binary port: 6832 protocol: UDP targetPort: 6832 clusterIP: None selector: app: jaeger --- # Source: istio/charts/tracing/templates/service.yaml apiVersion: v1 kind: List items: - apiVersion: v1 kind: Service metadata: name: zipkin namespace: istio-system labels: app: jaeger chart: tracing-1.0.1 release: istio heritage: Tiller spec: type: ClusterIP ports: - port: 9411 targetPort: 9411 protocol: TCP name: http selector: app: jaeger - apiVersion: v1 kind: Service metadata: name: tracing namespace: istio-system annotations: labels: app: jaeger chart: tracing-1.0.1 release: istio heritage: Tiller spec: ports: - name: http-query port: 80 protocol: TCP targetPort: 16686 selector: app: jaeger --- # Source: istio/charts/sidecarInjectorWebhook/templates/mutatingwebhook.yaml apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration metadata: name: istio-sidecar-injector namespace: istio-system labels: app: istio-sidecar-injector chart: sidecarInjectorWebhook-1.0.1 release: istio heritage: Tiller webhooks: - name: sidecar-injector.istio.io clientConfig: service: name: istio-sidecar-injector namespace: istio-system path: "/inject" caBundle: "" rules: - operations: [ "CREATE" ] apiGroups: [""] apiVersions: ["v1"] resources: ["pods"] failurePolicy: Fail namespaceSelector: matchLabels: istio-injection: enabled --- # Source: istio/charts/galley/templates/validatingwehookconfiguration.yaml.tpl --- # Source: istio/charts/grafana/templates/grafana-ports-mtls.yaml --- # Source: istio/charts/grafana/templates/pvc.yaml --- # Source: istio/charts/grafana/templates/secret.yaml --- # Source: istio/charts/kiali/templates/ingress.yaml --- # Source: istio/charts/pilot/templates/meshexpansion.yaml --- # Source: istio/charts/security/templates/create-custom-resources-job.yaml --- # Source: istio/charts/security/templates/enable-mesh-mtls.yaml --- # Source: istio/charts/security/templates/meshexpansion.yaml --- --- # Source: istio/charts/servicegraph/templates/ingress.yaml --- # Source: istio/charts/telemetry-gateway/templates/gateway.yaml --- # Source: istio/charts/tracing/templates/ingress-jaeger.yaml --- # Source: istio/charts/tracing/templates/ingress.yaml --- # Source: istio/templates/install-custom-resources.sh.tpl --- # Source: istio/charts/mixer/templates/config.yaml apiVersion: "config.istio.io/v1alpha2" kind: attributemanifest metadata: name: istioproxy namespace: istio-system spec: attributes: origin.ip: valueType: IP_ADDRESS origin.uid: valueType: STRING origin.user: valueType: STRING request.headers: valueType: STRING_MAP request.id: valueType: STRING request.host: valueType: STRING request.method: valueType: STRING request.path: valueType: STRING request.reason: valueType: STRING request.referer: valueType: STRING request.scheme: valueType: STRING request.total_size: valueType: INT64 request.size: valueType: INT64 request.time: valueType: TIMESTAMP request.useragent: valueType: STRING response.code: valueType: INT64 response.duration: valueType: DURATION response.headers: valueType: STRING_MAP response.total_size: valueType: INT64 response.size: valueType: INT64 response.time: valueType: TIMESTAMP source.uid: valueType: STRING source.user: # DEPRECATED valueType: STRING source.principal: valueType: STRING destination.uid: valueType: STRING destination.principal: valueType: STRING destination.port: valueType: INT64 connection.event: valueType: STRING connection.id: valueType: STRING connection.received.bytes: valueType: INT64 
connection.received.bytes_total: valueType: INT64 connection.sent.bytes: valueType: INT64 connection.sent.bytes_total: valueType: INT64 connection.duration: valueType: DURATION connection.mtls: valueType: BOOL connection.requested_server_name: valueType: STRING context.protocol: valueType: STRING context.timestamp: valueType: TIMESTAMP context.time: valueType: TIMESTAMP # Deprecated, kept for compatibility context.reporter.local: valueType: BOOL context.reporter.kind: valueType: STRING context.reporter.uid: valueType: STRING api.service: valueType: STRING api.version: valueType: STRING api.operation: valueType: STRING api.protocol: valueType: STRING request.auth.principal: valueType: STRING request.auth.audiences: valueType: STRING request.auth.presenter: valueType: STRING request.auth.claims: valueType: STRING_MAP request.auth.raw_claims: valueType: STRING request.api_key: valueType: STRING --- apiVersion: "config.istio.io/v1alpha2" kind: attributemanifest metadata: name: kubernetes namespace: istio-system spec: attributes: source.ip: valueType: IP_ADDRESS source.labels: valueType: STRING_MAP source.metadata: valueType: STRING_MAP source.name: valueType: STRING source.namespace: valueType: STRING source.owner: valueType: STRING source.service: # DEPRECATED valueType: STRING source.serviceAccount: valueType: STRING source.services: valueType: STRING source.workload.uid: valueType: STRING source.workload.name: valueType: STRING source.workload.namespace: valueType: STRING destination.ip: valueType: IP_ADDRESS destination.labels: valueType: STRING_MAP destination.metadata: valueType: STRING_MAP destination.owner: valueType: STRING destination.name: valueType: STRING destination.container.name: valueType: STRING destination.namespace: valueType: STRING destination.service: # DEPRECATED valueType: STRING destination.service.uid: valueType: STRING destination.service.name: valueType: STRING destination.service.namespace: valueType: STRING destination.service.host: valueType: STRING destination.serviceAccount: valueType: STRING destination.workload.uid: valueType: STRING destination.workload.name: valueType: STRING destination.workload.namespace: valueType: STRING --- apiVersion: "config.istio.io/v1alpha2" kind: stdio metadata: name: handler namespace: istio-system spec: outputAsJson: true --- apiVersion: "config.istio.io/v1alpha2" kind: logentry metadata: name: accesslog namespace: istio-system spec: severity: '"Info"' timestamp: request.time variables: sourceIp: source.ip | ip("0.0.0.0") sourceApp: source.labels["app"] | "" sourcePrincipal: source.principal | "" sourceName: source.name | "" sourceWorkload: source.workload.name | "" sourceNamespace: source.namespace | "" sourceOwner: source.owner | "" destinationApp: destination.labels["app"] | "" destinationIp: destination.ip | ip("0.0.0.0") destinationServiceHost: destination.service.host | "" destinationWorkload: destination.workload.name | "" destinationName: destination.name | "" destinationNamespace: destination.namespace | "" destinationOwner: destination.owner | "" destinationPrincipal: destination.principal | "" apiClaims: request.auth.raw_claims | "" apiKey: request.api_key | request.headers["x-api-key"] | "" protocol: request.scheme | context.protocol | "http" method: request.method | "" url: request.path | "" responseCode: response.code | 0 responseSize: response.size | 0 requestSize: request.size | 0 requestId: request.headers["x-request-id"] | "" clientTraceId: request.headers["x-client-trace-id"] | "" latency: response.duration | 
"0ms" connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) requestedServerName: connection.requested_server_name | "" userAgent: request.useragent | "" responseTimestamp: response.time receivedBytes: request.total_size | 0 sentBytes: response.total_size | 0 referer: request.referer | "" httpAuthority: request.headers[":authority"] | request.host | "" xForwardedFor: request.headers["x-forwarded-for"] | "0.0.0.0" reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") monitored_resource_type: '"global"' --- apiVersion: "config.istio.io/v1alpha2" kind: logentry metadata: name: tcpaccesslog namespace: istio-system spec: severity: '"Info"' timestamp: context.time | timestamp("2017-01-01T00:00:00Z") variables: connectionEvent: connection.event | "" sourceIp: source.ip | ip("0.0.0.0") sourceApp: source.labels["app"] | "" sourcePrincipal: source.principal | "" sourceName: source.name | "" sourceWorkload: source.workload.name | "" sourceNamespace: source.namespace | "" sourceOwner: source.owner | "" destinationApp: destination.labels["app"] | "" destinationIp: destination.ip | ip("0.0.0.0") destinationServiceHost: destination.service.host | "" destinationWorkload: destination.workload.name | "" destinationName: destination.name | "" destinationNamespace: destination.namespace | "" destinationOwner: destination.owner | "" destinationPrincipal: destination.principal | "" protocol: context.protocol | "tcp" connectionDuration: connection.duration | "0ms" connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) requestedServerName: connection.requested_server_name | "" receivedBytes: connection.received.bytes | 0 sentBytes: connection.sent.bytes | 0 totalReceivedBytes: connection.received.bytes_total | 0 totalSentBytes: connection.sent.bytes_total | 0 reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") monitored_resource_type: '"global"' --- apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: stdio namespace: istio-system spec: match: context.protocol == "http" || context.protocol == "grpc" actions: - handler: handler.stdio instances: - accesslog.logentry --- apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: stdiotcp namespace: istio-system spec: match: context.protocol == "tcp" actions: - handler: handler.stdio instances: - tcpaccesslog.logentry --- apiVersion: "config.istio.io/v1alpha2" kind: metric metadata: name: requestcount namespace: istio-system spec: value: "1" dimensions: reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") source_workload: source.workload.name | "unknown" source_workload_namespace: source.workload.namespace | "unknown" source_principal: source.principal | "unknown" source_app: source.labels["app"] | "unknown" source_version: source.labels["version"] | "unknown" destination_workload: destination.workload.name | "unknown" destination_workload_namespace: destination.workload.namespace | "unknown" destination_principal: destination.principal | "unknown" destination_app: destination.labels["app"] | "unknown" destination_version: destination.labels["version"] | "unknown" destination_service: destination.service.host | "unknown" destination_service_name: destination.service.name | "unknown" 
destination_service_namespace: destination.service.namespace | "unknown" request_protocol: api.protocol | context.protocol | "unknown" response_code: response.code | 200 connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) monitored_resource_type: '"UNSPECIFIED"' --- apiVersion: "config.istio.io/v1alpha2" kind: metric metadata: name: requestduration namespace: istio-system spec: value: response.duration | "0ms" dimensions: reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") source_workload: source.workload.name | "unknown" source_workload_namespace: source.workload.namespace | "unknown" source_principal: source.principal | "unknown" source_app: source.labels["app"] | "unknown" source_version: source.labels["version"] | "unknown" destination_workload: destination.workload.name | "unknown" destination_workload_namespace: destination.workload.namespace | "unknown" destination_principal: destination.principal | "unknown" destination_app: destination.labels["app"] | "unknown" destination_version: destination.labels["version"] | "unknown" destination_service: destination.service.host | "unknown" destination_service_name: destination.service.name | "unknown" destination_service_namespace: destination.service.namespace | "unknown" request_protocol: api.protocol | context.protocol | "unknown" response_code: response.code | 200 connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) monitored_resource_type: '"UNSPECIFIED"' --- apiVersion: "config.istio.io/v1alpha2" kind: metric metadata: name: requestsize namespace: istio-system spec: value: request.size | 0 dimensions: reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") source_workload: source.workload.name | "unknown" source_workload_namespace: source.workload.namespace | "unknown" source_principal: source.principal | "unknown" source_app: source.labels["app"] | "unknown" source_version: source.labels["version"] | "unknown" destination_workload: destination.workload.name | "unknown" destination_workload_namespace: destination.workload.namespace | "unknown" destination_principal: destination.principal | "unknown" destination_app: destination.labels["app"] | "unknown" destination_version: destination.labels["version"] | "unknown" destination_service: destination.service.host | "unknown" destination_service_name: destination.service.name | "unknown" destination_service_namespace: destination.service.namespace | "unknown" request_protocol: api.protocol | context.protocol | "unknown" response_code: response.code | 200 connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) monitored_resource_type: '"UNSPECIFIED"' --- apiVersion: "config.istio.io/v1alpha2" kind: metric metadata: name: responsesize namespace: istio-system spec: value: response.size | 0 dimensions: reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") source_workload: source.workload.name | "unknown" source_workload_namespace: source.workload.namespace | "unknown" source_principal: source.principal | "unknown" source_app: source.labels["app"] | "unknown" source_version: source.labels["version"] | "unknown" destination_workload: 
destination.workload.name | "unknown" destination_workload_namespace: destination.workload.namespace | "unknown" destination_principal: destination.principal | "unknown" destination_app: destination.labels["app"] | "unknown" destination_version: destination.labels["version"] | "unknown" destination_service: destination.service.host | "unknown" destination_service_name: destination.service.name | "unknown" destination_service_namespace: destination.service.namespace | "unknown" request_protocol: api.protocol | context.protocol | "unknown" response_code: response.code | 200 connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) monitored_resource_type: '"UNSPECIFIED"' --- apiVersion: "config.istio.io/v1alpha2" kind: metric metadata: name: tcpbytesent namespace: istio-system spec: value: connection.sent.bytes | 0 dimensions: reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") source_workload: source.workload.name | "unknown" source_workload_namespace: source.workload.namespace | "unknown" source_principal: source.principal | "unknown" source_app: source.labels["app"] | "unknown" source_version: source.labels["version"] | "unknown" destination_workload: destination.workload.name | "unknown" destination_workload_namespace: destination.workload.namespace | "unknown" destination_principal: destination.principal | "unknown" destination_app: destination.labels["app"] | "unknown" destination_version: destination.labels["version"] | "unknown" destination_service: destination.service.name | "unknown" destination_service_name: destination.service.name | "unknown" destination_service_namespace: destination.service.namespace | "unknown" connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) monitored_resource_type: '"UNSPECIFIED"' --- apiVersion: "config.istio.io/v1alpha2" kind: metric metadata: name: tcpbytereceived namespace: istio-system spec: value: connection.received.bytes | 0 dimensions: reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") source_workload: source.workload.name | "unknown" source_workload_namespace: source.workload.namespace | "unknown" source_principal: source.principal | "unknown" source_app: source.labels["app"] | "unknown" source_version: source.labels["version"] | "unknown" destination_workload: destination.workload.name | "unknown" destination_workload_namespace: destination.workload.namespace | "unknown" destination_principal: destination.principal | "unknown" destination_app: destination.labels["app"] | "unknown" destination_version: destination.labels["version"] | "unknown" destination_service: destination.service.name | "unknown" destination_service_name: destination.service.name | "unknown" destination_service_namespace: destination.service.namespace | "unknown" connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) monitored_resource_type: '"UNSPECIFIED"' --- apiVersion: "config.istio.io/v1alpha2" kind: prometheus metadata: name: handler namespace: istio-system spec: metrics: - name: requests_total instance_name: requestcount.metric.istio-system kind: COUNTER label_names: - reporter - source_app - source_principal - source_workload - source_workload_namespace - 
source_version - destination_app - destination_principal - destination_workload - destination_workload_namespace - destination_version - destination_service - destination_service_name - destination_service_namespace - request_protocol - response_code - connection_security_policy - name: request_duration_seconds instance_name: requestduration.metric.istio-system kind: DISTRIBUTION label_names: - reporter - source_app - source_principal - source_workload - source_workload_namespace - source_version - destination_app - destination_principal - destination_workload - destination_workload_namespace - destination_version - destination_service - destination_service_name - destination_service_namespace - request_protocol - response_code - connection_security_policy buckets: explicit_buckets: bounds: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10] - name: request_bytes instance_name: requestsize.metric.istio-system kind: DISTRIBUTION label_names: - reporter - source_app - source_principal - source_workload - source_workload_namespace - source_version - destination_app - destination_principal - destination_workload - destination_workload_namespace - destination_version - destination_service - destination_service_name - destination_service_namespace - request_protocol - response_code - connection_security_policy buckets: exponentialBuckets: numFiniteBuckets: 8 scale: 1 growthFactor: 10 - name: response_bytes instance_name: responsesize.metric.istio-system kind: DISTRIBUTION label_names: - reporter - source_app - source_principal - source_workload - source_workload_namespace - source_version - destination_app - destination_principal - destination_workload - destination_workload_namespace - destination_version - destination_service - destination_service_name - destination_service_namespace - request_protocol - response_code - connection_security_policy buckets: exponentialBuckets: numFiniteBuckets: 8 scale: 1 growthFactor: 10 - name: tcp_sent_bytes_total instance_name: tcpbytesent.metric.istio-system kind: COUNTER label_names: - reporter - source_app - source_principal - source_workload - source_workload_namespace - source_version - destination_app - destination_principal - destination_workload - destination_workload_namespace - destination_version - destination_service - destination_service_name - destination_service_namespace - connection_security_policy - name: tcp_received_bytes_total instance_name: tcpbytereceived.metric.istio-system kind: COUNTER label_names: - reporter - source_app - source_principal - source_workload - source_workload_namespace - source_version - destination_app - destination_principal - destination_workload - destination_workload_namespace - destination_version - destination_service - destination_service_name - destination_service_namespace - connection_security_policy --- apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: promhttp namespace: istio-system spec: match: context.protocol == "http" || context.protocol == "grpc" actions: - handler: handler.prometheus instances: - requestcount.metric - requestduration.metric - requestsize.metric - responsesize.metric --- apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: promtcp namespace: istio-system spec: match: context.protocol == "tcp" actions: - handler: handler.prometheus instances: - tcpbytesent.metric - tcpbytereceived.metric --- apiVersion: "config.istio.io/v1alpha2" kind: kubernetesenv metadata: name: handler namespace: istio-system spec: # when running from mixer root, use the 
following config after adding a # symbolic link to a kubernetes config file via: # # $ ln -s ~/.kube/config mixer/adapter/kubernetes/kubeconfig # # kubeconfig_path: "mixer/adapter/kubernetes/kubeconfig" --- apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: kubeattrgenrulerule namespace: istio-system spec: actions: - handler: handler.kubernetesenv instances: - attributes.kubernetes --- apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: tcpkubeattrgenrulerule namespace: istio-system spec: match: context.protocol == "tcp" actions: - handler: handler.kubernetesenv instances: - attributes.kubernetes --- apiVersion: "config.istio.io/v1alpha2" kind: kubernetes metadata: name: attributes namespace: istio-system spec: # Pass the required attribute data to the adapter source_uid: source.uid | "" source_ip: source.ip | ip("0.0.0.0") # default to unspecified ip addr destination_uid: destination.uid | "" destination_port: destination.port | 0 attribute_bindings: # Fill the new attributes from the adapter produced output. # $out refers to an instance of OutputTemplate message source.ip: $out.source_pod_ip | ip("0.0.0.0") source.uid: $out.source_pod_uid | "unknown" source.labels: $out.source_labels | emptyStringMap() source.name: $out.source_pod_name | "unknown" source.namespace: $out.source_namespace | "default" source.owner: $out.source_owner | "unknown" source.serviceAccount: $out.source_service_account_name | "unknown" source.workload.uid: $out.source_workload_uid | "unknown" source.workload.name: $out.source_workload_name | "unknown" source.workload.namespace: $out.source_workload_namespace | "unknown" destination.ip: $out.destination_pod_ip | ip("0.0.0.0") destination.uid: $out.destination_pod_uid | "unknown" destination.labels: $out.destination_labels | emptyStringMap() destination.name: $out.destination_pod_name | "unknown" destination.container.name: $out.destination_container_name | "unknown" destination.namespace: $out.destination_namespace | "default" destination.owner: $out.destination_owner | "unknown" destination.serviceAccount: $out.destination_service_account_name | "unknown" destination.workload.uid: $out.destination_workload_uid | "unknown" destination.workload.name: $out.destination_workload_name | "unknown" destination.workload.namespace: $out.destination_workload_namespace | "unknown" --- # Configuration needed by Mixer. # Mixer cluster is delivered via CDS # Specify mixer cluster settings apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: istio-policy namespace: istio-system spec: host: istio-policy.istio-system.svc.cluster.local trafficPolicy: connectionPool: http: http2MaxRequests: 10000 maxRequestsPerConnection: 10000 --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: istio-telemetry namespace: istio-system spec: host: istio-telemetry.istio-system.svc.cluster.local trafficPolicy: connectionPool: http: http2MaxRequests: 10000 maxRequestsPerConnection: 10000 ---
This renders the Chart in the install/kubernetes/helm/istio directory and saves the output to the ./istio.yaml file. Setting sidecarInjectorWebhook.enabled to true enables automatic sidecar injection.
After deployment completes, check that the services in the istio-system namespace are running properly:
[root@master1 kubernetes]# /root/show.sh | grep istio-system
istio-system   grafana-676b67689b-wzsrg                   1/1   Running     0   1h   10.254.98.12   node3
istio-system   istio-citadel-5f7664d4f5-szqtx             1/1   Running     0   1h   10.254.98.14   node3
istio-system   istio-cleanup-secrets-ppbg7                0/1   Completed   0   1h   10.254.87.7    node2
istio-system   istio-egressgateway-7c56f748c7-2lw4z       1/1   Running     0   1h   10.254.98.10   node3
istio-system   istio-egressgateway-7c56f748c7-57m6v       1/1   Running     0   1h   10.254.104.6   node5
istio-system   istio-egressgateway-7c56f748c7-q9hsj       1/1   Running     0   1h   10.254.87.13   node2
istio-system   istio-egressgateway-7c56f748c7-vmrwh       1/1   Running     0   1h   10.254.102.6   node1
istio-system   istio-egressgateway-7c56f748c7-wxjfs       1/1   Running     0   1h   10.254.95.3    node4
istio-system   istio-galley-7cfd5974fd-hwkz4              1/1   Running     0   1h   10.254.87.11   node2
istio-system   istio-grafana-post-install-p8hfg           0/1   Completed   0   1h   10.254.98.8    node3
istio-system   istio-ingressgateway-679574c9c6-6hnnk      1/1   Running     0   1h   10.254.102.7   node1
istio-system   istio-ingressgateway-679574c9c6-d456h      1/1   Running     0   1h   10.254.87.9    node2
istio-system   istio-ingressgateway-679574c9c6-gd9p7      1/1   Running     0   1h   10.254.98.17   node3
istio-system   istio-ingressgateway-679574c9c6-gzff8      1/1   Running     0   1h   10.254.104.7   node5
istio-system   istio-ingressgateway-679574c9c6-sf75q      1/1   Running     0   1h   10.254.95.5    node4
istio-system   istio-pilot-6897c9df47-gpqmg               2/2   Running     0   1h   10.254.87.7    node2
istio-system   istio-policy-5459cb554f-6nbsk              2/2   Running     0   1h   10.254.104.8   node5
istio-system   istio-policy-5459cb554f-74v9w              2/2   Running     0   1h   10.254.98.18   node3
istio-system   istio-policy-5459cb554f-bqxt8              2/2   Running     0   1h   10.254.95.4    node4
istio-system   istio-policy-5459cb554f-f4x6j              2/2   Running     0   1h   10.254.102.8   node1
istio-system   istio-policy-5459cb554f-qj74h              2/2   Running     0   1h   10.254.87.10   node2
istio-system   istio-sidecar-injector-5c5fb8f6b9-224tw    1/1   Running     0   1h   10.254.98.8    node3
istio-system   istio-statsd-prom-bridge-d44479954-pbb8d   1/1   Running     0   1h   10.254.98.9    node3
istio-system   istio-telemetry-8694d7f76-9g8s2            2/2   Running     0   1h   10.254.104.5   node5
istio-system   istio-telemetry-8694d7f76-kh7bf            2/2   Running     0   1h   10.254.87.12   node2
istio-system   istio-telemetry-8694d7f76-mt5rg            2/2   Running     0   1h   10.254.95.6    node4
istio-system   istio-telemetry-8694d7f76-t5p8l            2/2   Running     0   1h   10.254.102.5   node1
istio-system   istio-telemetry-8694d7f76-vfdbl            2/2   Running     0   1h   10.254.98.11   node3
istio-system   istio-tracing-7c9b8969f7-wpt8p             1/1   Running     0   1h   10.254.98.16   node3
istio-system   prometheus-6b945b75b6-vng9q                1/1   Running     0   1h   10.254.98.13   node3
Note the istio-citadel component, Istio's certificate authority. istio-cleanup-secrets is a Job that cleans up CA deployments left over from earlier Istio installations (three objects: the ServiceAccount, Deployment, and Service). There are now egressgateway, ingress, and ingressgateway components; the edge of the mesh has clearly changed a great deal, which will be covered in a separate post.
Installing directly on Kubernetes
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
kubectl apply -f install/kubernetes/istio-demo.yaml
Once all the Pods are up, these services can be accessed via NodePort, Ingress, or kubectl proxy. For example, they can be reached through an Ingress.
First create Ingresses for the Prometheus, Grafana, Servicegraph, and Jaeger services:
$ cat ingress.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus
  namespace: istio-system
spec:
  rules:
  - host: prometheus.istio.io
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus
          servicePort: 9090
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: istio-system
spec:
  rules:
  - host: grafana.istio.io
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: servicegraph
  namespace: istio-system
spec:
  rules:
  - host: servicegraph.istio.io
    http:
      paths:
      - path: /
        backend:
          serviceName: servicegraph
          servicePort: 8088
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tracing
  namespace: istio-system
spec:
  rules:
  - host: tracing.istio.io
    http:
      paths:
      - path: /
        backend:
          serviceName: tracing
          servicePort: 80
Both manual and automatic injection read their configuration from the istio-sidecar-injector and istio ConfigMaps in the istio-system namespace.
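To see exactly what will be injected, both ConfigMaps can be inspected directly with ordinary read-only kubectl queries:

kubectl -n istio-system get configmap istio-sidecar-injector -o yaml
kubectl -n istio-system get configmap istio -o yaml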
Automatic injection happens while a Pod is being created, so this method does not modify the controller's configuration. Deleting Pods by hand or performing a rolling update both selectively update their sidecars.
Injecting the sidecar into a Deployment using the in-cluster configuration
[root@master1 ~]# istioctl kube-inject -f gateway-deployment.yaml | kubectl apply -f -
deployment.apps/gateway created
service/gateway created
Inspect the injected proxy:
[root@master1 ~]# kubectl get deployment gateway -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1beta2","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"name":"gateway","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"gateway"}},"strategy":{},"template":{"metadata":{"annotations":{"sidecar.istio.io/status":"{\"version\":\"9f116c4689c03bb21330a7b2baa7a88c26d7c5adb08b1deda5fca9032de8a474\",\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-certs\"],\"imagePullSecrets\":null}"},"creationTimestamp":null,"labels":{"app":"gateway"}},"spec":{"containers":[{"image":"192.168.200.10/testsubject/gateway:28","name":"gateway","ports":[{"containerPort":80}],"resources":{"limits":{"cpu":"400m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"512Mi"}}},{"args":["proxy","sidecar","--configPath","/etc/istio/proxy","--binaryPath","/usr/local/bin/envoy","--serviceCluster","gateway","--drainDuration","45s","--parentShutdownDuration","1m0s","--discoveryAddress","istio-pilot.istio-system:15007","--discoveryRefreshDelay","1s","--zipkinAddress","zipkin.istio-system:9411","--connectTimeout","10s","--statsdUdpAddress","istio-statsd-prom-bridge.istio-system:9125","--proxyAdminPort","15000","--controlPlaneAuthPolicy","NONE"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"INSTANCE_IP","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}},{"name":"ISTIO_META_POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"ISTIO_META_INTERCEPTION_MODE","value":"REDIRECT"}],"image":"docker.io/istio/proxyv2:1.0.0","imagePullPolicy":"IfNotPresent","name":"istio-proxy","resources":{"requests":{"cpu":"10m"}},"securityContext":{"privileged":false,"readOnlyRootFilesystem":true,"runAsUser":1337},"volumeMounts":[{"mountPath":"/etc/istio/proxy","name":"istio-envoy"},{"mountPath":"/etc/certs/","name":"istio-certs","readOnly":true}]}],"initContainers":[{"args":["-p","15001","-u","1337","-m","REDIRECT","-i","*","-x","","-b","80,","-d",""],"image":"192.168.200.10/istio/proxy_init:1.0.0","imagePullPolicy":"IfNotPresent","name":"istio-init","resources":{},"securityContext":{"capabilities":{"add":["NET_ADMIN"]},"privileged":true}}],"volumes":[{"emptyDir":{"medium":"Memory"},"name":"istio-envoy"},{"name":"istio-certs","secret":{"optional":true,"secretName":"istio.default"}}]}}},"status":{}}
  creationTimestamp: 2018-08-15T06:21:17Z
  generation: 1
  name: gateway
  namespace: default
  resourceVersion: "5173472"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/gateway
  uid: 6af616cf-a053-11e8-b03c-005056845c62
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: gateway
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        sidecar.istio.io/status: '{"version":"9f116c4689c03bb21330a7b2baa7a88c26d7c5adb08b1deda5fca9032de8a474","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
      creationTimestamp: null
      labels:
        app: gateway
    spec:
      containers:
      - image: 192.168.200.10/testsubject/gateway:28
        imagePullPolicy: IfNotPresent
        name: gateway
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            cpu: 400m
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 512Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - args:
        - proxy
        - sidecar
        - --configPath
        - /etc/istio/proxy
        - --binaryPath
        - /usr/local/bin/envoy
        - --serviceCluster
        - gateway
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --discoveryAddress
        - istio-pilot.istio-system:15007
        - --discoveryRefreshDelay
        - 1s
        - --zipkinAddress
        - zipkin.istio-system:9411
        - --connectTimeout
        - 10s
        - --statsdUdpAddress
        - istio-statsd-prom-bridge.istio-system:9125
        - --proxyAdminPort
        - "15000"
        - --controlPlaneAuthPolicy
        - NONE
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        image: docker.io/istio/proxyv2:1.0.0
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        resources:
          requests:
            cpu: 10m
        securityContext:
          privileged: false
          readOnlyRootFilesystem: true
          runAsUser: 1337
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
      - args:
        - -p
        - "15001"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - 80,
        - -d
        - ""
        image: 192.168.200.10/istio/proxy_init:1.0.0
        imagePullPolicy: IfNotPresent
        name: istio-init
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          defaultMode: 420
          optional: true
          secretName: istio.default
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-08-15T06:21:28Z
    lastUpdateTime: 2018-08-15T06:21:28Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-08-15T06:21:17Z
    lastUpdateTime: 2018-08-15T06:21:28Z
    message: ReplicaSet "gateway-78ddd84d8d" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Sidecar auto-injection relies on Kubernetes' mutating webhook admission controller, which is only available in Kubernetes 1.9 and later. Before using this feature, check that the kube-apiserver process has the admission-control flag, that its value includes both MutatingAdmissionWebhook and ValidatingAdmissionWebhook, and that they are loaded in the correct order, so that the admissionregistration API is enabled:
kubectl api-versions | grep admissionregistration
admissionregistration.k8s.io/v1alpha1
admissionregistration.k8s.io/v1beta1
Unlike manual injection, automatic injection happens at the Pod level, so the Deployment itself shows no change. Use kubectl describe on an individual Pod to see the details of the injected sidecar.
[root@master1 helm]# kubectl get configmap istio-sidecar-injector -n istio-system -o yaml > istio-sidecar-injector.yaml
[root@master1 helm]# vim istio-sidecar-injector.yaml

Edit the ConfigMap so the injected proxyv2 image is pulled from the private registry rather than docker.io.
[root@master1 helm]# kubectl apply -f istio-sidecar-injector.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/istio-sidecar-injector configured
By default, the webhook used for sidecar injection is enabled. To disable it, use Helm to set the sidecarInjectorWebhook.enabled parameter to false, generate a new istio.yaml, and apply it. That is:
helm template --namespace=istio-system --set sidecarInjectorWebhook.enabled=false install/kubernetes/helm/istio > istio.yaml
kubectl create ns istio-system
kubectl apply -n istio-system -f istio.yaml
Label the default namespace with istio-injection=enabled:
kubectl label namespace default istio-injection=enabled
kubectl get namespace -L istio-injection
NAME           STATUS    AGE       ISTIO-INJECTION
default        Active    1h        enabled
istio-system   Active    1h
kube-public    Active    1h
kube-system    Active    1h
This triggers sidecar injection whenever a Pod is created. Deleting a running Pod produces a new Pod, and the new Pod is injected with a sidecar. The original Pod had a single container, while the injected Pod has two:
kubectl delete pod sleep-776b7bcdcd-7hpnk
kubectl get pod
NAME                     READY     STATUS        RESTARTS   AGE
sleep-776b7bcdcd-7hpnk   1/1       Terminating   0          1m
sleep-776b7bcdcd-bhn9m   2/2       Running       0          7s
Inspect the details of the injected Pod. Notice the additional istio-proxy container and its corresponding volumes. Be sure to run the following command with the correct Pod name:
kubectl describe pod sleep-776b7bcdcd-bhn9m
Disable automatic injection for the default namespace, then verify that newly created Pods no longer carry a sidecar container:
kubectl label namespace default istio-injection-
kubectl delete pod sleep-776b7bcdcd-bhn9m
kubectl get pod
NAME                     READY     STATUS        RESTARTS   AGE
sleep-776b7bcdcd-bhn9m   2/2       Terminating   0          2m
sleep-776b7bcdcd-gmvnr   1/1       Running       0          2s
When invoked by Kubernetes, the webhook is configured through admissionregistration.k8s.io/v1beta1#MutatingWebhookConfiguration. Istio's default configuration selects Pods in namespaces carrying the istio-injection=enabled label. The kubectl edit mutatingwebhookconfiguration istio-sidecar-injector command can be used to edit the range of target namespaces.
After modifying the mutatingwebhookconfiguration, restart any Pods that already have a sidecar injected.
The istio-sidecar-injector ConfigMap in the istio-system namespace contains the default injection policy and the sidecar injection template.
disabled - The sidecar injector does not inject Pods by default. Injection is enabled only by adding the sidecar.istio.io/inject annotation to the Pod template with the value true (a sketch follows the next item).
enabled - The sidecar injector injects Pods by default. Injection is prevented for a given Pod by adding the sidecar.istio.io/inject annotation to the Pod template with the value false.
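Conversely, under the disabled policy a Pod can opt in by setting the same annotation to true. A minimal sketch, mirroring the opt-out example below (the name and image are illustrative):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: opted-in
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"   # opt this Pod in to injection
    spec:
      containers:
      - name: opted-in
        image: tutum/curl
        command: ["/bin/sleep","infinity"]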
The following example uses the sidecar.istio.io/inject annotation to disable sidecar injection:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ignored
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - name: ignored
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
The sidecar injection template is a golang template which, when parsed and executed, decodes into the following struct containing the containers and volumes to be injected into the Pod:
type SidecarInjectionSpec struct {
    InitContainers   []v1.Container                `yaml:"initContainers"`
    Containers       []v1.Container                `yaml:"containers"`
    Volumes          []v1.Volume                   `yaml:"volumes"`
    ImagePullSecrets []corev1.LocalObjectReference `yaml:"imagePullSecrets"`
}
At runtime, the template is rendered against the following data structure:
type SidecarTemplateData struct {
    ObjectMeta  *metav1.ObjectMeta
    Spec        *v1.PodSpec
    ProxyConfig *meshconfig.ProxyConfig // Defined at https://istio.io/docs/reference/config/service-mesh.html#proxyconfig
    MeshConfig  *meshconfig.MeshConfig  // Defined at https://istio.io/docs/reference/config/service-mesh.html#meshconfig
}
ObjectMeta and Spec both come from the Pod. ProxyConfig and MeshConfig come from the istio ConfigMap in the istio-system namespace. The template can use this data to conditionally define the containers and volumes to be injected.
For example, the following template snippet comes from install/kubernetes/istio-sidecar-injector-configmap-release.yaml:
containers:
- name: istio-proxy
  image: istio.io/proxy:0.5.0
  args:
  - proxy
  - sidecar
  - --configPath
  - {{ .ProxyConfig.ConfigPath }}
  - --binaryPath
  - {{ .ProxyConfig.BinaryPath }}
  - --serviceCluster
  {{ if ne "" (index .ObjectMeta.Labels "app") -}}
  - {{ index .ObjectMeta.Labels "app" }}
  {{ else -}}
  - "istio-proxy"
  {{ end -}}
When the sleep application is deployed, this is applied to the Pod and expands to:
containers:
- name: istio-proxy
  image: istio.io/proxy:0.5.0
  args:
  - proxy
  - sidecar
  - --configPath
  - /etc/istio/proxy
  - --binaryPath
  - /usr/local/bin/envoy
  - --serviceCluster
  - sleep
kubectl delete mutatingwebhookconfiguration istio-sidecar-injector
kubectl -n istio-system delete service istio-sidecar-injector
kubectl -n istio-system delete deployment istio-sidecar-injector
kubectl -n istio-system delete serviceaccount istio-sidecar-injector-service-account
kubectl delete clusterrole istio-sidecar-injector-istio-system
kubectl delete clusterrolebinding istio-sidecar-injector-admin-role-binding-istio-system
The commands above do not remove sidecars already injected into Pods. To remove them, perform a rolling update, or simply delete the Pods and force the Deployment to recreate them.
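For example, assuming the sleep Pods carry an app=sleep label, deleting them by label forces the Deployment to recreate them without sidecars:

kubectl delete pod -l app=sleep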
In addition, the other changes made in this task can be reverted:
kubectl label namespace default istio-injection-
To become part of the service mesh, Pods and Services in the Kubernetes cluster must satisfy the following requirements:
Name ports correctly: service ports must be named, and the port name must follow the <protocol>[-<suffix>] pattern, where <protocol> is one of http, http2, grpc, mongo, or redis; Istio provides routing based on its support for these protocols. For example, name: http2-foo and name: http are both valid port names, but name: http2foo is not. If a port is unnamed, or its name does not start with one of these prefixes, its traffic is treated as plain TCP (unless the port is explicitly declared as UDP with Protocol: UDP).
Associate with a Service: a Pod must belong to a Kubernetes Service. If a Pod belongs to multiple Services, those Services must not use different protocols (for example HTTP and TCP) on the same port.
Deployments should carry an app label: when deploying Pods with a Kubernetes Deployment, it is recommended to explicitly add an app label. Every Deployment should have a meaningful app label, which is used to add contextual information during distributed tracing. A sketch combining these requirements follows this list.
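A minimal sketch tying the requirements together; the reviews service name, port, and image below are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: reviews
spec:
  selector:
    app: reviews
  ports:
  - name: http-web        # the "http" prefix marks this port's traffic as HTTP for Istio routing
    port: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  template:
    metadata:
      labels:
        app: reviews      # meaningful app label, used to add tracing context
        version: v1
    spec:
      containers:
      - name: reviews
        image: 192.168.200.10/testsubject/reviews:1   # illustrative image
        ports:
        - containerPort: 9080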
For option 1, uninstall using kubectl:
$ kubectl delete -f istio.yaml
For option 2, uninstall using Helm:
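Presumably with something like the following, assuming the release was installed via Tiller under the name istio:

$ helm delete --purge istio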
Istio's traffic management model essentially decouples traffic from infrastructure scaling: operators use Pilot to specify the rules traffic should follow, rather than specifying which pods/VMs should receive it; Pilot and the intelligent Envoy proxies handle the rest. For example, you can specify via Pilot that 5% of a service's traffic should go to a canary version regardless of the size of the canary deployment, or that traffic should go to a particular version based on the content of the request.
The core component of Istio traffic management is Pilot, which manages and configures all the Envoy proxy instances deployed in a given Istio service mesh. It lets you specify the routing rules used to direct traffic between Envoy proxies and configure failure recovery features such as timeouts, retries, and circuit breakers. It also maintains a canonical model of all the services in the mesh and, through its discovery services, uses this model to keep each Envoy informed about the other instances in the mesh.
Each Envoy instance maintains load balancing information obtained from Pilot and from periodic health checks of the other instances in its load balancing pool, allowing it to distribute traffic intelligently across target instances while following the specified routing rules.
Pilot manages the lifecycle of the Envoy instances deployed across the Istio service mesh.
Pilot maintains a canonical representation of the services in the mesh that is independent of the underlying platform. Platform-specific adapters in Pilot populate this canonical model appropriately. For example, the Kubernetes adapter in Pilot implements the controllers needed to watch the Kubernetes API server for changes to pod registrations, ingress resources, and the resources that store traffic management rules. This data is translated into the canonical representation, from which Envoy-specific configuration is then generated.
Pilot exposes APIs for service discovery and for dynamic updates of load balancing pools and routing tables.
Operators can specify high-level traffic management rules through Pilot's Rules API. These rules are translated into low-level configuration and distributed to the Envoy instances via the discovery API.
All traffic entering and leaving the Istio service mesh transits through an Envoy proxy. By deploying Envoy proxies in front of services, operators can run A/B tests and canary deployments for user-facing services. Similarly, by using Envoy to route traffic to external web services (for example a Maps API or a video service API), operators can add timeouts, retries, and circuit breakers for those services while obtaining detailed metrics on the connections.
Pilot consumes information from the service registry and provides a platform-independent service discovery interface. Envoy instances in the mesh perform service discovery and dynamically update their load balancing pools accordingly.
Services in the mesh reach each other through their DNS names. All HTTP traffic for a service is automatically re-routed through Envoy, which distributes it across the instances in the load balancing pool. Although Envoy supports many sophisticated load balancing algorithms, Istio currently allows only three modes: round robin, random, and weighted least request.
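The mode is chosen in a DestinationRule. As a minimal sketch (the rule name is illustrative), the least-request mode corresponds to the LEAST_CONN setting:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-lb
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN   # ROUND_ROBIN and RANDOM are the other simple modes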
In addition to load balancing, Envoy periodically checks the health of each instance in the pool. Envoy follows a circuit-breaker style pattern, classifying instances as healthy or unhealthy based on the failure rate of health check API calls: when the number of health check failures for a given instance exceeds a preset threshold, it is ejected from the load balancing pool; when the number of passing health checks exceeds a preset threshold, the instance is added back. You can learn more about Envoy's failure handling features in Handling Failures.
Services can actively shed load by responding to health checks with HTTP 503; in that case the service instance is immediately removed from the caller's load balancing pool.
Timeouts
Bounded retries with timeout budgets and variable jitter (intervals) between retries
Limits on the number of concurrent connections and requests to upstream services
Active (periodic) health checks on each member of the load balancing pool
Fine-grained circuit breakers (passive health checks), applied to each instance in the load balancing pool (see the sketch below)
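Passive health checking, the fine-grained circuit breaker just mentioned, is configured with outlierDetection in a DestinationRule. A minimal sketch; the rule name and thresholds are illustrative:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-cb
spec:
  host: reviews
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5     # eject an instance after 5 consecutive errors
      interval: 10s            # how often the analysis sweep runs
      baseEjectionTime: 30s    # minimum time an ejected instance stays out
      maxEjectionPercent: 50   # never eject more than half the pool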
Do applications running in Istio still need to handle failures?
Yes. Istio improves the reliability and availability of services in the mesh, but applications still need to handle failures (errors) and take appropriate fallback actions. For example, when all instances in a load balancing pool have failed, Envoy returns HTTP 503. The application is responsible for implementing whatever logic is needed to respond appropriately to an HTTP 503 from an upstream service.
While the Envoy sidecar/proxy provides plenty of failure recovery mechanisms for services running on Istio, it is still essential to test the end-to-end failure recovery capability of the application as a whole. Misconfigured failure recovery policies (for example, incompatible or restrictive timeouts across service calls) can leave critical services in the application continuously unavailable, degrading the user experience.
Istio can inject protocol-specific faults into the network without killing Pods, and can introduce packet delays or corruption at the TCP layer. Our reasoning is that the failures observed by the application layer are the same regardless of the network-level fault, and that more meaningful faults (for example HTTP error codes) can be injected at the application layer to test and improve an application's resilience.
Operators can configure faults for requests matching specific conditions and can further restrict the percentage of requests subjected to faults. Two types of fault can be injected: delays and aborts. Delays are timing failures, simulating increased network latency or an overloaded upstream service. Aborts are crash failures that simulate an upstream service going down, usually surfacing as HTTP error codes or TCP connection failures.
Istio provides a simple configuration model to control API calls and layer-4 communication between the services of an application deployment. Operators can use this model to configure service-level properties such as circuit breakers, timeouts, and retries, as well as common continuous-deployment tasks such as canary rollouts, A/B testing, and percentage-based traffic control for staged application rollouts.
Istio includes four traffic management configuration resources: VirtualService, DestinationRule, ServiceEntry, and Gateway. The key points of each are covered below; more information is available in the networking reference.
A VirtualService defines the routing rules that control how requests are routed to a service within the Istio service mesh.
A DestinationRule configures the set of policies applied to a request after VirtualService routing has taken effect.
A ServiceEntry is commonly used to enable requests to services outside of the Istio service mesh.
A Gateway configures a load balancer for HTTP/TCP traffic, most commonly operating at the edge of the mesh to enable ingress traffic for an application.
For example, the requirement that 100% of incoming traffic for the reviews service be sent to the v1 version can be implemented with the following rule:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
The intent of this configuration is that traffic sent to the reviews service (identified by the host field) should be routed to the v1 subset of reviews service instances. The subset in the route specifies the name of a predefined subset, whose definition comes from a DestinationRule configuration.
A subset specifies one or more labels identifying specific versions of instances. For example, when deploying Istio on Kubernetes, "version: v1" means that only pods carrying the "version: v1" label receive traffic.
In a DestinationRule you can also add other policies; for example, the following definition specifies the random load balancing mode:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Rules can be configured with the kubectl command. The Configuring Request Routing task contains configuration examples.
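For example, assuming the DestinationRule above is saved as reviews-dr.yaml:

kubectl apply -f reviews-dr.yaml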
Routing rules correspond to one or more request destination hosts specified in a VirtualService configuration. These hosts may or may not be actual destination workloads; they may not even be routable services within the same mesh. For example, to define routing rules for requests to the reviews service, you could use the internal name reviews or the domain bookinfo.com.
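A hosts field covering both names might look like this sketch, mirroring the two names just mentioned:

hosts:
- reviews
- bookinfo.com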
Each routing rule identifies one or more weighted backends to call when the rule matches. Each backend corresponds to a specific version of the destination service, with versions distinguished by labels. If a service version has multiple registered instances, routing among them follows the load balancing policy defined for the service, which defaults to round-robin.
For example, the following rule sends 25% of the reviews service's traffic to instances with the v2 label and the remaining 75% to v1:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 75
    - destination:
        host: reviews
        subset: v2
      weight: 25
By default, the timeout for HTTP requests is 15 seconds, but it can be overridden in a route rule:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    timeout: 10s
Route rules can also specify the number of retries for certain HTTP requests. The following snippet sets the maximum number of retry attempts; the timeout for each attempt can likewise be overridden:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    retries:
      attempts: 3
      perTryTimeout: 2s
When forwarding HTTP requests to the selected destination according to routing rules, one or more faults can be injected. A fault can be either a delay or an abort.
The following example injects a 5-second delay into 10% of the traffic destined for the ratings:v1 service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:       # fault injection
      delay:     # delay fault
        percent: 10
        fixedDelay: 5s
    route:
    - destination:
        host: ratings
        subset: v1
The other fault type, abort, terminates the request prematurely, for example to simulate a failure.
Next, inject an HTTP 400 error into 10% of the traffic destined for the ratings:v1 service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      abort:     # abort fault
        percent: 10
        httpStatus: 400
    route:
    - destination:
        host: ratings
        subset: v1
Delays and aborts are sometimes used together. For example, the following rule applies to traffic from reviews:v2 to ratings:v1, delaying all requests by 5 seconds and then aborting 10% of them:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - sourceLabels:
        app: reviews
        version: v2
    fault:
      delay:
        fixedDelay: 5s
      abort:
        percent: 10
        httpStatus: 400
    route:
    - destination:
        host: ratings
        subset: v1
Rules can optionally be made to apply only to requests matching specific conditions:
1. Restrict the rule to specific client workloads using workload labels. For example, a rule can indicate that it applies only to calls from the workload instances (pods) implementing the reviews service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - sourceLabels:
        app: reviews
    ...
The value of sourceLabels depends on how the service is implemented. In Kubernetes, for example, it is likely the same as the labels used in the pod selector of the corresponding Kubernetes Service.
The example above can be further refined to apply only to calls from instances of version v2 of the reviews service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - sourceLabels:
        app: reviews
        version: v2
    ...
2. Select the rule based on HTTP headers. The following rule applies only to incoming requests that include an end-user header with the value jason:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
If multiple headers are specified in the rule, all of them must match for the rule to apply.
3. Select the rule based on the request URI. For example, the following rule applies only to requests whose URI path starts with /api/v1:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - productpage
  http:
  - match:
    - uri:
        prefix: /api/v1
    ...
Multiple match conditions can be set at the same time; in that case, AND or OR semantics apply depending on the nesting.
If multiple conditions are nested in a single match clause, they are ANDed. For example, the following rule applies only if the client workload is reviews:v2 and the request carries a custom end-user header with the value jason:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - sourceLabels:
        app: reviews
        version: v2
      headers:
        end-user:
          exact: jason
    ...
Conversely, if the conditions appear in separate match clauses, only one of them needs to hold (OR semantics):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - sourceLabels:
        app: reviews
        version: v2
    - headers:
        end-user:
          exact: jason
This rule applies if the client workload is reviews:v2, or if the request carries a custom end-user header with the value jason.
When there are multiple rules for the same destination, they are applied in the order they appear in the VirtualService; in other words, the first rule in the list has the highest priority.
Why priority matters: whenever routing for a service is purely weight-based, it can be expressed in a single rule. When, on the other hand, multiple conditions (such as requests from a specific user) are used for routing, more than one rule is needed. This raises the question of priority, which must be handled correctly to execute the rules in the right order.
A common routing pattern is to provide one or more higher-priority rules that match on source service and headers, followed by a single weight-based rule with no match conditions that splits all remaining traffic by weight.
For example, the following VirtualService contains two rules: all requests to the reviews service whose headers contain Foo=bar are routed to the v2 instances, while the rest are sent to v1:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        Foo:
          exact: bar
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
The header-based rule has the higher priority. If its priority were lowered, it would never take effect, because the unrestricted weight-based rule would be evaluated first: every request, even one carrying a matching Foo header, would be routed to v1. Rule selection stops as soon as a rule's conditions match the traffic, which is why priority deserves careful thought whenever more than one rule exists.
After a request has been routed by a VirtualService, the set of policies configured in the DestinationRule takes effect. These policies are written by the service owner and cover circuit breaking, load balancing, TLS, and more.
A DestinationRule also defines routable subsets (named versions, for example) of the corresponding destination host. The VirtualService uses these subsets when sending requests to specific service versions.
Below is the DestinationRule for the reviews service, configuring policies and subsets:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v3
    labels:
      version: v3
A single DestinationRule configuration can contain multiple policies (here, the default one and the one for v2).
A simple circuit breaker can be defined using criteria such as connection and request count limits.
For example, the following DestinationRule limits version v1 of the reviews service to 100 connections:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
    trafficPolicy:
      connectionPool:
        tcp:
          maxConnections: 100
Like routing rules, the policies defined in a DestinationRule are associated with a particular host. If a subset is specified, which subset actually takes effect is determined by the routing rules.
The first step of rule evaluation is to look up the routing rule, if any, in the VirtualService corresponding to the requested host; this decides which subset (that is, which specific version) of the destination service the request is sent to. Next, if the selected subset defines policies, they are evaluated to determine whether they apply.
Note: a subtlety of this algorithm is that policies defined for a specific subset take effect only when that subset is explicitly routed to. For example, the following configuration defines only a rule for the reviews service (there is no corresponding VirtualService routing rule).