Installing Sentry with Helm

The --kubeconfig ~/.kube/sentry used throughout this article points to a kubeconfig file for the target k8s cluster; with it, the commands run against that specific cluster. If you don't need this, simply drop the flag.
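A quick sanity check (not part of the install) to confirm the kubeconfig really points at the intended cluster is to list its nodes:

kubectl --kubeconfig ~/.kube/sentry get nodes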

1. Install Helm
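The commands below use Helm 3 syntax. If Helm is not installed yet, one common way to get it is the official install script (assumes curl and access to GitHub; any other install method works just as well):

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version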

2. Add the chart repository mirrors

helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator
helm repo update

3. Verify the repositories

helm search repo sentry
#NAME                                 CHART VERSION    APP VERSION    DESCRIPTION
#stable/sentry                        4.2.0            9.1.2          Sentry is a cross-platform crash reporting and ...
# If stable/sentry shows up, the mirror is working

4. Create the k8s namespace

kubectl create namespace sentry

5. Install Sentry

helm --kubeconfig ~/.kube/sentry install sentry stable/sentry \
-n sentry \
--set persistence.enabled=true,user.email=ltz@qq.com,user.password=ltz \
--set ingress.enabled=true,ingress.hostname=sentry.ltz.com,service.type=ClusterIP \
--set email.host=smtp.exmail.qq.com,email.port=465 \
--set email.user=ltz@ltz.com,email.password=ltz,email.use_tls=false \
--wait

Parameter reference

Parameter                                  Description                                                              Required
--kubeconfig ~/.kube/sentry                kubeconfig file; selects which k8s cluster the commands run against     yes
user.email                                 admin email address                                                      yes
user.password                              admin password                                                           yes
ingress.hostname                           Sentry's domain name (events must be reported against this domain)      yes
email.host / email.port                    SMTP server address and port                                             yes
email.user / email.password                the mailbox Sentry sends mail from                                       yes
email.use_tls                              check your mail provider's settings to see whether this should be true  yes
redis.primary.persistence.storageClass     StorageClass for Redis (optional; I set it because I had no PV/PVC)     no
postgresql.persistence.storageClass        StorageClass for PostgreSQL (optional; same reason)                     no
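If you are unsure what other options the chart exposes, you can dump its default values first and browse them (the output filename here is just an example):

helm show values stable/sentry > sentry-values.yaml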

If the install succeeds, you should see three Deployments and three StatefulSets start up. After a short wait, the domain should be reachable.
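To check that everything actually came up:

kubectl --kubeconfig ~/.kube/sentry get deploy,statefulset,pods -n sentry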

6. Uninstall Sentry

helm --kubeconfig ~/.kube/sentry uninstall sentry -n sentry
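Note that helm uninstall does not delete the PersistentVolumeClaims created by the StatefulSets. If you want a completely clean slate before reinstalling, you can delete them explicitly (this destroys the data):

kubectl --kubeconfig ~/.kube/sentry delete pvc --all -n sentry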

7. Pitfall: installation

After installing, my Redis and PostgreSQL pods never came up, and showed this message:

Pending: pod has unbound immediate PersistentVolumeClaims

In other words, the PVCs could not be bound, so the pods could not start.
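You can confirm this is the problem by looking at the PVCs themselves; without a usable StorageClass they stay Pending forever:

kubectl --kubeconfig ~/.kube/sentry get pvc -n sentry
kubectl --kubeconfig ~/.kube/sentry describe pvc -n sentry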

Solution

1. Uninstall Sentry first

2. Install a StorageClass

The YAML is long, so it is pasted at the end of this article.

Run the following in the same directory as the YAML file:

kubectl --kubeconfig ~/.kube/sentry apply -f local-path-storage.yaml 

# Set local-path as the default StorageClass
kubectl --kubeconfig ~/.kube/sentry patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
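To confirm the provisioner is running and local-path is now the default (it should show "(default)" next to its name):

kubectl --kubeconfig ~/.kube/sentry get pods -n local-path-storage
kubectl --kubeconfig ~/.kube/sentry get storageclass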

3. Reinstall Sentry

Add the storageClass parameters:
helm --kubeconfig ~/.kube/sentry install sentry stable/sentry \
-n sentry \
--set persistence.enabled=true,user.email=ltz@qq.com,user.password=ltz \
--set ingress.enabled=true,ingress.hostname=sentry.ltz.com,service.type=ClusterIP \
--set email.host=smtp.exmail.qq.com,email.port=465 \
--set email.user=ltz@ltz.com,email.password=ltz,email.use_tls=false \
--set redis.primary.persistence.storageClass=local-path \
--set postgresql.persistence.storageClass=local-path \
--wait

4. Open the domain; the page now loads correctly.
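A quick check from the command line, assuming DNS (or an /etc/hosts entry) already resolves sentry.ltz.com to your ingress:

curl -I http://sentry.ltz.com/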

8. Pitfall: database initialization

Normally, the database schema is initialized automatically on startup. Mine was not, so I had to log into the sentry-web pod and run the initialization command by hand.

kubectl --kubeconfig ~/.kube/sentry exec -it -n sentry $(kubectl get pods -n sentry | grep sentry-web | awk '{print $1}') -- bash
sentry upgrade

9. Pitfall: admin user

As above, if the admin user was not created automatically, you can create it manually inside the sentry-web pod.

kubectl --kubeconfig ~/.kube/sentry exec -it -n sentry $(kubectl get pods -n sentry | grep sentry-web | awk '{print $1}') -- bash
sentry createuser
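If you prefer a non-interactive version, something like the following should work (flags taken from the sentry CLI; double-check them against sentry createuser --help in your image):

sentry createuser --email ltz@qq.com --password ltz --superuser --no-input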

10. Pitfall: email

The email parameters in the install command above must be correct, and the email-related environment variables in the pods must be configured correctly as well.

Environment variables on sentry-web:

- name: SENTRY_EMAIL_HOST
  value: smtp.exmail.qq.com
- name: SENTRY_EMAIL_PORT
  value: "465"
- name: SENTRY_EMAIL_USER
  value: ltz@ltz.com
- name: SENTRY_EMAIL_PASSWORD
  valueFrom:
    secretKeyRef:
      key: smtp-password
      name: sentry
      optional: false
- name: SENTRY_EMAIL_USE_TLS
  value: "false"
- name: SENTRY_SERVER_EMAIL
  value: ltz@ltz.com

Environment variables on sentry-worker:

- name: SENTRY_EMAIL_HOST
  value: smtp.exmail.qq.com
- name: SENTRY_EMAIL_PORT
  value: "587"
- name: SENTRY_EMAIL_USER
  value: ltz@ltz.com
- name: SENTRY_EMAIL_PASSWORD
  valueFrom:
    secretKeyRef:
      key: smtp-password
      name: sentry
      optional: false
- name: SENTRY_EMAIL_USE_TLS
  value: "true"
- name: SENTRY_SERVER_EMAIL
  value: ltz@ltz.com
- name: SENTRY_EMAIL_USE_SSL
  value: "false"

Once this is configured, send a test email. If it does not arrive, check the sentry-worker logs.
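To tail the worker logs, reusing the same pod-lookup trick as above:

kubectl --kubeconfig ~/.kube/sentry logs -f -n sentry $(kubectl get pods -n sentry | grep sentry-worker | awk '{print $1}' | head -1)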

In my testing, the SENTRY_SERVER_EMAIL value that actually takes effect is the one from the sentry-web environment variables! After changing either, restart both deployments!
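One way to restart both (assuming the Deployments are named sentry-web and sentry-worker, which is what the chart created in my case):

kubectl --kubeconfig ~/.kube/sentry rollout restart deployment/sentry-web deployment/sentry-worker -n sentry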

11. local-path-storage.yaml (replace name and namespace as needed)

apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [ "" ]
    resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
    verbs: [ "get", "list", "watch" ]
  - apiGroups: [ "" ]
    resources: [ "endpoints", "persistentvolumes", "pods" ]
    verbs: [ "*" ]
  - apiGroups: [ "" ]
    resources: [ "events" ]
    verbs: [ "create", "patch" ]
  - apiGroups: [ "storage.k8s.io" ]
    resources: [ "storageclasses" ]
    verbs: [ "get", "list", "watch" ]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.19
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/opt/local-path-provisioner"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
      case $opt in
        p) absolutePath=$OPTARG ;;
        s) sizeInBytes=$OPTARG ;;
        m) volMode=$OPTARG ;;
      esac
    done
    mkdir -m 0777 -p ${absolutePath}
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
      case $opt in
        p) absolutePath=$OPTARG ;;
        s) sizeInBytes=$OPTARG ;;
        m) volMode=$OPTARG ;;
      esac
    done
    rm -rf ${absolutePath}
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
        - name: helper-pod
          image: busybox
          imagePullPolicy: IfNotPresent