Lagom production deployment

Tutorial: https://developer.lightbend.com/guides/lagom-kubernetes-k8s-deploy-microservices/html

 

1. Harbor deployment

https://blog.frognew.com/2017/06/install-harbor.html

 

# harbor compose

 

wget https://github.com/vmware/harbor/releases/download/v1.1.2/harbor-offline-installer-v1.1.2.tgz

tar -zxvf harbor-offline-installer-v1.1.2.tgz

cd harbor/
ls
common  docker-compose.notary.yml  docker-compose.yml  harbor_1_1_0_template  harbor.cfg  harbor.v1.1.2.tar.gz  install.sh  LICENSE  NOTICE  prepare  upgrade

Edit harbor.cfg:

# or specify a domain name, e.g. hostname = harbor.myCompany.com
hostname = 192.168.61.11

 

2. Harbor deployment

# harbor
wget https://github.com/vmware/harbor/releases/download/v1.1.2/harbor-offline-installer-v1.1.2.tgz
tar -zxvf harbor-offline-installer-v1.1.2.tgz
cd harbor
vi harbor.cfg
# change the hostname, HTTPS settings, and password
# default user name: admin

hostname = harbor.xx.xxx.com

harbor_admin_password = Xxxxxxx
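
After editing harbor.cfg, run the bundled installer (prepare and install.sh both appear in the directory listing above); a typical run, assuming the default docker-compose setup:

./prepare
./install.sh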

 

3. HTTPS deployment

# https (enter the domain name and IP when prompted)
cd https
# lc-tlscert is a tool that generates a self-signed certificate
./lc-tlscert
mkdir -p /data/cert/
cp server.* /data/cert/
./install.sh
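
To confirm the registry is serving the generated certificate, a quick check (domain name as configured in step 2):

openssl s_client -connect harbor.xx.xxx.com:443 -showcerts </dev/null | head -n 20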

 

4. Client

#####
# client
#####
vi /etc/hosts
# add 10.0.0.xxx harbor.xx.xxx.com

# docker client import *.crt
mkdir -p /usr/share/ca-certificates/extra
scp root@10.0.0.xxx:/root/harbor/https/server.crt /usr/share/ca-certificates/extra/
dpkg-reconfigure ca-certificates
systemctl restart docker

# before push
docker login -u admin -p xxxxx harbor.xx.xxx.com
 

On the client: add the Harbor hostname to /etc/hosts, create the local certificate directory, copy over the certificate, reconfigure the CA package, then restart the local Docker daemon and log in to Docker.
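
A quick smoke test that trust and login work end to end; hello-world is just a throwaway image, and chirper is assumed to exist as a project in Harbor:

docker pull hello-world
docker tag hello-world harbor.xx.xxx.com/chirper/hello-world
docker push harbor.xx.xxx.com/chirper/hello-world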

# change the image pull policy to Always
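
One way to flip the policy without editing the manifest by hand, a sketch using kubectl patch (the statefulset name friendservice follows the chirper example and may differ in your deployment):

kubectl patch statefulset friendservice --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"Always"}]'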

5. Publish the project to Harbor from the local machine

sbt -DbuildTarget=kubernetes clean docker:publish

The following changes are needed in build.sbt:

lazy val friendImpl = project("friend-impl")
  .enablePlugins(LagomJava)
  .settings(
    version := buildVersion,
    version in Docker := buildVersion,
    dockerBaseImage := "openjdk:8-jre-alpine",
    dockerRepository := Some(BuildTarget.dockerRepository),
    dockerUpdateLatest := true,
    dockerEntrypoint ++= """-Dhttp.address="$(eval "echo $FRIENDSERVICE_BIND_IP")" -Dhttp.port="$(eval "echo $FRIENDSERVICE_BIND_PORT")" -Dakka.remote.netty.tcp.hostname="$(eval "echo $AKKA_REMOTING_HOST")" -Dakka.remote.netty.tcp.bind-hostname="$(eval "echo $AKKA_REMOTING_BIND_HOST")" -Dakka.remote.netty.tcp.port="$(eval "echo $AKKA_REMOTING_PORT")" -Dakka.remote.netty.tcp.bind-port="$(eval "echo $AKKA_REMOTING_BIND_PORT")" $(IFS=','; I=0; for NODE in $AKKA_SEED_NODES; do echo "-Dakka.cluster.seed-nodes.$I=akka.tcp://friendservice@$NODE"; I=$(expr $I + 1); done)""".split(" ").toSeq,
    dockerCommands :=
      dockerCommands.value.flatMap {
        case ExecCmd("ENTRYPOINT", args @ _*) => Seq(Cmd("ENTRYPOINT", args.mkString(" ")))
        case c @ Cmd("FROM", _) => Seq(c, ExecCmd("RUN", "/bin/sh", "-c", "apk add --no-cache bash && ln -sf /bin/bash /bin/sh"))
        case v => Seq(v)
      },
    resolvers += bintrayRepo("hajile", "maven"),
    resolvers += bintrayRepo("hseeberger", "maven"),
    libraryDependencies ++= Seq(
      lagomJavadslPersistenceCassandra,
      lagomJavadslTestKit
    )
  )
  .settings(BuildTarget.additionalSettings)
  .settings(lagomForkedTestSettings: _*)

Set dockerRepository to the target you publish to, e.g.: harbor.xx.xxx.com/chirper


6. Kubernetes pulls the image

Note: the CA work done above on the master node must also be done on every slave node, and the slave nodes need the same /etc/hosts entries.

edit /etc/hosts
cd lagom-java-chirper-example/deploy/kubernetes/resources/chirper

kubectl create -f  friend-impl-service.json


kubectl create -f  friend-impl-statefulset.json
If you see an error like:
2015/07/21 11:11:00 Get https://kubernetes.default.svc.cluster.local/api/v1/nodes: dial tcp: lookup kubernetes.default.svc.cluster.local: no such host
 
you can fix it by:

vim /etc/hosts       # add: 10.0.0.xxx(harbor) harbor
vim /etc/resolv.conf # nameserver 114.114.114.114 (adjust so the Harbor URL resolves)
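
To verify a slave node can now reach Harbor, log in and pull directly on that node (repository path per step 5; the tag is assumed):

docker login -u admin -p xxxxx harbor.xx.xxx.com
docker pull harbor.xx.xxx.com/chirper/friend-impl:latest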

 

7. Kubernetes deployment

ref: https://blog.frognew.com/2017/09/kubeadm-install-kubernetes-1.8.html

 

# 0. Verify the MAC address and product_uuid are unique on every node

ifconfig -a

cat /sys/class/dmi/id/product_uuid

 

# 1. Turn off swap

swapoff -a

vi /etc/fstab   # comment out the swap line

 

vi /etc/hosts

10.0.0.56 master.dev.xx.xxx.com

10.0.0.51 slave01.dev.xx.xxx.com

10.0.0.52 slave02.dev.xx.xxx.com

10.0.0.53 slave03.dev.xx.xxx.com

 


 

# 2. Docker

apt install -y docker.io

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF

service docker restart

 

# 3. Kubeadm, Kubectl, Kubelet

apt update && apt install -y apt-transport-https

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list

deb http://apt.kubernetes.io/ kubernetes-xenial main

EOF

apt update

apt install -y kubelet kubeadm kubectl

 

# 4. Init. Save the join token printed at the end!

vi ~/.profile

export KUBECONFIG=/etc/kubernetes/admin.conf

 

./init.sh
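
init.sh is not reproduced here; a minimal sketch of what it presumably runs (flannel requires the 10.244.0.0/16 pod CIDR), with the join token printed at the end of the output:

kubeadm init --pod-network-cidr=10.244.0.0/16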

 

# 5. Flannel

# copy portmap to master and all slaves

cp flannel/portmap /opt/cni/bin

kubectl apply -f flannel/kube-flannel.yml
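
A quick check that the flannel daemonset came up on every node:

kubectl get pods -n kube-system -o wide | grep flannel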

 

# 6. Dashboard

kubectl create -f dashboard/kubernetes-dashboard.yaml

kubectl create -f dashboard/kubernetes-dashboard-admin.rbac.yaml

kubectl create -f dashboard/grafana.yaml

kubectl create -f dashboard/influxdb.yaml

kubectl create -f dashboard/heapster.yaml

kubectl create -f dashboard/heapster-rbac.yaml

 

# 7. Expose the dashboard

kubectl get pod --all-namespaces -o wide

Find the dashboard pod's IP. For example:

kube-system   kubernetes-dashboard-7486b894c6-4rhfn    1/1       Running   0          1h        10.244.0.3   k8s-dev-master

 

./rinetd -c 10.0.0.56 8443 10.244.0.3 8443   # target IP/port -> dashboard IP/port

 

If you get "rinetd: couldn't bind to address", kill rinetd and run it again:

pkill rinetd

 

Cluster cleanup (the same on both master and slaves):

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

 

View the kubernetes-dashboard-admin token: list the secret with the first command, then plug the secret name into the second:

kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-pfss5   kubernetes.io/service-account-token   3         14s

kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-pfss5

 

8. Deploy chirper-ingress

kubectl create -f /deploy/kubernetes/resources/nginx/chirper-ingress.json
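
A quick check that the ingress resource was created:

kubectl get ingress --all-namespaces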

 

9. Set up static Cassandra

Add the following to the impl's application.conf:

 

cassandra.default {
  ## list the contact points here
  contact-points = ["10.0.0.58", "23.51.143.11"]
  ## override Lagom's ServiceLocator-based ConfigSessionProvider
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
}

cassandra-journal {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

cassandra-snapshot-store {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

lagom.persistence.read-side.cassandra {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

 

10. When deploying our own Scala application, an exception was repeatedly thrown saying an Akka remote actor system named "application" could not be located: the name configured via play.akka.actor-system was being overridden to "application".

The cause: ConductR is a paid product, and we use the DnsServiceLocator instead, so in the Loader the "extends ConductR" part must be changed to extend the DnsServiceLocator integration; the actor system name is then configured correctly.

ref: https://index.scala-lang.org/lightbend/service-locator-dns/lagom-service-locator-dns/1.0.2?target=_2.11
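
A minimal sketch of the loader change in Scala. The trait name DnsServiceLocatorComponents is an assumption based on the linked library's naming, and FriendLoader/FriendApplication are illustrative names; verify against the service-locator-dns docs:

import com.lightbend.lagom.scaladsl.devmode.LagomDevModeComponents
import com.lightbend.lagom.scaladsl.server._

class FriendLoader extends LagomApplicationLoader {
  // production: resolve services via DNS instead of ConductR
  override def load(context: LagomApplicationContext): LagomApplication =
    new FriendApplication(context) with DnsServiceLocatorComponents // assumed trait from lagom-service-locator-dns

  // dev mode keeps the standard Lagom dev-mode service locator
  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new FriendApplication(context) with LagomDevModeComponents
}

With this in place, the actor system name set via play.akka.actor-system is no longer overridden to "application".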

 

11. Kafka configuration

When configuring an external, static Kafka, you will hit the following problem:

[error] com.lightbend.lagom.internal.broker.kafka.KafkaSubscriberActor [sourceThread=myservice-akka.actor.default-dispatcher-19, akkaTimestamp=07:51:51.537UTC, akkaSource=akka.tcp://myservice@myservice-0.myservice.default.svc.cluster.local:2551/user/KafkaBackoffConsumer1-myEvents/KafkaConsumerActor1-myEvents, sourceActorSystem=myservice] - Unable to locate Kafka service named [myservice] 

The fix is twofold. First, configure this in build.sbt:

lagomKafkaEnabled in ThisBuild := false
lagomKafkaAddress in ThisBuild := "10.0.0.xx:9092"

Second, in the serviceImpl's application.conf, set service-name to the empty string and brokers to ip:port; everything else matches the Kafka client configuration in the Lagom docs.

lagom.broker.kafka {
  # The name of the Kafka service to look up out of the service locator.
  # If this is an empty string, then a service locator lookup will not be done,
  # and the brokers configuration will be used instead.
  service-name = ""

  # The URLs of the Kafka brokers. Separate each URL with a comma.
  # This will be ignored if the service-name configuration is non empty.
  brokers = "10.0.0.58:9092"
}

If Kafka throws a WakeupException with "Consumer actor terminated" (details: https://github.com/lagom/lagom/issues/705), you need to change this in Kafka's server.properties:

advertised.listeners=PLAINTEXT://your IP:9092

 

12. Kubernetes health-check probe problem

After deploying to k8s, kubectl get pods showed the freshly deployed service as Running, but the container's Ready status was false. kubectl describe pod <servicename> showed:

readiness probe failed: Get http://10.108.88.40:8080/healthz: dial tcp 10.108.88.40:8080: getsockopt: connection refused
This is because our URL was wrong: Lagom expects the health-check URL to be configured together with the circuit-breaker settings.

application.conf:

lagom.circuit-breaker {

  # Default configuration that is used if a configuration section
  # with the circuit breaker identifier is not defined.
  default {
    # Possibility to disable a given circuit breaker.
    enabled = on

    # Number of failures before opening the circuit.
    max-failures = 10

    # Duration of time after which to consider a call a failure.
    call-timeout = 10s

    # Duration of time in open state after which to attempt to close
    # the circuit, by first entering the half-open state.
    reset-timeout = 15s
  }
}

In the Scala version of the service application:

lazy val lagomServer = LagomServer.forServices(
    bindService[YourService].to(wire[YourImpl]),
    metricsServiceBinding
  )

Adding metricsServiceBinding here resolves the problem.

 

13. Note that Traefik exposes the cluster externally on port 80.

 

 

Harbor reference: https://www.jianshu.com/p/2ebadd9a323d

In docker-compose.yml the main change is to the registry container: under its networks section, add the entries shown in the boxed part of the original article's screenshot (figure not reproduced here).

4. In harbor.cfg you only need to set hostname to your own machine's IP or domain name. Harbor's default DB password is root123; you can change it or keep the default. The initial admin password is Harbor12345; change it as needed. The email settings are used for password resets; fill them in for your situation. Note that with a 163 or QQ mailbox you must log in with an authorization code; a plain password will not work (search for how QQ authorization codes work with third-party mail clients).

5. Visit Harbor; the post-login page (screenshot not reproduced here).
