InfluxDB is an open-source time-series database with no external dependencies. It is well suited to recording metrics and events and running analytics on them.
cAdvisor is an open-source data-collection tool from Google. By default, however, it only displays real-time data and stores no history. To store and display historical data and build custom dashboards, cAdvisor can be integrated with InfluxDB and Grafana.
Grafana is an open-source metrics analysis and visualization suite. It is most often used to visualize time-series data for infrastructure and application analytics, but it is also widely used in other areas, including industrial sensors, home automation, weather, and process control.
Grafana supports many different data sources. Each data source has its own query editor that exposes the features and capabilities specific to that source.
The following data sources are officially supported: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch, CloudWatch, and KairosDB.
Each data source's query language and capabilities differ. You can combine data from multiple sources on a single dashboard, but each panel is bound to one specific data source belonging to a specific organization.
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
yum -y install yum-utils device-mapper-persistent-data lvm2
curl https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl start docker
systemctl enable docker
vim /etc/docker/daemon.json
cat /etc/docker/daemon.json
{
"registry-mirrors":[ "https://registry.docker-cn.com" ]
}
systemctl daemon-reload
systemctl restart docker
docker pull tutum/influxdb
docker network create monitor
docker network ls
docker run -d --name influxdb --net monitor -p 8083:8083 -p 8086:8086 tutum/influxdb
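cAdvisor (started further below) will write its metrics into an InfluxDB database named cadvisor, which the tutum/influxdb image does not create automatically. A sketch of creating it through InfluxDB's HTTP /query endpoint, using the IP from this walkthrough (you can equally run CREATE DATABASE cadvisor in the web UI at :8083):

```shell
# Create the "cadvisor" database that cAdvisor will write metrics into
curl -i -XPOST 'http://192.168.200.70:8086/query' \
  --data-urlencode 'q=CREATE DATABASE cadvisor'

# List databases to confirm it was created
curl -G 'http://192.168.200.70:8086/query' \
  --data-urlencode 'q=SHOW DATABASES'
```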
docker ps -a
http://192.168.200.70:8083
Open the InfluxDB management UI, shown below
docker pull google/cadvisor
docker images
docker run -d --name=cadvisor --net monitor -p 8081:8080 --mount type=bind,src=/,dst=/rootfs,ro --mount type=bind,src=/var/run,dst=/var/run --mount type=bind,src=/sys,dst=/sys,ro --mount type=bind,src=/var/lib/docker/,dst=/var/lib/docker,ro google/cadvisor -storage_driver=influxdb -storage_driver_db=cadvisor -storage_driver_host=influxdb:8086
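Once cAdvisor has been running for a minute or so, you can verify that metrics are actually landing in InfluxDB. A sketch query against the cadvisor database:

```shell
# List the measurements cAdvisor has written (cpu, memory, network, ...)
curl -G 'http://192.168.200.70:8086/query' \
  --data-urlencode 'db=cadvisor' \
  --data-urlencode 'q=SHOW MEASUREMENTS'
```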
docker ps -a
http://192.168.200.70:8081
Open the cAdvisor management UI, shown below
docker pull grafana/grafana
docker images
docker run -d --name grafana --net monitor -p 3000:3000 grafana/grafana
docker ps -a
http://192.168.200.70:3000
Default user: admin  Default password: admin
Open the Grafana management UI, shown below
User: grafana
Password: grafana
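Instead of clicking through the UI, the InfluxDB data source can also be registered through Grafana's HTTP API. A sketch, assuming the default admin:admin login from above and the grafana/grafana credentials noted for the data source; the data-source name here is illustrative:

```shell
# Register InfluxDB as a Grafana data source via the HTTP API
curl -s -X POST http://admin:admin@192.168.200.70:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "influxdb-cadvisor",
        "type": "influxdb",
        "url": "http://192.168.200.70:8086",
        "access": "proxy",
        "database": "cadvisor",
        "user": "grafana",
        "password": "grafana"
      }'
```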
A microservice is one of many programs that can each run, be deployed, and serve requests independently.
Each of these independent programs can run on its own to provide one aspect of a service, or they can be combined into a distributed, clustered whole by calling the APIs each of them exposes.
The container-monitoring system we just installed is an example: it is built from the combination InfluxDB + cAdvisor + Grafana. All three services can be deployed and run independently, and each exposes its own web UI for external access. They can be split apart and recombined with other microservice components, or chained together into a clustered setup through their respective APIs.
Service discovery is an indispensable module in any microservice framework. Consider the diagram below:
Summed up in one sentence: once there are many services, configuration becomes painful and problems multiply.
Consul is a service tool for service discovery and configuration sharing that supports multiple data centers and distributed high availability. It was developed in Go by HashiCorp and is open-sourced under the Mozilla Public License 2.0. Consul supports health checks and provides an API, callable over HTTP and DNS, for storing key-value pairs.
All things considered, Consul is a rising star in service registration and configuration management, and well worth following and studying.
Link: https://pan.baidu.com/s/1E7dTmKvbMRtGZ95OtuF2fw
Extraction code: z8ly
Consul download page: https://www.consul.io/downloads.html
Hostname | IP | Purpose |
---|---|---|
registrator-server | 192.168.200.70 | Consul registry server |
tar xf consul_1.2.1_linux_amd64.tar.gz
mv consul /usr/bin/
ll /usr/bin/consul
chmod +x /usr/bin/consul
consul agent -server -bootstrap -ui -data-dir=/var/lib/consul-data -bind=192.168.200.70 -client=0.0.0.0 -node=server01 &>/var/log/consul.log &
netstat -antup | grep consul
tcp6 0 0 :::8500 :::* LISTEN 18866/consul  # this is the externally accessible port
192.168.200.70:8500
consul members
consul info | grep leader
consul catalog services
curl -X PUT -d '{"id":"jetty","name":"service_name","address":"192.168.200.70","port":8080,"tags":["test"],"checks":[{"http":"http://192.168.200.70:8080/","interval":"5s"}]}' http://192.168.200.70:8500/v1/agent/service/register
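Two related agent endpoints are useful alongside the registration above: checking the health of the registered service, and removing it again. Note that the deregister path takes the service id ("jetty" above), not the service name:

```shell
# Health status of the service registered above (looked up by name)
curl 192.168.200.70:8500/v1/health/service/service_name

# Deregister the service by its id
curl -X PUT 192.168.200.70:8500/v1/agent/service/deregister/jetty
```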
curl 192.168.200.70:8500/v1/status/peers
curl 192.168.200.70:8500/v1/status/leader
curl 192.168.200.70:8500/v1/catalog/services
curl 192.168.200.70:8500/v1/catalog/services/nginx
curl 192.168.200.70:8500/v1/catalog/nodes
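Besides the HTTP API, Consul also answers DNS queries on port 8600; SRV records carry the registered port as well as the address. A sketch using dig:

```shell
# Look up a registered service through Consul's DNS interface
dig @192.168.200.70 -p 8600 service_name.service.consul SRV
```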
https://github.com/hashicorp/consul-template
Hostname | IP | Purpose |
---|---|---|
registrator-server | 192.168.200.70 | Consul registry server |
nginx-LB | 192.168.200.86 | nginx reverse-proxy server |
docker-client | 192.168.200.87 | nginx web node server |
ls
unzip consul-template_0.19.3_linux_amd64.zip
mv consul-template /usr/bin/
which consul-template
yum -y install gcc gcc-c++ make pcre pcre-devel zlib zlib-devel openssl openssl-devel
tar xf nginx-1.10.2.tar.gz -C /usr/src/
cd /usr/src/nginx-1.10.2/
./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_stub_status_module && make && make install
mkdir -p /consul-tml
cd /consul-tml/
vim nginx.ctmpl
cat nginx.ctmpl
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
upstream http_backend {
ip_hash;
{{ range service "nginx" }} # look up the service named "nginx"
server {{ .Address }}:{{ .Port }}; # emit one server line per IP:port of that service
{{ end }}
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://http_backend;
}
}
}
nohup consul-template -consul-addr 192.168.200.70:8500 -template /consul-tml/nginx.ctmpl:/usr/local/nginx/conf/nginx.conf:"/usr/local/nginx/sbin/nginx -s reload" >/consul-tml/consul-template.log 2>&1 &
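Before running consul-template in the background, it is worth checking that the template renders as expected. With -dry the rendered output goes to stdout instead of the destination file (nothing is written and no reload command runs), and -once exits after the first render:

```shell
# Dry-run: render the template once to stdout, without writing nginx.conf
consul-template -consul-addr 192.168.200.70:8500 \
  -template /consul-tml/nginx.ctmpl:/usr/local/nginx/conf/nginx.conf \
  -dry -once
```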
cat /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
upstream http_backend { # no container nodes have registered yet, so this is empty
ip_hash;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://http_backend;
}
}
}
netstat -antup | grep nginx  # the config has no web nodes yet, so nginx did not start
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
docker pull nginx
mkdir -p /www/html
echo "$(hostname -I) sl.yunjisuan.com" >> /www/html/index.html
docker run -dit --name nginxWeb01 -p 80:80 --mount type=bind,src=/www/html,dst=/usr/share/nginx/html nginx
curl localhost
docker pull gliderlabs/registrator
docker run -d --name=registrator -v /var/run/docker.sock:/tmp/docker.sock --restart=always gliderlabs/registrator:latest -ip=192.168.200.87 consul://192.168.200.70:8500
/usr/local/nginx/sbin/nginx
cat /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
upstream http_backend {
ip_hash;
server 192.168.200.87:80; # the registered web container's address is now present
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://http_backend;
}
}
}
netstat -antup | grep nginx  # nginx is now running, too
docker run -dit --name nginxWeb02 -p 81:80 --mount type=bind,src=/www/html,dst=/usr/share/nginx/html nginx
docker run -dit --name nginxWeb03 -p 82:80 --mount type=bind,src=/www/html,dst=/usr/share/nginx/html nginx
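By default, registrator derives the Consul service name from the image name (nginx here), which is why the template's service "nginx" lookup matches these containers. The name and tags can be overridden per container with the SERVICE_NAME / SERVICE_TAGS environment variables; a sketch with a hypothetical fourth node:

```shell
# Override the auto-derived service name/tags for this container
docker run -dit --name nginxWeb04 -p 83:80 \
  -e SERVICE_NAME=nginx -e SERVICE_TAGS=web \
  --mount type=bind,src=/www/html,dst=/usr/share/nginx/html nginx
```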
docker ps -a
cat /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
upstream http_backend {
ip_hash;
server 192.168.200.87:80;
server 192.168.200.87:81;
server 192.168.200.87:82;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://http_backend;
}
}
}
netstat -antup | grep nginx
docker stop nginxWeb02
docker stop nginxWeb03
docker ps
cat /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
upstream http_backend {
ip_hash;
server 192.168.200.87:80;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://http_backend;
}
}
}
netstat -antup | grep nginx