Operations Notes (Deployment)

Preface

A collection of setup guides for commonly used services, targeting Ubuntu 16.04.

System initialization

A newly purchased ECS instance needs system initialization first.

$ sudo apt update
$ sudo apt dist-upgrade
$ sudo apt autoremove
$ sudo apt clean

$ cat /etc/hosts # edit hosts; typically map the external intranet services this machine needs to hostnames
172.16.0.192    kftest-config01

$ cat /etc/hostname # change the hostname for easier identification
pg_1

$ reboot # a hostname change takes effect only after a reboot

# Mount the data disk, e.g. an Alibaba Cloud data disk https://help.aliyun.com/document_detail/25446.html
$ sudo fdisk -l # list the data disks on the instance
Disk /dev/vdb: 1000 GiB, 1073741824000 bytes, 2097152000 sectors
$ sudo fdisk -u /dev/vdb
Command (m for help): n
... press Enter through the remaining prompts to accept the defaults
Command (m for help): w
## See also https://help.aliyun.com/document_detail/108501.html
$ sudo fdisk -lu /dev/vdb
Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1        2048 2097151999 2097149952 1000G 83 Linux
$ sudo mkfs.ext4 /dev/vdb1 # create a filesystem on the new partition

$ sudo cp /etc/fstab /etc/fstab.bak # back up /etc/fstab
$ echo '/dev/vdb1 /data ext4 defaults 0 0' | sudo tee -a /etc/fstab # append the new partition to /etc/fstab

$ sudo mkdir /data
$ sudo mount /dev/vdb1 /data # mount the filesystem

$ df -h
/dev/vdb1       985G   72M  935G   1% /data

PostgreSQL

Install PostgreSQL

$ echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
$ wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install postgresql-9.6 # pick an appropriate version
## See also https://www.postgresql.org/download/linux/ubuntu/

Edit the configuration files

$ sudo vim /etc/postgresql/9.6/main/postgresql.conf
listen_addresses = '*'
max_connections = 1000
logging_collector = on
## See also https://www.postgresql.org/docs/current/static/runtime-config.html
 
$ sudo vim /etc/postgresql/9.6/main/pg_hba.conf
host    all             all             0.0.0.0/0             md5
## See also https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html
   
$ sudo service postgresql restart

Change the password of the default postgres user

$ sudo -u postgres psql
# ALTER USER postgres WITH PASSWORD 'postgres';
# \q
$ exit

Set up a cluster (optional)

Host         IP
Master node  10.10.10.10
Slave node   10.10.10.9

After installing PostgreSQL on both the master and slave nodes using the steps above, set up the cluster.

Master node:

  1. Edit the configuration
$ sudo vi /etc/postgresql/9.6/main/postgresql.conf
listen_addresses = '*'
wal_level = hot_standby
archive_mode = on
archive_command = 'test ! -f /var/lib/postgresql/9.6/archive/%f && cp %p /var/lib/postgresql/9.6/archive/%f'
max_wal_senders = 16
wal_keep_segments = 100
hot_standby = on
logging_collector = on
## See also https://www.postgresql.org/docs/current/static/runtime-config.html

$ sudo vi /etc/postgresql/9.6/main/pg_hba.conf
host    all             all             10.0.0.0/8              md5
host    replication     repuser         10.0.0.0/8              md5
## See also https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html
  
$ sudo -upostgres mkdir /var/lib/postgresql/9.6/archive
$ sudo chmod 0700 /var/lib/postgresql/9.6/archive
  
$ sudo service postgresql restart
  2. Create the replication account repuser
$ sudo -upostgres createuser --replication repuser
$ sudo -upostgres psql
postgres=# \password repuser
<password>
## See also https://www.postgresql.org/docs/current/static/user-manag.html

Slave node:

  1. Stop the service first
$ sudo service postgresql stop
  2. Import data from the master node (the postgres user logs in as the repuser role without a password)
$ sudo -upostgres vi /var/lib/postgresql/.pgpass
10.10.10.10:5432:*:repuser:<password>
127.0.0.1:5432:*:repuser:<password>
 
$ sudo chmod 0600 /var/lib/postgresql/.pgpass
$ sudo mv /var/lib/postgresql/9.6/main /var/lib/postgresql/9.6/main.bak
$ sudo -upostgres pg_basebackup -D /var/lib/postgresql/9.6/main -F p -X stream -v -R -h 10.10.10.10 -p 5432 -U repuser
  3. Edit the configuration
$ sudo vi /var/lib/postgresql/9.6/main/recovery.conf
standby_mode = 'on'
primary_conninfo = 'user=repuser host=10.10.10.10 port=5432'
trigger_file = 'failover.now'

## See also https://www.postgresql.org/docs/current/static/recovery-config.html
  
$ sudo vi /etc/postgresql/9.6/main/postgresql.conf
hot_standby = on
  4. Restart and check the service
$ sudo service postgresql start
  
$ sudo service postgresql status
...
Active: active (exited)

$ sudo -upostgres psql
psql (9.6.12)
...

Test the cluster

Perform inserts, updates, and deletes on the master node and check whether the slave node replicates them.
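
A minimal sketch of such a check, assuming a throwaway table named demo_repl (any table name works):

# on the master
$ sudo -upostgres psql -c "CREATE TABLE demo_repl (id int); INSERT INTO demo_repl VALUES (1);"
$ sudo -upostgres psql -c "SELECT pg_is_in_recovery();"   # returns f on the master
# on the slave
$ sudo -upostgres psql -c "SELECT * FROM demo_repl;"      # should return the row inserted on the master
$ sudo -upostgres psql -c "SELECT pg_is_in_recovery();"   # returns t on a standby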

Common commands

$ sudo service postgresql start
$ sudo service postgresql status
$ sudo service postgresql restart

👉 Common PostgreSQL database commands

Redis

Install Redis (standalone)

$ sudo apt-get install redis-server
$ sudo vim /etc/redis/redis.conf
# bind 127.0.0.1
$ sudo systemctl restart redis-server

Install Redis (cluster)

Host    IP          Roles
node01  10.10.10.5  redis-server, sentinel
node02  10.10.10.4  redis-server, sentinel
node03  10.10.10.6  redis-server, sentinel

Install redis-server

node01:
$ sudo apt-get install redis-server
$ sudo vi /etc/redis/redis.conf
bind 10.10.10.5

$ sudo service redis-server restart

node02:
$ sudo apt-get install redis-server
$ sudo vi /etc/redis/redis.conf
bind 10.10.10.4
slaveof 10.10.10.5 6379
 
$ sudo service redis-server restart

node03: same as node02, with bind 10.10.10.6

Test master-slave replication

node01:
$ redis-cli -h 10.10.10.5 -p 6379
10.10.10.5:6379>info
....
# Replication
role:master
connected_slaves:2
slave0:ip=10.10.10.4,port=6379,state=online,offset=99,lag=0
slave1:ip=10.10.10.6,port=6379,state=online,offset=99,lag=1
master_repl_offset:99
....
10.10.10.5:6379>set testkey testvalue
OK
10.10.10.5:6379>get testkey
"testvalue"
  
node02:
$ redis-cli -h 10.10.10.4 -p 6379
10.10.10.4:6379>info
...
# Replication
role:slave
master_host:10.10.10.5
master_port:6379
master_link_status:up
...
10.10.10.4:6379>get testkey
"testvalue"

Configure Sentinel (optional)

A robust Redis Sentinel deployment should use at least three Sentinel instances, placed on different machines, ideally even in different physical zones.

$ sudo wget http://download.redis.io/redis-stable/sentinel.conf -O /etc/redis/sentinel.conf
$ sudo chown redis:redis /etc/redis/sentinel.conf
$ sudo vi /etc/redis/sentinel.conf
sentinel monitor mymaster 10.10.10.5 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000

## set up auto-start on boot
$ sudo vi /etc/redis/sentinel.service
[Unit]
Documentation=http://redis.io/topics/sentinel
[Service]
ExecStart=/usr/bin/redis-server /etc/redis/sentinel.conf --sentinel
User=redis
Group=redis
[Install]
WantedBy=multi-user.target
  
$ sudo ln -s /etc/redis/sentinel.service /lib/systemd/system/sentinel.service
$ sudo systemctl enable sentinel.service
$ sudo service sentinel start

The sentinel configuration on node02 and node03 is the same as on node01; finish configuring all nodes before moving on to the next step.

Once Sentinel is configured, both redis.conf and sentinel.conf are managed by Sentinel; when Sentinel detects that the monitored master has changed, it rewrites the corresponding sentinel.conf and redis.conf.
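
To see which node Sentinel currently considers the master, any sentinel instance can be asked directly (a quick sanity check; the addresses are this guide's example IPs and the output is illustrative):

$ redis-cli -h 10.10.10.5 -p 26379 sentinel get-master-addr-by-name mymaster
1) "10.10.10.5"
2) "6379"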

Test Sentinel monitoring, notification, and automatic failover

# Check the sentinel configuration on all nodes
node01,node02,node03:
$ redis-cli -h 10.10.10.5 -p 26379
10.10.10.5:26379> info
# Server
redis_version:3.0.6
...
config_file:/etc/redis/sentinel.conf

# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
master0:name=mymaster,status=ok,address=10.10.10.5:6379,slaves=2,sentinels=1

# On a slave node, inspect the sentinel details for the monitored master
$ redis-cli -h 10.10.10.4 -p 26379
10.10.10.4:26379> sentinel master mymaster
 1) "name"
 2) "mymaster"
 3) "ip"
 4) "10.10.10.5"
 5) "port"
 6) "6379"
...

# Stop the redis-server on the master node
node01:
$ systemctl stop redis-server.service
# Check the sentinel details on a slave node; typically within 1-2 minutes a new master is elected automatically, e.g. node03 is promoted to master
node02:
$ redis-cli -h 10.10.10.4 -p 26379
10.10.10.4:26379> info
...
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
master0:name=mymaster,status=ok,address=10.10.10.6:6379,slaves=2,sentinels=3

$ redis-cli -h 10.10.10.6 -p 6379
10.10.10.6:6379> info
# Replication
role:master
connected_slaves:1
slave0:ip=10.10.10.4,port=6379,state=online,offset=19874,lag=0
master_repl_offset:19874
...

# Start the redis-server that was just stopped on the old master; it will rejoin the Redis cluster as a slave
node01:
$ systemctl start redis-server
$ redis-cli -h 10.10.10.5 -p 6379
10.10.10.5:6379> info
...
# Replication
role:slave
master_host:10.10.10.6
master_port:6379
master_link_status:up
...

$ redis-cli -h 10.10.10.5 -p 26379
10.10.10.5:26379> info
...
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
master0:name=mymaster,status=ok,address=10.10.10.6:6379,slaves=2,sentinels=3

Connecting clients through Sentinel

After Sentinel is configured, the way clients connect changes. Taking Redisson as an example, add the following settings, remove the standalone-mode spring.redis.host setting, and change the port to the sentinel port.

spring.redis.sentinel.master=mymaster
spring.redis.sentinel.nodes=10.10.10.4:26379,10.10.10.5:26379,10.10.10.6:26379

The jar dependency to include is:

compile "org.redisson:redisson-spring-boot-starter:3.9.1"

The corresponding configuration class is:

org.springframework.boot.autoconfigure.data.redis.RedisProperties.Sentinel

Common commands

$ sudo systemctl start redis-server
$ sudo systemctl enable redis-server
$ sudo systemctl restart redis-server
$ sudo systemctl stop redis-server

Common issues

  1. Sometimes the server refuses to shut down or restart; in that case, use the redis-cli command line to force a shutdown
$ redis-cli -h 10.10.10.5 -p 6379
10.10.10.5:6379> shutdown nosave
## See also https://redis.io/commands/SHUTDOWN
  2. Redis is configured to save RDB snapshots, but is currently not able to persist on disk.

When this happens, Redis rejects commands that modify the dataset until the background save succeeds again. A common fix is to allow memory overcommit:

$ sudo vim /etc/sysctl.conf
## add the following line
vm.overcommit_memory=1
$ sudo sysctl -p /etc/sysctl.conf
## restart redis-server and sentinel on all nodes

If that still does not fix it, check whether Redis's dump file configuration has been altered:

$ redis-cli -h 10.10.10.5
10.10.10.5:6379> CONFIG GET dbfilename
1) "dbfilename"
2) ".rdb" ## the default is dump.rdb
10.10.10.5:6379> CONFIG GET dir
1) "dir"
2) "/var/spool/cron" ## the default is /var/lib/redis

If you did not change these settings yourself, suspect that they were tampered with by an attacker:

  1. Check whether the Redis port is open to the public internet; if it is, close it immediately
  2. Set a Redis access password
  3. Restore the default Redis configuration
$ vim /etc/redis/redis.conf
dbfilename "dump.rdb"
dir "/var/lib/redis"
$ service redis-server restart

Apply this change and restart on node01, node02, and node03.
## See also https://serverfault.com/questions/800295/redis-spontaneously-failed-failed-opening-rdb-for-saving-permission-denied

Consul

Install Consul (standalone)

$ sudo mkdir -p /data/consul/{current/{bin,etc},data}
$ sudo wget https://releases.hashicorp.com/consul/1.5.3/consul_1.5.3_linux_amd64.zip -O /data/consul/consul_1.5.3_linux_amd64.zip
$ sudo apt-get install unzip
$ sudo unzip /data/consul/consul_1.5.3_linux_amd64.zip -d /data/consul/current/bin
$ sudo vi /data/consul/current/etc/consul.json
{
    "bootstrap": true,
    "datacenter": "test-datacenter",
    "data_dir": "/data/consul/data",
    "log_level": "INFO",
    "server": true,
    "client_addr": "0.0.0.0",
    "ui": true,
    "start_join": ["ip:8301"],
    "enable_syslog": true
}
## See also: https://www.consul.io/docs/agent/options.html#configuration_files

$ sudo ln -s /data/consul/current/etc /data/consul/etc

$ sudo vi /etc/systemd/system/consul.service
[Unit]
Description=consul service
[Service]
ExecStart=/data/consul/current/bin/consul agent -bind={ip} -config-file /data/consul/etc/consul.json
User=root
[Install]
WantedBy=multi-user.target

$ sudo systemctl enable consul.service
$ sudo systemctl start consul.service

Install Consul (cluster)

Host    IP
node01  10.10.10.5
node02  10.10.10.4
node03  10.10.10.6

On node01, node02, and node03:
$ sudo mkdir -p /data/consul/{current/{bin,etc},data}
$ sudo wget https://releases.hashicorp.com/consul/1.5.3/consul_1.5.3_linux_amd64.zip -O /data/consul/consul_1.5.3_linux_amd64.zip
$ sudo apt-get install unzip
$ sudo unzip /data/consul/consul_1.5.3_linux_amd64.zip -d /data/consul/current/bin
$ sudo vi /data/consul/current/etc/consul.json
{
    "datacenter": "roc-datacenter",
    "data_dir": "/data/consul/data",
    "log_level": "INFO",
    "server": true,
    "bootstrap_expect": 3,
    "client_addr": "10.10.10.4",
    "ui": true,
    "start_join": ["10.10.10.4:8301","10.10.10.5:8301","10.10.10.6:8301"],
    "enable_syslog": true
}
## See also: https://www.consul.io/docs/agent/options.html#configuration_files

$ sudo ln -s /data/consul/current/etc /data/consul/etc

$ sudo vi /etc/systemd/system/consul.service
[Unit]
Description=consul service
[Service]
ExecStart=/data/consul/current/bin/consul agent -config-file /data/consul/etc/consul.json
User=root
[Install]
WantedBy=multi-user.target

$ sudo systemctl enable consul.service
$ sudo systemctl start consul.service

Ports 8300, 8301, and 8500 need to be open; if the network is unreachable, the other nodes cannot join the cluster and you may see

failed to sync remote state: No cluster leader

A leader cannot be elected because the nodes cannot communicate with each other; when communication works, the nodes automatically elect a leader at startup.
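
To confirm that the nodes can see each other and that a leader has been elected, the following checks can help (run on any node; if the HTTP API is not bound to localhost, pass -http-addr=http://<node-ip>:8500; output is illustrative):

$ /data/consul/current/bin/consul members                     # all three nodes should be listed as alive
$ /data/consul/current/bin/consul operator raft list-peers    # exactly one node should appear as leader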

Common commands

$ sudo systemctl start consul.service
$ sudo systemctl stop consul.service
$ sudo systemctl restart consul.service

## See also: https://www.consul.io/docs/commands/index.html

Nginx

Install Nginx

$ echo -e "deb http://nginx.org/packages/ubuntu/ $(lsb_release -cs) nginx\ndeb-src http://nginx.org/packages/ubuntu/ $(lsb_release -cs) nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
$ wget -O- http://nginx.org/keys/nginx_signing.key | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install nginx
## See also: http://nginx.org/en/linux_packages.html#stable

Common commands

$ sudo service nginx start
$ sudo service nginx stop
$ sudo service nginx restart

$ sudo service nginx reload # reload the configuration
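
Before reloading, it is usually worth validating the configuration first, e.g.:

$ sudo nginx -t              # syntax check of the configuration files
$ sudo service nginx reload  # apply the changes only if the check passes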

Cassandra cluster

Host         IP
cassandra-1  192.168.0.1
cassandra-2  192.168.0.2

Install Cassandra

$ echo "deb http://www.apache.org/dist/cassandra/debian 39x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
$ curl https://www.apache.org/dist/cassandra/KEYS | sudo apt-key add -
$ sudo apt update
$ sudo apt -y install cassandra
$ sudo apt install openjdk-8-jdk-headless
## See also: http://cassandra.apache.org/download/#installation-from-debian-packages

Edit the configuration files

$ sudo vi /etc/cassandra/cassandra.yaml

seed_provider:
          - seeds: "192.168.0.1,192.168.0.2"
 
concurrent_writes: 64
concurrent_counter_writes: 64
concurrent_materialized_view_writes: 64
compaction_throughput_mb_per_sec: 128
file_cache_size_in_mb: 1024
buffer_pool_use_heap_if_exhausted: true
disk_optimization_strategy: spinning
#listen_address: localhost
listen_interface: eth0
#rpc_address: localhost
rpc_interface: eth0
enable_user_defined_functions: true
auto_bootstrap: false

## tune the Cassandra JVM settings
$ sudo vi /etc/cassandra/jvm.options
#-XX:+UseParNewGC
#-XX:+UseConcMarkSweepGC
#-XX:+CMSParallelRemarkEnabled
#-XX:SurvivorRatio=8
#-XX:MaxTenuringThreshold=1
#-XX:CMSInitiatingOccupancyFraction=75
#-XX:+UseCMSInitiatingOccupancyOnly
#-XX:CMSWaitDuration=10000
#-XX:+CMSParallelInitialMarkEnabled
#-XX:+CMSEdenChunksRecordAlways

-XX:+UseG1GC
-XX:G1RSetUpdatingPauseTimePercent=5
-XX:MaxGCPauseMillis=500
-XX:InitiatingHeapOccupancyPercent=70
-XX:ParallelGCThreads=16
-XX:ConcGCThreads=16

$ sudo vi /etc/cassandra/cassandra-env.sh

## set to the host's internal IP address
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.168.0.1"
#if [ "x$LOCAL_JMX" = "x" ]; then
#      LOCAL_JMX=yes
#  fi
  if [ "x$LOCAL_JMX" = "x" ]; then
      LOCAL_JMX=no
  fi

#JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"

#JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"

$ sudo systemctl stop cassandra

Migrate the data directory to the data disk (optional)

$ sudo mv /var/lib/cassandra /data/cassandra
$ sudo ln -s /data/cassandra /var/lib/cassandra

$ sudo systemctl start cassandra

Repeat the above steps on the remaining machines in the cluster, adjusting the IPs accordingly.
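
Once every node is up, a quick way to check that they have all joined the ring is nodetool, which ships with the Cassandra package (run on any node):

$ nodetool status    # every node should be listed with status UN (Up/Normal)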

ZooKeeper cluster

Host   IP
zk-01  192.168.0.1
zk-02  192.168.0.2
zk-03  192.168.0.3

Install ZooKeeper

$ sudo apt install zookeeperd

Edit the configuration files

$ sudo vim /etc/zookeeper/conf/zoo.cfg
server.1=192.168.0.1:2888:3888
server.2=192.168.0.2:2888:3888
server.3=192.168.0.3:2888:3888
$ sudo vim /etc/zookeeper/conf/myid
1
# each host uses a different id, e.g. zk-01=1, zk-02=2, zk-03=3
$ sudo systemctl restart zookeeper
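
After all three nodes have been restarted, the ensemble can be checked with ZooKeeper's four-letter-word commands (requires netcat); one node should report leader and the other two follower:

$ echo ruok | nc 192.168.0.1 2181                 # a healthy server answers imok
$ echo stat | nc 192.168.0.1 2181 | grep Mode     # Mode: leader or Mode: follower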

Install ZK-UI (optional)

# install zkui
$ cd /data && wget https://github.com/zifangsky/zkui/releases/download/v2.0/zkui-2.0.zip
$ sudo unzip zkui-2.0.zip
$ sudo vi /data/zkui/config.cfg
  
zkServer=192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181
userSet = {"users": [{ "username":"<username>" , "password":"<password>","role": "ADMIN" },{ "username":"appconfig" , "password":"appconfig","role": "USER" }]}
  
$ cd /data/zkui && sudo bash start.sh

Repeat the above steps on the remaining machines in the cluster.

Kafka cluster

Host   IP
zk-01 192.168.0.1
zk-02 192.168.0.2
zk-03 192.168.0.3

Install Kafka

$ sudo mkdir /data/kafka && cd ~
$ wget "http://www-eu.apache.org/dist/kafka/1.0.1/kafka_2.12-1.0.1.tgz"
$ curl http://kafka.apache.org/KEYS | gpg --import
$ wget https://dist.apache.org/repos/dist/release/kafka/1.0.1/kafka_2.12-1.0.1.tgz.asc
$ gpg --verify kafka_2.12-1.0.1.tgz.asc kafka_2.12-1.0.1.tgz
$ sudo tar -xvzf kafka_2.12-1.0.1.tgz --directory /data/kafka --strip-components 1
$ sudo rm -rf kafka_2.12-1.0.1.tgz kafka_2.12-1.0.1.tgz.asc
## See also https://tecadmin.net/install-apache-kafka-ubuntu/

Edit the configuration files

$ sudo mkdir /data/kafka-logs
$ sudo cp /data/kafka/config/server.properties{,.bak}
$ sudo vim /data/kafka/config/server.properties
 
broker.id=0    # different on every host
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<ip>:9092
delete.topic.enable = true
leader.imbalance.check.interval.seconds=5  # how often to check for leader imbalance
leader.imbalance.per.broker.percentage=1
log.dirs=/data/kafka-logs
offsets.topic.replication.factor=3
log.retention.hours=72
log.segment.bytes=1073741824
zookeeper.connect=192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181
  
$ sudo vim /data/kafka/bin/kafka-server-start.sh
export JMX_PORT=12345    # expose a JMX port for monitoring later

Register as a systemd service

$ sudo adduser --system --no-create-home --disabled-password --disabled-login kafka
$ sudo chown -R kafka:nogroup /data/kafka
$ sudo chown -R kafka:nogroup /data/kafka-logs
  
$ sudo vim /etc/systemd/system/kafka.service
[Unit]
Description=High-available, distributed message broker
After=network.target
[Service]
User=kafka
ExecStart=/data/kafka/bin/kafka-server-start.sh /data/kafka/config/server.properties
[Install]
WantedBy=multi-user.target

## enable the service
$ sudo systemctl enable kafka.service
$ sudo systemctl start kafka.service

## See also https://kafka.apache.org/quickstart

Test Kafka (optional)

$ /data/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
$ /data/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181
  
$ /data/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
> Hello World
  
# in another terminal
$ /data/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
Hello World

Deploy Kafka-manager

$ cd /data && sudo wget https://github.com/yahoo/kafka-manager/archive/1.3.3.17.zip -O kafka-manager-1.3.3.17.zip
$ sudo unzip kafka-manager-1.3.3.17.zip
$ sudo mv kafka-manager-1.3.3.17 kafka-manager
$ sudo chown -R kafka:nogroup /data/kafka-manager
$ sudo vim /data/kafka-manager/conf/application.conf
kafka-manager.zkhosts="192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181"
basicAuthentication.enabled=true
basicAuthentication.username="<username>"
basicAuthentication.password="<password>"
  
$ sudo vim /etc/systemd/system/kafka-manager.service
[Unit]
Description=High-available, distributed message broker manager
After=network.target
[Service]
User=kafka
ExecStart=/data/kafka-manager/bin/kafka-manager
[Install]
WantedBy=multi-user.target

## enable the service
$ sudo systemctl enable kafka-manager.service
$ sudo systemctl start kafka-manager.service

MySQL

Install MySQL

$ sudo apt-get update
$ sudo apt-get install mysql-server

{% note warning %}

During installation you will be prompted to create a root password. Make sure you remember it.

{% endnote %}

Configure MySQL

Run the security script

$ mysql_secure_installation

One prompt worth noting is "Disallow root login remotely?"; if you need to connect remotely with the root account, answer No.
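
A quick way to check which hosts the root account may connect from is sketched below (the query is standard MySQL; note that remote access also requires bind-address in /etc/mysql/mysql.conf.d/mysqld.cnf to allow external connections, since it defaults to 127.0.0.1 on Ubuntu's MySQL 5.7 packages):

$ mysql -u root -p -e "SELECT user, host FROM mysql.user WHERE user='root';"
# a host value of '%' (or the client's address) is needed for remote root logins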

Verify

Next, check that the installation succeeded:

  1. Service status
$ systemctl status mysql.service
● mysql.service - MySQL Community Server
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-07-18 23:38:43 PDT; 11min ago
 Main PID: 2948 (mysqld)
    Tasks: 28
   Memory: 142.6M
      CPU: 545ms
   CGroup: /system.slice/mysql.service
           └─2948 /usr/sbin/mysqld
  2. Log in and check the version
$ mysqladmin -p -u root version
mysqladmin  Ver 8.42 Distrib 5.7.26, for Linux on x86_64
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Server version      5.7.26-0ubuntu0.16.04.1
Protocol version    10
Connection      Localhost via UNIX socket
UNIX socket     /var/run/mysqld/mysqld.sock
Uptime:         12 min 18 sec

Threads: 1  Questions: 36  Slow queries: 0  Opens: 121  Flush tables: 1  Open tables: 40  Queries per second avg: 0.048

At this point, the MySQL installation is complete!
