Logstash: a data processing tool
A data collection engine with real-time pipelining, built from input, filter, and output stages; parsing and formatting of logs is usually done in the filter stage.
log data --> Logstash --> JSON
MySQL / HBase / ES --> Logstash (select * from user) --> ES
Logstash architecture (relatively resource-hungry):
collect ---> filter ---> output
Grok: regex-match the fields to be collected
Date: parse date/time fields
Geoip: add geographic location information
Useragent: extract user-agent information
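As a hedged sketch of how these plugins might be combined in one filter block (the field names clientip and agent come from the built-in COMBINEDAPACHELOG pattern; the target name ua is an assumption):

```conf
filter {
  # parse a standard combined-format access log line into named fields
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # use the parsed timestamp as the event time
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
  # enrich with location fields derived from the client IP
  geoip {
    source => "clientip"
  }
  # break the raw user-agent string into browser/OS details
  useragent {
    source => "agent"
    target => "ua"
  }
}
```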
input | filter (Grok regex matching, ...) | output

ES("username") ---> Logstash ---> ES
select * from user
kris1 smile alex
        |
Logstash (input/filter)
input (kris1 event, smile event, alex event) --> queue
filter
input (kris1 event, smile event, alex event) --> queue
Installing Logstash
[root@localhost logstash]# tar -zxvf logstash-6.3.1.tar.gz
Create a config directory to hold custom filter files, patterns, and configuration:
[elk@localhost logstash]$ mkdir config
[elk@localhost config]$ pwd
/home/elk/logstash/config
Each of these small config files/scripts is built from three sections: input, filter, and output.
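A minimal sketch showing all three sections (the filter section may be left empty or omitted; the stdin/stdout plugins used here are the same ones exercised in the examples that follow):

```conf
input  { stdin  {} }   # read events from the terminal
filter { }             # no processing; pass events through unchanged
output { stdout {} }   # print events back to the terminal
```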
①.2 Read input line by line, write output as JSON:
[elk@localhost config]$ vi test1.conf
input  { stdin  {codec => line} }
output { stdout {codec => json} }

heihei
{"message":"heihei","@timestamp":"2019-03-26T03:05:35.750Z","@version":"1","host":"localhost.localdomain"}
hello alex
{"message":"hello alex","@timestamp":"2019-03-26T03:06:11.283Z","@version":"1","host":"localhost.localdomain"}
③ Stdin input plugin: accepts piped input as well as interactive terminal input (the previous examples were interactive). Common options:
codec: type codec
type: type string; sets a custom event type usable in later conditionals
tags: type array; custom event tags usable in later conditionals
add_field: type hash; adds fields to the event

Piped input:
[elk@localhost config]$ echo "bar\nfoo" | ../logstash-6.3.1/bin/logstash -f test1.conf
{"@timestamp":"2019-03-25T12:22:43.534Z","host":"localhost.localdomain","message":"bar\\nfoo","@version":"1"}
④ Input/output options in action, piping data in. type adds a type field; add_field adds an arbitrary key/value pair:
[elk@localhost config]$ vi test2.conf
input {
  stdin {
    codec => "plain"
    tags => ["test"]
    type => "std"
    add_field => {"key" => "value"}
  }
}
output {
  stdout { codec => "rubydebug" }
}
[elk@localhost config]$ ../logstash-6.3.1/bin/logstash -f ./test2.conf
Hello
{
    "@timestamp" => 2019-03-27T00:42:18.166Z,
      "@version" => "1",
           "key" => "value",
          "tags" => [
        [0] "test"
    ],
          "host" => "localhost.localdomain",
          "type" => "std",
       "message" => "Hello"
}
⑥ Elasticsearch input: read data out of an ES index. To sync data from one ES cluster to another, Logstash works well:
[elk@localhost config]$ vi es.conf
input {
  elasticsearch {
    hosts => "192.168.1.101"
    index => "kris"
    query => '{"query": {"match_all": {} }}'
  }
}
output {
  stdout { codec => "rubydebug" }
}

[elk@localhost config]$ ../logstash-6.3.1/bin/logstash -f ./es.conf
{
      "@version" => "1",
           "job" => "java senior engineer and java specialist",
     "isMarried" => true,
         "birth" => "1980-05-07",
           "age" => 28,
    "@timestamp" => 2019-03-25T13:15:27.762Z,
      "username" => "alfred"
}
{
      "@version" => "1",
           "job" => "ruby engineer",
     "isMarried" => false,
         "birth" => "1986-08-07",
           "age" => 23,
    "@timestamp" => 2019-03-25T13:15:27.789Z,
      "username" => "lee junior way"
}
{
      "@version" => "1",
           "job" => "java engineer",
     "isMarried" => false,
         "birth" => "1991-12-15",
           "age" => 18,
    "@timestamp" => 2019-03-25T13:15:27.790Z,
      "username" => "alfred way"
}
{
      "@version" => "1",
           "job" => "java and ruby engineer",
     "isMarried" => false,
         "birth" => "1985-08-07",
           "age" => 22,
    "@timestamp" => 2019-03-25T13:15:27.790Z,
      "username" => "lee"
}
Logstash filters
Filters are what make Logstash powerful: they can process data in rich ways, such as parsing fields, deleting fields, and converting types.
date: parse dates
grok: regex-based parsing
dissect: delimiter-based parsing
mutate: operate on fields, e.g. rename, delete, replace
json: parse JSON content into a target field
geoip: add geographic location data
ruby: modify the Logstash event dynamically with Ruby code
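The document only demonstrates date and grok below, so as a hedged sketch of dissect and mutate (the line format "level=... msg=..." and all field names here are assumptions for illustration):

```conf
filter {
  # split a line like "level=INFO msg=started" on its fixed delimiters;
  # cheaper than grok because no regex engine is involved
  dissect {
    mapping => { "message" => "level=%{level} msg=%{msg}" }
  }
  mutate {
    rename       => { "msg" => "short_message" }   # rename a field
    remove_field => ["level"]                      # drop a field
    convert      => { "response" => "integer" }    # change a field's type
  }
}
```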
[elk@localhost config]$ vi filter.conf
input { stdin {codec => "json"} }
filter {
  date { match => ["logdate","MM dd yyyy HH:mm:ss"] }
}
output {
  stdout { codec => "rubydebug" }
}

[elk@localhost config]$ ../logstash-6.3.1/bin/logstash -f ./filter.conf
jing
[2019-03-25T23:51:09,341][WARN ][logstash.codecs.jsonlines] JSON parse error, original data now in message field {:error=>#<LogStash::Json::ParserError: Unrecognized token 'jing': was expecting ('true', 'false' or 'null') at [Source: (String)"jing"; line: 1, column: 9]>, :data=>"jing"}
{
          "host" => "localhost.localdomain",
       "message" => "jing",
      "@version" => "1",
          "tags" => [
        [0] "_jsonparsefailure"
    ],
    "@timestamp" => 2019-03-26T03:51:09.375Z
}
Grok regex matching
Raw log line:
93.180.71.3 - - [17/May/2015:08:05:32 +0000] "GET /downloads/product_1 HTTP/1.1" 304 0 "-" "Debian APT-HTTP/1.3 (0.8.16~exp12ubuntu10.21)"
Instead of hand-writing regexes like [0-9]+\.[0-9]+\.[0-9]+\.[0-9]+ for each piece, use the prebuilt grok patterns:
%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{DATA:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response:int} (?:-|%{NUMBER:bytes:int}) %{QS:referrer} %{QS:agent}
This produces JSON with the raw line in message plus named fields such as clientip, ident, auth, and timestamp. The input listens for HTTP on port 7474. Sample lines:
93.180.71.3 - - [17/May/2015:08:05:32 +0000] "GET /downloads/product_1 HTTP/1.1" 304 0 "-" "Debian APT-HTTP/1.3 (0.8.16~exp12ubuntu10.21)"
93.180.71.3 - - [17/May/2015:08:05:23 +0000] "GET /downloads/product_1 HTTP/1.1" 304 0 "-" "Debian APT-HTTP/1.3 (0.8.16~exp12ubuntu10.21)"
%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{DATA:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response:int} (?:-|%{NUMBER:bytes:int}) %{QS:referrer} %{QS:agent}
[elk@localhost config]$ vi grok.conf   ## the quotes in the pattern must be escaped with \
input {
  http { port => 7474 }
}
filter {
  grok {
    match => { "message" => "%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{DATA:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response:int} (?:-|%{NUMBER:bytes:int}) %{QS:referrer} %{QS:agent}" }
  }
}
output {
  stdout { codec => "rubydebug" }
}
[elk@localhost config]$ ../logstash-6.3.1/bin/logstash -f ./grok.conf
Send a GET request to port 7474:
http://192.168.1.101:7474/93.180.71.3%20-%20-%20[17/May/2015:08:05:32%20+0000]%20%22GET%20/downloads/product_1%20HTTP/1.1%22%20304%200%20%22-%22%20%22Debian%20APT-HTTP/1.3%20(0.8.16~exp12ubuntu10.21)%22
{
       "message" => "",
    "@timestamp" => 2019-03-26T07:07:03.183Z,
          "host" => "192.168.1.5",
          "tags" => [
        [0] "_grokparsefailure"
    ],
      "@version" => "1",
       "headers" => {
                         "http_host" => "192.168.1.101:7474",
                   "http_user_agent" => "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36",
              "http_accept_language" => "zh-CN,zh;q=0.9",
              "http_accept_encoding" => "gzip, deflate",
                      "http_version" => "HTTP/1.1",
                       "http_accept" => "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
                       "request_uri" => "/93.180.71.3%20-%20-%20[17/May/2015:08:05:32%20+0000]%20%22GET%20/downloads/product_1%20HTTP/1.1%22%20304%200%20%22-%22%20%22Debian%20APT-HTTP/1.3%20(0.8.16~exp12ubuntu10.21)%22",
                   "http_connection" => "keep-alive",
                      "request_path" => "/93.180.71.3%20-%20-%20[17/May/2015:08:05:32%20+0000]%20%22GET%20/downloads/product_1%20HTTP/1.1%22%20304%200%20%22-%22%20%22Debian%20APT-HTTP/1.3%20(0.8.16~exp12ubuntu10.21)%22",
                    "request_method" => "GET",
    "http_upgrade_insecure_requests" => "1"
    }
}
{
       "message" => "",
    "@timestamp" => 2019-03-26T07:07:03.403Z,
          "host" => "192.168.1.5",
          "tags" => [
        [0] "_grokparsefailure"
    ],
      "@version" => "1",
       "headers" => {
                   "http_host" => "192.168.1.101:7474",
             "http_user_agent" => "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36",
                "http_referer" => "http://192.168.1.101:7474/93.180.71.3%20-%20-%20[17/May/2015:08:05:32%20+0000]%20%22GET%20/downloads/product_1%20HTTP/1.1%22%20304%200%20%22-%22%20%22Debian%20APT-HTTP/1.3%20(0.8.16~exp12ubuntu10.21)%22",
        "http_accept_language" => "zh-CN,zh;q=0.9",
        "http_accept_encoding" => "gzip, deflate",
                "http_version" => "HTTP/1.1",
                 "http_accept" => "image/webp,image/apng,image/*,*/*;q=0.8",
                 "request_uri" => "/favicon.ico",
             "http_connection" => "keep-alive",
                "request_path" => "/favicon.ico",
              "request_method" => "GET"
    }
}
Note the _grokparsefailure tag: the sample line was sent in the URL path, so the event's message field is empty and the pattern has nothing to match. With the http input, the log line would need to arrive in the request body for grok to parse it.
Baidu ECharts (for chart visualization):
https://echarts.baidu.com/echarts2/doc/example.html
Data visualization demo walkthrough
- Requirements:
  Collect the query statements hitting the Elasticsearch cluster
  Analyze the queries: most common statements, response times, etc.
- Approach:
  Data collection: Packetbeat + Logstash
  Data analysis: Kibana + Elasticsearch
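The collection side is not shown in this document, so as a hedged sketch only: a Packetbeat fragment that sniffs ES query traffic and ships it to Logstash (the exact options vary by Packetbeat version, and the Logstash host/port here are assumptions):

```yaml
# packetbeat.yml (fragment; option names per the 6.x config layout)
packetbeat.interfaces.device: any
packetbeat.protocols.http:
  ports: [9200]          # sniff HTTP traffic to the production ES cluster
  send_request: true     # keep the query body so it can be analyzed later
output.logstash:
  hosts: ["192.168.14.16:5044"]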
Setup
- Production Cluster:
  1. Elasticsearch 192.168.14.13:9200
  2. Kibana 192.168.14.15:5601
- Monitoring Cluster:
  1. Elasticsearch 192.168.14.16:8200
  2. Kibana 192.168.14.16:8601
- Logstash / Packetbeat
nginx --> log
            ↓
(Java EE app) logstash ---> es --> kibana

Components:
1. tomcat --> web app
2. nginx
3. logstash
4. es
5. kibana

Host 101: es, kibana, logstash
Host 102: tomcat, nginx
- Start the data collection cluster:
  Start ES: ./elasticsearch
  Start Kibana: ./kibana
- Start the data analysis cluster:
  (1) Start ES
  (2) Start Logstash
Install Tomcat and upload the Java web project manager-test into webapps:
[elk@localhost tomcat]$ tar -zxvf /home/elk/soft/apache-tomcat-7.0.47.tar.gz -C ./tomcat/
[elk@localhost tomcat]$ ll
drwxrwxr-x. 9 elk elk 160 Mar 25 13:02 apache-tomcat-7.0.47
[elk@localhost apache-tomcat-7.0.47]$ bin/startup.sh
Using CATALINA_BASE:   /home/elk/tomcat/apache-tomcat-7.0.47
Using CATALINA_HOME:   /home/elk/tomcat/apache-tomcat-7.0.47
Using CATALINA_TMPDIR: /home/elk/tomcat/apache-tomcat-7.0.47/temp
Using JRE_HOME:        /home/elk/jdk/jdk1.8.0_171/jre
Using CLASSPATH:       /home/elk/tomcat/apache-tomcat-7.0.47/bin/bootstrap.jar:/home/elk/tomcat/apache-tomcat-7.0.47/bin/tomcat-juli.jar
http://192.168.1.102:8080/
[elk@localhost apache-tomcat-7.0.47]$ bin/shutdown.sh
http://192.168.1.102:8080/manager-test/tables.html
Install nginx
1. yum install gcc-c++
   nginx is built from source downloaded from the official site, and compiling it requires a gcc toolchain.
2. yum install -y pcre pcre-devel
   PCRE (Perl Compatible Regular Expressions) is a Perl-compatible regular expression library. nginx's http module uses pcre to parse regular expressions, so the pcre library must be installed on Linux. Note: pcre-devel is the development library built on top of pcre; nginx needs it as well.
3. yum install -y zlib zlib-devel
   zlib provides many compression and decompression methods; nginx uses zlib to gzip HTTP response bodies, so the zlib library must be installed.
4. yum install -y openssl openssl-devel
   OpenSSL is a robust SSL/TLS toolkit covering the major cryptographic algorithms, common key and certificate management, and the SSL protocol, with a rich set of utilities for testing and other purposes. nginx supports not only http but also https (http over ssl), so openssl must be installed on Linux.
tar -zxvf /home/elk/soft/nginx-1.15.1.tar.gz -C ./nginx/
./configure --help   # list all available build options
[root@localhost nginx-1.15.1]# ./configure \
> --prefix=/usr/local/nginx \
> --pid-path=/var/run/nginx/nginx.pid \
> --lock-path=/var/lock/nginx.lock \
> --error-log-path=/var/log/nginx/error.log \
> --http-log-path=/var/log/nginx/access.log \
> --with-http_gzip_static_module \
> --http-client-body-temp-path=/var/temp/nginx/client \
> --http-proxy-temp-path=/var/temp/nginx/proxy \
> --http-fastcgi-temp-path=/var/temp/nginx/fastcgi \
> --http-uwsgi-temp-path=/var/temp/nginx/uwsgi \
> --http-scgi-temp-path=/var/temp/nginx/scgi
# Note: the temp directories above point at /var/temp/nginx; create the temp and nginx directories under /var first.
Access log path: /var/log/nginx/access.log
Compile and install:
[root@localhost nginx-1.15.1]# make          ## compile
[root@localhost nginx-1.15.1]# make install
After a successful install, check the install directory:
[root@localhost nginx]# ll
total 4
drwxr-xr-x. 2 root root 4096 Mar 25 13:33 conf
drwxr-xr-x. 2 root root   40 Mar 25 13:33 html
drwxr-xr-x. 2 root root   19 Mar 25 13:33 sbin
[root@localhost nginx]# pwd   # the actual nginx install directory
/usr/local/nginx
Start nginx
cd /usr/local/nginx/sbin/
./nginx
[root@localhost conf]# rm -rf nginx.conf
[root@localhost conf]# cp /home/elk/file/project/nginx.conf ./   ## replace nginx.conf with the pre-built configuration
nginx.conf
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;

    log_format  main  '$remote_addr - $remote_user [$time_local] $http_host $request_method "$uri" "$query_string" '
                      '$status $body_bytes_sent "$http_referer" $upstream_status $upstream_addr $request_time $upstream_response_time '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream manager {
        server 127.0.0.1:8080 weight=10;
    }

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location / {
            proxy_pass http://manager/manager/index.html;
            proxy_redirect off;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
Change the project path:
[root@localhost conf]# vi nginx.conf
location / {
    proxy_pass http://manager/manager-test/index.html;
    proxy_redirect off;
}
Start it:
[root@localhost conf]# pwd
/usr/local/nginx/conf
[root@localhost conf]# cd ../sbin/
[root@localhost sbin]# pwd
/usr/local/nginx/sbin
[root@localhost sbin]# ./nginx
http://192.168.1.102/   # refreshing the page generates access-log entries
Stopping nginx
Option 1, fast stop:
cd /usr/local/nginx/sbin
./nginx -s stop
This is equivalent to looking up the nginx process id and force-killing it with the kill command.
Option 2, graceful stop (recommended):
cd /usr/local/nginx/sbin
./nginx -s quit
This waits for nginx to finish processing its in-flight requests before stopping.
Restarting nginx
Option 1, stop then start (recommended):
Restarting nginx amounts to stopping it and starting it again, i.e. running the stop command followed by the start command:
./nginx -s quit
./nginx
Option 2, reload the configuration file:
After editing nginx.conf, making the changes take effect normally requires a restart; with -s reload, nginx picks up the new configuration without stopping and starting:
./nginx -s reload
Testing
With nginx installed and started, you can reach it on the VM's address; if the page loads, the install succeeded.
List nginx processes: ps aux | grep nginx
You will see a master process id and worker process ids.
Note: when starting with ./nginx, you can pass -c to specify which configuration file to load:
./nginx -c /usr/local/nginx/conf/nginx.conf
Without -c, nginx loads conf/nginx.conf by default; this default path can also be set at build time via the ./configure option --conf-path= (pointing at nginx.conf).
Watch the log file in real time
[root@localhost sbin]# cd /var/log/nginx/
[root@localhost nginx]# ls
access.log  error.log
[root@localhost nginx]# tail -f access.log
Logstash installation and configuration
[elk@localhost config]$ vi nginx_logstash.conf
Adjust the paths:
patterns_dir => "/home/elk/logstash/config/patterns/"
match => { "message" => "%{NGINXACCESS}" }
[elk@localhost config]$ ../logstash-6.3.1/bin/logstash -f ./nginx_logstash.conf
[elk@localhost config]$ pwd
/home/elk/logstash/config
[elk@localhost config]$ ll
total 4
-rw-r--r--. 1 elk elk 1090 Mar 25 13:56 nginx_logstash.conf
drwxrwxr-x. 2 elk elk   19 Mar 25 13:54 patterns
These two files are all that needs configuring:
[elk@localhost config]$ cat patterns/nginx
NGINXACCESS %{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
NGINXACCESSLOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}

[elk@localhost config]$ vi nginx_logstash.conf
input {
  file {
    path => ["/var/log/nginx/access.log"]
    type => "nginx_access"
    #start_position => "beginning"
  }
}
filter {
  if [type] == "nginx_access" {
    grok {
      patterns_dir => "/home/elk/logstash/config/patterns/"
      match => { "message" => "%{NGINXACCESS}" }
    }
    date {
      match => ["timestamp","dd/MMM/yyyy:HH:mm:ss Z"]
    }
    if [param] {
      ruby {
        init => "@kname = ['quote','url_args']"
        code => "
          new_event = LogStash::Event.new(Hash[@kname.zip(event.get('param').split('?'))])
          new_event.remove('@timestamp')
          event.append(new_event)
        "
      }
    }
    if [url_args] {
      ruby {
        init => "@kname = ['key','value']"
        code => "event.set('nested_args', event.get('url_args').split('&').collect{|i| Hash[@kname.zip(i.split('='))]})"
        remove_field => ["url_args","param","quote"]
      }
    }
    mutate {
      convert => ["response","integer"]
      remove_field => "timestamp"
    }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["http://192.168.1.102:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
  }
}
Start Kibana:
[elk@localhost bin]$ ./kibana
Every page refresh produces a log event:
{
        "request" => "/assets/js/ace.min.js",
       "@version" => "1",
       "clientip" => "192.168.1.5",
           "verb" => "GET",
        "message" => "192.168.1.5 - - [25/Mar/2019:14:01:58 -0400] \"GET /assets/js/ace.min.js HTTP/1.1\" 404 1037 \"http://192.168.1.102/\" \"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36\"",
     "@timestamp" => 2019-03-25T18:01:58.000Z,
          "bytes" => "1037",
           "path" => "/var/log/nginx/access.log",
           "type" => "nginx_access",
           "host" => "localhost.localdomain",
    "httpversion" => "1.1",
           "auth" => "-",
          "ident" => "-",
       "response" => 404
}
The Elasticsearch output defines the index:
elasticsearch {
  hosts => ["http://192.168.1.101:9200"]
  index => "logstash-%{type}-%{+YYYY.MM.dd}"
}
Querying shows that many log documents are being generated:
GET logstash-nginx_access-2019.03.25/_search
The log entries flow into ES.
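As a hedged follow-up for the analysis step (the index name matches the one above, but the request field and its .keyword sub-field are assumptions based on the parsed output shown earlier), a terms aggregation can surface the most-requested paths:

```
GET logstash-nginx_access-2019.03.25/_search
{
  "size": 0,
  "aggs": {
    "top_paths": {
      "terms": { "field": "request.keyword", "size": 10 }
    }
  }
}
```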
Chart display: create an index pattern for the Logstash data.
Index pattern: logstash-*
Time filter field: @timestamp
Then create the index pattern logstash-*.
Appendix: firewall configuration
1. Basic firewalld usage
Start: systemctl start firewalld
Stop: systemctl stop firewalld
Check status: systemctl status firewalld
Disable at boot: systemctl disable firewalld
Enable at boot: systemctl enable firewalld
2. systemctl is the main service management tool on CentOS 7, combining the roles of the older service and chkconfig commands.
Start a service: systemctl start firewalld.service
Stop a service: systemctl stop firewalld.service
Restart a service: systemctl restart firewalld.service
Show a service's status: systemctl status firewalld.service
Enable a service at boot: systemctl enable firewalld.service
Disable a service at boot: systemctl disable firewalld.service
Check whether a service starts at boot: systemctl is-enabled firewalld.service
List enabled services: systemctl list-unit-files | grep enabled
List failed services: systemctl --failed
3. Configuring with firewall-cmd
Show version: firewall-cmd --version
Show help: firewall-cmd --help
Show state: firewall-cmd --state
List all open ports: firewall-cmd --zone=public --list-ports
Reload firewall rules: firewall-cmd --reload
Show active zones: firewall-cmd --get-active-zones
Show the zone of an interface: firewall-cmd --get-zone-of-interface=eth0
Reject all packets: firewall-cmd --panic-on
Cancel the reject-all state: firewall-cmd --panic-off
Check the reject-all state: firewall-cmd --query-panic
4. How do you open a port?
Add: firewall-cmd --zone=public --add-port=80/tcp --permanent (--permanent makes it persistent; without it the rule is lost after a restart)
Reload: firewall-cmd --reload
Query: firewall-cmd --zone=public --query-port=80/tcp
Remove: firewall-cmd --zone=public --remove-port=80/tcp --permanent