Because the zabbix server in our test environment runs on fairly low-spec hardware, we kept hitting performance bottlenecks (mainly the database and disk I/O), which pushed me to adopt a few measures to ease the pain.
The backup script we had been using did a full dump of the zabbix database with MySQL's bundled mysqldump tool. Once the data volume grew, a full backup took long enough that it held read locks on the database, which made the zabbix server think MySQL had died and fire off a pile of alerts.
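As an aside, part of that locking pain comes from plain mysqldump defaults: for InnoDB tables, the --single-transaction flag reads from a consistent snapshot without holding read locks. A minimal sketch, with illustrative credentials and paths (not the script's actual values):

```bash
# Hypothetical example: dump without blocking reads on InnoDB tables.
# --single-transaction reads from a consistent snapshot instead of locking.
mysqldump --single-transaction -uzabbix -pzabbix -hlocalhost zabbix > /opt/backup/zabbix_full.sql
```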
It turned out that the growth was driven by a handful of huge data-storage tables in the zabbix database. So the backup can simply skip those tables, which cuts backup time dramatically (PS: a backup used to take about ten minutes; skipping the big tables, it now finishes in roughly a second).
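For reference, mysqldump can also skip tables in a single invocation with --ignore-table (one flag per table, database-qualified); the script below takes the other route and dumps the remaining tables one by one. A sketch with only a few of the big tables listed:

```bash
# Hypothetical alternative: one dump that skips the biggest history/trend tables.
mysqldump -uzabbix -pzabbix zabbix \
  --ignore-table=zabbix.history \
  --ignore-table=zabbix.history_uint \
  --ignore-table=zabbix.trends \
  --ignore-table=zabbix.trends_uint > /opt/backup/zabbix_config.sql
```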
Here is a script written by itnihao specifically for per-table backup and restore of the zabbix database:
```bash
#!/bin/bash
#author: itnihao
red='\e[0;31m' # red
RED='\e[1;31m'
green='\e[0;32m' # green
GREEN='\e[1;32m'
blue='\e[0;34m' # blue
BLUE='\e[1;34m'
purple='\e[0;35m' # purple
PURPLE='\e[1;35m'
NC='\e[0m' # no color
source /etc/bashrc
source /etc/profile
MySQL_USER=zabbix
MySQL_PASSWORD=zabbix
MySQL_HOST=localhost
MySQL_PORT=3306
MySQL_DUMP_PATH=/opt/backup
MYSQL_BIN_PATH=/opt/software/mysql/bin/mysql
MYSQL_DUMP_BIN_PATH=/opt/software/mysql/bin/mysqldump
MySQL_DATABASE_NAME=zabbix
DATE=$(date '+%Y%m%d')
MySQLDUMP () {
[ -d ${MySQL_DUMP_PATH} ] || mkdir ${MySQL_DUMP_PATH}
cd ${MySQL_DUMP_PATH}
[ -d logs ] || mkdir logs
[ -d ${DATE} ] || mkdir ${DATE}
cd ${DATE}
# Skip the large history/trend/event tables so only configuration data is dumped.
#TABLE_NAME_ALL=$(${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} -e "show tables"|egrep -v "(Tables_in_zabbix)")
TABLE_NAME_ALL=$(${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} -e "show tables"|egrep -v "(Tables_in_zabbix|history*|trends*|acknowledges|alerts|auditlog|events|service_alarms)")
DUMP_STATUS=0
for TABLE_NAME in ${TABLE_NAME_ALL}
do
${MYSQL_DUMP_BIN_PATH} --opt -u${MySQL_USER} -p${MySQL_PASSWORD} -P${MySQL_PORT} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} ${TABLE_NAME} >${TABLE_NAME}.sql || DUMP_STATUS=1
sleep 0.01
done
# The original tested $? twice in a row, so the second test only saw the result
# of the first; track dump failures explicitly instead.
if [ "${DUMP_STATUS}" -eq 0 ];then
echo "${DATE}: Backup zabbix succeed" >> ${MySQL_DUMP_PATH}/logs/ZabbixMysqlDump.log
else
echo "${DATE}: Backup zabbix not succeed" >> ${MySQL_DUMP_PATH}/logs/ZabbixMysqlDump.log
fi
cd ${MySQL_DUMP_PATH}/
# Remove the backup directory from 5 days ago, keeping a rolling window.
rm -rf $(date +%Y%m%d --date='5 days ago')
exit 0
}
MySQLImport () {
cd ${MySQL_DUMP_PATH}
# List the available backup directories (named by date).
DATE=$(ls ${MySQL_DUMP_PATH} |egrep "^[0-9]+$")
echo -e "${green}${DATE}${NC}"
echo -e "${blue}Which date do you want to import? Please input the date:${NC}"
read SELECT_DATE
if [ -d "${SELECT_DATE}" ];then
echo -e "You selected ${green}${SELECT_DATE}${NC}. To continue, input ${red}(yes|y|Y)${NC}; anything else exits."
read Input
# The original test ([[ 'yes|y|Y' =~ "${Input}" ]]) also matched empty input;
# anchor the pattern so only an explicit yes, y or Y continues.
if [[ "${Input}" =~ ^(yes|y|Y)$ ]];then
echo "now importing SQL....... Please wait......."
else
exit 1
fi
cd ${SELECT_DATE}
for PER_TABLE_SQL in $(ls *.sql)
do
${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} < ${PER_TABLE_SQL}
echo -e "import ${PER_TABLE_SQL} ${PURPLE}........................${NC}"
done
echo "Finished importing SQL. Please check the Zabbix database."
else
echo "Directory ${SELECT_DATE} does not exist."
fi
}
case "$1" in
MySQLDUMP|mysqldump)
MySQLDUMP
;;
MySQLImport|mysqlimport)
MySQLImport
;;
*)
echo "Usage: $0 {(MySQLDUMP|mysqldump) (MySQLImport|mysqlimport)}"
;;
esac
```
The original script lives here: https://github.com/itnihao/zabbix-book/blob/master/03-chapter/Zabbix_MySQLdump_per_table_v2.sh
What you see above is my modified version of that script, adapted to my own backup needs; feel free to adapt it to yours. The effect is as described: a full backup used to run to roughly 4 GB, while backing up only the configuration data comes to under 10 MB, a huge saving in both time and space.
The catch is that the history data itself is no longer backed up. I'm planning to use xtrabackup to take incremental backups of the data; I haven't set that up yet and will get to it in the next couple of days.
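As a rough sketch of where I'm headed, assuming Percona XtraBackup is installed (the flags and directories below are my assumption of the eventual setup, not something already running):

```bash
# Full base backup (run once, e.g. weekly).
xtrabackup --backup --user=zabbix --password=zabbix \
  --target-dir=/opt/backup/xtra/base
# Incremental backup against the base (run daily): only pages changed
# since the base backup are copied, so it is fast and small.
xtrabackup --backup --user=zabbix --password=zabbix \
  --target-dir=/opt/backup/xtra/inc1 \
  --incremental-basedir=/opt/backup/xtra/base
```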
With backups out of the way, the next task is partitioning the large tables. I followed an article from the zabbix wiki, https://www.zabbix.org/wiki/Docs/howto/mysql_partition — worth a read if you're interested; I've condensed it here to make it a bit easier to follow.
Partitioning physically splits a large table into multiple files while logically it remains a single table, fully transparent to the application. Splitting one big table into many small partitions also makes queries faster, and old partitions can be dropped at any time to purge expired data. This approach suits tables with large data volumes but relatively few queries; for a large table that also takes heavy query traffic, sharding across databases and tables is the better option.
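To make that concrete before we start: partitioning a history table on its clock column and expiring old data boils down to statements like the following (a hand-written illustration with made-up epoch boundaries; the stored procedures below generate and run these for you):

```sql
-- Range-partition on clock; each partition holds one day of rows.
ALTER TABLE zabbix.history PARTITION BY RANGE (`clock`)
(PARTITION p201511010000 VALUES LESS THAN (1446422400));
-- Add the next day's partition ahead of time.
ALTER TABLE zabbix.history ADD PARTITION
(PARTITION p201511020000 VALUES LESS THAN (1446508800));
-- Expiring old data means dropping a whole partition, which is nearly instant.
ALTER TABLE zabbix.history DROP PARTITION p201511010000;
```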
Enough talk; let's get to work.
First, log in to the database (PS: not demonstrated here).
Then switch to the zabbix database and change the structure of two tables. MySQL requires the partitioning column (clock here) to be part of every unique key, so the id primary keys on history_text and history_log have to be converted into plain indexes first:
```sql
use zabbix;
Alter table history_text drop primary key, add index (id), drop index history_text_2, add index history_text_2 (itemid, id);
Alter table history_log drop primary key, add index (id), drop index history_log_2, add index history_log_2 (itemid, id);
```
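Before moving on you can sanity-check the new index layout, for example:

```sql
-- id should now be a plain index, and history_text_2 should cover (itemid, id).
SHOW INDEX FROM history_text;
SHOW INDEX FROM history_log;
```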
Once that's done, create the four stored procedures from the wiki article:
```sql
DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), PARTITIONNAME VARCHAR(64), CLOCK INT)
BEGIN
/*
SCHEMANAME = The DB schema in which to make changes
TABLENAME = The table to add the partition to
PARTITIONNAME = The name of the partition to create
CLOCK = The clock value (epoch seconds); the partition holds rows with clock less than this
*/
/*
Verify that the partition does not already exist
*/
DECLARE RETROWS INT;
SELECT COUNT(1) INTO RETROWS
FROM information_schema.partitions
WHERE table_schema = SCHEMANAME AND TABLE_NAME = TABLENAME AND partition_description >= CLOCK;
IF RETROWS = 0 THEN
/*
1. Print a message indicating that a partition was created.
2. Create the SQL to create the partition.
3. Execute the SQL from #2.
*/
SELECT CONCAT( "partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" ) AS msg;
SET @SQL = CONCAT( 'ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
PREPARE STMT FROM @SQL;
EXECUTE STMT;
DEALLOCATE PREPARE STMT;
END IF;
END$$
DELIMITER ;
```
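This procedure is normally driven by partition_maintenance below, but calling it directly shows what it does (illustrative values; the last argument is the epoch boundary the new partition stays below):

```sql
-- Add partition p201511110000 to zabbix.history for rows with clock < 2015-11-12 00:00 UTC.
CALL partition_create('zabbix', 'history', 'p201511110000', 1447286400);
```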
```sql
DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
/*
SCHEMANAME = The DB schema in which to make changes
TABLENAME = The table with partitions to potentially delete
DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
*/
DECLARE done INT DEFAULT FALSE;
DECLARE drop_part_name VARCHAR(16);
/*
Get a list of all the partitions that are older than the date
in DELETE_BELOW_PARTITION_DATE. All partitions are prefixed with
a "p", so use SUBSTRING TO get rid of that character.
*/
DECLARE myCursor CURSOR FOR
SELECT partition_name
FROM information_schema.partitions
WHERE table_schema = SCHEMANAME AND TABLE_NAME = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
/*
Create the basics for when we need to drop the partition. Also, create
@drop_partitions to hold a comma-delimited list of all partitions that
should be deleted.
*/
SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
SET @drop_partitions = "";
/*
Start looping through all the partitions that are too old.
*/
OPEN myCursor;
read_loop: LOOP
FETCH myCursor INTO drop_part_name;
IF done THEN
LEAVE read_loop;
END IF;
SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
END LOOP;
IF @drop_partitions != "" THEN
/*
1. Build the SQL to drop all the necessary partitions.
2. Run the SQL to drop the partitions.
3. Print out the table partitions that were deleted.
*/
SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
PREPARE STMT FROM @full_sql;
EXECUTE STMT;
DEALLOCATE PREPARE STMT;
SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
ELSE
/*
No partitions are being deleted, so print out "N/A" (Not applicable) to indicate
that no changes were made.
*/
SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
END IF;
END$$
DELIMITER ;
```
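Likewise normally called from partition_maintenance, but a direct call looks like this; the third argument is a yyyymmddHHMM number, and every partition whose name (minus the leading "p") sorts below it gets dropped:

```sql
-- Drop all partitions of zabbix.history older than 2015-10-01.
CALL partition_drop('zabbix', 'history', 201510010000);
```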
```sql
DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
DECLARE PARTITION_NAME VARCHAR(16);
DECLARE LESS_THAN_TIMESTAMP INT;
DECLARE CUR_TIME INT;
CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));
SET @__interval = 1;
create_loop: LOOP
IF @__interval > CREATE_NEXT_INTERVALS THEN
LEAVE create_loop;
END IF;
SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
SET @__interval=@__interval+1;
END LOOP;
SET OLDER_THAN_PARTITION_DATE=DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);
END$$
DELIMITER ;
```
```sql
DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
DECLARE PARTITION_NAME VARCHAR(16);
DECLARE RETROWS INT(11);
DECLARE FUTURE_TIMESTAMP TIMESTAMP;
/*
* Check if any partitions exist for the given SCHEMANAME.TABLENAME.
*/
SELECT COUNT(1) INTO RETROWS
FROM information_schema.partitions
WHERE table_schema = SCHEMANAME AND TABLE_NAME = TABLENAME AND partition_name IS NULL;
/*
* If partitions do not exist, go ahead and partition the table
*/
IF RETROWS = 1 THEN
/*
* Take the current date at 00:00:00 and add HOURLYINTERVAL to it. This is the timestamp below which we will store values.
* We begin partitioning based on the beginning of a day. This is because we don't want to generate a random partition
* that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
* end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
*/
SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');
-- Create the partitioning query
SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");
-- Run the partitioning query
PREPARE STMT FROM @__PARTITION_SQL;
EXECUTE STMT;
DEALLOCATE PREPARE STMT;
END IF;
END$$
DELIMITER ;
```
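partition_verify only acts when the table has no partitions yet, bootstrapping the first one. A direct call with a 24-hour interval would be:

```sql
-- Creates the initial daily partition on zabbix.history if the table is still unpartitioned.
CALL partition_verify('zabbix', 'history', 24);
```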
With the four stored procedures in place, you can partition a table with:
```sql
CALL partition_maintenance('<zabbix_db_name>', '<table_name>', <days_to_keep_data>, <hourly_interval>, <num_future_intervals_to_create>)
```
This call partitions the given table. Let me explain the parameters with an example:
```sql
CALL partition_maintenance('zabbix', 'history_uint', 31, 24, 14);
```
zabbix_db_name: the database name
table_name: the table name
days_to_keep_data: how many days of data to keep
hourly_interval: how often (in hours) to create a new partition
num_future_intervals_to_create: how many partitions to create ahead in this run
So this example keeps at most 31 days of data in history_uint, creates one partition every 24 hours, and creates 14 partitions ahead.
You can save the four stored procedures above as a single file and import it into the database (I'll put the file in the attachments later). The command used here is: mysql -uzabbix -pzabbix zabbix<partition_call.sql
Then wrap all the CALLs into one procedure as well, with the following contents:
```sql
DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
CALL partition_maintenance(SCHEMA_NAME, 'history', 31, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_log', 31, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_str', 31, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_text', 31, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 31, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'trends', 180, 24, 14);
CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 180, 24, 14);
END$$
DELIMITER ;
```
Import this file into the database as well: mysql -uzabbix -pzabbix zabbix<partition_all.sql
At this point you can kick off the partitioning with the following command:
```
mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix');"
+----------------+--------------------+
| table | partitions_deleted |
+----------------+--------------------+
| zabbix.history | N/A |
+----------------+--------------------+
+--------------------+--------------------+
| table | partitions_deleted |
+--------------------+--------------------+
| zabbix.history_log | N/A |
+--------------------+--------------------+
+--------------------+--------------------+
| table | partitions_deleted |
+--------------------+--------------------+
| zabbix.history_str | N/A |
+--------------------+--------------------+
+---------------------+--------------------+
| table | partitions_deleted |
+---------------------+--------------------+
| zabbix.history_text | N/A |
+---------------------+--------------------+
+---------------------+--------------------+
| table | partitions_deleted |
+---------------------+--------------------+
| zabbix.history_uint | N/A |
+---------------------+--------------------+
+---------------+--------------------+
| table | partitions_deleted |
+---------------+--------------------+
| zabbix.trends | N/A |
+---------------+--------------------+
+--------------------+--------------------+
| table | partitions_deleted |
+--------------------+--------------------+
| zabbix.trends_uint | N/A |
+--------------------+--------------------+
```
Output like the above shows that all 7 tables were partitioned; you can also see the newly created partition files under MySQL's data directory. (PS: it's best to empty the history_uint table before running this command, otherwise the conversion of that huge table will take a very long time; to empty it: truncate table history_uint;)
With that, the tables are partitioned.
Put the command into a cron job like so:
```bash
crontab -l|tail -1
01 01 * * * /opt/software/mysql/bin/mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix');"
```
It runs every night at 01:01. The backup script from earlier also needs a cron entry, running every morning at 00:01:
```bash
crontab -l|tail -2|head -1
01 00 * * * /usr/local/scripts/Zabbix_MySQLdump_per_table_v2.sh mysqldump
```
And that's it. Now go poke around the zabbix web UI and see whether it feels snappier than before.
This post comes from the 「檸檬」 blog; please keep this attribution: http://xianglinhu.blog.51cto.com/5787032/1700981