Comparing MySQL performance on a physical machine vs. a KVM guest.
The benchmark follows the single-variable principle: every comparison changes exactly one variable.
Using the physical-machine MySQL instance as the baseline, two rounds of tests were run.
CPU: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz x 24
MEM: 48G
Disk: SSD 1.3T
System: Ubuntu 14.04.4 LTS
Kernel: 3.16.0-30-generic
MySQL: mysql-5.5.31-linux2.6-x86_64
Sysbench: 0.4.12
KVM: QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.22)
According to the available documentation, the writethrough I/O mode guarantees data consistency, so only the writethrough cache mode is tested for the MySQL environment.
Using the physical-machine environment with NUMA balancing enabled as the baseline, the following variables are tested in the KVM environment:
Configuration template (only the key parameters are listed):
# The MySQL server
[mysqld]
default-storage-engine = innodb

# MyISAM setup
key_buffer_size = 128M
myisam_sort_buffer_size = 64M

## global config
max_allowed_packet = 16M
max_heap_table_size = 64M
tmp_table_size = 8M
max_connections = 4000
open_files_limit = 6000
table_open_cache = 512
read_buffer_size = 2M
read_rnd_buffer_size = 4M
join_buffer_size = 256K
sort_buffer_size = 2M
thread_cache_size = 8
query_cache_size = 0
thread_concurrency = 16

# Replication Master setup
log-bin = mysql-bin
relay-log = mysqld-relay-bin
max_binlog_size = 100M
binlog_format = row
binlog_cache_size = 32K
thread_stack = 262144
auto_increment_increment = 3
auto_increment_offset = 1

# Logging
slow_query_log = 1
long_query_time = 2

# InnoDB setup
innodb_file_format = Barracuda
innodb_file_per_table
innodb_buffer_pool_size = 4096M
innodb_log_file_size = 16M
innodb_log_buffer_size = 40M
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 50
innodb_log_files_in_group = 2
innodb_io_capacity = 2000

[mysqldump]
quick
extended-insert = false
default-character-set = utf8
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[myisamchk]
key_buffer_size = 8M
sort_buffer_size = 8M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout
KVM/QEMU configuration:
<domain type='kvm'>
  <name>mysql1</name>
  <memory unit='MiB'>5120</memory>
  <currentMemory unit='MiB'>5120</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/data2/kvm/image1/mysql.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writethrough'/>
      <source file='/data2/kvm/image1/data.qcow2'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default1'/>
      <model type='virtio'/>
    </interface>
    <console type='pty'>
      <target port='0'/>
    </console>
    <graphics type='vnc' autoport='yes' sharePolicy='allow-exclusive' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
  </devices>
</domain>
The tests use the physical-machine MySQL instance as the reference.
By default, the physical-machine MySQL uses 4 GB of memory and 4 cores, with NUMA balancing disabled.
Tests
innodb_flush_method = O_DIRECT
OLTP test statistics:
    queries performed:
        read:    14000028
        write:   5000010
        other:   2000004
        total:   21000042
    transactions:        1000002 (1375.45 per sec.)
    deadlocks:           0 (0.00 per sec.)
    read/write requests: 19000038 (26133.48 per sec.)
    other operations:    2000004 (2750.89 per sec.)

Test execution summary:
    total time:                          727.0382s
    total number of events:              1000002
    total time taken by event execution: 17443.5464
    per-request statistics:
        min:                      1.78ms
        avg:                     17.44ms
        max:                   1048.03ms
        approx. 95 percentile:   32.64ms

Threads fairness:
    events (avg/stddev):         41666.7500/646.28
    execution time (avg/stddev): 726.8144/0.00
innodb_flush_method = O_DIRECT disabled (default value used)
OLTP test statistics:
    queries performed:
        read:    14000028
        write:   5000010
        other:   2000004
        total:   21000042
    transactions:        1000002 (1390.26 per sec.)
    deadlocks:           0 (0.00 per sec.)
    read/write requests: 19000038 (26414.92 per sec.)
    other operations:    2000004 (2780.52 per sec.)

Test execution summary:
    total time:                          719.2920s
    total number of events:              1000002
    total time taken by event execution: 17257.6867
    per-request statistics:
        min:                      1.78ms
        avg:                     17.26ms
        max:                   1476.86ms
        approx. 95 percentile:   32.76ms

Threads fairness:
    events (avg/stddev):         41666.7500/709.66
    execution time (avg/stddev): 719.0703/0.00
On the physical-machine MySQL instance, innodb_flush_method has a measurable effect on performance.
KVM-only benchmark, innodb_flush_method = O_DIRECT
NUMA balancing enabled; KVM cache mode set to writethrough.
KVM configuration: CPU = 4 cores, Mem = 5G, MySQL = 4G, Cache = writethrough
MySQL configuration: Mem = 4G, innodb_flush_method = O_DIRECT
sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=test --max-requests=1000000 --num-threads=24 --mysql-host=192.168.100.244 --mysql-user=test run

OLTP test statistics:
    queries performed:
        read:    14000042
        write:   5000015
        other:   2000006
        total:   21000063
    transactions:        1000003 (1041.22 per sec.)
    deadlocks:           0 (0.00 per sec.)
    read/write requests: 19000057 (19783.20 per sec.)
    other operations:    2000006 (2082.44 per sec.)

Test execution summary:
    total time:                          960.4138s
    total number of events:              1000003
    total time taken by event execution: 23044.1587
    per-request statistics:
        min:                      3.43ms
        avg:                     23.04ms
        max:                    958.60ms
        approx. 95 percentile:   43.71ms

Threads fairness:
    events (avg/stddev):         41666.7917/865.32
    execution time (avg/stddev): 960.1733/0.01
sysbench 0.4.12: multi-threaded system evaluation benchmark

OLTP test statistics:
    queries performed:
        read:    14000042
        write:   5000015
        other:   2000006
        total:   21000063
    transactions:        1000003 (1025.90 per sec.)
    deadlocks:           0 (0.00 per sec.)
    read/write requests: 19000057 (19492.01 per sec.)
    other operations:    2000006 (2051.79 per sec.)

Test execution summary:
    total time:                          974.7614s
    total number of events:              1000003
    total time taken by event execution: 23388.1224
    per-request statistics:
        min:                      3.75ms
        avg:                     23.39ms
        max:                   1306.42ms
        approx. 95 percentile:   44.38ms

Threads fairness:
    events (avg/stddev):         41666.7917/863.10
    execution time (avg/stddev): 974.5051/0.01
The benchmark reports show that with writethrough enabled for KVM, MySQL is more efficient with O_DIRECT.
Under KVM, MySQL performance is roughly 75% of the physical machine's.
(Chart: the vertical axis is total execution time.)
With writethrough enabled on the host, setting innodb_flush_method = O_DIRECT effectively improves MySQL performance,
reaching about 97% of the performance of O_DIRECT mode on the physical machine.
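As a quick sanity check, the 75% figure can be reproduced from the transactions/sec values in the sysbench reports above (a sketch; the chart uses total execution time, so the post's own percentages may differ slightly):

```shell
# Transactions/sec copied from the sysbench reports in this post.
phys_odirect=1375.45   # physical machine, innodb_flush_method = O_DIRECT
kvm_odirect=1041.22    # KVM (writethrough cache), innodb_flush_method = O_DIRECT

# KVM throughput as a percentage of the physical machine's.
awk -v p="$phys_odirect" -v k="$kvm_odirect" \
    'BEGIN { printf "KVM vs physical: %.1f%%\n", 100 * k / p }'
# prints "KVM vs physical: 75.7%"
```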
KVM-only benchmark: NUMA balancing
Host NUMA balancing disabled; KVM cache mode set to writethrough.
innodb_flush_method = O_SYNC
OLTP test statistics:
    queries performed:
        read:    14000014
        write:   5000005
        other:   2000002
        total:   21000021
    transactions:        1000001 (1068.76 per sec.)
    deadlocks:           0 (0.00 per sec.)
    read/write requests: 19000019 (20306.35 per sec.)
    other operations:    2000002 (2137.51 per sec.)

Test execution summary:
    total time:                          935.6690s
    total number of events:              1000001
    total time taken by event execution: 22450.9403
    per-request statistics:
        min:                      3.51ms
        avg:                     22.45ms
        max:                   1170.10ms
        approx. 95 percentile:   41.65ms

Threads fairness:
    events (avg/stddev):         41666.7083/855.51
    execution time (avg/stddev): 935.4558/0.01
innodb_flush_method = O_DIRECT
OLTP test statistics:
    queries performed:
        read:    14000042
        write:   5000015
        other:   2000006
        total:   21000063
    transactions:        1000003 (1062.79 per sec.)
    deadlocks:           0 (0.00 per sec.)
    read/write requests: 19000057 (20193.07 per sec.)
    other operations:    2000006 (2125.59 per sec.)

Test execution summary:
    total time:                          940.9197s
    total number of events:              1000003
    total time taken by event execution: 22577.0003
    per-request statistics:
        min:                      3.36ms
        avg:                     22.58ms
        max:                    756.58ms
        approx. 95 percentile:   41.50ms

Threads fairness:
    events (avg/stddev):         41666.7917/943.69
    execution time (avg/stddev): 940.7083/0.01
With NUMA binding enabled, performance drops by about 3%.
NUMA binding disabled
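The ~3% figure can likewise be estimated from the reported transactions/sec (a sketch; it assumes the comparison is between the NUMA-balancing-on run earlier, 1041.22 tps, and the O_SYNC run in this section, 1068.76 tps):

```shell
# tps with host NUMA balancing enabled vs disabled, from the reports above.
numa_on=1041.22
numa_off=1068.76

awk -v on="$numa_on" -v off="$numa_off" \
    'BEGIN { printf "drop with NUMA balancing on: %.1f%%\n", 100 * (1 - on / off) }'
# prints "drop with NUMA balancing on: 2.6%"
```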
During testing, cgroups were used to confine the instance's CPU usage to the assigned cores, and the load on those cores was saturated.
Our guess is that a vCPU's multiple threads may be scheduled onto different NUMA nodes, causing cross-node memory access. It is not clear whether vCPU scheduling actually behaves this way, but with NUMA disabled this cannot happen.
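One way to rule this out is to control the host's automatic NUMA balancing explicitly and pin each vCPU to a core on a single node. A sketch of the relevant host-side commands (the domain name mysql1 comes from the XML above; the core numbers are illustrative and must match your host topology):

```shell
# Check, then disable, the kernel's automatic NUMA balancing on the host
# (available since kernel 3.8; this box runs 3.16).
cat /proc/sys/kernel/numa_balancing
echo 0 | sudo tee /proc/sys/kernel/numa_balancing

# Show the host's NUMA topology, then pin each vCPU of the guest to a
# core on one node so vCPU threads cannot be spread across nodes.
numactl --hardware
virsh vcpupin mysql1 0 0
virsh vcpupin mysql1 1 1
virsh vcpupin mysql1 2 2
virsh vcpupin mysql1 3 3
virsh vcpupin mysql1   # with no vCPU argument, lists the current pinning
```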