The test Cassandra cluster uses vnodes. How can you tell whether vnodes are enabled? Check the cassandra.yaml configuration file.
By default (in 3.x) initial_token is left empty and tokens are generated automatically; an empty value means virtual nodes are in use, which is the default. With vnodes enabled, the cluster rebalances data by itself after a node is removed, so no manual intervention is needed.
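A minimal sketch of the relevant cassandra.yaml fragment (the values shown are the usual 3.x defaults, not copied from this cluster's actual file):

```yaml
# vnodes enabled: num_tokens is set, initial_token is left empty
num_tokens: 256
# initial_token:      # leave unset/empty when using vnodes
```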
Create a keyspace named kevin_test using the SimpleStrategy replication class, with 3 replicas across the cluster and commit-log writes (durable_writes) enabled.
cassandra@cqlsh> create keyspace kevin_test with replication = {'class':'SimpleStrategy','replication_factor':3} and durable_writes = true;
CREATE TABLE t_users ( user_id text PRIMARY KEY, first_name text, last_name text, emails set<text> );
BEGIN BATCH
  INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('0', 'kevin0', 'kang', {'k0@pt.com', 'k0-0@gmail.com'});
  INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('1', 'kevin1', 'kang', {'k1@pt.com', 'k1-1@gmail.com'});
  INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('2', 'kevin2', 'kang', {'k2@pt.com', 'k2-2@gmail.com'});
  INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('3', 'kevin3', 'kang', {'k3@pt.com', 'k3-3@gmail.com'});
  INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('4', 'kevin4', 'kang', {'k4@pt.com', 'k4-4@gmail.com'});
  INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('5', 'kevin5', 'kang', {'k5@pt.com', 'k5-5@gmail.com'});
  INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('6', 'kevin6', 'kang', {'k6@pt.com', 'k6-6@gmail.com'});
  INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('7', 'kevin7', 'kang', {'k7@pt.com', 'k7-7@gmail.com'});
  INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('8', 'kevin8', 'kang', {'k8@pt.com', 'k8-8@gmail.com'});
  INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('9', 'kevin9', 'kang', {'k9@pt.com', 'k9-9@gmail.com'});
APPLY BATCH;
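The same ten INSERTs can be generated with a small shell loop instead of being typed by hand (a sketch; the batch.cql file name and the cqlsh invocation are illustrative, and the emails follow the kN@pt.com / kN-N@gmail.com pattern used above):

```shell
# Generate the BATCH statement for user_id 0..9 into batch.cql.
{
  echo "BEGIN BATCH"
  for i in $(seq 0 9); do
    echo "  INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('$i', 'kevin$i', 'kang', {'k$i@pt.com', 'k$i-$i@gmail.com'});"
  done
  echo "APPLY BATCH;"
} > batch.cql
# cqlsh -k kevin_test -f batch.cql   # run it against the cluster
```

Multi-partition batches like this are fine for a small test fixture, but they are not a recommended bulk-loading technique in Cassandra.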
cassandra@cqlsh:kevin_test> SELECT * from t_users;

 user_id | emails                          | first_name | last_name
---------+---------------------------------+------------+-----------
       6 | {'k6-6@gmail.com', 'k6@pt.com'} | kevin6     | kang
       7 | {'k7-7@gmail.com', 'k7@pt.com'} | kevin7     | kang
       9 | {'k9-9@gmail.com', 'k9@pt.com'} | kevin9     | kang
       4 | {'k4-4@gmail.com', 'k4@pt.com'} | kevin4     | kang
       3 | {'k3-3@gmail.com', 'k3@pt.com'} | kevin3     | kang
       5 | {'k5-5@gmail.com', 'k5@pt.com'} | kevin5     | kang
       0 | {'k0-0@gmail.com', 'k0@pt.com'} | kevin0     | kang
       8 | {'k8-8@gmail.com', 'k8@pt.com'} | kevin8     | kang
       2 | {'k2-2@gmail.com', 'k2@pt.com'} | kevin2     | kang
       1 | {'k1-1@gmail.com', 'k1@pt.com'} | kevin1     | kang
[root@kubm-03 ~]# nodetool cfstats kevin_test.t_users
Total number of tables: 41
----------------
Keyspace : kevin_test
	Read Count: 0
	Read Latency: NaN ms
	Write Count: 6
	Write Latency: 0.116 ms
	Pending Flushes: 0
		Table: t_users
		Number of partitions (estimate): 5
		Memtable cell count: 6
		Memtable data size: 828
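To compare write counts before and after the decommission, the relevant line can be pulled out of the cfstats output with awk (a sketch; sample output is inlined here so the snippet is self-contained — in practice, pipe nodetool cfstats into it):

```shell
# Extract the table's Write Count from (sample) nodetool cfstats output.
cfstats_sample='Keyspace : kevin_test
	Read Count: 0
	Write Count: 6
		Table: t_users
		Number of partitions (estimate): 5'
writes=$(printf '%s\n' "$cfstats_sample" | awk -F': ' '/Write Count/ {print $2; exit}')
echo "$writes"
```

In live use, replace the inlined sample with `nodetool cfstats kevin_test.t_users`.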
The table statistics above can be used later in the test to confirm whether any data was lost.
[root@kubm-03 ~]# nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Tokens  Owns  Host ID                               Rack
UN  172.20.101.164  56.64 MiB  256     ?     dcbbad83-fe7c-4580-ade7-aa763b8d2c40  rack1
UN  172.20.101.165  55.44 MiB  256     ?     cefe8a3b-918f-463b-8c7d-faab0b9351f9  rack1
UN  172.20.101.166  73.96 MiB  256     ?     88e16e35-50dd-4ee3-aa1a-f10a8c61a3eb  rack1
UN  172.20.101.167  55.43 MiB  256     ?     8808aaf7-690c-4f0c-be9b-ce655c1464d4  rack1
UN  172.20.101.160  54.4 MiB   256     ?     57cc39fc-e47b-4c96-b9b0-b004f2b79242  rack1
UN  172.20.101.157  56.05 MiB  256     ?     091ff0dc-415b-48a7-b4ce-e70c84bbfafc  rack1
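A quick way to count healthy (UN, Up/Normal) nodes before and after the removal is to grep the status output (a sketch with sample lines inlined; in practice, pipe nodetool status into it):

```shell
# Count lines whose status column is UN (Up/Normal) in (sample) nodetool status output.
status_sample='UN  172.20.101.164  56.64 MiB  256  ?  dcbbad83-fe7c-4580-ade7-aa763b8d2c40  rack1
UN  172.20.101.165  55.44 MiB  256  ?  cefe8a3b-918f-463b-8c7d-faab0b9351f9  rack1
DN  172.20.101.166  73.96 MiB  256  ?  88e16e35-50dd-4ee3-aa1a-f10a8c61a3eb  rack1'
up_count=$(printf '%s\n' "$status_sample" | grep -c '^UN')
echo "$up_count"
```

With the six-node cluster above, this would print 6 before the decommission and 5 after it completes.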
All nodes are up and running normally. To shrink the cluster, this test takes 172.20.101.165 offline.
Execute on the machine to be removed (172.20.101.165):
nodetool decommission (run on the live node itself; it streams the node's data to the remaining replicas), or nodetool removenode <host-id> (run from another node when the node to remove is already down).
The cluster state can be checked with nodetool status; once the node's data has finished streaming to the remaining replicas, the decommissioned node disappears from the cluster list.
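A small polling helper can wait until the node has vanished from the status list (a sketch; node_gone and the 30-second interval are hypothetical names/choices, not a Cassandra tool):

```shell
# Returns success (0) once the given IP no longer appears in the status output.
# Note: grep treats the dots in the IP as regex wildcards; exact enough for a sketch.
node_gone() {
  ! printf '%s\n' "$1" | grep -q "$2"
}
# Poll the live cluster until 172.20.101.165 is gone:
# while ! node_gone "$(nodetool status)" 172.20.101.165; do sleep 30; done
```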
[root@kubm-03 ~]# /etc/init.d/cassandra status
● cassandra.service - LSB: distributed storage system for structured data
   Loaded: loaded (/etc/rc.d/init.d/cassandra; bad; vendor preset: disabled)
   Active: active (running) since Tue 2019-07-09 11:29:25 CST; 2 days ago
Jul 09 11:29:25 kubm-03 cassandra[8495]: Starting Cassandra: OK
Jul 09 11:29:25 kubm-03 systemd[1]: Started LSB: distributed storage system for structured data.
/etc/init.d/cassandra restart
INFO  [main] 2019-07-11 16:44:49,765 StorageService.java:639 - CQL supported versions: 3.4.4 (default: 3.4.4)
INFO  [main] 2019-07-11 16:44:49,765 StorageService.java:641 - Native protocol supported versions: 3/v3, 4/v4, 5/v5-beta (default: 4/v4)
INFO  [main] 2019-07-11 16:44:49,816 IndexSummaryManager.java:80 - Initializing index summary manager with a memory pool size of 198 MB and a resize interval of 60 minutes
This node was decommissioned and will not rejoin the ring unless cassandra.override_decommission=true has been set, or all existing data is removed and the node is bootstrapped again
Fatal configuration error; unable to start server.  See log for stacktrace.
ERROR [main] 2019-07-11 16:44:49,823 CassandraDaemon.java:749 - Fatal configuration error   # the node has been decommissioned
org.apache.cassandra.exceptions.ConfigurationException: This node was decommissioned and will not rejoin the ring unless cassandra.override_decommission=true has been set, or all existing data is removed and the node is bootstrapped again
The node has been decommissioned and cannot rejoin the cluster. To bring it back, either set cassandra.override_decommission=true in the startup options, or delete all data on the node and restart the service so that it bootstraps again.
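A sketch of the first option, assuming a package-style install where JVM options are collected in cassandra-env.sh (the exact path varies by installation):

```shell
# Append the override flag to the JVM options picked up at startup.
# Caution: letting a decommissioned node rejoin with its old data can
# resurrect stale rows; a clean bootstrap is usually the safer route.
JVM_OPTS="$JVM_OPTS -Dcassandra.override_decommission=true"
```

This line would go at the end of cassandra-env.sh before restarting the service.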