Installing and Deploying Taobao's Distributed Key/Value Storage Engine Tair, with a Java Client Test Example

Contents


1. Introduction

2. Installation steps and problem notes

3. Deployment and configuration

4. Java client test

5. References


Notes


1. The installation and deployment below are based on a Linux environment: CentOS 6 (64-bit); other Linux distributions may differ slightly.

2. Some people online claim that Tair installation failures are caused by the gcc version, i.e. that a newer gcc may not support certain features and therefore breaks the build. My experiments show this is wrong: a failed Tair build can have many causes, but the gcc version is not one of them. My gcc was originally 4.4.7; after the build failed I rebuilt with an older gcc (4.1.2) and hit exactly the same problem. It turned out to be something else entirely, and after fixing it the build succeeded again with gcc 4.4.7.

3. Parts of the content below are based on the official Tair documentation. Please credit the original article when reposting.


Main Text


1. Introduction


Tair is a distributed key/value storage engine developed at Taobao. It can be used in two modes: persistent and non-persistent. Non-persistent Tair acts as a distributed cache, while persistent Tair stores data on disk. To guard against data loss from disk failure, Tair can be configured with a number of replicas per record; it automatically places the replicas on different hosts, and when a host fails and can no longer serve requests, the remaining replicas continue to serve.


2. Installation Steps and Problem Notes


2.1 Installation steps

Tair is built on top of the tbsys and tbnet libraries, so these two dependencies must be installed before Tair itself.



2.1.1 Getting the source code

The source code is fetched over svn; the svn client can be installed with sudo yum install subversion.

  1. svn checkout http://code.taobao.org/svn/tb-common-utils/trunk/ tb-common-utils # tbsys and tbnet source
  2. svn checkout http://code.taobao.org/svn/tair/trunk/ tair # tair source

2.1.2 Installing dependencies

Before compiling tair or tbnet/tbsys, a few build dependencies need to be installed.
It is best to check first whether they are already present; on an RPM-based OS you can use rpm -q <package> to check whether a package or library is installed (a combined check is shown after the install commands below).

a. Install libtool

sudo yum install libtool # also pulls in automake and autoconf, which libtool depends on

b. Install the boost-devel library

sudo yum install boost-devel

c. Install the zlib library

sudo yum install zlib-devel
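To confirm in one shot that the three libraries above (plus the C++ compiler needed later, see section 2.2.1) are in place, here is a quick sketch of the rpm query mentioned above:

rpm -q libtool boost-devel zlib-devel gcc-c++   # each line should print a version, not "is not installed"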

2.1.3 Compiling and installing tbsys and tbnet

Tair depends on the tbsys and tbnet libraries, so these two must be compiled and installed first.

a. Set the TBLIB_ROOT environment variable
After getting the source, set the environment variable TBLIB_ROOT to the directory you want to install into. This variable is used again later when compiling Tair itself.
For example, to install into the lib directory of the current user, set export TBLIB_ROOT="~/lib".

b. Install
Enter the source directory and run build.sh to build and install (a combined sketch follows).
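Putting the two steps together, the sequence looks roughly like this (a sketch based on my setup; the tb-common-utils directory comes from the svn checkout in 2.1.1, and I use $HOME/lib instead of the literal "~/lib" because the tilde does not expand inside double quotes):

export TBLIB_ROOT="$HOME/lib"   # install prefix for tbsys/tbnet, reused later when building tair
cd tb-common-utils              # directory created by the svn checkout in 2.1.1
sh build.sh                     # builds and installs both tbsys and tbnet into $TBLIB_ROOT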

2.1.4 Compiling and installing Tair

Enter the tair source directory and build in the following order:

./bootstrap.sh
./configure   # note: --with-boost=xxxx specifies the boost directory, --with-release=yes builds a release version
make
make install

After a successful install, a tair_bin directory is created under the current user's home directory; this is Tair's installation directory (a quick check is sketched below).
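As a sanity check, the freshly created installation directory should contain roughly the layout used throughout the rest of this article (a sketch; the exact set of subdirectories can vary between Tair versions):

ls ~/tair_bin        # expect something like: etc/  lib/  sbin/  scripts/ ...
ls ~/tair_bin/etc    # the sample config files copied in section 3:
                     # configserver.conf.default  dataserver.conf.default  group.conf.default ...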


2.2 Problem notes

The installation was not entirely smooth; quite a few problems came up along the way, and they are briefly recorded here for reference.

2.2.1 g++ not installed

checking for C++ compiler default output file name...
configure: error: in `/home/config_server/tair/tb-common-utils/tbnet':
configure: error: C++ compiler cannot create executables
See `config.log' for more details.
make: *** No targets specified and no makefile found. Stop.
make: *** No rule to make target `install'. Stop.
This means gcc is installed but g++ is not. Tair is written in C++, so it must be compiled with g++; installing it with sudo yum install gcc-c++ fixes the problem.

2.2.2 Wrong header file paths


In file included from channel.cpp:16:
tbnet.h:39:19: error: tbsys.h: No such file or directory
databuffer.h: In member function 'void tbnet::DataBuffer::expand(int)':
databuffer.h:429: error: 'ERROR' was not declared in this scope
databuffer.h:429: error: 'TBSYS_LOG' was not declared in this scope
socket.h: At global scope:
socket.h:191: error: 'tbsys' has not been declared
socket.h:191: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
socket.h:191: error: expected ';' before '_dnsMutex'
channelpool.h:85: error: 'tbsys' has not been declared
channelpool.h:85: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
channelpool.h:85: error: expected ';' before '_mutex'
channelpool.h:93: error: 'atomic_t' does not name a type
channelpool.h:94: error: 'atomic_t' does not name a type
connection.h:164: error: 'tbsys' has not been declared
connection.h:164: error: ISO C++ forbids declaration of 'CThreadCond' with no type
connection.h:164: error: expected ';' before '_outputCond'
iocomponent.h:184: error: 'atomic_t' does not name a type
iocomponent.h: In member function 'int tbnet::IOComponent::addRef()':
iocomponent.h:108: error: '_refcount' was not declared in this scope
iocomponent.h:108: error: 'atomic_add_return' was not declared in this scope
iocomponent.h: In member function 'void tbnet::IOComponent::subRef()':
iocomponent.h:115: error: '_refcount' was not declared in this scope
iocomponent.h:115: error: 'atomic_dec' was not declared in this scope
iocomponent.h: In member function 'int tbnet::IOComponent::getRef()':
iocomponent.h:122: error: '_refcount' was not declared in this scope
iocomponent.h:122: error: 'atomic_read' was not declared in this scope
transport.h: At global scope:
transport.h:23: error: 'tbsys' has not been declared
transport.h:23: error: expected `{' before 'Runnable'
transport.h:23: error: invalid function declaration
packetqueuethread.h:28: error: 'tbsys' has not been declared
packetqueuethread.h:28: error: expected `{' before 'CDefaultRunnable'
packetqueuethread.h:28: error: invalid function declaration
connectionmanager.h:93: error: 'tbsys' has not been declared
connectionmanager.h:93: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
connectionmanager.h:93: error: expected ';' before '_mutex'
make[1]: *** [channel.lo] Error 1
make[1]: Leaving directory `/home/tair/tair/tb-common-utils/tbnet/src'
make: *** [install-recursive] Error 1

This happened even though tbsys had already been installed in ~/lib. The cause is that tbnet and tbsys live in two separate directories, but their source files include each other's headers without absolute or relative paths. Adding both source directories to the C++ include-path environment variable fixes it:

CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/home/tair/tair/tb-common-utils/tbsys/src:/home/tair/tair/tb-common-utils/tbnet/src
export CPLUS_INCLUDE_PATH


3. Deployment and Configuration

A running Tair cluster needs at least one config server and one data server; the recommended setup is two config servers (one master, one slave) and multiple data servers.
Tair has three configuration files, for the config server, the data server, and the group information respectively. The etc directory under the tair_bin installation directory contains sample versions of all three; copy them to create the files we need:

cp configserver.conf.default configserver.conf
cp dataserver.conf.default dataserver.conf
cp group.conf.default group.conf

My deployment environment: one config server (10.10.7.144) and one data server (10.10.7.146), the addresses used in the configuration files below.

Before configuring, please consult the detailed explanation of the configuration fields given on the official site. Below I paste my own configuration directly, with brief notes.



3.1 Configuring the config server

#
# tair 2.3 --- configserver config
#
[public]
config_server=10.10.7.144:51980
config_server=10.10.7.144:51980

[configserver]
port=51980
log_file=/home/dataserver1/tair_bin/logs/config.log
pid_file=/home/dataserver1/tair_bin/logs/config.pid
log_level=warn
group_file=/home/dataserver1/tair_bin/etc/group.conf
data_dir=/home/dataserver1/tair_bin/data/data
dev_name=venet0:0
Notes:

(1) First configure the config server address and port. The port can stay at its default; change the address to your own. Normally there are two config servers, one master and one backup, but since this is only a test I use a single one.

(2) For log_file, pid_file and similar paths, use absolute paths. The defaults are relative paths, and incorrect ones at that (they do not go back up to the parent directory), so they need to be changed. The data and log files matter a great deal: the data files are indispensable, and the log file is what gives you the detailed cause when deployment goes wrong.

(3) dev_name is very important: it must be set to the name of the network interface you actually use; the default is eth0. I changed it to match my own network setup (use ifconfig to list interface names, as sketched below).

3.2 Configuring the data server

#
#  tair 2.3 --- tairserver config
#
[public]
config_server=10.10.7.144:51980
config_server=10.10.7.144:51980

[tairserver]
#
#storage_engine:
#
# mdb
# kdb
# ldb
#
storage_engine=ldb
local_mode=0
#
#mdb_type:
# mdb
# mdb_shm
#
mdb_type=mdb_shm
#
# if you just run 1 tairserver on a computer, you may ignore this option.
# if you want to run more than 1 tairserver on a computer, each tairserver must have their own "mdb_shm_path"
#
#
mdb_shm_path=/mdb_shm_path01
#tairserver listen port
port=51910
heartbeat_port=55910
process_thread_num=16
#
#mdb size in MB
#
slab_mem_size=1024
log_file=/home/dataserver1/tair_bin/logs/server.log
pid_file=/home/dataserver1/tair_bin/logs/server.pid
log_level=warn
dev_name=venet0:0
ulog_dir=/home/dataserver1/tair_bin/data/ulog
ulog_file_number=3
ulog_file_size=64
check_expired_hour_range=2-4
check_slab_hour_range=5-7
dup_sync=1
do_rsync=0
# much resemble json format
# one local cluster config and one or multi remote cluster config.
# {local:[master_cs_addr,slave_cs_addr,group_name,timeout_ms,queue_limit],remote:[...],remote:[...]}
rsync_conf={local:[10.0.0.1:5198,10.0.0.2:5198,group_local,2000,1000],remote:[10.0.1.1:5198,10.0.1.2:5198,group_remote,2000,3000]}
# if same data can be updated in local and remote cluster, then we need care modify time to
# reserve latest update when do rsync to each other.
rsync_mtime_care=0
# rsync data directory(retry_log/fail_log..)
rsync_data_dir=/home/dataserver1/tair_bin/data/remote
# max log file size to record failed rsync data, rotate to a new file when over the limit
rsync_fail_log_size=30000000
# whether do retry when rsync failed at first time
rsync_do_retry=0
# when doing retry, size limit of retry log's memory use
rsync_retry_log_mem_size=100000000

[fdb]
# in MB
index_mmap_size=30
cache_size=256
bucket_size=10223
free_block_pool_size=8
data_dir=/home/dataserver1/tair_bin/data/fdb
fdb_name=tair_fdb

[kdb]
# in byte
map_size=10485760      # the size of the internal memory-mapped region
bucket_size=1048583    # the number of buckets of the hash table
record_align=128       # the power of the alignment of record size
data_dir=/home/dataserver1/tair_bin/data/kdb      # the directory of kdb's data

[ldb]
#### ldb manager config
## data dir prefix, db path will be data/ldbxx, "xx" means db instance index.
## so if ldb_db_instance_count = 2, then leveldb will init in
## /data/ldb1/ldb/, /data/ldb2/ldb/. We can mount each disk to
## data/ldb1, data/ldb2, so we can init each instance on each disk.
data_dir=/home/dataserver1/tair_bin/data/ldb
## leveldb instance count, buckets will be well-distributed to instances
ldb_db_instance_count=1
## whether load backup version when startup.
## backup version may be created to maintain some db data of specifid version.
ldb_load_backup_version=0
## whether support version strategy.
## if yes, put will do get operation to update existed items's meta info(version .etc),
## get unexist item is expensive for leveldb. set 0 to disable if nobody even care version stuff.
ldb_db_version_care=1
## time range to compact for gc, 1-1 means do no compaction at all
ldb_compact_gc_range = 3-6
## backgroud task check compact interval (s)
ldb_check_compact_interval = 120
## use cache count, 0 means NOT use cache,`ldb_use_cache_count should NOT be larger
## than `ldb_db_instance_count, and better to be a factor of `ldb_db_instance_count.
## each cache mdb's config depends on mdb's config item(mdb_type, slab_mem_size, etc)
ldb_use_cache_count=1
## cache stat can't report configserver, record stat locally, stat file size.
## file will be rotate when file size is over this.
ldb_cache_stat_file_size=20971520
## migrate item batch size one time (1M)
ldb_migrate_batch_size = 3145728
## migrate item batch count.
## real batch migrate items depends on the smaller size/count
ldb_migrate_batch_count = 5000
## comparator_type bitcmp by default
# ldb_comparator_type=numeric
## numeric comparator: special compare method for user_key sorting in order to reducing compact
## parameters for numeric compare. format: [meta][prefix][delimiter][number][suffix]
## skip meta size in compare
# ldb_userkey_skip_meta_size=2
## delimiter between prefix and number
# ldb_userkey_num_delimiter=:
####
## use blommfilter
ldb_use_bloomfilter=1
## use mmap to speed up random acess file(sstable),may cost much memory
ldb_use_mmap_random_access=0
## how many highest levels to limit compaction
ldb_limit_compact_level_count=0
## limit compaction ratio: allow doing one compaction every ldb_limit_compact_interval
## 0 means limit all compaction
ldb_limit_compact_count_interval=0
## limit compaction time interval
## 0 means limit all compaction
ldb_limit_compact_time_interval=0
## limit compaction time range, start == end means doing limit the whole day.
ldb_limit_compact_time_range=6-1
## limit delete obsolete files when finishing one compaction
ldb_limit_delete_obsolete_file_interval=5
## whether trigger compaction by seek
ldb_do_seek_compaction=0
## whether split mmt when compaction with user-define logic(bucket range, eg)
ldb_do_split_mmt_compaction=0

#### following config effects on FastDump ####
## when ldb_db_instance_count > 1, bucket will be sharded to instance base on config strategy.
## current supported:
##  hash : just do integer hash to bucket number then module to instance, instance's balance may be
##         not perfect in small buckets set. same bucket will be sharded to same instance
##         all the time, so data will be reused even if buckets owned by server changed(maybe cluster has changed),
##  map  : handle to get better balance among all instances. same bucket may be sharded to different instance based
##         on different buckets set(data will be migrated among instances).
ldb_bucket_index_to_instance_strategy=map
## bucket index can be updated. this is useful if the cluster wouldn't change once started
## even server down/up accidently.
ldb_bucket_index_can_update=1
## strategy map will save bucket index statistics into file, this is the file's directory
ldb_bucket_index_file_dir=/home/dataserver1/tair_bin/data/bindex
## memory usage for memtable sharded by bucket when batch-put(especially for FastDump)
ldb_max_mem_usage_for_memtable=3221225472
####

#### leveldb config (Warning: you should know what you're doing.)
## one leveldb instance max open files(actually table_cache_ capacity, consider as working set, see `ldb_table_cache_size)
ldb_max_open_files=655
## whether return fail when occure fail when init/load db, and
## if true, read data when compactiong will verify checksum
ldb_paranoid_check=0
## memtable size
ldb_write_buffer_size=67108864
## sstable size
ldb_target_file_size=8388608
## max file size in each level. level-n (n > 0): (n - 1) * 10 * ldb_base_level_size
ldb_base_level_size=134217728
## sstable's block size
# ldb_block_size=4096
## sstable cache size (override `ldb_max_open_files)
ldb_table_cache_size=1073741824
##block cache size
ldb_block_cache_size=16777216
## arena used by memtable, arena block size
#ldb_arenablock_size=4096
## key is prefix-compressed period in block,
## this is period length(how many keys will be prefix-compressed period)
# ldb_block_restart_interval=16
## specifid compression method (snappy only now)
# ldb_compression=1
## compact when sstables count in level-0 is over this trigger
ldb_l0_compaction_trigger=1
## write will slow down when sstables count in level-0 is over this trigger
## or sstables' filesize in level-0 is over trigger * ldb_write_buffer_size if ldb_l0_limit_write_with_count=0
ldb_l0_slowdown_write_trigger=32
## write will stop(wait until trigger down)
ldb_l0_stop_write_trigger=64
## when write memtable, max level to below maybe
ldb_max_memcompact_level=3
## read verify checksum
ldb_read_verify_checksums=0
## write sync log. (one write will sync log once, expensive)
ldb_write_sync=0
## bits per key when use bloom filter
#ldb_bloomfilter_bits_per_key=10
## filter data base logarithm. filterbasesize=1<<ldb_filter_base_logarithm
#ldb_filter_base_logarithm=12

This configuration file is long. I only changed a handful of values (highlighted in red in the original post); everything else stays at its default. Specifically:

(1) The config_server entries must be exactly the same as in the previous file.

(2) port and heartbeat_port are the data server's service port and heartbeat port. Make sure the system actually lets you use them; the defaults are usually fine. I changed them only because my Linux environment restricts me to ports above 30000, so adjust according to your own situation (a quick check is sketched after these notes).

(3) The data and log files are, as before, very important; use absolute paths.

3.3 Configuring the group information

#group name
[group_1]
# data move is 1 means when some data serve down, the migrating will be start.
# default value is 0
_data_move=0
#_min_data_server_count: when data servers left in a group less than this value, config server will stop serve for this group
#default value is copy count.
_min_data_server_count=1
#_plugIns_list=libStaticPlugIn.so
_build_strategy=1 #1 normal 2 rack
_build_diff_ratio=0.6 #how much difference is allowd between different rack
# diff_ratio =  |data_sever_count_in_rack1 - data_server_count_in_rack2| / max (data_sever_count_in_rack1, data_server_count_in_rack2)
# diff_ration must less than _build_diff_ratio
_pos_mask=65535  # 65535 is 0xffff  this will be used to gernerate rack info. 64 bit serverId & _pos_mask is the rack info,
_copy_count=1
_bucket_number=1023
# accept ds strategy. 1 means accept ds automatically
_accept_strategy=1
# data center A
_server_list=10.10.7.146:51910
#_server_list=192.168.1.2:5191
#_server_list=192.168.1.3:5191
#_server_list=192.168.1.4:5191
# data center B
#_server_list=192.168.2.1:5191
#_server_list=192.168.2.2:5191
#_server_list=192.168.2.3:5191
#_server_list=192.168.2.4:5191
#quota info
_areaCapacity_list=0,1124000;

In this file I only configured the data server list; since I have a single data server, only one entry is needed.


3.4 Starting the cluster

Once installation and configuration are complete, the cluster can be started. Start the data servers first and then the config servers. If you are adding a data server to an existing cluster, you can start the dataserver process first and then edit group.conf; if you edit group.conf before starting the process, you need to run touch group.conf afterwards. The scripts directory contains a helper script, tair.sh: tair.sh start_ds starts a data server and tair.sh start_cs starts a config server. The script is fairly simple and expects the configuration files in fixed locations with fixed names. Alternatively, you can start the cluster directly with tair_server (data server) and tair_cfg_svr (config server) under sbin in the installation directory.


After entering the tair_bin directory, start the processes in order:

sudo sbin/tair_server -f etc/dataserver.conf     # run on the data server host
sudo sbin/tair_cfg_svr -f etc/configserver.conf  # run on the config server host

After issuing the start commands, check on both hosts with ps aux | grep tair whether the processes are running. Getting them running is only the first step; you still need to test whether the cluster really works, using the following:

sudo sbin/tairclient -c 10.10.7.144:51980 -g group_1
TAIR> put k1 v1
put: success
TAIR> put k2 v2
put: success
TAIR> get k2
KEY: k2, LEN: 2

Here 10.10.7.144:51980 is the config server IP:PORT and group_1 is the group name configured in group.conf. (A compact health check is sketched below.)
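For completeness, the health check I run on each host after starting the daemons (a sketch; run from the tair_bin directory, and tail whichever log exists on that host):

ps aux | grep -v grep | grep tair     # expect tair_server on the data server host, tair_cfg_svr on the config server host
tail -n 20 logs/server.log            # data server host: startup errors show up here
tail -n 20 logs/config.log            # config server host: likewise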


3.5 Errors recorded during deployment

If startup fails, or put/get in the test above misbehaves, check the log files: logs/config.log on the config server side and logs/server.log on the data server side contain the detailed error messages.


3.5.1 Too many open files

[2014-07-09 10:37:24.863119] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001013.stat] failed: Too many open files
[2014-07-09 10:37:24.863132] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001014.stat] failed: Too many open files
[2014-07-09 10:37:24.863145] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001015.stat] failed: Too many open files
[2014-07-09 10:37:24.863154] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001016.stat] failed: Too many open files
[2014-07-09 10:37:24.863162] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001017.stat] failed: Too many open files
I chose ldb as the storage engine, and ldb has a setting ldb_max_open_files=65535, i.e. it may open up to 65535 files by default, but my system does not allow that many. You can check the maximum number of files a running process may open with ulimit -n; it is usually 1024, far below 65535. There are two ways to fix this: either lower ldb_max_open_files below 1024, or raise the system limit (reference 3 below describes how; a sketch follows). Since this was only a test deployment, I simply lowered ldb_max_open_files.
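If you prefer the second option, raising the per-process open file limit, the usual CentOS 6 procedure looks roughly like this (a sketch; editing /etc/security/limits.conf requires root, and the new limit only applies to sessions started after the change):

ulimit -n                                  # current soft limit, typically 1024
ulimit -n 65535                            # raise it for the current shell only (cannot exceed the hard limit)

# make it permanent for all users (run as root), then log in again:
cat >> /etc/security/limits.conf <<'EOF'
*    soft    nofile    65535
*    hard    nofile    65535
EOF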


3.5.2 Data server problems

A misconfigured dataserver produces all kinds of errors; here are a few I ran into.
Problem 1:

TAIR> put abc a 
put: unknow 
TAIR> put a 11 
put: unknow 
TAIR> put abc 33 
put: unknow 
TAIR> get a 
get failed: data not exists.

Problem 2:

ERROR wakeup_wait_object (../../src/common/wait_object.hpp:302) [140627106383616] [3] packet is null
Both are cases where the dataserver starts up at first but crashes immediately once put/get is used. When that happens, read the log for the specific error and fix the offending configuration.

There is also this kind of error message:

[2014-07-09 09:08:11.646430] ERROR rebuild (group_info.cpp:879) [139740048353024] can not get enough data servers. need 1 lef 0
This means the config server could not find any data server at startup: the data server must be started successfully before the config server is started (a sketch of the order follows).
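The start order, with a check in between, is roughly as follows (a sketch; commands run from the tair_bin directory on each host, same as in section 3.4):

# 1) on the data server host, start the data server first
sudo sbin/tair_server -f etc/dataserver.conf
ps aux | grep -v grep | grep tair_server          # confirm it is still alive before continuing

# 2) only then, on the config server host, start the config server
sudo sbin/tair_cfg_svr -f etc/configserver.conf
tail -n 20 logs/config.log                        # "can not get enough data servers" should no longer appear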


3.5.3 Port problems

start tair_cfg_srv listen port 5199 error

Sometimes the default port numbers simply will not work and you have to pick ports within whatever your system allows. For example, my environment only lets ordinary users use ports above 30000, so I could not use the defaults and just changed them.


4. Java Client Test

Tair is a distributed key/value storage system, so data is usually spread across multiple data nodes.

The client has to determine which node stores a given piece of data before it can carry out any operation.

The Tair client obtains this information by talking to the config server, which maintains a table mapping hash values to the nodes that store the corresponding data.

At startup, the client first contacts the config server to fetch this table.

Once it has the table, the client is ready to serve requests: it hashes the requested key, looks up the responsible data node in the table, and then talks to that node directly to complete the operation.


Tair currently ships clients for Java and C++. The Java client is already available (the jar can be downloaded from here) and we can simply call its API; a C++ client implementation was not yet available at the time of writing (you would have to build it yourself).

Here a simple Java client is used for testing.


4.1 Required jars

Besides the packaged Tair client jar itself, the Java test program needs a few jars that Tair depends on, specifically the following (not necessarily these exact versions; a compile/run sketch follows the list):

commons-logging-1.1.3.jar
slf4j-api-1.7.7.jar
slf4j-log4j12-1.7.7.jar
log4j-1.2.17.jar
mina-core-1.1.7.jar
tair-client-2.3.1.jar
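For reference, compiling and running the test class from the command line on Linux (which is what the Makefile mentioned at the end of the article does) looks roughly like this; the lib/ directory layout is my assumption, so adjust the paths and versions to whatever jars you actually have:

CP="lib/tair-client-2.3.1.jar:lib/mina-core-1.1.7.jar:lib/commons-logging-1.1.3.jar"
CP="$CP:lib/slf4j-api-1.7.7.jar:lib/slf4j-log4j12-1.7.7.jar:lib/log4j-1.2.17.jar"

javac -cp "$CP" tair/client/TairClientTest.java   # compile against the Tair client and its dependencies
java -cp "$CP:." tair.client.TairClientTest       # run; the config server must be reachable from this host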

4.2 Java client program


First see the Java client API description in the Tair user guide; the example below is given directly and is easy to follow.


package tair.client;

import java.util.ArrayList;
import java.util.List;

import com.taobao.tair.DataEntry;
import com.taobao.tair.Result;
import com.taobao.tair.ResultCode;
import com.taobao.tair.impl.DefaultTairManager;

/**
 * @author WangJianmin
 * @date 2014-7-9
 * @description Java-client test application for tair.
 *
 */
public class TairClientTest {

	public static void main(String[] args) {

		// build the config server list
		List<String> confServers = new ArrayList<String>();
		confServers.add("10.10.7.144:51980");
	//	confServers.add("10.10.7.144:51980"); // optional standby config server

		// create the client instance
		DefaultTairManager tairManager = new DefaultTairManager();
		tairManager.setConfigServerList(confServers);

		// set the group name
		tairManager.setGroupName("group_1");
		// initialize the client
		tairManager.init();

		// put 10 items
		for (int i = 0; i < 10; i++) {
			// args: namespace, key, value, version, expire time (seconds)
			ResultCode result = tairManager.put(0, "k" + i, "v" + i, 0, 10);
			System.out.println("put k" + i + ":" + result.isSuccess());
			if (!result.isSuccess())
				break;
		}

		// get one
		// args: namespace, key
		Result<DataEntry> result = tairManager.get(0, "k3");
		System.out.println("get:" + result.isSuccess());
		if (result.isSuccess()) {
			DataEntry entry = result.getValue();
			if (entry != null) {
				// the key exists
				System.out.println("value is " + entry.getValue().toString());
			} else {
				// the key does not exist
				System.out.println("this key doesn't exist.");
			}
		} else {
			// error handling
			System.out.println(result.getRc().getMessage());
		}

	}

}

Run output:

log4j:WARN No appenders could be found for logger (com.taobao.tair.impl.ConfigServer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
put k0:true
put k1:true
put k2:true
put k3:true
put k4:true
put k5:true
put k6:true
put k7:true
put k8:true
put k9:true
get:true
value is v3

Note: if the test is not run on the config server or data server itself, make sure the test machine can communicate with both the config server and the data server (i.e. they can ping each other); otherwise you are likely to see an error like the following:

Exception in thread "main" java.lang.RuntimeException: init config failed
 at com.taobao.tair.impl.DefaultTairManager.init(DefaultTairManager.java:80)
 at tair.client.TairClientTest.main(TairClientTest.java:27)

I have packaged the example program, the required jars, and a Makefile (I tested on Linux without using Eclipse); they can be downloaded from here if needed.



5. References


1. TAIR home page

2. Tair user guide

3. Solving the "Too many open files" problem
