Reposts of this article must credit the original source. Please respect the author's work! http://www.cnblogs.com/lyongerr/p/5040071.html
Table of contents
Document control
1 Introduction to mcrouter
3.1 Build environment
3.3 Installing the build dependencies
3.4 Building gcc 4.9
3.10 Building folly (Facebook Open-source Library)
3.10.1 Building double-conversion
4 SSL communication between mcrouter and memcached
4.2 Direct communication between mcrouter and memcached
4.2.4 Starting tcpdump on the stunnel server
4.2.6 Packet-capture analysis on the stunnel server
4.3 Communication between mcrouter and memcached through stunnel
4.3.7 Starting tcpdump on the stunnel server
4.3.9 Packet-capture analysis on the stunnel server
5.4.1.1 OperationSelectorRoute
mcrouter is a memcached protocol router, used by Facebook to control traffic between thousands of servers across dozens of clusters in their data centers around the world. It operates at massive scale: at peak, mcrouter handles close to 5 billion requests per second.
- Memcached ASCII protocol
- Connection pooling
- Multiple hashing schemes
- Prefix routing
- Replicated pools
- Production traffic shadowing
- Online reconfiguration
- Flexible routing
- Destination health monitoring/automatic failover
- Cold cache warm up
- Broadcast operations
- Reliable delete stream
- Multi-cluster support
- Rich stats and debug commands
- Quality of service
- Large values
- Multi-level caches
- IPv6 support
- SSL support
IP             | Role                   | Notes
192.168.75.130 | mcrouter build machine | custom OS / VM
yum -y install epel-release
Run yum list to test the repo; if it fails, comment out the mirrorlist line and uncomment the baseurl line in the repo file. The EPEL repository is needed because many of the packages in section 3.3 depend on it.
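Commenting mirrorlist and uncommenting baseurl by hand is easy to get wrong; a small sed helper can do it (a sketch; the repo file path in the example is an assumption, adjust it for your system):

```shell
# enable_baseurl: comment out mirrorlist= lines and uncomment #baseurl= lines
# in a yum .repo file. The example path below is hypothetical; on CentOS 6
# the EPEL repo file is typically /etc/yum.repos.d/epel.repo.
enable_baseurl() {
    repo_file="$1"
    sed -i -e 's/^mirrorlist=/#mirrorlist=/' \
           -e 's/^#baseurl=/baseurl=/' "$repo_file"
}

# Example (path is an assumption):
# enable_baseurl /etc/yum.repos.d/epel.repo
```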
yum -y install bzip2-devel libevent-devel libcap-devel scons \
jemalloc-devel gmp-devel mpfr-devel libmpc-devel wget \
python-devel rpm-build \
m4 cmake libicu-devel chrpath openmpi-devel \
mpich-devel openssl-devel \
glibc-devel.i686 glibc-devel.x86_64 gcc gcc-c++ zlib-devel \
gmp-devel mpfr-devel libmpc-devel \
gflags-devel git bzip2 \
unzip libtool bison flex snappy-devel \
numactl-devel cyrus-sasl-devel
mcrouter must be built with gcc 4.8+. folly uses C++11 libraries such as chrono, so gcc 4.8 or later is required for full support of the C++11 features and standard library involved; I chose version 4.9 here. Building gcc 4.8+ in turn requires gmp, mpfr and mpc, so build those first.
Note: in all of the build steps below, the download URLs may go stale (not found) over time; this is normal. If that happens, download the same version through other official channels. As of this writing, every package in this article can still be downloaded from the links given.
cd /opt && wget https://gmplib.org/download/gmp/gmp-5.1.3.tar.bz2
tar jxf gmp-5.1.3.tar.bz2 && cd gmp-5.1.3/
./configure --prefix=/usr/local/gmp
make && make install
cd /opt && wget http://www.mpfr.org/mpfr-3.1.2/mpfr-3.1.2.tar.bz2
tar jxf mpfr-3.1.2.tar.bz2 ;cd mpfr-3.1.2/
./configure --prefix=/usr/local/mpfr --with-gmp=/usr/local/gmp
make && make install
cd /opt && wget http://ftp.gnu.org/gnu/mpc/mpc-1.0.1.tar.gz
tar xzf mpc-1.0.1.tar.gz ;cd mpc-1.0.1
./configure --prefix=/usr/local/mpc --with-mpfr=/usr/local/mpfr --with-gmp=/usr/local/gmp
make && make install
cd /opt && wget http://ftp.gnu.org/gnu/gcc/gcc-4.9.1/gcc-4.9.1.tar.bz2
tar jxf gcc-4.9.1.tar.bz2 ;cd gcc-4.9.1
./configure --prefix=/usr/local/gcc --enable-threads=posix --disable-checking --disable-multilib --enable-languages=c,c++ --with-gmp=/usr/local/gmp --with-mpfr=/usr/local/mpfr/ --with-mpc=/usr/local/mpc/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/mpc/lib:/usr/local/gmp/lib:/usr/local/mpfr/lib/
make && make install
After gcc 4.9 is built, the relevant environment variables and libraries must be set up before it can be used; otherwise the later folly and boost builds will fail.
echo "/usr/local/gcc/lib/" >> /etc/ld.so.conf.d/gcc-4.9.1.conf
echo "/usr/local/mpc/lib/" >> /etc/ld.so.conf.d/gcc-4.9.1.conf
echo "/usr/local/gmp/lib/" >> /etc/ld.so.conf.d/gcc-4.9.1.conf
echo "/usr/local/mpfr/lib/" >> /etc/ld.so.conf.d/gcc-4.9.1.conf
ldconfig
mv /usr/bin/gcc /usr/bin/gcc_old
mv /usr/bin/g++ /usr/bin/g++_old
mv /usr/bin/c++ /usr/bin/c++_old
ln -s -f /usr/local/gcc/bin/gcc /usr/bin/gcc
ln -s -f /usr/local/gcc/bin/g++ /usr/bin/g++
ln -s -f /usr/local/gcc/bin/c++ /usr/bin/c++
cp /usr/local/gcc/lib64/libstdc++.so.6.0.20 /usr/lib64/.
mv /usr/lib64/libstdc++.so.6 /usr/lib64/libstdc++.so.6.bak
ln -s -f /usr/lib64/libstdc++.so.6.0.20 /usr/lib64/libstdc++.so.6
If gcc -v and g++ --version report 4.9.1, the switch succeeded. Do not continue with the following steps until this is the case, because the folly and boost builds below all depend on this gcc environment.
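The version gate can also be scripted so a build aborts early instead of failing deep inside folly; a minimal sketch using sort -V:

```shell
# version_ge A B: succeed when version string A >= version string B
# (relies on GNU coreutils' sort -V natural version ordering).
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Abort early if the active gcc is older than 4.8:
# version_ge "$(gcc -dumpversion)" 4.8 || { echo "gcc too old" >&2; exit 1; }
```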
cd /opt && wget http://www.cmake.org/files/v2.8/cmake-2.8.12.2.tar.gz
tar xvf cmake-2.8.12.2.tar.gz && cd cmake-2.8.12.2
./configure && make && make install
cd /opt && wget http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz
tar xvf autoconf-2.69.tar.gz && cd autoconf-2.69
./configure && make && make install
cd /opt && wget https://google-glog.googlecode.com/files/glog-0.3.3.tar.gz
tar xvf glog-0.3.3.tar.gz && cd glog-0.3.3
./configure && make && make install
ragel needs colm and kelbt, so build those first. Building ragel directly will not report an error, but things will be missing.
cd /opt && wget http://www.colm.net/files/colm/colm-0.13.0.2.tar.gz
tar xvf colm-0.13.0.2.tar.gz && cd colm-0.13.0.2
./configure && make && make install
cd /opt && wget http://www.colm.net/files/kelbt/kelbt-0.16.tar.gz
tar xvf kelbt-0.16.tar.gz && cd kelbt-0.16
./configure && make && make install
cd /opt && wget http://www.colm.net/files/ragel/ragel-6.9.tar.gz
tar xvf ragel-6.9.tar.gz && cd ragel-6.9
./configure --prefix=/usr --disable-manual && make && make install
Boost must be 1.51+; boost_1_56_0 is used here. The custom system ships python 2.6, while boost 1.56 requires python 2.7+. Boost will actually build successfully against python 2.6, but the later folly build will then fail, so all build steps in the following sections should be done under python 2.7+.
yum -y install centos-release-SCL
yum -y install python27
scl enable python27 "easy_install pip"
scl enable python27 bash
python --version
cd /opt && wget http://downloads.sourceforge.net/boost/boost_1_56_0.tar.bz2
tar jxf boost_1_56_0.tar.bz2 && cd boost_1_56_0
./bootstrap.sh --prefix=/usr && ./b2 stage threading=multi link=shared
./b2 install threading=multi link=shared
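To double-check which boost actually got installed, the BOOST_LIB_VERSION macro can be read from the installed header (a sketch; the /usr/include path assumes the --prefix=/usr install above):

```shell
# boost_lib_version FILE: print the BOOST_LIB_VERSION string (e.g. "1_56")
# from a boost version.hpp header.
boost_lib_version() {
    sed -n 's/^#define BOOST_LIB_VERSION "\(.*\)"$/\1/p' "$1"
}

# Example, assuming the --prefix=/usr install above:
# boost_lib_version /usr/include/boost/version.hpp    # expect 1_56
```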
Folly is an open-source C++ library developed and used at Facebook. folly uses the double-conversion library, so build that first.
rpm -Uvh http://sourceforge.net/projects/scons/files/scons/2.3.3/scons-2.3.3-1.noarch.rpm
cd /opt && git clone https://code.google.com/p/double-conversion/
cd double-conversion && scons install
cd /opt/ && git clone https://github.com/genx7up/folly.git
cp folly/folly/SConstruct.double-conversion /opt/double-conversion/
cd double-conversion && scons -f SConstruct.double-conversion
ln -sf src double-conversion
ldconfig
rm -rf /opt/folly
cd /opt
git clone https://github.com/facebook/folly
cd /opt/folly/folly/
export LD_LIBRARY_PATH="/opt/folly/folly/lib:$LD_LIBRARY_PATH"
export LD_RUN_PATH="/opt/folly/folly/lib"
export LDFLAGS="-L/opt/folly/folly/lib -L/opt/double-conversion -L/usr/local/lib -ldl"
export CPPFLAGS="-I/opt/folly/folly/include -I/opt/double-conversion"
autoreconf -ivf
./configure --with-boost-libdir=/usr/lib/
make && make install
folly's make is only proceeding normally when it pauses for a long while at output like the following. In an earlier attempt folly compiled very quickly and without errors, but the final mcrouter build then failed.
libtool: compile: g++ -DHAVE_CONFIG_H -I./.. -pthread -I/usr/include -std=gnu++0x -g -O2 -MT futures/Future.lo -MD -MP -MF futures/.deps/Future.Tpo -c futures/Future.cpp -o futures/Future.o >/dev/null 2>&1
cd /opt/folly/folly/test
wget https://googletest.googlecode.com/files/gtest-1.7.0.zip
unzip gtest-1.7.0.zip
cd /opt && git clone https://github.com/facebook/fbthrift.git
cd fbthrift/thrift
ln -sf thrifty.h "/opt/fbthrift/thrift/compiler/thrifty.hh"
export LD_LIBRARY_PATH="/opt/fbthrift/thrift/lib:$LD_LIBRARY_PATH"
export LD_RUN_PATH="/opt/fbthrift/thrift/lib"
export LDFLAGS="-L/opt/fbthrift/thrift/lib -L/usr/local/lib"
export CPPFLAGS="-I/opt/fbthrift/thrift/include -I/opt/fbthrift/thrift/include/python2.7 -I/opt/folly -I/opt/double-conversion"
echo "/usr/local/lib/" >> /etc/ld.so.conf.d/gcc-4.9.1.conf && ldconfig
Before starting this step, make sure every build step above completed without any errors; otherwise building mcrouter will fail.
cd /opt && git clone https://github.com/facebook/mcrouter.git
cd mcrouter/mcrouter
export LD_LIBRARY_PATH="/opt/mcrouter/mcrouter/lib:$LD_LIBRARY_PATH"
export LD_RUN_PATH="/opt/folly/folly/test/.libs:/opt/mcrouter/mcrouter/lib"
export LDFLAGS="-L/opt/mcrouter/mcrouter/lib -L/usr/local/lib -L/opt/folly/folly/test/.libs"
export CPPFLAGS="-I/opt/folly/folly/test/gtest-1.7.0/include -I/opt/mcrouter/mcrouter/include -I/opt/folly -I/opt/double-conversion -I/opt/fbthrift -I/opt/boost_1_56_0"
export CXXFLAGS="-fpermissive"
autoreconf --install && ./configure --with-boost-libdir=/usr/lib/
make && make install
mcrouter --help
Note: mcrouter's make is proceeding normally only when output like the following appears, and it will stay at this point for a while.
g++ -DHAVE_CONFIG_H -I.. -I/opt/mcrouter/install/include -DLIBMC_FBTRACE_DISABLE -Wno-missing-field-initializers -Wno-deprecated -W -Wall -Wextra -Wno-unused-parameter -fno-strict-aliasing -g -O2 -std=gnu++1y -MT mcrouter-server.o -MD -MP -MF .deps/mcrouter-server.Tpo -c -o mcrouter-server.o `test -f 'server.cpp' || echo './'`server.cpp
The above are the complete steps in the smooth case. mcrouter --help verifies that the build succeeded; if the build fails, see the common-errors section at the end.
Installing and operating stunnel itself is outside the scope of this document.
IP             | Role                      | Notes
192.168.75.130 | mcrouter / stunnel client | custom OS / VM
192.168.75.131 | stunnel server            |
Start one memcached instance on the stunnel server, listening on port 11211.
sh memcached_stop
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11211 -u www -t 10 -l 192.168.75.131
- -l <ip_addr>: address for the process to listen on;
- -d: run as a daemon;
- -u <username>: run the memcached process as the specified user;
- -m <num>: maximum memory for cached data, in MB (default 64MB);
- -c <num>: maximum number of concurrent connections (default 1024);
- -p <num>: TCP port to listen on (default 11211);
- -U <num>: UDP port to listen on (default 11211; 0 disables UDP);
- -t <threads>: maximum number of threads for handling inbound requests; only effective if memcached was compiled with thread support;
- -f <num>: growth factor the slab allocator uses when pre-allocating fixed-size chunks;
- -M: return an error when memory runs out instead of reclaiming space via LRU;
- -n: minimum slab chunk size, in bytes;
- -S: enable SASL authentication;
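Counters such as cmd_set, which the comparisons later in this article rely on, can be pulled over memcached's plain text protocol with nc; a sketch, with the parsing split into its own helper:

```shell
# parse_stat NAME: extract the value column of "STAT NAME <value>" from
# memcached "stats" output on stdin (stripping the protocol's trailing \r).
parse_stat() {
    awk -v name="$1" '$1 == "STAT" && $2 == name { sub(/\r$/, "", $3); print $3 }'
}

# mc_stat HOST PORT NAME: query one instance and print one stats field,
# e.g. mc_stat 192.168.75.131 11211 cmd_set
mc_stat() {
    printf 'stats\r\nquit\r\n' | nc "$1" "$2" | parse_stat "$3"
}
```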
Configure mcrouter's config.json on the stunnel client.
cat config.json
{
"pools": {
"A": {
"servers": [
// hosts of replicated pool, e.g.:
"192.168.75.131:11211",
]
}
},
"route": {
"type": "PrefixPolicyRoute",
"operation_policies": {
"delete": "AllSyncRoute|Pool|A",
"add": "AllSyncRoute|Pool|A",
"get": "LatestRoute|Pool|A",
"set": "AllSyncRoute|Pool|A"
}
}
}
"192.168.75.131:11211" is the IP and port the memcached node listens on; mcrouter establishes a connection to it after starting.
Note: the firewalls on the mcrouter and memcached node machines must whitelist each other.
Start mcrouter listening on port 1919
mcrouter -p 1919 -f config.json &
tcpdump -i eth1 -nn -A -s 0 -w /home/open/stunnel_test.pcap port 11211
- -i: network interface to capture on.
- -nn: show IP addresses and port numbers instead of host and service names.
- -w: write the raw packets to a file instead of parsing and printing them.
- -A: print each packet in ASCII, minimizing the link-layer header.
- -s: capture the first snaplen bytes of each packet instead of the default 68; -s 0 means no limit, i.e. capture whole packets.
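The capture can also be inspected without a GUI: replay it through tcpdump -A and search the ASCII payloads for the test keys. A sketch; the grep helper takes any text on stdin, so it works independently of tcpdump:

```shell
# plaintext_cmds: print any memcached set/get commands for the testkeyN keys
# found in ASCII packet text on stdin, i.e. evidence the traffic is unencrypted.
plaintext_cmds() {
    grep -E -o '(set|get) testkey[0-9]+' | sort -u
}

# Replay the capture written above and look for leaked commands:
# tcpdump -nn -A -r /home/open/stunnel_test.pcap | plaintext_cmds
```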
Write test data through mcrouter
telnet 127.0.0.1 1919
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set testkey1 0 0 3
liu
STORED
set testkey2 0 0 4
yong
STORED
set testkey3 0 0 5
43999
STORED
The info field shows the data in plaintext (see the capture screenshot in the original post).
This shows that without stunnel encryption, the traffic between mcrouter and memcached is transmitted in plaintext.
Run on both machines:
sh /root/memcached_stop
This is mainly to avoid a port conflict when starting the stunnel server.
On the stunnel server:
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11211 -u www -t 10 -l 127.0.0.1
cat /usr/local/stunnel/etc/stunnel/stunnel.conf
sslVersion = TLSv1
CAfile = /usr/local/stunnel/etc/stunnel/stunnel.pem
verify = 2
cert = /usr/local/stunnel/etc/stunnel/stunnel.pem
pid = /var/run/stunnel/stunnel.pid
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
debug = 7
output = /data/logs/stunnel.log
setuid = root
setgid = root
[memcached]
accept = 192.168.75.131:11211
connect = 127.0.0.1:11211
- accept = 192.168.75.131:11211 is the address and port the stunnel server listens on
- connect = 127.0.0.1:11211 is the destination stunnel forwards to after decrypting the data
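Before putting mcrouter behind the tunnel, the TLS endpoint can be sanity-checked with openssl s_client. A sketch: the helper just parses s_client output, and the connection command is an example using the addresses and paths above:

```shell
# tls_verify_ok: succeed when openssl s_client output on stdin reports a
# successful certificate verification.
tls_verify_ok() {
    grep -q 'Verify return code: 0 (ok)'
}

# Example handshake against the stunnel server configured above:
# openssl s_client -connect 192.168.75.131:11211 \
#     -CAfile /usr/local/stunnel/etc/stunnel/stunnel.pem </dev/null \
#     | tls_verify_ok && echo "certificate verified"
```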
cat /usr/local/stunnel/etc/stunnel/stunnel.conf
cert = /usr/local/stunnel/etc/stunnel/stunnel.pem
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
verify = 2
CAfile = /usr/local/stunnel/etc/stunnel/stunnel.pem
client = yes
delay = no
sslVersion = TLSv1
output = /data/logs/stunnel.log
[memcached]
accept = 127.0.0.1:11211
connect = 192.168.75.131:11211
Run on both server and client:
/usr/local/stunnel/sbin/stunnel
During startup, the stunnel server logs the following, showing that the certificate and private key were loaded successfully.
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Snagged 64 random bytes from /root/.rnd
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Wrote 1024 new random bytes to /root/.rnd
2015.11.01 16:35:22 LOG7[21016:140179061770176]: RAND_status claims sufficient entropy for the PRNG
2015.11.01 16:35:22 LOG7[21016:140179061770176]: PRNG seeded successfully
2015.11.01 16:35:22 LOG4[21016:140179061770176]: Wrong permissions on /usr/local/stunnel/etc/stunnel/stunnel.pem
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Certificate: /usr/local/stunnel/etc/stunnel/stunnel.pem
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Certificate loaded
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Key file: /usr/local/stunnel/etc/stunnel/stunnel.pem
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Private key loaded
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Loaded verify certificates from /usr/local/stunnel/etc/stunnel/stunnel.pem
cat config.json
{
"pools": {
"A": {
"servers": [
// hosts of replicated pool, e.g.:
"127.0.0.1:11211",
]
}
},
"route": {
"type": "PrefixPolicyRoute",
"operation_policies": {
"delete": "AllSyncRoute|Pool|A",
"add": "AllSyncRoute|Pool|A",
"get": "LatestRoute|Pool|A",
"set": "AllSyncRoute|Pool|A"
}
}
}
127.0.0.1:11211 is the address the stunnel client listens on.
mcrouter -p 1919 -f config.json &
tcpdump -i eth1 -nn -A -s 0 -w /home/open/mcrouter1.pcap port 11211
The interface given to -i is the one the stunnel server listens on, not the one memcached listens on, because stunnel has already decrypted the data by the time it reaches memcached.
Write test data through mcrouter
telnet 127.0.0.1 1919
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set testkey4 0 0 4
hell
STORED
set testkey5 0 0 12
hello world!
STORED
set testkey6 0 0 3
liu
STORED
Read the data back on the stunnel server to verify
telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
get testkey4
VALUE testkey4 0 4
hell
END
get testkey5
VALUE testkey5 0 12
hello world!
END
get testkey6
VALUE testkey6 0 3
liu
END
This shows the data has made it from the stunnel client through to memcached.
The info field no longer shows any plaintext. Taking the last four packets for analysis (see the screenshots in the original post):
The data from mcrouter to the stunnel server is therefore encrypted and cannot be recovered from a packet capture.
- Pools: Destination hosts are grouped into "pools". A pool is a basic building block of a routing config. At a minimum, a pool consists of an ordered list of destination hosts and a hash function.
- Key: A memcached key is typically a short (mcrouter limit is 250 characters) ASCII string which does not contain any whitespace or control characters.
- Route handles: Routes are composed of blocks called "route handles". Each route handle encapsulates some piece of routing logic, such as "send a request to a single destination host" or "provide failover."
- Plain distribution: distribution without redundancy, i.e. data is spread across different memcached instances and no instance holds the same data as another.
- Highly available distribution: data is spread across different memcached instances, and each memcached has a redundant memcached as a mutual backup, i.e. redundant distribution plus high availability.
IP             | Role                                            | Notes
192.168.75.130 | mcrouter test machine, memcached localhost pool | custom OS / VM
192.168.75.131 | memcached backup pool                           |
- Definition: Routes to one random destination from list of children.
- Properties: children.
cat config.json
{
"pools": {
"backup": { "servers": [
"192.168.75.131:11210",
"192.168.75.131:11211",
"192.168.75.131:11212",
] },
"localhost": { "servers": [
"127.0.0.1:11210",
"127.0.0.1:11211",
"127.0.0.1:11212",
] }
},
"route": {
"type": "RandomRoute",
"children" : [ "PoolRoute|localhost", "PoolRoute|backup" ]
}
}
The config above defines two memcached pools named backup and localhost, each with three memcached instances. The keyword RandomRoute specifies the routing scheme, i.e. the route handle. RandomRoute matches the plain-distribution requirement, so it is chosen here.
On 192.168.75.130:
sh memcached_stop
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11210 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11210.log 2>&1
/usr/local/mcc/bin/memcached -d -m 2048 -c 4096 -p 11211 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11211.log 2>&1
/usr/local/mcc/bin/memcached -d -m 64 -c 4096 -p 11212 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11212.log 2>&1
On 192.168.75.131:
sh memcached_stop
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11210 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11210.log 2>&1
/usr/local/mcc/bin/memcached -d -m 2048 -c 4096 -p 11211 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11211.log 2>&1
/usr/local/mcc/bin/memcached -d -m 64 -c 4096 -p 11212 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11212.log 2>&1
- -l <ip_addr>: address for the process to listen on;
- -d: run as a daemon;
- -u <username>: run the memcached process as the specified user;
- -m <num>: maximum memory for cached data, in MB (default 64MB);
- -c <num>: maximum number of concurrent connections (default 1024);
- -p <num>: TCP port to listen on (default 11211);
- -t <threads>: maximum number of threads for handling inbound requests; only effective if memcached was compiled with thread support;
- -v: print ordinary errors and warnings;
- -vv: more verbose than -v, including client commands and server responses;
- -vvv: the most verbose, even printing internal state;
-vv is used here simply to make the test output easy to inspect.
On 192.168.75.130:
mcrouter -p 1919 -f /data/backup/config.json
The trailing & is omitted to keep the debug output visible; mcrouter output mentioned below refers to this debug output.
cat setkey.sh
#!/bin/bash
sum=0
num=$1
for i in `seq 1 $num`
do
echo -e "set key${i} 0 0 4\ntest" | nc 127.0.0.1 1919
sum=$((sum+1))
done
echo
echo "total writes: ${sum}"
This script makes it easy to write a batch of test data to mcrouter in one go.
cat getkey.sh
#!/bin/bash
key=$1
for port in 11210 11211 11212
do
echo "Port $port values:"
echo "get $1" | nc 192.168.75.131 $port
echo
done
This script makes it easy to read data back.
Write 100k entries (randomly routed); run on 192.168.75.130:
sh setkey.sh 100000
The output while writing looks like this:
I1113 11:47:54.126411 117652 ProxyDestination.cpp:359] server 192.168.75.131:11212:TCP:ascii-1000 up (1 of 6)
I1113 11:47:54.130604 117652 ProxyDestination.cpp:359] server 192.168.75.131:11210:TCP:ascii-1000 up (2 of 6)
I1113 11:47:54.134222 117652 ProxyDestination.cpp:359] server 127.0.0.1:11211:TCP:ascii-1000 up (3 of 6)
I1113 11:47:54.137917 117652 ProxyDestination.cpp:359] server 127.0.0.1:11212:TCP:ascii-1000 up (4 of 6)
I1113 11:47:54.146790 117652 ProxyDestination.cpp:359] server 127.0.0.1:11210:TCP:ascii-1000 up (5 of 6)
I1113 11:47:54.151669 117652 ProxyDestination.cpp:359] server 192.168.75.131:11211:TCP:ascii-1000 up (6 of 6)
I1113 11:51:49.416658 117652 ProxyDestination.cpp:359] server 127.0.0.1:11212:TCP:ascii-1000 closed (5 of 6)
I1113 11:51:49.416856 117652 ProxyDestination.cpp:359] server 192.168.75.131:11210:TCP:ascii-1000 closed (4 of 6)
I1113 11:51:49.416931 117652 ProxyDestination.cpp:359] server 127.0.0.1:11211:TCP:ascii-1000 closed (3 of 6)
I1113 11:51:49.417023 117652 ProxyDestination.cpp:359] server 192.168.75.131:11212:TCP:ascii-1000 closed (2 of 6)
I1113 11:51:49.417177 117652 ProxyDestination.cpp:359] server 192.168.75.131:11211:TCP:ascii-1000 closed (1 of 6)
I1113 11:51:49.417248 117652 ProxyDestination.cpp:359] server 127.0.0.1:11210:TCP:ascii-1000 closed (0 of 6)
The output above shows which instances mcrouter connected to and how many memcached instances were connected at each point. For example, the "up (6 of 6)" line for 192.168.75.131:11211 shows a connection being established to that instance, at which point mcrouter is connected to all 6 memcached instances. The later "closed" lines show the connections being torn down; after the final line mcrouter is connected to 0 memcached instances.
memcached instance   | cmd_set count | bytes_written
127.0.0.1:11210      | 16881         | 135048
127.0.0.1:11211      | 16569         | 132552
127.0.0.1:11212      | 16561         | 132488
192.168.75.131:11210 | 16824         | 134592
192.168.75.131:11211 | 16604         | 132832
192.168.75.131:11212 | 16561         | 132488
The data shows cmd_set totals 100k, as expected. When 100k entries are written through mcrouter, the data is spread almost evenly across the memcached instances, which matches plain distribution. For a closer look, check the memcached log of the corresponding instance.
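The per-instance numbers in the table above come from each instance's stats output; totalling them can be scripted as well (a sketch reusing the stats text protocol; the host:port list matches the six instances in this test):

```shell
# cmd_set_of HOST PORT: print cmd_set for a single memcached instance.
cmd_set_of() {
    printf 'stats\r\nquit\r\n' | nc "$1" "$2" \
        | awk '$1 == "STAT" && $2 == "cmd_set" { sub(/\r$/, "", $3); print $3 }'
}

# sum_lines: add up one integer per line on stdin.
sum_lines() {
    awk '{ s += $1 } END { print s + 0 }'
}

# Total cmd_set across the six instances (should print 100000 after this run):
# for hp in 127.0.0.1:11210 127.0.0.1:11211 127.0.0.1:11212 \
#           192.168.75.131:11210 192.168.75.131:11211 192.168.75.131:11212; do
#     cmd_set_of "${hp%:*}" "${hp#*:}"
# done | sum_lines
```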
- Definition: Sends to different targets based on specified operations.
- Properties: default_policy, operation_policies.
- Definition: All sets and deletes go to the target ("cold") route handle. Gets are attempted on the "cold" route handle and, in case of a miss, data is fetched from the "warm" route handle (where the request is likely to result in a cache hit). If "warm" returns a hit, the response is forwarded to the client and an asynchronous request, with the configured expiration time, updates the value in the "cold" route handle.
- Properties: cold, warm, exptime.
- Definition: Immediately sends the same request to all child route handles. Collects all replies and responds with the "worst" reply (i.e., the error reply, if any).
- Properties: children.
cat config.json
{
"pools": {
"backup": { "servers": [
"192.168.75.131:11210",
"192.168.75.131:11211",
"192.168.75.131:11212",
] },
"localhost": { "servers": [
"127.0.0.1:11210",
"127.0.0.1:11211",
"127.0.0.1:11212",
] }
},
"route": {
"type": "OperationSelectorRoute",
"operation_policies": {
"get": {
"type": "WarmUpRoute",
"cold": "PoolRoute|localhost",
"warm": "PoolRoute|backup",
"exptime": 0
}
},
"default_policy": {
"type": "AllSyncRoute",
"children": [
"PoolRoute|localhost",
"PoolRoute|backup"
]
}
}
}
The config again defines two memcached pools named backup and localhost, each with three memcached instances. The OperationSelectorRoute handle specifies that a get is first served from the localhost pool; on a miss, the data is fetched from the backup pool, and a hit from backup is also written back into the localhost pool. exptime is the lifetime of the data written back into localhost; 0 means it never expires. default_policy routes every other operation (set, add, delete) through the AllSyncRoute handle to both localhost and backup simultaneously, so in the end the two pools hold mutual backups of the data. The tests below verify this.
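The get path that WarmUpRoute implements can be modelled with a toy shell function, using two directories as stand-ins for the cold (localhost) and warm (backup) pools. This is purely an illustration of the routing logic, not mcrouter's implementation:

```shell
# warmup_get COLD_DIR WARM_DIR KEY: mimic WarmUpRoute's get. Try the cold
# store first; on a miss, fall back to the warm store, and on a warm hit
# copy the value back into the cold store (the write-back whose lifetime
# exptime controls in the real config).
warmup_get() {
    cold="$1"; warm="$2"; key="$3"
    if [ -f "$cold/$key" ]; then
        cat "$cold/$key"                 # cold hit
    elif [ -f "$warm/$key" ]; then
        cp "$warm/$key" "$cold/$key"     # write-back (asynchronous in mcrouter)
        cat "$warm/$key"
    else
        return 1                         # miss in both pools
    fi
}
```

After one warmup_get for a key that only the warm store holds, a second call is served from the cold store, which matches the key1/key2 behavior observed in the failure test below.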
On 192.168.75.130:
sh memcached_stop
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11210 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11210.log 2>&1
/usr/local/mcc/bin/memcached -d -m 2048 -c 4096 -p 11211 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11211.log 2>&1
/usr/local/mcc/bin/memcached -d -m 64 -c 4096 -p 11212 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11212.log 2>&1
On 192.168.75.131:
sh memcached_stop
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11210 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11210.log 2>&1
/usr/local/mcc/bin/memcached -d -m 2048 -c 4096 -p 11211 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11211.log 2>&1
/usr/local/mcc/bin/memcached -d -m 64 -c 4096 -p 11212 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11212.log 2>&1
This clears the data left over from the previous experiment.
mcrouter -p 1919 -f /data/backup/config.json
The trailing & is again omitted so the debug output stays visible.
Write 100k entries (randomly routed); run on 192.168.75.130:
sh setkey.sh 100000
memcached instance   | cmd_set count | bytes_written
127.0.0.1:11210      | 33705         | 269640
127.0.0.1:11211      | 33173         | 265384
127.0.0.1:11212      | 33122         | 264976
192.168.75.131:11210 | 33705         | 269640
192.168.75.131:11211 | 33173         | 265384
192.168.75.131:11212 | 33122         | 264976
The data shows cmd_set totals 200k, as expected. When 100k entries are written through mcrouter, 127.0.0.1:11210 and 192.168.75.131:11210 record identical set counts and bytes written: they are mutual backups sitting in different pools, and the other instances pair up in the same way. This matches redundant distribution.
To test high availability, manually stop all instances on 192.168.75.130 to simulate a failure or data loss.
sh memcached_stop
sh memcached_start
Read key1 and key2 on 192.168.75.130 (the localhost pool); since all memcached instances were just restarted, there should be no data.
Read key1:
sh getkey.sh key1
Port 11210 values:
END
Port 11211 values:
END
Port 11212 values:
END
Read key2:
sh getkey.sh key2
Port 11210 values:
END
Port 11211 values:
END
Port 11212 values:
END
Read key1 and key2 through mcrouter, and watch the logs of the memcached instances involved along with mcrouter's output.
get key1:
mcrouter output:
I1113 15:44:45.147187 84128 ProxyDestination.cpp:359] server 127.0.0.1:11212:TCP:ascii-1000 up (1 of 6)
I1113 15:44:45.148263 84128 ProxyDestination.cpp:359] server 192.168.75.131:11212:TCP:ascii-1000 up (2 of 6)
The logs of the corresponding memcached instances show:
127.0.0.1:11212
<58 new auto-negotiating client connection
58: Client using the ascii protocol
<58 get key1
>58 END
<58 add key1 0 0 4
>58 STORED
192.168.75.131:11212
<58 new auto-negotiating client connection
58: Client using the ascii protocol
<58 get key1
>58 sending key key1
>58 END
get key2:
mcrouter output:
I1113 15:48:07.653182 84128 ProxyDestination.cpp:359] server 127.0.0.1:11210:TCP:ascii-1000 up (1 of 6)
I1113 15:48:07.654258 84128 ProxyDestination.cpp:359] server 192.168.75.131:11210:TCP:ascii-1000 up (2 of 6)
The logs of the corresponding memcached instances show:
127.0.0.1:11210
<58 new auto-negotiating client connection
58: Client using the ascii protocol
<58 get key2
>58 END
<58 add key2 0 0 4
>58 STORED
192.168.75.131:11210
<58 new auto-negotiating client connection
58: Client using the ascii protocol
<58 get key2
>58 sending key key2
>58 END
Now read key1 and key2 again from 192.168.75.130 (the localhost pool).
Read key1:
sh getkey.sh key1
Port 11210 values:
END
Port 11211 values:
END
Port 11212 values:
VALUE key1 0 4
test
END
Read key2:
sh getkey.sh key2
Port 11210 values:
VALUE key2 0 4
test
END
Port 11211 values:
END
Port 11212 values:
END
This shows that mcrouter supports high availability, with one caveat: after the localhost pool recovers from a failure, the backup pool does not proactively and promptly sync data back to it. mcrouter only rewrites data into the localhost pool after a client request arrives and the key is found missing there.
The statistics analysis in section 5.4.6 and the logs in section 5.4.7 show that writing 100k entries through mcrouter exhibits highly available distribution, i.e. mcrouter itself provides highly available distribution, confirming the claim in section 5.4.2.
mcrouter's config file supports reload, and changes are picked up automatically by default. As the official docs put it, "mcrouter supports dynamic reconfiguration so you don't need to restart mcrouter to apply config changes." If you save a broken config, mcrouter reports the error and keeps running on the previous good config. This behavior is optional: start mcrouter with --disable-reload-configs and it will not refresh the config after you edit and save the file.
error: Could not link against boost_thread-mt !
or
checking whether the Boost::Context library is available... yes
configure: error: Could not find a version of the library!
Fix: pass the boost library path: ./configure --with-boost-libdir=/usr/lib/
/usr/bin/ld: /usr/local/gcc-4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3/../../../../lib64/libiberty.a(cp-demangle.o): relocation R_X86_64_32S against `.rodata' can not be used when making a shared object; recompile with -fPIC
/usr/local/gcc-4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3/../../../../lib64/libiberty.a: could not read symbols: Bad value
collect2: error: ld returned 1 exit status
make[2]: *** [libfolly.la] Error 1
make[2]: Leaving directory `/data/src/folly/folly'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/data/src/folly/folly'
make: *** [all] Error 2
Fix: check the earlier gcc and boost builds for problems; gcc must be 4.8+ and boost must be 1.51+.
g++ -DHAVE_CONFIG_H -I../.. -DLIBMC_FBTRACE_DISABLE -Wno-missing-field-initializers -Wno-deprecated -W -Wall -Wextra -Wno-unused-parameter -fno-strict-aliasing -g -O2 -std=gnu++1y -MT fbi/cpp/libmcrouter_a-LogFailure.o -MD -MP -MF fbi/cpp/.deps/libmcrouter_a-LogFailure.Tpo -c -o fbi/cpp/libmcrouter_a-LogFailure.o `test -f 'fbi/cpp/LogFailure.cpp' || echo './'`fbi/cpp/LogFailure.cpp
fbi/cpp/LogFailure.cpp:24:29: fatal error: folly/Singleton.h: No such file or directory
#include <folly/Singleton.h>
Fix: check whether folly's make produced warnings; if it did, recheck the folly build. mcrouter's make is only proceeding correctly when you see output like the following (and it stays there for quite a while); otherwise a problem in folly's make likely caused the mcrouter build to fail.
libtool: compile: g++ -DHAVE_CONFIG_H -I./.. -pthread -I/usr/include -std=gnu++0x -g -O2 -MT futures/Future.lo -MD -MP -MF futures/.deps/Future.Tpo -c futures/Future.cpp -o futures/Future.o >/dev/null 2>&1
.10.31 01:05:50 LOG7[18392:140063756588992]: Certificate: /usr/local/stunnel/etc/private.pem
2015.10.31 01:05:50 LOG3[18392:140063756588992]: Error reading certificate file: /usr/local/stunnel/etc/private.pem
2015.10.31 01:05:50 LOG3[18392:140063756588992]: error stack: 140DC009 : error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib
2015.10.31 01:05:50 LOG3[18392:140063756588992]: SSL_CTX_use_certificate_chain_file: 906D06C: error:0906D06C:PEM routines:PEM_read_bio:no start line
Check that the cert and CAfile entries in stunnel.conf point to the right certificate files and that the file permissions are correct.
2015.11.01 15:31:31 LOG3[71906:140502252201920]: Cannot create pid file /usr/local/stunnel/var/run/stunnel/stunnel.pid
2015.11.01 15:31:31 LOG3[71906:140502252201920]: create: No such file or directory (2)
The message is self-explanatory. Also remember to set the output field in the stunnel client's stunnel.conf to make troubleshooting easier.
In short: when something fails, read the logs and search Google.
http://qiita.com/shivaken/items/8742e0ddc3c72f242d03
http://confluence.sharuru07.jp/pages/viewpage.action?pageId=361455
http://www.tiham.com/cache-cluster/mcrouter-install.html
http://dev.classmethod.jp/cloud/aws/elasticache-carried-mcrouter/
https://github.com/genx7up/docker-mcrouter
http://fuweiyi.com/others/2014/05/15/a-Centos-Squid-Stunnel-proxy.html
http://blog.cloudpack.jp/2014/12/16/router-for-scaling-memcached-with-mcrouter-on-docker/
https://github.com/facebook/mcrouter/wiki
http://www.oschina.net/translate/introducing-mcrouter-a-memcached-protocol-router-for-scaling-memcached-deployments