Google open-sources the Seesaw software load balancer

https://github.com/google/seesaw

 ------------------------

In a distributed system, load balancing is a critical piece: it dispatches incoming requests to one or more nodes in the network for processing. Load balancing is generally divided into hardware and software load balancing. Hardware load balancing, as the name suggests, installs dedicated hardware between server nodes to do the balancing work; F5 is a leading example. Software load balancing distributes requests via dedicated load balancing software installed on the servers, or via a built-in load balancing module.

Broadly speaking, the common load balancing strategies are:

1. Round robin. A classic strategy that was very widely used early on. The principle is simple: give each request a sequence number, then dispatch requests to the server nodes in turn. It suits clusters whose nodes are stateless and offer equal capacity. Its drawback is equally plain: it treats every node as identical, which rarely matches real, heterogeneous environments. Weighted round robin improves on this by giving each node a weight, but because weights are hard to keep in step with actual conditions, it still falls short.
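The round-robin selection described above can be sketched in a few lines of Go. This is an illustrative toy, not Seesaw's code; the type and backend addresses are invented:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// RoundRobin hands out backends in a fixed cycle. The counter is
// advanced atomically so concurrent callers each get a distinct slot.
type RoundRobin struct {
	backends []string
	next     uint64
}

// Pick returns the next backend in sequence, wrapping around.
func (r *RoundRobin) Pick() string {
	n := atomic.AddUint64(&r.next, 1) - 1
	return r.backends[n%uint64(len(r.backends))]
}

func main() {
	rr := &RoundRobin{backends: []string{"10.0.0.1", "10.0.0.2", "10.0.0.3"}}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.Pick()) // cycles .1, .2, .3, then wraps back to .1
	}
}
```

A weighted variant would repeat each backend in the cycle in proportion to its weight; the wrap-around modulo stays the same.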

2. Random. Similar to round robin, except requests need no sequence number: a node is simply picked at random each time. Again, every backend node is treated as identical. A weighted random variant exists as well, along the same lines as weighted round robin.

3. Least response time. Record the time each request takes, derive an average response time per node, and pick the node with the smallest average. This reflects server state fairly well, but because it averages over history it lags behind reality and cannot react quickly. Refined versions therefore exist, such as averaging only the most recent samples.

4. Least concurrency. Requests can differ widely in how long they occupy a server, so over time simple round-robin or random balancing can leave servers with very different numbers of in-flight connections, defeating the purpose of balancing. The least-concurrency strategy records, at the current moment, how many transactions each candidate node is processing and picks the node with the fewest. It reacts quickly to each server's present condition and distributes load sensibly, which suits systems that are sensitive to instantaneous load.
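A minimal least-concurrency sketch in Go: each node keeps an in-flight counter that is bumped when a request starts and decremented when it ends, and selection takes the node with the smallest count. Again, this is illustrative only:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// node counts requests currently being processed on a backend.
type node struct {
	addr     string
	inFlight int64
}

func (n *node) begin() { atomic.AddInt64(&n.inFlight, 1) }  // request dispatched
func (n *node) end()   { atomic.AddInt64(&n.inFlight, -1) } // request finished

// pickLeastBusy returns the node with the fewest in-flight requests.
func pickLeastBusy(nodes []*node) *node {
	best := nodes[0]
	for _, n := range nodes[1:] {
		if atomic.LoadInt64(&n.inFlight) < atomic.LoadInt64(&best.inFlight) {
			best = n
		}
	}
	return best
}

func main() {
	a := &node{addr: "10.0.0.1"}
	b := &node{addr: "10.0.0.2"}
	a.begin()
	a.begin()
	b.begin()
	fmt.Println(pickLeastBusy([]*node{a, b}).addr) // 10.0.0.2
}
```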

5. Hashing. When the backend nodes are stateful, a hashing scheme is needed to keep related requests on the same node. That case is considerably more complex and is not covered here.

Other strategies exist as well and are not enumerated here; interested readers can look them up.

A distributed system faces a far more complex environment than a single machine: varied networks, platforms, machine configurations, and so on. In such an environment errors are unavoidable, so fault tolerance, that is, keeping the cost of a failure as low as possible, must be designed in, and the choice of load balancing strategy makes a large difference here.

------------------------

Seesaw v2


Note: This is not an official Google product.

About

Seesaw v2 is a Linux Virtual Server (LVS) based load balancing platform.

It is capable of providing basic load balancing for servers that are on the same network, through to advanced load balancing functionality such as anycast, Direct Server Return (DSR), support for multiple VLANs and centralised configuration.

Above all, it is designed to be reliable and easy to maintain.

Requirements

A Seesaw v2 load balancing cluster requires two Seesaw nodes - these can be physical machines or virtual instances. Each node must have two network interfaces - one for the host itself and the other for the cluster VIP. All four interfaces should be connected to the same layer 2 network.

Building

Seesaw v2 is developed in Go and depends on several Go packages, which can be fetched with the go get commands below.

Additionally, there is a compile and runtime dependency on libnl and a compile time dependency on the Go protobuf compiler.

On a Debian/Ubuntu style system, you should be able to prepare for building by running:

apt-get install golang
apt-get install libnl-3-dev libnl-genl-3-dev

If your distro has a go version before 1.5, you may need to fetch a newer release from https://golang.org/dl/.

After setting GOPATH to an appropriate location (for example ~/go):

go get -u golang.org/x/crypto/ssh
go get -u github.com/dlintw/goconf
go get -u github.com/golang/glog
go get -u github.com/miekg/dns
go get -u github.com/kylelemons/godebug/pretty

Ensure that ${GOPATH}/bin is in your ${PATH}, then, from the seesaw directory, run:

make test
make install

If you wish to regenerate the protobuf code, the protobuf compiler and Go protobuf compiler generator are also needed:

apt-get install protobuf-compiler
go get -u github.com/golang/protobuf/{proto,protoc-gen-go}

The protobuf code can then be regenerated with:

make proto

Installing

After make install has run successfully, there should be a number of binaries in ${GOPATH}/bin with a seesaw_ prefix. Install these to the appropriate locations:

SEESAW_BIN="/usr/local/seesaw"
SEESAW_ETC="/etc/seesaw"
SEESAW_LOG="/var/log/seesaw"

INIT=`ps -p 1 -o comm=`

install -d "${SEESAW_BIN}" "${SEESAW_ETC}" "${SEESAW_LOG}"

install "${GOPATH}/bin/seesaw_cli" /usr/bin/seesaw

for component in {ecu,engine,ha,healthcheck,ncc,watchdog}; do
  install "${GOPATH}/bin/seesaw_${component}" "${SEESAW_BIN}"
done

if [ $INIT = "init" ]; then
  install "etc/init/seesaw_watchdog.conf" "/etc/init"
elif [ $INIT = "systemd" ]; then
  install "etc/systemd/system/seesaw_watchdog.service" "/etc/systemd/system"
  systemctl --system daemon-reload
fi
install "etc/seesaw/watchdog.cfg" "${SEESAW_ETC}"

# Enable CAP_NET_RAW for seesaw binaries that require raw sockets.
/sbin/setcap cap_net_raw+ep "${SEESAW_BIN}/seesaw_ha"
/sbin/setcap cap_net_raw+ep "${SEESAW_BIN}/seesaw_healthcheck"

The setcap binary can be found in the libcap2-bin package on Debian/Ubuntu.

Configuring

Each node needs a /etc/seesaw/seesaw.cfg configuration file, which provides information about the node and who its peer is. Additionally, each load balancing cluster needs a cluster configuration, which is in the form of a text-based protobuf - this is stored in /etc/seesaw/cluster.pb.

An example seesaw.cfg file can be found in etc/seesaw/seesaw.cfg.example - a minimal seesaw.cfg provides the following:

  • anycast_enabled - True if anycast should be enabled for this cluster.
  • name - The short name of this cluster.
  • node_ipv4 - The IPv4 address of this Seesaw node.
  • peer_ipv4 - The IPv4 address of our peer Seesaw node.
  • vip_ipv4 - The IPv4 address for this cluster VIP.

The VIP floats between the Seesaw nodes and is only active on the current master. This address needs to be allocated within the same netblock as both the node IP address and peer IP address.
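Putting those fields together, a minimal seesaw.cfg might look like the sketch below. The section name, cluster name, and addresses are invented for illustration; consult etc/seesaw/seesaw.cfg.example for the authoritative layout:

```
[cluster]
anycast_enabled = false
name = example-cluster
node_ipv4 = 192.168.1.10/24
peer_ipv4 = 192.168.1.11/24
vip_ipv4 = 192.168.1.12/24
```

Note that all three addresses sit in the same /24, matching the requirement that the VIP share a netblock with both node addresses.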

An example cluster.pb file can be found in etc/seesaw/cluster.pb.example - a minimal cluster.pb contains a seesaw_vip entry and two node entries. For each service that you want to load balance, a separate vserver entry is needed, with one or more vserver_entry sections (one per port/proto pair), one or more backends and one or more healthchecks. Further information is available in the protobuf definition - see pb/config/config.proto.

On an upstart based system, running restart seesaw_watchdog will start (or restart) the watchdog process, which will in turn start the other components.

Anycast

Seesaw v2 provides full support for anycast VIPs - that is, it will advertise an anycast VIP when it becomes available and will withdraw the anycast VIP if it becomes unavailable. For this to work the Quagga BGP daemon needs to be installed and configured, with the BGP peers accepting host-specific routes that are advertised from the Seesaw nodes within the anycast range (currently hardcoded as 192.168.255.0/24).

Command Line

Once initial configuration has been performed and the Seesaw components are running, the state of the Seesaw can be viewed and controlled via the Seesaw command line interface. Running seesaw (assuming /usr/bin is in your path) will give you an interactive prompt - type ? for a list of top level commands. A quick summary:

  • config reload - reload the cluster.pb from the current config source.
  • failover - failover between the Seesaw nodes.
  • show vservers - list all vservers configured on this cluster.
  • show vserver <name> - show the current state for the named vserver.

Troubleshooting

A Seesaw runs five components under the watchdog; including the watchdog itself, the process table should show processes for:

  • seesaw_ecu
  • seesaw_engine
  • seesaw_ha
  • seesaw_healthcheck
  • seesaw_ncc
  • seesaw_watchdog

All Seesaw v2 components have their own logs, in addition to the logging provided by the watchdog. If any of the processes are not running, check the corresponding logs in /var/log/seesaw (e.g. seesaw_engine.{log,INFO}).
