A Brief Look at gRPC Client Connection Management

Background

Our client SDK uses gRPC as its communication protocol and periodically (roughly every 120 s) sends a pingServer request to the server.
The server side listens on port 80, e.g. xxx:80.

Problem

We noticed the client kept disconnecting from and reconnecting to the server.
Checking with netstat -antp:

(screenshot: netstat -antp output showing TIME_WAIT and ESTABLISHED connections to the server)

As the screenshot shows, the connection to the server address (highlighted in red) is in TIME_WAIT, followed by a new ESTABLISHED connection to the same server.
A TIME_WAIT state on the client side means the client actively closed the connection.

This conflicted with my understanding: gRPC is supposed to use long-lived connections, so why is the connection torn down every time? Doesn't that effectively turn it into short-lived connections?
And since the client is the one initiating the close, could something be wrong on the client side?

With these questions in mind, I captured packets on the client.
It turns out the client always receives a packet of length 17, after which it sends a FIN and goes through the TCP teardown handshake.
Opening the tcpdump capture in Wireshark shows that this length-17 packet is a GOAWAY frame.

As shown below:

(screenshot: Wireshark decode of the length-17 GOAWAY frame)

This is a "graceful" shutdown mechanism defined by HTTP/2.

The HTTP/2 specification (RFC 7540, Section 6.8) describes the GOAWAY frame.

From what I knew of gRPC, the client resolves the domain name and then maintains a load balancer (lb) over the resolved connections.
So this looked like gRPC's idle-connection management: the pingServer interval is 120 s, but gRPC considers the connection idle in between,
and therefore tells the client to close the idle connection?

To verify this idea, I modified one of the gRPC demos. Since our client uses gRPC's C++ asynchronous API,
I followed the async demo and wrote a simple async_client that pings the server.

The code:

#include <iostream>
#include <memory>
#include <string>

#include <grpcpp/grpcpp.h>
#include <grpc/support/log.h>
#include <thread>
#include <chrono>

#include "gateway.grpc.pb.h"

using grpc::Channel;
using grpc::ClientAsyncResponseReader;
using grpc::ClientContext;
using grpc::CompletionQueue;
using grpc::Status;
using yournamespace::PingReq;
using yournamespace::PingResp;
using yournamespace::srv;

class GatewayClient {
  public:
    explicit GatewayClient(std::shared_ptr<Channel> channel)
            : stub_(srv::NewStub(channel)) {}

    // Assembles the client's payload and sends it to the server.
    //void PingServer(const std::string& user) {
    void PingServer() {
        // Data we are sending to the server.
        PingReq request;
        request.set_peerid("1111111111111113");
        request.set_clientinfo("");

        request.set_capability(1);
        request.add_iplist(4197554190);
        request.set_tcpport(8080);
        request.set_udpport(8080);
        request.set_upnpip(4197554190);
        request.set_upnpport(8080);
        request.set_connectnum(10000);
        request.set_downloadingspeed(100);
        request.set_uploadingspeed(10);
        request.set_maxdownloadspeed(0);
        request.set_maxuploadspeed(0);

        // Call object to store rpc data
        AsyncClientCall* call = new AsyncClientCall;

        // stub_->AsyncPing() creates an RPC object and starts the RPC,
        // returning an instance to store in "call". Because we are using the
        // asynchronous API, we need to hold on to the "call" instance in
        // order to get updates on the ongoing RPC.
        call->response_reader =
            stub_->AsyncPing(&call->context, request, &cq_);

        // AsyncPing() (unlike PrepareAsyncPing()) starts the call immediately,
        // so there is no separate StartCall() step here.

        // Request that, upon completion of the RPC, "reply" be updated with the
        // server's response; "status" with the indication of whether the operation
        // was successful. Tag the request with the memory address of the call object.
        call->response_reader->Finish(&call->reply, &call->status, (void*)call);

    }

    // Loop while listening for completed responses.
    // Prints out the response from the server.
    void AsyncCompleteRpc() {
        void* got_tag;
        bool ok = false;

        // Block until the next result is available in the completion queue "cq".
        while (cq_.Next(&got_tag, &ok)) {
            // The tag in this example is the memory location of the call object
            AsyncClientCall* call = static_cast<AsyncClientCall*>(got_tag);

            // Verify that the request was completed successfully. Note that "ok"
            // corresponds solely to the request for updates introduced by Finish().
            GPR_ASSERT(ok);

            if (call->status.ok())
                std::cout << "xNetClient received: " << call->reply.code()
                          << "  task:" << call->reply.tasks_size()
                          << "  pinginterval:" << call->reply.pinginterval() << std::endl;
            else
                std::cout << "RPC failed: status = " << call->status.error_code()
                          << " (" << call->status.error_message() << ")" << std::endl;

            // Once we're complete, deallocate the call object.
            delete call;
        }
    }

  private:

    // struct for keeping state and data information
    struct AsyncClientCall {
        // Container for the data we expect from the server.
        PingResp reply;

        // Context for the client. It could be used to convey extra information to
        // the server and/or tweak certain RPC behaviors.
        ClientContext context;

        // Storage for the status of the RPC upon completion.
        Status status;


        std::unique_ptr<ClientAsyncResponseReader<PingResp>> response_reader;
    };

    // Out of the passed in Channel comes the stub, stored here, our view of the
    // server's exposed services.
    std::unique_ptr<srv::Stub> stub_;

    // The producer-consumer queue we use to communicate asynchronously with the
    // gRPC runtime.
    CompletionQueue cq_;
};

int main(int argc, char** argv) {

    // Instantiate the client. It requires a channel, out of which the actual
    // RPCs are created. This channel models a connection to the endpoint given
    // on the command line. We indicate that the channel isn't authenticated
    // (use of InsecureChannelCredentials()).

    if (argc < 2) {
        std::cout << "usage: " << argv[0] << " domain:port" << std::endl;
        std::cout << "eg: " << argv[0] << " gw.xnet.xcloud.sandai.net:80" << std::endl;
        return 0;
    }

    GatewayClient xNetClient(grpc::CreateChannel( argv[1], grpc::InsecureChannelCredentials()));

    // Spawn reader thread that loops indefinitely
    std::thread thread_ = std::thread(&GatewayClient::AsyncCompleteRpc, &xNetClient);

    for (int i = 0; i < 1000; i++) {
        xNetClient.PingServer();  // The actual RPC call!
        std::this_thread::sleep_for(std::chrono::seconds(120));
    }

    std::cout << "Press control-c to quit" << std::endl << std::endl;
    thread_.join();  //blocks forever

    return 0;
}

What came next was simple: run it and watch with netstat -natp. The behavior reproduced: async_client also disconnects and reconnects.
Debugging further, when I changed the send interval to 10 s the connection stayed up; with intervals longer than 10 s, the connection was almost always torn down.
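Separately, if the goal is to keep the channel alive across a 120 s ping interval, the client can enable HTTP/2 keepalive pings through channel arguments when creating the channel. A sketch under that assumption (these are standard gRPC channel arguments; whether the server tolerates pings without active calls depends on its own settings, so this is not guaranteed to help against an aggressive server):

```cpp
#include <grpcpp/grpcpp.h>
#include <memory>
#include <string>

// Sketch: create a channel with HTTP/2 keepalive enabled, so the transport
// sends pings instead of letting the connection go idle between RPCs.
std::shared_ptr<grpc::Channel> MakeKeepaliveChannel(const std::string& target) {
    grpc::ChannelArguments args;
    args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 30 * 1000);      // ping every 30 s
    args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 5 * 1000);    // wait 5 s for the ping ack
    args.SetInt(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1); // ping even with no RPC in flight
    return grpc::CreateCustomChannel(target, grpc::InsecureChannelCredentials(), args);
}
```

The channel returned here could be passed to the GatewayClient constructor in place of the plain grpc::CreateChannel() call.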

Summary

To sum up: the way gRPC manages connections here is that, by default, if no data is sent for more than 10 s, gRPC treats the connection as idle. The server sends the client a GOAWAY frame; on receiving it, the client actively closes the connection, and a new connection is established the next time it needs to send.

I don't yet know whether there is a configuration option to change this value; I'm still not very familiar with gRPC's internals, so that is something to dig into later.
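One likely candidate is the server-side channel argument GRPC_ARG_MAX_CONNECTION_IDLE_MS, which controls how long a connection may sit idle before the server sends GOAWAY. A sketch of a server raising it (the service registration is elided since our .proto is not part of this post; this is an assumption about where the 10 s threshold comes from, not something I have verified against our server's config):

```cpp
#include <grpcpp/grpcpp.h>
#include <memory>

// Sketch: let connections idle for up to 3 minutes before sending GOAWAY,
// which would comfortably cover a 120 s pingServer interval.
void RunServer() {
    grpc::ServerBuilder builder;
    builder.AddListeningPort("0.0.0.0:80", grpc::InsecureServerCredentials());
    builder.AddChannelArgument(GRPC_ARG_MAX_CONNECTION_IDLE_MS, 180 * 1000);
    // builder.RegisterService(&service);  // register the srv implementation here
    std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
    server->Wait();
}
```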
