I have recently been testing OpenStack control-node high availability (three controllers). When one of the control nodes is shut down, nova service-list shows every nova service as down, and the nova-compute log is full of errors like this:
2016-11-08 03:46:23.887 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.275 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
The exception being raised was traced to oslo_messaging/_drivers/impl_rabbit.py:
def _heartbeat_thread_job(self):
    """Thread that maintains inactive connections
    """
    while not self._heartbeat_exit_event.is_set():
        with self._connection_lock.for_heartbeat():

            recoverable_errors = (
                self.connection.recoverable_channel_errors +
                self.connection.recoverable_connection_errors)

            try:
                try:
                    self._heartbeat_check()
                    # NOTE(sileht): We need to drain event to receive
                    # heartbeat from the broker but don't hold the
                    # connection too much times. In amqpdriver a connection
                    # is used exclusivly for read or for write, so we have
                    # to do this for connection used for write drain_events
                    # already do that for other connection
                    try:
                        self.connection.drain_events(timeout=0.001)
                    except socket.timeout:
                        pass
                except recoverable_errors as exc:
                    LOG.info(_LI("A recoverable connection/channel error "
                                 "occurred, trying to reconnect: %s"), exc)
                    self.ensure_connection()
            except Exception:
                LOG.warning(_LW("Unexpected error during heartbeart "
                                "thread processing, retrying..."))
                LOG.debug('Exception', exc_info=True)

        self._heartbeat_exit_event.wait(
            timeout=self._heartbeat_wait_timeout)
    self._heartbeat_exit_event.clear()
The heartbeat check exists precisely to detect whether the connection between a component service and the rabbitmq server is still alive, and the heartbeat_check task in oslo_messaging runs in the background from the moment the service starts. Shutting down a control node also shuts down one rabbitmq server node. The thread then just stays in this loop, repeatedly raising the exceptions caught as recoverable_errors, and the while loop only exits once self._heartbeat_exit_event.is_set() becomes true. Arguably there should be some kind of retry limit or timeout here, so that the thread does not keep spinning in the loop and only recover several minutes later.
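As an illustration only (not the actual oslo.messaging implementation), the sketch below shows the kind of bounded-retry loop I have in mind: an exit-event-driven heartbeat loop that gives up after a fixed number of consecutive recoverable failures instead of reconnecting forever. check_connection, reconnect, wait_timeout and max_retries are hypothetical names standing in for _heartbeat_check(), ensure_connection() and the related settings.

import threading

# Minimal sketch, assuming placeholder callables are supplied by the caller.
def heartbeat_loop(exit_event, check_connection, reconnect,
                   wait_timeout=15.0, max_retries=2):
    failures = 0
    while not exit_event.is_set():
        try:
            check_connection()      # raises on a dead broker connection
            failures = 0            # connection healthy again, reset counter
        except OSError:             # e.g. [Errno 32] Broken pipe
            failures += 1
            if failures > max_retries:
                # Give up instead of looping indefinitely; the caller can
                # then fail over to another rabbitmq node.
                break
            reconnect()
        exit_event.wait(timeout=wait_timeout)

# Example wiring with dummy callables:
# stop = threading.Event()
# heartbeat_loop(stop, check_connection=lambda: None, reconnect=lambda: None)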
Today I set up the three-controller HA environment in virtual machines and added the following parameters to nova.conf:
[oslo_messaging_rabbit]
rabbit_max_retries = 2             # maximum number of reconnection attempts
heartbeat_timeout_threshold = 0    # disable the heartbeat check
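To double-check that the service actually picks up these values, one option is to read them back with oslo.config. The snippet below is only a verification sketch: the option names match the nova.conf entries above, but the defaults shown are illustrative and the config path /etc/nova/nova.conf is assumed.

from oslo_config import cfg

# Verification sketch: register the two options and read them back from
# nova.conf. The defaults below are placeholders for illustration only.
opts = [
    cfg.IntOpt('rabbit_max_retries', default=0),
    cfg.IntOpt('heartbeat_timeout_threshold', default=60),
]

conf = cfg.ConfigOpts()
conf.register_opts(opts, group='oslo_messaging_rabbit')
conf(['--config-file', '/etc/nova/nova.conf'])

print(conf.oslo_messaging_rabbit.rabbit_max_retries)           # expect 2
print(conf.oslo_messaging_rabbit.heartbeat_timeout_threshold)  # expect 0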
In testing, nova-compute no longer keeps throwing the exceptions caught as recoverable_errors, and nova service-list no longer shows all services as down.
This still needs to be verified on physical machines......