I wrote a crawler that uses an asyncio event loop: I created a new thread to call run_forever(), and ran the whole thing in Docker. Later the program hit an exception and the main thread died, yet no error was reported at all. It took me a long time to track this down.
If you start a new thread that runs an ordinary infinite loop, and the main thread then exits with an error, the error is reported, even though the child thread keeps running.
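That behavior can be reproduced in isolation. The sketch below (my own minimal example, not from the original post) launches a child Python process whose main thread raises while a non-daemon worker thread is still alive, and checks that the traceback still reaches stderr:

```python
# Minimal sketch: a non-daemon child thread keeps the process alive for a
# moment, but the main thread's traceback is still printed to stderr.
import subprocess
import sys
import textwrap

child_src = textwrap.dedent("""
    import threading, time

    def busy():
        time.sleep(0.5)  # stand-in for a long-running loop, kept short for the demo

    threading.Thread(target=busy).start()
    raise TimeoutError('boom')  # main thread dies, child keeps running
""")

result = subprocess.run([sys.executable, "-c", child_src],
                        capture_output=True, text=True, timeout=30)
print("TimeoutError" in result.stderr)  # → True: the traceback was printed
```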
If instead the new thread runs event_loop.run_forever() and the main thread exits with an exception, there is no error message at all, and the child thread likewise keeps running.
I searched for a long time and still don't know why; running locally, everything was fine. The test program is as follows:
```python
import asyncio
import logging
from threading import Thread

logging.basicConfig(level=logging.INFO,
                    format='[%(asctime)s] - %(levelname)s in %(filename)s: %(message)s')
logger = logging.getLogger(__name__)


def start_loop(event_loop):
    """start run_forever"""
    asyncio.set_event_loop(event_loop)
    event_loop.run_forever()


def get_event_loop():
    """new and return event_loop"""
    event_loop = asyncio.new_event_loop()
    t0 = Thread(target=start_loop, args=(event_loop,))
    t0.start()
    return event_loop


if __name__ == '__main__':
    loop = get_event_loop()
    logger.info('make error')
    raise TimeoutError('sfasf')

"""
[2019-04-16 13:40:46,101] - INFO in docker_test.py: make error
Traceback (most recent call last):
  File "D:/Code/python/concurrent_crawler/test/docker_test/docker_test.py", line 38, in <module>
    raise TimeoutError('sfasf')
TimeoutError: sfasf
"""
```
In Docker, however, there was no error output at all. The final fix was as follows:
```python
import asyncio
import logging
from threading import Thread

logging.basicConfig(level=logging.INFO,
                    format='[%(asctime)s] - %(levelname)s in %(filename)s: %(message)s')
logger = logging.getLogger(__name__)


def start_loop(event_loop):
    """start run_forever"""
    asyncio.set_event_loop(event_loop)
    event_loop.run_forever()


def get_event_loop():
    """new and return event_loop"""
    event_loop = asyncio.new_event_loop()
    t0 = Thread(target=start_loop, args=(event_loop,))
    t0.setDaemon(True)  # the child thread ends when the main thread ends
    t0.start()
    return event_loop


if __name__ == '__main__':
    loop = get_event_loop()
    logger.info('make error')
    raise TimeoutError('sfasf')
```
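The post never shows how work is actually handed to the background loop. A common pattern for this setup (an assumed usage sketch, not from the original post) is asyncio.run_coroutine_threadsafe, which schedules a coroutine onto the loop thread and returns a concurrent.futures.Future that the calling thread can block on:

```python
# Sketch: submit a coroutine to a background event loop from the main thread.
import asyncio
from threading import Thread

def start_loop(event_loop):
    asyncio.set_event_loop(event_loop)
    event_loop.run_forever()

loop = asyncio.new_event_loop()
t = Thread(target=start_loop, args=(loop,), daemon=True)
t.start()

async def add(a, b):
    # hypothetical coroutine for illustration
    await asyncio.sleep(0.1)
    return a + b

# Schedule the coroutine on the loop thread; block on the result here.
future = asyncio.run_coroutine_threadsafe(add(1, 2), loop)
print(future.result(timeout=5))  # → 3
loop.call_soon_threadsafe(loop.stop)  # shut the loop down cleanly
```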
By calling the thread's setDaemon(True) method, the child thread ends together with the main thread, and only then does the error actually get printed! (In modern Python, setting `t0.daemon = True` or `Thread(..., daemon=True)` is preferred; `setDaemon()` is deprecated since Python 3.10.)
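The effect of the fix can be checked directly. The sketch below (my own demo, not from the original post) runs the daemonized version in a child process: because the interpreter no longer waits for the loop thread, the process exits and the traceback is flushed. Without the daemon line, the same child process would hang in run_forever() forever.

```python
# Sketch of the fix's effect: with a daemon loop thread the process exits
# and the traceback reaches stderr instead of being stuck behind run_forever().
import subprocess
import sys
import textwrap

child_src = textwrap.dedent("""
    import asyncio, threading

    def start_loop(loop):
        asyncio.set_event_loop(loop)
        loop.run_forever()

    loop = asyncio.new_event_loop()
    t = threading.Thread(target=start_loop, args=(loop,))
    t.daemon = True  # without this line the process would never exit
    t.start()
    raise TimeoutError('sfasf')
""")

result = subprocess.run([sys.executable, "-c", child_src],
                        capture_output=True, text=True, timeout=30)
print("TimeoutError: sfasf" in result.stderr)  # → True
print(result.returncode != 0)                  # → True: the process really exited
```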
I have also posted this question on Stack Overflow.