How to read and process large files in Python

I. Introduction

When we deal with small text files we usually just call .read(), .readline(), or .readlines(). But once a file reaches 10 GB or more, these methods load the entire contents into memory at once and memory simply blows up.
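A rough illustration of why (the file name 'huge.log' is only a hypothetical placeholder):

# Both of these materialize the entire file in memory at once:
content = open('huge.log').read()        # one giant string
lines = open('huge.log').readlines()     # one giant list of line strings
# For a 10 GB file, either call needs on the order of 10 GB of RAM,
# which is exactly what the lazy approaches below avoid.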

II. Solutions

1. Faced with a file this big, the first instinct is simply to split it up and read it in small chunks:

def read_in_chunks(filePath, chunk_size=1024 * 1024):
    """
    Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1 MB.
    You can set your own chunk size.
    """
    # Use 'with' so the file is closed once the generator is exhausted.
    with open(filePath) as file_object:
        while True:
            chunk_data = file_object.read(chunk_size)
            if not chunk_data:
                break
            yield chunk_data


if __name__ == "__main__":
    filePath = './path/filename'
    for chunk in read_in_chunks(filePath):
        process(chunk)  # <do something with chunk>
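One caveat with fixed-size chunks: a chunk will usually end in the middle of a line, so line-oriented processing has to carry the partial tail over to the next chunk. A minimal sketch of that pattern, assuming all we want is a line count (the path is again just a placeholder):

def count_lines_in_chunks(file_path, chunk_size=1024 * 1024):
    """Count lines chunk by chunk, carrying partial lines across chunk boundaries."""
    total = 0
    leftover = ''
    with open(file_path) as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            # Prepend the tail left over from the previous chunk, then keep
            # the new trailing partial line for the next round.
            lines = (leftover + chunk).split('\n')
            leftover = lines.pop()
            total += len(lines)
    if leftover:          # the file did not end with a newline
        total += 1
    return total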

2. Use with open() and iterate over the file line by line

# If the file is line based
with open(...) as f:
    for line in f:  # the file object yields one line at a time, lazily
        process(line)  # <do something with line>
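Iterating over the file object this way keeps memory use flat no matter how large the file is, because lines are read on demand. A concrete sketch, assuming a hypothetical 'sum.log' that holds one number per line:

# Sum one number per line without ever loading the whole file.
total = 0
with open('sum.log') as f:    # the file is closed automatically when the block ends
    for line in f:            # lines are produced lazily, one at a time
        line = line.strip()
        if line:              # skip blank lines
            total += float(line)
print(total)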

3. Process the file with fileinput

import fileinput

for line in fileinput.input(['sum.log']):
    print(line)
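fileinput also reads lazily, and it can chain several files into a single stream of lines (or fall back to stdin when no names are given). A small sketch with hypothetical file names:

import fileinput

# Treat several log files as one continuous sequence of lines.
for line in fileinput.input(['access.log', 'error.log']):
    # filename() and filelineno() report where the current line came from.
    print(fileinput.filename(), fileinput.filelineno(), line.rstrip())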

References:

http://chenqx.github.io/2014/10/29/Python-fastest-way-to-read-a-large-file/
http://www.zhidaow.com/post/python-read-big-file
