I grabbed a huge JSON file (a mere 800 MB) from a group chat and needed to import it into MongoDB, and then I ran into a problem:
```
$ mongoimport --db weibo --collection data --file test.json
2018-05-09T16:10:22.357+0800    connected to: localhost
2018-05-09T16:10:22.360+0800    Failed: error processing document #2: invalid character ',' looking for beginning of value
2018-05-09T16:10:22.360+0800    imported 0 documents
```
To start with, I went to 菜鳥工具1 to check whether my JSON was well-formed; the JSON format turned out to be perfectly fine.
I then figured it was an encoding problem, maybe something specific to the Mac, since there is a Stack Overflow question2 discussing this. The answers there blamed characters that UTF-8 does not support, but the cases they ran into all involved `\`. I went and installed MongoDB on a Windows server anyway, got exactly the same error, and concluded that mine was probably not a character problem.
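As an aside, one quick way to rule invalid UTF-8 in or out is to round-trip the file through iconv: it stays silent on clean input and reports the byte position of the first bad sequence otherwise (this is an extra check added here, not a step from the original troubleshooting):

```
$ iconv -f UTF-8 -t UTF-8 test.json > /dev/null
```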
In that thread, JP Lew ran into a `,` problem as well, and the trick mentioned there is quite nice: use the `-vvvvv` flag to pin down where things go wrong.
```
$ mongoimport --db weibo --collection data --file test.json -vvvvv
2018-05-09T16:30:09.538+0800    using 4 decoding workers
2018-05-09T16:30:09.539+0800    using 1 insert workers
2018-05-09T16:30:09.539+0800    will listen for SIGTERM, SIGINT, and SIGKILL
2018-05-09T16:30:09.542+0800    filesize: 823127226 bytes
2018-05-09T16:30:09.542+0800    using fields:
2018-05-09T16:30:09.552+0800    connected to: localhost
2018-05-09T16:30:09.552+0800    ns: weibo.data
2018-05-09T16:30:09.552+0800    connected to node type: standalone
2018-05-09T16:30:09.553+0800    standalone server: setting write concern w to 1
2018-05-09T16:30:09.553+0800    using write concern: w='1', j=false, fsync=false, wtimeout=0
2018-05-09T16:30:09.553+0800    standalone server: setting write concern w to 1
2018-05-09T16:30:09.553+0800    using write concern: w='1', j=false, fsync=false, wtimeout=0
2018-05-09T16:30:09.555+0800    Failed: error processing document #2: invalid character ',' looking for beginning of value
2018-05-09T16:30:09.555+0800    imported 0 documents
```
Hmm, still the same error, so my problem must be different from JP's too. What's more, in my case it apparently goes wrong at the very first JSON document!
Since most of the stuff in the file was useless, I thought I would just pick out the few lines I needed, but the result was tragic, so I had to come up with a proper solution instead.
For the record, extracting individual lines with cat + grep3:
```
[root@localhost test]# cat test.txt
hnlinux
peida.cnblogs.com
ubuntu
ubuntu linux
redhat
Redhat
linuxmint
[root@localhost test]# cat test2.txt
linux
Redhat
[root@localhost test]# cat test.txt | grep -f test2.txt
hnlinux
ubuntu linux
Redhat
linuxmint
```
In the end, after one experiment after another, I finally found the problem:
```
{
    ...
},
{
    ...
},
...
```
Damn it, there is an extra comma between every two JSON documents! mongoimport reads the file as one document after another, so after finishing a document it expects the next value to begin, sees `,` instead, and fails with exactly the error above. Time to write a script to strip those commas...
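For contrast, the shape that imports cleanly is the same documents with the separating commas removed (wrapping the whole file in `[ ... ]` and passing `--jsonArray` should also make it valid input, at the cost of the simple streaming import):

```
{
    ...
}
{
    ...
}
...
```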
```python
import os
import re
import sys

args = sys.argv
# Expect exactly two distinct file names: source and destination.
if len(args) != 3 or args[1] == args[2]:
    raise Warning()

abs_path = os.path.abspath('.')
org_path = os.path.join(abs_path, args[1])
new_path = os.path.join(abs_path, args[2])

# A document boundary in the broken file is a line starting with "},".
re_com = re.compile(r'^},')

fr = fw = None  # so the finally block is safe even if open() fails
try:
    fr = open(org_path, 'r')
    fw = open(new_path, 'w')
    for line in fr:
        # Drop the trailing comma on the closing brace between documents.
        if re_com.match(line):
            line = '}\n'
        fw.write(line)
except IOError as e:
    print(e)
finally:
    if fr:
        fr.close()
    if fw:
        fw.close()
```
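A usage sketch, assuming the script is saved as fix_comma.py (the name is mine; the original post does not give one):

```
$ python fix_comma.py test.json new.json
```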
This Pythonic way of handling a large file comes from https://www.cnblogs.com/wulaa... : iterating over the file object yields one line at a time, so the 800 MB file never has to sit in memory all at once:
```python
with open(filename, 'r') as file:
    for line in file:
        ....
```
OK, let's import the new file~
```
$ mongoimport --db weibo --collection data --file new.json
2018-05-09T15:58:36.211+0800    connected to: localhost
2018-05-09T15:58:39.194+0800    [##......................] weibo.data  77.5MB/785MB (9.9%)
2018-05-09T15:58:42.195+0800    [####....................] weibo.data  160MB/785MB (20.4%)
2018-05-09T15:58:45.195+0800    [#######.................] weibo.data  243MB/785MB (31.0%)
2018-05-09T15:58:48.203+0800    [#########...............] weibo.data  323MB/785MB (41.1%)
2018-05-09T15:58:51.197+0800    [############............] weibo.data  402MB/785MB (51.2%)
2018-05-09T15:58:54.195+0800    [##############..........] weibo.data  478MB/785MB (60.9%)
2018-05-09T15:58:57.196+0800    [#################.......] weibo.data  560MB/785MB (71.4%)
2018-05-09T15:59:00.195+0800    [###################.....] weibo.data  642MB/785MB (81.8%)
2018-05-09T15:59:03.196+0800    [######################..] weibo.data  722MB/785MB (92.0%)
2018-05-09T15:59:05.521+0800    [########################] weibo.data  785MB/785MB (100.0%)
2018-05-09T15:59:05.522+0800    imported 95208 documents
```
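As a quick sanity check (an extra step, not part of the original run), counting the documents from the mongo shell should match the 95208 reported above:

```
$ mongo weibo --eval "db.data.count()"
```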
Bingo!
Here are two more threads discussing this problem that I did not get around to reading; I will leave them for anyone who needs them (fine, mainly because they were too hard a slog):