Training Chinese Word Vectors

1. For English, the pretrained GloVe word vectors work very well: https://nlp.stanford.edu/projects/glove/

To use them with gensim, first insert a header line at the top of the file giving the vocabulary size and the vector dimension:

# sed -i '1i 400000 300' glove.6B.300d.txt
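The same header step can be done without sed. A minimal Python sketch, using a hypothetical tiny two-word vector file (`tiny_glove.txt`) in place of the real GloVe download:

```python
# build a hypothetical tiny GloVe-style file: "word v1 v2 v3" per line, no header
lines = ['apple 0.1 0.2 0.3', 'banana 0.4 0.5 0.6']
with open('tiny_glove.txt', 'w') as f:
    f.write('\n'.join(lines) + '\n')

# prepend "<vocab_size> <vector_dim>" so gensim's word2vec loader accepts the file
with open('tiny_glove.txt') as f:
    body = f.read()
vocab_size = len(lines)
vector_dim = len(lines[0].split()) - 1  # subtract 1 for the word itself
with open('tiny_glove.txt', 'w') as f:
    f.write('%d %d\n%s' % (vocab_size, vector_dim, body))
```

For the real 400,000-word, 300-dimensional file the prepended line would be `400000 300`, exactly as in the sed command above.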

from gensim.models.keyedvectors import KeyedVectors

model = KeyedVectors.load_word2vec_format('glove.6B.300d.txt', binary=False)

# get the most similar words
for w, s in model.most_similar('apple', topn=5):
    print(w, s)

# get a word's vector
print(model['apple'])

 

2. I tried quite a few Chinese word vectors found online and none were satisfactory, so just train your own; it is not complicated.

2.1 Build a Chinese corpus. Recommended download: http://www.sogou.com/labs/resource/list_news.php

# Sohu News, 2.1 GB (extracts to news_sohusite_xml.dat)
tar -zxvf news_sohusite_xml.full.tar.gz
cat news_sohusite_xml.dat | iconv -f gb18030 -t utf-8 | grep "<content>" > news_sohusite.txt
sed -i 's/<content>//g' news_sohusite.txt
sed -i 's/<\/content>//g' news_sohusite.txt
python -m jieba -d ' ' news_sohusite.txt > news_sohusite_cutword.txt

# Full-web news, 1.8 GB (extracts to news_tensite_xml.dat)
tar -zxvf news_tensites_xml.full.tar.gz
cat news_tensite_xml.dat | iconv -f gb18030 -t utf-8 | grep "<content>" > news_tensite.txt
sed -i 's/<content>//g' news_tensite.txt
sed -i 's/<\/content>//g' news_tensite.txt
python -m jieba -d ' ' news_tensite.txt > news_tensite_cutword.txt
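The grep/sed steps above just keep the `<content>…</content>` lines and strip the tags. The same extraction can be sketched in Python (the sample line below is made up for illustration):

```python
import re

def extract_content(line):
    """Keep only lines carrying article text and strip the
    <content> tags -- mirrors the grep + sed pipeline above."""
    if '<content>' not in line:
        return None
    return re.sub(r'</?content>', '', line).strip()

# made-up sample line from the Sogou XML dump
sample = '<content>搜狐 新聞 正文</content>'
text = extract_content(sample)
```

This per-line approach works because the Sogou dump places each `<content>` element on its own line.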

# other corpora your own business needs, e.g. company profiles
python -m jieba -d ' ' other_entdesc.txt > other_entdesc_cutword.txt

# merge the segmented corpora
cat news_sohusite_cutword.txt news_tensite_cutword.txt other_entdesc_cutword.txt > w2v_chisim_corpus.txt
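The cat merge can also be done (and sanity-checked) in Python. A minimal sketch, with two hypothetical tiny segmented files standing in for the real `*_cutword.txt` files:

```python
# hypothetical tiny segmented corpora standing in for the real cutword files
parts = ['a_cutword.txt', 'b_cutword.txt']
contents = {'a_cutword.txt': '我 愛 北京\n天安門 廣場\n',
            'b_cutword.txt': '機器 學習\n'}
for name in parts:
    with open(name, 'w') as f:
        f.write(contents[name])

# equivalent of `cat a_cutword.txt b_cutword.txt > merged_corpus.txt`
with open('merged_corpus.txt', 'w') as out:
    for name in parts:
        with open(name) as f:
            out.write(f.read())
```

The merged file keeps one sentence per line, which is exactly the layout gensim's LineSentence expects in the next step.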

2.2 Train with the gensim library

#!/usr/bin/env python
from gensim.models.word2vec import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence('w2v_chisim_corpus.txt')
# sg=0 trains with CBOW; sg=1 trains with skip-gram, which handles rare words better
model = Word2Vec(sentences, size=300, window=8, min_count=10, sg=1, workers=4)
model.save('w2v_chisim.300d.model')  # save() writes gensim's own format, not plain text
for w, s in model.most_similar(u'蘋果'):
    print(w, s)
for w, s in model.most_similar(u'中國'):
    print(w, s)
for w, s in model.most_similar(u'中山大學'):
    print(w, s)
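Under the hood, most_similar ranks words by the cosine similarity between their vectors. A self-contained sketch of that computation, using made-up 2-dimensional toy vectors in place of the trained model:

```python
import math

def cosine(u, v):
    """Cosine similarity: dot product divided by the product of norms."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# made-up toy vectors; the real ones come from the trained model
vecs = {'apple': [1.0, 0.2], 'orange': [0.9, 0.3], 'car': [0.1, -1.0]}

def most_similar(word, topn=2):
    """Rank every other word by cosine similarity to `word`."""
    q = vecs[word]
    ranked = sorted(((w, cosine(q, v)) for w, v in vecs.items() if w != word),
                    key=lambda x: x[1], reverse=True)
    return ranked[:topn]
```

Here 'orange' ranks above 'car' for 'apple' because its vector points in nearly the same direction; gensim does the same ranking, just over the full vocabulary with optimized matrix operations.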

 

See, isn't that simple? It's your show time now, good luck!
