We know that in the physical world, the highest dimensionality we humans can express and perceive is three; vectors of more than three dimensions exist only in mathematics and cannot be observed directly in the physical world. Yet in most machine learning projects the data vectors have far more than two or three dimensions, so we cannot plot them directly in a coordinate chart. To get a better grasp of these high-dimensional data vectors, we need a technique that projects them into a 2- or 3-dimensional coordinate system. That is the topic of this article: multidimensional scaling (MDS).
No matter how many dimensions the data vectors themselves have, we can always measure the distance between every pair of data points by some criterion (Euclidean distance, Pearson correlation) and obtain a single scalar.
Suppose we have 4 m-dimensional data vectors; by looping over every pair of them we can compute all of their pairwise distances.
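For example, a minimal sketch of this step with NumPy/SciPy (the 4 x m matrix here is random placeholder data):

# -*- coding: utf-8 -*-
import numpy as np
from scipy.spatial.distance import pdist, squareform

data = np.random.rand(4, 10)                            # 4 items, m = 10 dimensions
eucl = squareform(pdist(data, metric='euclidean'))      # 4x4 Euclidean distance matrix
corr = squareform(pdist(data, metric='correlation'))    # 4x4 matrix of 1 - Pearson correlation
print(eucl)
print(corr)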
All data items are then placed at random positions on a two-dimensional chart, as shown in the figure below.
In this initial state, because the placement is random, the current distance between every pair of items is simply whatever the Euclidean distance between their randomly chosen positions happens to be, as shown in the figure below.
Obviously such a layout does not reflect the real distances between the items. So, for every pair of items, we compare their target distance with their current distance and compute an error value; in proportion to that error, we then move each item's position by an appropriate amount.
The figure below shows the forces exerted on data item A.
The movement of each node is the combined effect of all the pushes and pulls that the other nodes exert on it. Every time a node moves, the gap between its current distances and its target distances shrinks a little. This process is repeated many times, until moving the nodes can no longer reduce the overall error.
As one might expect, after many rounds of iteration the inter-node distances will not match the target distances perfectly, but the layout does settle into an equilibrium in which the total difference between current and target distances over all pairs is as small as the procedure can make it (in practice a local minimum of the total error).
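In symbols (notation mine, matching the variable names in the code below), with target distance d_{jk}, current 2-D distance \hat{d}_{jk} and learning rate rate, each iteration computes

e_{jk} = \frac{\hat{d}_{jk} - d_{jk}}{d_{jk}}, \qquad grad_k = \sum_{j \neq k} e_{jk} \cdot \frac{loc_k - loc_j}{\hat{d}_{jk}}, \qquad loc_k \leftarrow loc_k - rate \cdot grad_k

so every point is pushed away from, or pulled towards, every other point in proportion to the relative error of their current distance.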
from PIL import Image, ImageDraw
from math import sqrt
import random


def readfile(filename):
    lines = [line for line in open(filename)]

    # First line is the column titles
    colnames = lines[0].strip().split('\t')[1:]
    rownames = []
    data = []
    for line in lines[1:]:
        print "line: ", line
        p = line.strip().split('\t')
        # First column in each row is the rowname
        rownames.append(p[0])
        # The data for this row is the remainder of the row
        data.append([float(x) for x in p[1:]])
    return rownames, colnames, data


def pearson(v1, v2):
    # Correlation-based distance (1 - Pearson r) from the same book chapter; added here so
    # that scaledown's default `distance` argument is defined and the script runs stand-alone
    sum1, sum2 = sum(v1), sum(v2)
    sum1Sq = sum([pow(v, 2) for v in v1])
    sum2Sq = sum([pow(v, 2) for v in v2])
    pSum = sum([v1[i] * v2[i] for i in range(len(v1))])
    num = pSum - (sum1 * sum2 / len(v1))
    den = sqrt((sum1Sq - pow(sum1, 2) / len(v1)) * (sum2Sq - pow(sum2, 2) / len(v1)))
    if den == 0: return 0
    return 1.0 - num / den


def scaledown(data, distance=pearson, rate=0.01):
    n = len(data)

    # The real distances between every pair of items
    realdist = [[distance(data[i], data[j]) for j in range(n)] for i in range(0, n)]
    print "realdist: ", realdist

    # Randomly initialize the starting points of the locations in 2D
    loc = [[random.random(), random.random()] for i in range(n)]
    fakedist = [[0.0 for j in range(n)] for i in range(n)]

    lasterror = None
    for m in range(0, 1000):
        # Find projected distances
        for i in range(n):
            for j in range(n):
                fakedist[i][j] = sqrt(sum([pow(loc[i][x] - loc[j][x], 2)
                                           for x in range(len(loc[i]))]))

        # Move points
        grad = [[0.0, 0.0] for i in range(n)]
        totalerror = 0
        for k in range(n):
            for j in range(n):
                if j == k: continue
                # The error is percent difference between the distances
                errorterm = (fakedist[j][k] - realdist[j][k]) / realdist[j][k]

                # Each point needs to be moved away from or towards the other
                # point in proportion to how much error it has
                grad[k][0] += ((loc[k][0] - loc[j][0]) / fakedist[j][k]) * errorterm
                grad[k][1] += ((loc[k][1] - loc[j][1]) / fakedist[j][k]) * errorterm

                # Keep track of the total error
                totalerror += abs(errorterm)
        print totalerror

        # If the answer got worse by moving the points, we are done
        if lasterror and lasterror < totalerror: break
        lasterror = totalerror

        # Move each of the points by the learning rate times the gradient
        for k in range(n):
            loc[k][0] -= rate * grad[k][0]
            loc[k][1] -= rate * grad[k][1]

    return loc


def draw2d(data, labels, jpeg='mds2d.jpg'):
    img = Image.new('RGB', (2000, 2000), (255, 255, 255))
    draw = ImageDraw.Draw(img)
    for i in range(len(data)):
        x = (data[i][0] + 0.5) * 1000
        y = (data[i][1] + 0.5) * 1000
        draw.text((x, y), labels[i], (0, 0, 0))
    img.save(jpeg, 'JPEG')


if __name__ == '__main__':
    blognames, words, data = readfile('blogdata.txt')
    coords = scaledown(data)
    draw2d(coords, blognames, jpeg='blogs2d.jpg')
The figure above shows the result of running the multidimensional scaling algorithm. Although the cluster layout is not as intuitive as a dendrogram, we can still pick out some topical groupings.
Relevant Link:
《集體智慧編程》(Programming Collective Intelligence), Toby Segaran - Chapter 3
In this part we explore the internal topology of a neural network (its hidden layers) from the angle of visualizing its spatial structure.
We start probing the question with the simplest possible setup: 2-dimensional input samples (the input layer is 2-dimensional) fed into a network that contains only an input layer and an output layer, each with just 2 neurons (x/y corresponding to the coordinates of the input and output points).
The figure is a two-dimensional plane (the input layer is a set of 2-D points). Its two curves represent two classes, and the points on the curves make up our input dataset; we want the neural network to separate the two classes correctly, i.e. to model the classification.
Because our network has only an input layer and an output layer, each with 2 neurons standing for x and y, all it can do is try its best to "find" a straight line that separates the classes, as shown in the figure below.
But since the relationship between the input layer and the output layer is linear, this clearly cannot yield a "reasonably good" result, because our input dataset in the figure is not linearly separable in this two-dimensional space.
To solve this we need a transformation that "rotates" and "stretches" the original space. We add a 2-dimensional hidden layer to the network; through the hidden layer, the space the input vectors live in is stretched and rotated, and in the new space it becomes easy to find a linear decision boundary for classification, as shown in the figure below.
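A toy sketch of this effect with scikit-learn (my own illustration rather than the code behind the figures; the two interleaved "moons" stand in for the two curved classes):

import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.1, random_state=0)   # two curved, non-linearly-separable classes

# no hidden layer: a single linear decision boundary, which cannot fit the curves well
linear = LogisticRegression().fit(X, y)
print("linear model accuracy: %.3f" % linear.score(X, y))

# one 2-unit tanh hidden layer, i.e. a learned rotate/stretch followed by a linear readout
mlp = MLPClassifier(hidden_layer_sizes=(2,), activation='tanh',
                    solver='lbfgs', max_iter=5000, random_state=0).fit(X, y)
print("with a 2-d hidden layer: %.3f" % mlp.score(X, y))

# the hidden representation itself: h = tanh(x.W1 + b1); in this transformed space the
# two classes can be separated by a straight line
h = np.tanh(X.dot(mlp.coefs_[0]) + mlp.intercepts_[0])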
In this subsection we turn to the familiar MNIST problem. The input layer has one dimension per image pixel, i.e. 784 dimensions, and the hidden layer is a fully connected layer of 100 units. After training for 200 epochs we obtain a set of neuron weights. Using t-SNE visualization we can then probe how the network, by adjusting its weight vectors, gradually fits the true topology of the input data in high-dimensional space.
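A hedged sketch of how such a picture can be produced (my own reconstruction, assuming Keras with a TensorFlow backend plus scikit-learn; it trains only a few epochs and embeds a 2,000-sample subset rather than the full 200-epoch run described above):

import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Input, Dense
from keras.utils import to_categorical
from sklearn.manifold import TSNE

(x_train, y_train), _ = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0

# 784-100-10 fully connected network
inp = Input(shape=(784,))
hidden = Dense(100, activation='relu')(inp)
out = Dense(10, activation='softmax')(hidden)
model = Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(x_train, to_categorical(y_train, 10), epochs=5, batch_size=128)

# extract the 100-d hidden activations and project a subset down to 2-D with t-SNE
encoder = Model(inputs=inp, outputs=hidden)
h = encoder.predict(x_train[:2000])
emb = TSNE(n_components=2).fit_transform(h)
plt.scatter(emb[:, 0], emb[:, 1], c=y_train[:2000], cmap='jet', s=5)
plt.colorbar()
plt.show()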
Looking at the resulting figures, we can see:
Relevant Link:
http://blog.csdn.net/unoboros/article/details/30451213
http://www.cnblogs.com/boostable/p/iage_high_space_sphere.html
Embedding is a corpus-processing technique commonly used in NLP feature engineering; in a word embedding space, every word is represented by a high-dimensional embedding vector.
We can use t-SNE to probe how a word2vec embedding works internally: word2vec embeds each word as a vector in a high-dimensional continuous space in a way that preserves syntactic/semantic regularities, while t-SNE projects those high-dimensional vectors down to 2/3 dimensions with as little distortion as possible.
# -*- coding: utf-8 -*-
from gensim.models.word2vec import Word2Vec
from sklearn.manifold import TSNE
from sklearn.datasets import fetch_20newsgroups
import re
import matplotlib.pyplot as plt


def clean(text):
    """Remove posting header, split by sentences and words, keep only letters"""
    lines = re.split('[?!.:]\s', re.sub('^.*Lines: \d+', '', re.sub('\n', ' ', text)))
    return [re.sub('[^a-zA-Z]', ' ', line).lower().split() for line in lines]


if __name__ == '__main__':
    # download example data (may take a while)
    train = fetch_20newsgroups()
    sentences = [line for text in train.data for line in clean(text)]

    model = Word2Vec(sentences, workers=4, size=100, min_count=50, window=10, sample=1e-3)
    print(model.most_similar('memory'))

    X = model[model.wv.vocab]

    tsne = TSNE(n_components=2)
    X_tsne = tsne.fit_transform(X)

    plt.scatter(X_tsne[:, 0], X_tsne[:, 1])
    plt.show()
As we can see, within the embedding vocabulary the words closest to the computing term 'memory' are 'cpu' and 'cache', which matches their real-world usage.
Zooming in on a local region of the plot,
Relevant Link:
http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/
http://www.iro.umontreal.ca/~lisa/pointeurs/turian-wordrepresentations-acl10.pdf
https://stackoverflow.com/questions/43166762/what-is-relation-between-tsne-and-word2vec
https://stackoverflow.com/questions/40581010/how-to-run-tsne-on-word2vec-created-from-gensim
http://learningaboutdata.blogspot.com/2014/06/plotting-word-embedding-using-tsne-with.html
https://stackoverflow.com/questions/43776572/visualise-word2vec-generated-from-gensim
https://www.quora.com/How-do-I-visualise-word2vec-word-vectors
http://nlp.yvespeirsman.be/blog/visualizing-word-embeddings-with-tsne/
《word2vec_中的數學原理詳解》
http://blog.csdn.net/u014595019/article/details/51884529
http://download.csdn.net/detail/mzg12345678/7988741
https://www.tensorflow.org/versions/r0.12/tutorials/word2vec
The core idea of the Paragraph/Sentence Vector model is to take a paragraph from a document or a longer passage of sentences (often a string such as the article's title, description, or lead-in) and map that paragraph into the word-vector space, thereby obtaining a vector representation of the document/sentences, as shown in the figure below.
Rather than computing a vector directly for a variable-length text, the Paragraph/Sentence Vector model builds on top of the sentence's word vectors: a paragraph extracted from the original sentences (something like an abstract) is treated as an extra "word" (initialized with its own weight vector) and trained in an unsupervised way from its surrounding context; the weight parameters are adjusted by gradient descent plus backpropagation.
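Before looking at the Sent2Vec code below, here is a hedged sketch of the same idea using gensim's built-in Doc2Vec (PV-DM); parameter names follow recent gensim releases (older releases use size= and model.docvecs), and 'corpus.txt' is a placeholder file with one tokenized sentence per line:

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

with open('corpus.txt') as f:
    docs = [TaggedDocument(words=line.split(), tags=[i]) for i, line in enumerate(f)]

# dm=1 selects the distributed-memory (PV-DM) variant: the paragraph vector is trained
# jointly with the word vectors from the surrounding context via SGD + backpropagation
model = Doc2Vec(docs, vector_size=100, window=5, min_count=5, workers=4, dm=1)

print(model.dv[0])                                          # learned 100-d vector of sentence 0
print(model.infer_vector('a new unseen sentence'.split()))  # vector inferred for unseen text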
In this section we look at a visualization of the spatial topology of these paragraph/sentence vectors.
demo.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html

import logging
import sys
import os

from word2vec import Word2Vec, Sent2Vec, LineSentence

logging.basicConfig(format='%(asctime)s : %(threadName)s : %(levelname)s : %(message)s', level=logging.INFO)
logging.info("running %s" % " ".join(sys.argv))

input_file = './modleTrain/test.txt'
# Embedding dimension = 100
# The maximum distance between the current and predicted word within a sentence = 5
# Model = CBOW
# Ignore words with total frequency lower than 5.
model = Word2Vec(LineSentence(input_file), size=100, window=5, sg=0, min_count=5, workers=8)
model.save(input_file + '.model')
# save the word embedding vector vocabulary
model.save_word2vec_format(input_file + '.vec')

sent_file = './modleTrain/sent.txt'
model = Sent2Vec(LineSentence(sent_file), model_file=input_file + '.model')
model.save_sent2vec_format(sent_file + '.vec')

program = os.path.basename(sys.argv[0])
logging.info("finished running %s" % program)
word2vec.py
#!/usr/bin/env python # -*- coding: utf-8 -*- # # Copyright (C) 2013 Radim Rehurek <me@radimrehurek.com> # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html """ Deep learning via word2vec's "skip-gram and CBOW models", using either hierarchical softmax or negative sampling [1]_ [2]_. The training algorithms were originally ported from the C package https://code.google.com/p/word2vec/ and extended with additional functionality. For a blog tutorial on gensim word2vec, with an interactive web app trained on GoogleNews, visit http://radimrehurek.com/2014/02/word2vec-tutorial/ **Install Cython with `pip install cython` to use optimized word2vec training** (70x speedup [3]_). Initialize a model with e.g.:: >>> model = Word2Vec(sentences, size=100, window=5, min_count=5, workers=4) Persist a model to disk with:: >>> model.save(fname) >>> model = Word2Vec.load(fname) # you can continue training with the loaded model! The model can also be instantiated from an existing file on disk in the word2vec C format:: >>> model = Word2Vec.load_word2vec_format('/tmp/vectors.txt', binary=False) # C text format >>> model = Word2Vec.load_word2vec_format('/tmp/vectors.bin', binary=True) # C binary format You can perform various syntactic/semantic NLP word tasks with the model. Some of them are already built-in:: >>> model.most_similar(positive=['woman', 'king'], negative=['man']) [('queen', 0.50882536), ...] >>> model.doesnt_match("breakfast cereal dinner lunch".split()) 'cereal' >>> model.similarity('woman', 'man') 0.73723527 >>> model['computer'] # raw numpy vector of a word array([-0.00449447, -0.00310097, 0.02421786, ...], dtype=float32) and so on. If you're finished training a model (=no more updates, only querying), you can do >>> model.init_sims(replace=True) to trim unneeded model memory = use (much) less RAM. .. [1] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013. .. [2] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of NIPS, 2013. .. [3] Optimizing word2vec in gensim, http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/ """ import logging import sys import os import heapq import time from copy import deepcopy import threading try: from queue import Queue except ImportError: from Queue import Queue from numpy import exp, dot, zeros, outer, random, dtype, get_include, float32 as REAL, \ uint32, seterr, array, uint8, vstack, argsort, fromstring, sqrt, newaxis, ndarray, empty, sum as np_sum # logger = logging.getLogger("gensim.models.word2vec") logger = logging.getLogger("sent2vec") # from gensim import utils, matutils # utility fnc for pickling, common scipy operations etc import utils, matutils # utility fnc for pickling, common scipy operations etc from six import iteritems, itervalues, string_types from six.moves import xrange try: from gensim_addons.models.word2vec_inner import train_sentence_sg, train_sentence_cbow, FAST_VERSION except ImportError: try: # try to compile and use the faster cython version import pyximport models_dir = os.path.dirname(__file__) or os.getcwd() pyximport.install(setup_args={"include_dirs": [models_dir, get_include()]}) from word2vec_inner import train_sentence_sg, train_sentence_cbow, FAST_VERSION except: # failed... 
fall back to plain numpy (20-80x slower training than the above) FAST_VERSION = -1 def train_sentence_sg(model, sentence, alpha, work=None): """ Update skip-gram model by training on a single sentence. The sentence is a list of Vocab objects (or None, where the corresponding word is not in the vocabulary. Called internally from `Word2Vec.train()`. This is the non-optimized, Python version. If you have cython installed, gensim will use the optimized version from word2vec_inner instead. """ if model.negative: # precompute negative labels labels = zeros(model.negative + 1) labels[0] = 1.0 for pos, word in enumerate(sentence): if word is None: continue # OOV word in the input sentence => skip reduced_window = random.randint(model.window) # `b` in the original word2vec code # now go over all words from the (reduced) window, predicting each one in turn start = max(0, pos - model.window + reduced_window) for pos2, word2 in enumerate(sentence[start: pos + model.window + 1 - reduced_window], start): # don't train on OOV words and on the `word` itself if word2 and not (pos2 == pos): l1 = model.syn0[word2.index] neu1e = zeros(l1.shape) if model.hs: # work on the entire tree at once, to push as much work into numpy's C routines as possible (performance) l2a = deepcopy(model.syn1[word.point]) # 2d matrix, codelen x layer1_size fa = 1.0 / (1.0 + exp(-dot(l1, l2a.T))) # propagate hidden -> output ga = ( 1 - word.code - fa) * alpha # vector of error gradients multiplied by the learning rate model.syn1[word.point] += outer(ga, l1) # learn hidden -> output neu1e += dot(ga, l2a) # save error if model.negative: # use this word (label = 1) + `negative` other random words not from this sentence (label = 0) word_indices = [word.index] while len(word_indices) < model.negative + 1: w = model.table[random.randint(model.table.shape[0])] if w != word.index: word_indices.append(w) l2b = model.syn1neg[word_indices] # 2d matrix, k+1 x layer1_size fb = 1. / (1. + exp(-dot(l1, l2b.T))) # propagate hidden -> output gb = (labels - fb) * alpha # vector of error gradients multiplied by the learning rate model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output neu1e += dot(gb, l2b) # save error model.syn0[word2.index] += neu1e # learn input -> hidden return len([word for word in sentence if word is not None]) def train_sentence_cbow(model, sentence, alpha, work=None, neu1=None): """ Update CBOW model by training on a single sentence. The sentence is a list of Vocab objects (or None, where the corresponding word is not in the vocabulary. Called internally from `Word2Vec.train()`. This is the non-optimized, Python version. If you have cython installed, gensim will use the optimized version from word2vec_inner instead. """ if model.negative: # precompute negative labels labels = zeros(model.negative + 1) labels[0] = 1. for pos, word in enumerate(sentence): if word is None: continue # OOV word in the input sentence => skip reduced_window = random.randint(model.window) # `b` in the original word2vec code start = max(0, pos - model.window + reduced_window) window_pos = enumerate(sentence[start: pos + model.window + 1 - reduced_window], start) word2_indices = [word2.index for pos2, word2 in window_pos if (word2 is not None and pos2 != pos)] l1 = np_sum(model.syn0[word2_indices], axis=0) # 1 x layer1_size if word2_indices and model.cbow_mean: l1 /= len(word2_indices) neu1e = zeros(l1.shape) if model.hs: l2a = model.syn1[word.point] # 2d matrix, codelen x layer1_size fa = 1. / (1. 
+ exp(-dot(l1, l2a.T))) # propagate hidden -> output ga = (1. - word.code - fa) * alpha # vector of error gradients multiplied by the learning rate model.syn1[word.point] += outer(ga, l1) # learn hidden -> output neu1e += dot(ga, l2a) # save error if model.negative: # use this word (label = 1) + `negative` other random words not from this sentence (label = 0) word_indices = [word.index] while len(word_indices) < model.negative + 1: w = model.table[random.randint(model.table.shape[0])] if w != word.index: word_indices.append(w) l2b = model.syn1neg[word_indices] # 2d matrix, k+1 x layer1_size fb = 1. / (1. + exp(-dot(l1, l2b.T))) # propagate hidden -> output gb = (labels - fb) * alpha # vector of error gradients multiplied by the learning rate model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output neu1e += dot(gb, l2b) # save error model.syn0[word2_indices] += neu1e # learn input -> hidden, here for all words in the window separately return len([word for word in sentence if word is not None]) class Vocab(object): """A single vocabulary item, used internally for constructing binary trees (incl. both word leaves and inner nodes).""" def __init__(self, **kwargs): self.count = 0 self.__dict__.update(kwargs) def __lt__(self, other): # used for sorting in a priority queue return self.count < other.count def __str__(self): vals = ['%s:%r' % (key, self.__dict__[key]) for key in sorted(self.__dict__) if not key.startswith('_')] return "<" + ', '.join(vals) + ">" class Word2Vec(utils.SaveLoad): """ Class for training, using and evaluating neural networks described in https://code.google.com/p/word2vec/ The model can be stored/loaded via its `save()` and `load()` methods, or stored/loaded in a format compatible with the original word2vec implementation via `save_word2vec_format()` and `load_word2vec_format()`. """ def __init__(self, sentences=None, size=100, alpha=0.025, window=5, min_count=5, sample=0, seed=1, workers=1, min_alpha=0.0001, sg=1, hs=1, negative=0, cbow_mean=0): """ Initialize the model from an iterable of `sentences`. Each sentence is a list of words (unicode strings) that will be used for training. The `sentences` iterable can be simply a list, but for larger corpora, consider an iterable that streams the sentences directly from disk/network. See :class:`BrownCorpus`, :class:`Text8Corpus` or :class:`LineSentence` in this module for such examples. If you don't supply `sentences`, the model is left uninitialized -- use if you plan to initialize it in some other way. `sg` defines the training algorithm. By default (`sg=1`), skip-gram is used. Otherwise, `cbow` is employed. `size` is the dimensionality of the feature vectors. `window` is the maximum distance between the current and predicted word within a sentence. `alpha` is the initial learning rate (will linearly drop to zero as training progresses). `seed` = for the random number generator. `min_count` = ignore all words with total frequency lower than this. `sample` = threshold for configuring which higher-frequency words are randomly downsampled; default is 0 (off), useful value is 1e-5. `workers` = use this many worker threads to train the model (=faster training with multicore machines) `hs` = if 1 (default), hierarchical sampling will be used for model training (else set to 0) `negative` = if > 0, negative sampling will be used, the int for negative specifies how many "noise words" should be drawn (usually between 5-20) `cbow_mean` = if 0 (default), use the sum of the context word vectors. If 1, use the mean. 
Only applies when cbow is used. """ self.vocab = {} # mapping from a word (string) to a Vocab object self.index2word = [] # map from a word's matrix index (int) to word (string) self.sg = int(sg) self.table = None # for negative sampling --> this needs a lot of RAM! consider setting back to None before saving self.layer1_size = int(size) if size % 4 != 0: logger.warning("consider setting layer size to a multiple of 4 for greater performance") self.alpha = float(alpha) self.window = int(window) self.seed = seed self.min_count = min_count self.sample = sample self.workers = workers self.min_alpha = min_alpha self.hs = hs self.negative = negative self.cbow_mean = int(cbow_mean) if sentences is not None: self.build_vocab(sentences) self.train(sentences) def make_table(self, table_size=100000000, power=0.75): """ Create a table using stored vocabulary word counts for drawing random words in the negative sampling training routines. Called internally from `build_vocab()`. """ logger.info("constructing a table with noise distribution from %i words" % len(self.vocab)) # table (= list of words) of noise distribution for negative sampling vocab_size = len(self.index2word) self.table = zeros(table_size, dtype=uint32) if not vocab_size: logger.warning("empty vocabulary in word2vec, is this intended?") return # compute sum of all power (Z in paper) train_words_pow = float(sum([self.vocab[word].count ** power for word in self.vocab])) # go through the whole table and fill it up with the word indexes proportional to a word's count**power widx = 0 # normalize count^0.75 by Z d1 = self.vocab[self.index2word[widx]].count ** power / train_words_pow for tidx in xrange(table_size): self.table[tidx] = widx if 1.0 * tidx / table_size > d1: widx += 1 d1 += self.vocab[self.index2word[widx]].count ** power / train_words_pow if widx >= vocab_size: widx = vocab_size - 1 def create_binary_tree(self): """ Create a binary Huffman tree using stored vocabulary word counts. Frequent words will have shorter binary codes. Called internally from `build_vocab()`. 
""" logger.info("constructing a huffman tree from %i words" % len(self.vocab)) # build the huffman tree heap = list(itervalues(self.vocab)) heapq.heapify(heap) # 每次從當前全部word節點中取最小的兩個節點,組成左右子樹(左小右大),並將concat結果構成的新節點做爲父節點插入huffman樹中(從下往上生長,詞頻越小,越靠近葉子) - 構造huffman二叉樹 for i in xrange(len(self.vocab) - 1): min1, min2 = heapq.heappop(heap), heapq.heappop(heap) heapq.heappush(heap, Vocab(count=min1.count + min2.count, index=i + len(self.vocab), left=min1, right=min2)) # recurse over the tree, assigning a binary code to each vocabulary word if heap: max_depth, stack = 0, [(heap[0], [], [])] while stack: node, codes, points = stack.pop() if node.index < len(self.vocab): # leaf node => store its path from the root node.code, node.point = codes, points max_depth = max(len(codes), max_depth) else: # inner node => continue recursion points = array(list(points) + [node.index - len(self.vocab)], dtype=uint32) stack.append((node.left, array(list(codes) + [0], dtype=uint8), points)) stack.append((node.right, array(list(codes) + [1], dtype=uint8), points)) logger.info("built huffman tree with maximum node depth %i" % max_depth) def precalc_sampling(self): """Precalculate each vocabulary item's threshold for sampling""" if self.sample: logger.info( "frequent-word downsampling, threshold %g; progress tallies will be approximate" % (self.sample)) total_words = sum(v.count for v in itervalues(self.vocab)) threshold_count = float(self.sample) * total_words # 根據出現次數計算word節點的詞頻機率 for v in itervalues(self.vocab): prob = (sqrt(v.count / threshold_count) + 1) * (threshold_count / v.count) if self.sample else 1.0 v.sample_probability = min(prob, 1.0) # print v def build_vocab(self, sentences): """ Build vocabulary from a sequence of sentences (can be a once-only generator stream). Each sentence must be a list of unicode strings. """ logger.info("collecting all words and their counts") sentence_no, vocab = -1, {} total_words = 0 # 統計訓練集中每一個詞的出現次數 for sentence_no, sentence in enumerate(sentences): if sentence_no % 10000 == 0: logger.info("PROGRESS: at sentence #%i, processed %i words and %i word types" % ( sentence_no, total_words, len(vocab))) for word in sentence: total_words += 1 if word in vocab: vocab[word].count += 1 else: vocab[word] = Vocab(count=1) logger.info("collected %i word types from a corpus of %i words and %i sentences" % ( len(vocab), total_words, sentence_no + 1)) # assign a unique index to each word # 按照出現的順序給每一個詞index編碼(這裏沒有按照詞頻排序),使得詞彙表vocabulary中的word和index索引能夠互相查詢轉換 self.vocab, self.index2word = {}, [] for word, v in iteritems(vocab): if v.count >= self.min_count: v.index = len(self.vocab) self.index2word.append(word) self.vocab[word] = v # print "word: ", word # print "v:", v logger.info("total %i word types after removing those with count<%s" % (len(self.vocab), self.min_count)) # print self.vocab # print self.index2word # 分層抽樣 if self.hs: # add info about each word's Huffman encoding self.create_binary_tree() if self.negative: # build the table for drawing random words (for negative sampling) self.make_table() # precalculate downsampling thresholds self.precalc_sampling() self.reset_weights() def train(self, sentences, total_words=None, word_count=0, chunksize=100): """ Update the model's neural weights from a sequence of sentences (can be a once-only generator stream). Each sentence must be a list of unicode strings. """ if FAST_VERSION < 0: import warnings warnings.warn( "Cython compilation failed, training will be slow. Do you have Cython installed? 
`pip install cython`") logger.info("training model with %i workers on %i vocabulary and %i features, " "using 'skipgram'=%s 'hierarchical softmax'=%s 'subsample'=%s and 'negative sampling'=%s" % (self.workers, len(self.vocab), self.layer1_size, self.sg, self.hs, self.sample, self.negative)) if not self.vocab: raise RuntimeError("you must first build vocabulary before training the model") start, next_report = time.time(), [1.0] word_count = [word_count] total_words = total_words or int(sum(v.count * v.sample_probability for v in itervalues(self.vocab))) jobs = Queue( maxsize=2 * self.workers) # buffer ahead only a limited number of jobs.. this is the reason we can't simply use ThreadPool :( lock = threading.Lock() # for shared state (=number of words trained so far, log reports...) def worker_train(): """Train the model, lifting lists of sentences from the jobs queue.""" work = zeros(self.layer1_size, dtype=REAL) # each thread must have its own work memory neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL) while True: job = jobs.get() if job is None: # data finished, exit break # update the learning rate before every job alpha = max(self.min_alpha, self.alpha * (1 - 1.0 * word_count[0] / total_words)) # how many words did we train on? out-of-vocabulary (unknown) words do not count if self.sg: job_words = sum(train_sentence_sg(self, sentence, alpha, work) for sentence in job) else: job_words = sum(train_sentence_cbow(self, sentence, alpha, work, neu1) for sentence in job) with lock: word_count[0] += job_words elapsed = time.time() - start if elapsed >= next_report[0]: logger.info("PROGRESS: at %.2f%% words, alpha %.05f, %.0f words/s" % (100.0 * word_count[0] / total_words, alpha, word_count[0] / elapsed if elapsed else 0.0)) next_report[ 0] = elapsed + 1.0 # don't flood the log, wait at least a second between progress reports workers = [threading.Thread(target=worker_train) for _ in xrange(self.workers)] for thread in workers: thread.daemon = True # make interrupting the process with ctrl+c easier thread.start() def prepare_sentences(): for sentence in sentences: # avoid calling random_sample() where prob >= 1, to speed things up a little: sampled = [self.vocab[word] for word in sentence if word in self.vocab and (self.vocab[word].sample_probability >= 1.0 or self.vocab[ word].sample_probability >= random.random_sample())] yield sampled # convert input strings to Vocab objects (eliding OOV/downsampled words), and start filling the jobs queue for job_no, job in enumerate(utils.grouper(prepare_sentences(), chunksize)): logger.debug("putting job #%i in the queue, qsize=%i" % (job_no, jobs.qsize())) jobs.put(job) logger.info("reached the end of input; waiting to finish %i outstanding jobs" % jobs.qsize()) for _ in xrange(self.workers): jobs.put(None) # give the workers heads up that they can finish -- no more work! 
for thread in workers: thread.join() elapsed = time.time() - start logger.info("training on %i words took %.1fs, %.0f words/s" % (word_count[0], elapsed, word_count[0] / elapsed if elapsed else 0.0)) return word_count[0] def reset_weights(self): """Reset all projection weights to an initial (untrained) state, but keep the existing vocabulary.""" logger.info("resetting layer weights") random.seed(self.seed) self.syn0 = empty((len(self.vocab), self.layer1_size), dtype=REAL) # randomize weights vector by vector, rather than materializing a huge random matrix in RAM at once for i in xrange(len(self.vocab)): self.syn0[i] = (random.rand(self.layer1_size) - 0.5) / self.layer1_size if self.hs: self.syn1 = zeros((len(self.vocab), self.layer1_size), dtype=REAL) if self.negative: self.syn1neg = zeros((len(self.vocab), self.layer1_size), dtype=REAL) self.syn0norm = None def save_word2vec_format(self, fname, fvocab=None, binary=False): """ Store the input-hidden weight matrix in the same format used by the original C word2vec-tool, for compatibility. """ if fvocab is not None: logger.info("Storing vocabulary in %s" % (fvocab)) with utils.smart_open(fvocab, 'wb') as vout: for word, vocab in sorted(iteritems(self.vocab), key=lambda item: -item[1].count): vout.write(utils.to_utf8("%s %s\n" % (word, vocab.count))) logger.info("storing %sx%s projection weights into %s" % (len(self.vocab), self.layer1_size, fname)) assert (len(self.vocab), self.layer1_size) == self.syn0.shape with utils.smart_open(fname, 'wb') as fout: fout.write(utils.to_utf8("%s %s\n" % self.syn0.shape)) # store in sorted order: most frequent words at the top for word, vocab in sorted(iteritems(self.vocab), key=lambda item: -item[1].count): row = self.syn0[vocab.index] if binary: fout.write(utils.to_utf8(word) + b" " + row.tostring()) else: fout.write(utils.to_utf8("%s %s\n" % (word, ' '.join("%f" % val for val in row)))) @classmethod def load_word2vec_format(cls, fname, fvocab=None, binary=False, norm_only=True): """ Load the input-hidden weight matrix from the original C word2vec-tool format. Note that the information stored in the file is incomplete (the binary tree is missing), so while you can query for word similarity etc., you cannot continue training with a model loaded this way. `binary` is a boolean indicating whether the data is in binary word2vec format. `norm_only` is a boolean indicating whether to only store normalised word2vec vectors in memory. Word counts are read from `fvocab` filename, if set (this is the file generated by `-save-vocab` flag of the original C tool). 
""" counts = None if fvocab is not None: logger.info("loading word counts from %s" % (fvocab)) counts = {} with utils.smart_open(fvocab) as fin: for line in fin: word, count = utils.to_unicode(line).strip().split() counts[word] = int(count) logger.info("loading projection weights from %s" % (fname)) with utils.smart_open(fname) as fin: header = utils.to_unicode(fin.readline()) vocab_size, layer1_size = map(int, header.split()) # throws for invalid file format result = Word2Vec(size=layer1_size) result.syn0 = zeros((vocab_size, layer1_size), dtype=REAL) if binary: binary_len = dtype(REAL).itemsize * layer1_size for line_no in xrange(vocab_size): # mixed text and binary: read text first, then binary word = [] while True: ch = fin.read(1) if ch == b' ': break if ch != b'\n': # ignore newlines in front of words (some binary files have newline, some don't) word.append(ch) word = utils.to_unicode(b''.join(word)) if counts is None: result.vocab[word] = Vocab(index=line_no, count=vocab_size - line_no) elif word in counts: result.vocab[word] = Vocab(index=line_no, count=counts[word]) else: logger.warning("vocabulary file is incomplete") result.vocab[word] = Vocab(index=line_no, count=None) result.index2word.append(word) result.syn0[line_no] = fromstring(fin.read(binary_len), dtype=REAL) else: for line_no, line in enumerate(fin): parts = utils.to_unicode(line).split() if len(parts) != layer1_size + 1: raise ValueError("invalid vector on line %s (is this really the text format?)" % (line_no)) word, weights = parts[0], map(REAL, parts[1:]) if counts is None: result.vocab[word] = Vocab(index=line_no, count=vocab_size - line_no) elif word in counts: result.vocab[word] = Vocab(index=line_no, count=counts[word]) else: logger.warning("vocabulary file is incomplete") result.vocab[word] = Vocab(index=line_no, count=None) result.index2word.append(word) result.syn0[line_no] = weights logger.info("loaded %s matrix from %s" % (result.syn0.shape, fname)) result.init_sims(norm_only) return result def most_similar(self, positive=[], negative=[], topn=10): """ Find the top-N most similar words. Positive words contribute positively towards the similarity, negative words negatively. This method computes cosine similarity between a simple mean of the projection weight vectors of the given words, and corresponds to the `word-analogy` and `distance` scripts in the original word2vec implementation. Example:: >>> trained_model.most_similar(positive=['woman', 'king'], negative=['man']) [('queen', 0.50882536), ...] 
""" self.init_sims() if isinstance(positive, string_types) and not negative: # allow calls like most_similar('dog'), as a shorthand for most_similar(['dog']) positive = [positive] # add weights for each word, if not already present; default to 1.0 for positive and -1.0 for negative words positive = [(word, 1.0) if isinstance(word, string_types + (ndarray,)) else word for word in positive] negative = [(word, -1.0) if isinstance(word, string_types + (ndarray,)) else word for word in negative] # compute the weighted average of all words all_words, mean = set(), [] for word, weight in positive + negative: if isinstance(word, ndarray): mean.append(weight * word) elif word in self.vocab: mean.append(weight * self.syn0norm[self.vocab[word].index]) all_words.add(self.vocab[word].index) else: raise KeyError("word '%s' not in vocabulary" % word) if not mean: raise ValueError("cannot compute similarity with no input") mean = matutils.unitvec(array(mean).mean(axis=0)).astype(REAL) dists = dot(self.syn0norm, mean) if not topn: return dists best = argsort(dists)[::-1][:topn + len(all_words)] # ignore (don't return) words from the input result = [(self.index2word[sim], float(dists[sim])) for sim in best if sim not in all_words] return result[:topn] def doesnt_match(self, words): """ Which word from the given list doesn't go with the others? Example:: >>> trained_model.doesnt_match("breakfast cereal dinner lunch".split()) 'cereal' """ self.init_sims() words = [word for word in words if word in self.vocab] # filter out OOV words logger.debug("using words %s" % words) if not words: raise ValueError("cannot select a word from an empty list") vectors = vstack(self.syn0norm[self.vocab[word].index] for word in words).astype(REAL) mean = matutils.unitvec(vectors.mean(axis=0)).astype(REAL) dists = dot(vectors, mean) return sorted(zip(dists, words))[0][1] def __getitem__(self, word): """ Return a word's representations in vector space, as a 1D numpy array. Example:: >>> trained_model['woman'] array([ -1.40128313e-02, ...] """ return self.syn0[self.vocab[word].index] def __contains__(self, word): return word in self.vocab def similarity(self, w1, w2): """ Compute cosine similarity between two words. Example:: >>> trained_model.similarity('woman', 'man') 0.73723527 >>> trained_model.similarity('woman', 'woman') 1.0 """ return dot(matutils.unitvec(self[w1]), matutils.unitvec(self[w2])) def init_sims(self, replace=False): """ Precompute L2-normalized vectors. If `replace` is set, forget the original vectors and only keep the normalized ones = saves lots of memory! Note that you **cannot continue training** after doing a replace. The model becomes effectively read-only = you can call `most_similar`, `similarity` etc., but not `train`. """ if getattr(self, 'syn0norm', None) is None or replace: logger.info("precomputing L2-norms of word weight vectors") if replace: for i in xrange(self.syn0.shape[0]): self.syn0[i, :] /= sqrt((self.syn0[i, :] ** 2).sum(-1)) self.syn0norm = self.syn0 if hasattr(self, 'syn1'): del self.syn1 else: self.syn0norm = (self.syn0 / sqrt((self.syn0 ** 2).sum(-1))[..., newaxis]).astype(REAL) def accuracy(self, questions, restrict_vocab=30000): """ Compute accuracy of the model. `questions` is a filename where lines are 4-tuples of words, split into sections by ": SECTION NAME" lines. See https://code.google.com/p/word2vec/source/browse/trunk/questions-words.txt for an example. 
The accuracy is reported (=printed to log and returned as a list) for each section separately, plus there's one aggregate summary at the end. Use `restrict_vocab` to ignore all questions containing a word whose frequency is not in the top-N most frequent words (default top 30,000). This method corresponds to the `compute-accuracy` script of the original C word2vec. """ ok_vocab = dict(sorted(iteritems(self.vocab), key=lambda item: -item[1].count)[:restrict_vocab]) ok_index = set(v.index for v in itervalues(ok_vocab)) def log_accuracy(section): correct, incorrect = section['correct'], section['incorrect'] if correct + incorrect > 0: logger.info("%s: %.1f%% (%i/%i)" % (section['section'], 100.0 * correct / (correct + incorrect), correct, correct + incorrect)) sections, section = [], None for line_no, line in enumerate(utils.smart_open(questions)): # TODO: use level3 BLAS (=evaluate multiple questions at once), for speed line = utils.to_unicode(line) if line.startswith(': '): # a new section starts => store the old section if section: sections.append(section) log_accuracy(section) section = {'section': line.lstrip(': ').strip(), 'correct': 0, 'incorrect': 0} else: if not section: raise ValueError("missing section header before line #%i in %s" % (line_no, questions)) try: a, b, c, expected = [word.lower() for word in line.split()] # TODO assumes vocabulary preprocessing uses lowercase, too... except: logger.info("skipping invalid line #%i in %s" % (line_no, questions)) if a not in ok_vocab or b not in ok_vocab or c not in ok_vocab or expected not in ok_vocab: logger.debug("skipping line #%i with OOV words: %s" % (line_no, line)) continue ignore = set(self.vocab[v].index for v in [a, b, c]) # indexes of words to ignore predicted = None # find the most likely prediction, ignoring OOV words and input words for index in argsort(self.most_similar(positive=[b, c], negative=[a], topn=False))[::-1]: if index in ok_index and index not in ignore: predicted = self.index2word[index] if predicted != expected: logger.debug("%s: expected %s, predicted %s" % (line.strip(), expected, predicted)) break section['correct' if predicted == expected else 'incorrect'] += 1 if section: # store the last section, too sections.append(section) log_accuracy(section) total = {'section': 'total', 'correct': sum(s['correct'] for s in sections), 'incorrect': sum(s['incorrect'] for s in sections)} log_accuracy(total) sections.append(total) return sections def __str__(self): return "Word2Vec(vocab=%s, size=%s, alpha=%s)" % (len(self.index2word), self.layer1_size, self.alpha) def save(self, *args, **kwargs): kwargs['ignore'] = kwargs.get('ignore', ['syn0norm']) # don't bother storing the cached normalized vectors super(Word2Vec, self).save(*args, **kwargs) class Sent2Vec(utils.SaveLoad): def __init__(self, sentences, model_file=None, alpha=0.025, window=5, sample=0, seed=1, workers=1, min_alpha=0.0001, sg=1, hs=1, negative=0, cbow_mean=0, iteration=1): self.sg = int(sg) self.table = None # for negative sampling --> this needs a lot of RAM! 
consider setting back to None before saving self.alpha = float(alpha) self.window = int(window) self.seed = seed self.sample = sample self.workers = workers self.min_alpha = min_alpha self.hs = hs self.negative = negative self.cbow_mean = int(cbow_mean) self.iteration = iteration if model_file and sentences: self.w2v = Word2Vec.load(model_file) self.vocab = self.w2v.vocab self.layer1_size = self.w2v.layer1_size self.reset_sent_vec(sentences) for i in range(iteration): self.train_sent(sentences) def reset_sent_vec(self, sentences): """Reset all projection weights to an initial (untrained) state, but keep the existing vocabulary.""" logger.info("resetting vectors for sentences") random.seed(self.seed) self.sents_len = 0 for sent in sentences: self.sents_len += 1 self.sents = empty((self.sents_len, self.layer1_size), dtype=REAL) # randomize weights vector by vector, rather than materializing a huge random matrix in RAM at once for i in xrange(self.sents_len): self.sents[i] = (random.rand(self.layer1_size) - 0.5) / self.layer1_size def train_sent(self, sentences, total_words=None, word_count=0, sent_count=0, chunksize=100): """ Update the model's neural weights from a sequence of sentences (can be a once-only generator stream). Each sentence must be a list of unicode strings. """ logger.info("training model with %i workers on %i sentences and %i features, " "using 'skipgram'=%s 'hierarchical softmax'=%s 'subsample'=%s and 'negative sampling'=%s" % (self.workers, self.sents_len, self.layer1_size, self.sg, self.hs, self.sample, self.negative)) if not self.vocab: raise RuntimeError("you must first build vocabulary before training the model") start, next_report = time.time(), [1.0] word_count = [word_count] sent_count = [sent_count] total_words = total_words or sum(v.count for v in itervalues(self.vocab)) total_sents = self.sents_len * self.iteration jobs = Queue( maxsize=2 * self.workers) # buffer ahead only a limited number of jobs.. this is the reason we can't simply use ThreadPool :( lock = threading.Lock() # for shared state (=number of words trained so far, log reports...) 
def worker_train(): """Train the model, lifting lists of sentences from the jobs queue.""" work = zeros(self.layer1_size, dtype=REAL) # each thread must have its own work memory neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL) while True: job = jobs.get() if job is None: # data finished, exit break # update the learning rate before every job alpha = max(self.min_alpha, self.alpha * (1 - 1.0 * word_count[0] / total_words)) if self.sg: job_words = sum(self.train_sent_vec_sg(self.w2v, sent_no, sentence, alpha, work) for sent_no, sentence in job) else: job_words = sum(self.train_sent_vec_cbow(self.w2v, sent_no, sentence, alpha, work, neu1) for sent_no, sentence in job) with lock: word_count[0] += job_words sent_count[0] += chunksize elapsed = time.time() - start if elapsed >= next_report[0]: logger.info("PROGRESS: at %.2f%% sents, alpha %.05f, %.0f words/s" % (100.0 * sent_count[0] / total_sents, alpha, word_count[0] / elapsed if elapsed else 0.0)) next_report[ 0] = elapsed + 1.0 # don't flood the log, wait at least a second between progress reports workers = [threading.Thread(target=worker_train) for _ in xrange(self.workers)] for thread in workers: thread.daemon = True # make interrupting the process with ctrl+c easier thread.start() def prepare_sentences(): for sent_no, sentence in enumerate(sentences): # avoid calling random_sample() where prob >= 1, to speed things up a little: # sampled = [self.vocab[word] for word in sentence # if word in self.vocab and (self.vocab[word].sample_probability >= 1.0 or self.vocab[word].sample_probability >= random.random_sample())] sampled = [self.vocab.get(word, None) for word in sentence] yield (sent_no, sampled) # convert input strings to Vocab objects (eliding OOV/downsampled words), and start filling the jobs queue for job_no, job in enumerate(utils.grouper(prepare_sentences(), chunksize)): logger.debug("putting job #%i in the queue, qsize=%i" % (job_no, jobs.qsize())) jobs.put(job) logger.info("reached the end of input; waiting to finish %i outstanding jobs" % jobs.qsize()) for _ in xrange(self.workers): jobs.put(None) # give the workers heads up that they can finish -- no more work! for thread in workers: thread.join() elapsed = time.time() - start logger.info("training on %i words took %.1fs, %.0f words/s" % (word_count[0], elapsed, word_count[0] / elapsed if elapsed else 0.0)) return word_count[0] def train_sent_vec_cbow(self, model, sent_no, sentence, alpha, work=None, neu1=None): """ Update CBOW model by training on a single sentence. The sentence is a list of Vocab objects (or None, where the corresponding word is not in the vocabulary. Called internally from `Word2Vec.train()`. This is the non-optimized, Python version. If you have cython installed, gensim will use the optimized version from word2vec_inner instead. """ sent_vec = self.sents[sent_no] if self.negative: # precompute negative labels labels = zeros(self.negative + 1) labels[0] = 1. 
for pos, word in enumerate(sentence): if word is None: continue # OOV word in the input sentence => skip reduced_window = random.randint(self.window) # `b` in the original word2vec code start = max(0, pos - self.window + reduced_window) window_pos = enumerate(sentence[start: pos + self.window + 1 - reduced_window], start) word2_indices = [word2.index for pos2, word2 in window_pos if (word2 is not None and pos2 != pos)] l1 = np_sum(model.syn0[word2_indices], axis=0) # 1 x layer1_size l1 += sent_vec if word2_indices and self.cbow_mean: l1 /= len(word2_indices) neu1e = zeros(l1.shape) if self.hs: l2a = model.syn1[word.point] # 2d matrix, codelen x layer1_size fa = 1. / (1. + exp(-dot(l1, l2a.T))) # propagate hidden -> output ga = (1. - word.code - fa) * alpha # vector of error gradients multiplied by the learning rate # model.syn1[word.point] += outer(ga, l1) # learn hidden -> output neu1e += dot(ga, l2a) # save error if self.negative: # use this word (label = 1) + `negative` other random words not from this sentence (label = 0) word_indices = [word.index] while len(word_indices) < self.negative + 1: w = model.table[random.randint(model.table.shape[0])] if w != word.index: word_indices.append(w) l2b = model.syn1neg[word_indices] # 2d matrix, k+1 x layer1_size fb = 1. / (1. + exp(-dot(l1, l2b.T))) # propagate hidden -> output gb = (labels - fb) * alpha # vector of error gradients multiplied by the learning rate # model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output neu1e += dot(gb, l2b) # save error # model.syn0[word2_indices] += neu1e # learn input -> hidden, here for all words in the window separately self.sents[sent_no] += neu1e # learn input -> hidden, here for all words in the window separately return len([word for word in sentence if word is not None]) def train_sent_vec_sg(self, model, sent_no, sentence, alpha, work=None): """ Update skip-gram model by training on a single sentence. The sentence is a list of Vocab objects (or None, where the corresponding word is not in the vocabulary. Called internally from `Word2Vec.train()`. This is the non-optimized, Python version. If you have cython installed, gensim will use the optimized version from word2vec_inner instead. 
""" if self.negative: # precompute negative labels labels = zeros(self.negative + 1) labels[0] = 1.0 for pos, word in enumerate(sentence): if word is None: continue # OOV word in the input sentence => skip reduced_window = random.randint(model.window) # `b` in the original word2vec code # now go over all words from the (reduced) window, predicting each one in turn start = max(0, pos - model.window + reduced_window) for pos2, word2 in enumerate(sentence[start: pos + model.window + 1 - reduced_window], start): # don't train on OOV words and on the `word` itself if word2: # l1 = model.syn0[word.index] l1 = self.sents[sent_no] neu1e = zeros(l1.shape) if self.hs: # work on the entire tree at once, to push as much work into numpy's C routines as possible (performance) l2a = deepcopy(model.syn1[word2.point]) # 2d matrix, codelen x layer1_size fa = 1.0 / (1.0 + exp(-dot(l1, l2a.T))) # propagate hidden -> output ga = (1 - word2.code - fa) * alpha # vector of error gradients multiplied by the learning rate # model.syn1[word2.point] += outer(ga, l1) # learn hidden -> output neu1e += dot(ga, l2a) # save error if self.negative: # use this word (label = 1) + `negative` other random words not from this sentence (label = 0) word_indices = [word2.index] while len(word_indices) < model.negative + 1: w = model.table[random.randint(model.table.shape[0])] if w != word2.index: word_indices.append(w) l2b = model.syn1neg[word_indices] # 2d matrix, k+1 x layer1_size fb = 1. / (1. + exp(-dot(l1, l2b.T))) # propagate hidden -> output gb = (labels - fb) * alpha # vector of error gradients multiplied by the learning rate # model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output neu1e += dot(gb, l2b) # save error # model.syn0[word.index] += neu1e # learn input -> hidden self.sents[sent_no] += neu1e # learn input -> hidden return len([word for word in sentence if word is not None]) def save_sent2vec_format(self, fname): """ Store the input-hidden weight matrix in the same format used by the original C word2vec-tool, for compatibility. """ logger.info("storing %sx%s projection weights into %s" % (self.sents_len, self.layer1_size, fname)) assert (self.sents_len, self.layer1_size) == self.sents.shape with utils.smart_open(fname, 'wb') as fout: fout.write(utils.to_utf8("%s %s\n" % self.sents.shape)) # store in sorted order: most frequent words at the top for sent_no in xrange(self.sents_len): row = self.sents[sent_no] fout.write(utils.to_utf8("sent_%d %s\n" % (sent_no, ' '.join("%f" % val for val in row)))) def similarity(self, sent1, sent2): """ Compute cosine similarity between two sentences. sent1 and sent2 are the indexs in the train file. Example:: >>> trained_model.similarity(0, 0) 1.0 >>> trained_model.similarity(1, 3) 0.73 """ return dot(matutils.unitvec(self.sents[sent1]), matutils.unitvec(self.sents[sent2])) class BrownCorpus(object): """Iterate over sentences from the Brown corpus (part of NLTK data).""" def __init__(self, dirname): self.dirname = dirname def __iter__(self): for fname in os.listdir(self.dirname): fname = os.path.join(self.dirname, fname) if not os.path.isfile(fname): continue for line in utils.smart_open(fname): line = utils.to_unicode(line) # each file line is a single sentence in the Brown corpus # each token is WORD/POS_TAG token_tags = [t.split('/') for t in line.split() if len(t.split('/')) == 2] # ignore words with non-alphabetic tags like ",", "!" 
etc (punctuation, weird stuff) words = ["%s/%s" % (token.lower(), tag[:2]) for token, tag in token_tags if tag[:2].isalpha()] if not words: # don't bother sending out empty sentences continue yield words class Text8Corpus(object): """Iterate over sentences from the "text8" corpus, unzipped from http://mattmahoney.net/dc/text8.zip .""" def __init__(self, fname): self.fname = fname def __iter__(self): # the entire corpus is one gigantic line -- there are no sentence marks at all # so just split the sequence of tokens arbitrarily: 1 sentence = 1000 tokens sentence, rest, max_sentence_length = [], b'', 1000 with utils.smart_open(self.fname) as fin: while True: text = rest + fin.read(8192) # avoid loading the entire file (=1 line) into RAM if text == rest: # EOF sentence.extend(rest.split()) # return the last chunk of words, too (may be shorter/longer) if sentence: yield sentence break last_token = text.rfind( b' ') # the last token may have been split in two... keep it for the next iteration words, rest = ( utils.to_unicode(text[:last_token]).split(), text[last_token:].strip()) if last_token >= 0 else ( [], text) sentence.extend(words) while len(sentence) >= max_sentence_length: yield sentence[:max_sentence_length] sentence = sentence[max_sentence_length:] class LineSentence(object): """Simple format: one sentence = one line; words already preprocessed and separated by whitespace.""" def __init__(self, source): """ `source` can be either a string or a file object. Example:: sentences = LineSentence('myfile.txt') Or for compressed files:: sentences = LineSentence('compressed_text.txt.bz2') sentences = LineSentence('compressed_text.txt.gz') """ self.source = source def __iter__(self): """Iterate through the lines in the source.""" try: # Assume it is a file-like object and try treating it as such # Things that don't have seek will trigger an exception self.source.seek(0) for line in self.source: yield utils.to_unicode(line).split() except AttributeError: # If it didn't work like a file, use it as a string filename with utils.smart_open(self.source) as fin: for line in fin: yield utils.to_unicode(line).split() # Example: ./word2vec.py ~/workspace/word2vec/text8 ~/workspace/word2vec/questions-words.txt ./text8 if __name__ == "__main__": logging.basicConfig(format='%(asctime)s : %(threadName)s : %(levelname)s : %(message)s', level=logging.INFO) logging.info("running %s" % " ".join(sys.argv)) logging.info("using optimization %s" % FAST_VERSION) # check and process cmdline input program = os.path.basename(sys.argv[0]) if len(sys.argv) < 2: print(globals()['__doc__'] % locals()) sys.exit(1) seterr(all='raise') # don't ignore numpy errors if len(sys.argv) > 3: input_file = sys.argv[1] model_file = sys.argv[2] out_file = sys.argv[3] model = Sent2Vec(LineSentence(input_file), model_file=model_file, iteration=100) model.save_sent2vec_format(out_file) elif len(sys.argv) > 1: input_file = sys.argv[1] model = Word2Vec(LineSentence(input_file), size=100, window=5, min_count=5, workers=8) model.save(input_file + '.model') model.save_word2vec_format(input_file + '.vec') else: pass program = os.path.basename(sys.argv[0]) logging.info("finished running %s" % program)
Relevant Link:
https://www.zhihu.com/question/21661274
https://fb56552f-a-62cb3a1a-s-sites.googlegroups.com/site/deeplearningworkshopnips2014/68.pdf?attachauth=ANoY7cq83cA2A-ZgTWKF9vIxGRQs96O5OGXbt8n_GqRuU_4IellDNS17z_56Wa6aafihhDHuNHM_7d_jitkT27Cy_RnspiY8Dms5w_eBXFrVBFoFqSdzPmUbHaAblYPGHNA3mCAYn4whKO5w9uk7w9BLyMIX-QNco591gprLzPTM_XHLYa5U2YtIBhVptFj4LMedeKki_hxk2UkHCN0_MwrLwAgZneBihpOAWSX8GgRb5-uqUWpq3CI%3D&attredirects=2
https://www.zhihu.com/question/27689129
https://github.com/hassyGo/paragraph-vector
https://arxiv.org/pdf/1405.4053.pdf
https://github.com/jiyfeng/ParagraphVector/tree/master/ParaVector
https://github.com/JonathanRaiman/PVDM
https://github.com/thunlp/paragraph2vec
https://github.com/dennybritz/deeplearning-papernotes/blob/master/notes/distributed-representations-of-sentences-and-documents.md
https://github.com/klb3713/sentence2vec
As we know, in NLP the word embedding vectors are a by-product of training a shallow neural network, but we can treat this by-product as the mapping of each word into an embedding space.
Something similar holds for images. If we build a deep VGG convolutional network and feed images through it for training, the activations output by the last layer before the classifier are essentially a weight vector, and we can treat them as a vectorized representation of the input image in a high-dimensional space:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D


def VGG_16():
    model = Sequential()
    model.add(ZeroPadding2D((1, 1), input_shape=(3, 224, 224)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(Flatten())
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4096, activation='relu'))

    return model
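For reference, a hedged sketch (not the exact pipeline used for the figures) of pulling a single image's feature vector out of an ImageNet-pre-trained VGG16 with the keras.applications API, mirroring the fc2 extraction used in the script later in this post; 'img.jpg' is a placeholder path, and newer Keras releases use the inputs=/outputs= keywords where that later script uses input=/output=:

import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image
from keras.models import Model

base = VGG16(weights='imagenet', include_top=True)
feat_extractor = Model(inputs=base.input, outputs=base.get_layer('fc2').output)

img = image.load_img('img.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
vec = feat_extractor.predict(x)[0]    # a 4096-d representation of the image
print(vec.shape)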
The activation vector output by that last layer can be treated directly as a high-dimensional vector and fed to t-SNE for visualization. Here we use pre-computed image vectors together with the corresponding Caltech-101 dataset.
# -*- coding: utf-8 -*-
import os
import random
import numpy as np
import json
import matplotlib.pyplot
import cPickle as pickle
from matplotlib.pyplot import imshow, show
from PIL import Image
from sklearn.manifold import TSNE
from tqdm import tqdm

if __name__ == '__main__':
    images, pca_features = pickle.load(open('../data/features_caltech101.p', 'r'))
    for i, f in zip(images, pca_features):
        print("image: %s, features: %0.2f,%0.2f,%0.2f,%0.2f... " % (i, f[0], f[1], f[2], f[3]))

    # Although in principle, t-SNE works with any number of images, it's difficult to place that
    # many tiles in a single image. So instead, we will take a random subset of 1000 images and
    # plot those on a t-SNE instead. This step is optional.
    num_images_to_plot = 6000

    '''
    It is usually a good idea to first run the vectors through a faster dimensionality reduction
    technique like principal component analysis to project your data into an intermediate
    lower-dimensional space before using t-SNE. This improves accuracy, and cuts down on runtime
    since PCA is more efficient than t-SNE. Since we have already projected our data down with
    PCA in the previous notebook, we can proceed straight to running the t-SNE on the feature
    vectors.
    '''
    if len(images) > num_images_to_plot:
        sort_order = sorted(random.sample(xrange(len(images)), num_images_to_plot))
        images = [images[i] for i in sort_order]
        pca_features = [pca_features[i] for i in sort_order]

    # Internally, t-SNE uses an iterative approach, making small (or sometimes large) adjustments
    # to the points. By default, t-SNE will go a maximum of 1000 iterations, but in practice, it
    # often terminates early because it has found a locally optimal (good enough) embedding.
    X = np.array(pca_features)
    tsne = TSNE(n_components=2, learning_rate=150, perplexity=30, angle=0.2, verbose=2).fit_transform(X)

    # The variable tsne contains an array of unnormalized 2d points, corresponding to the
    # embedding. In the next cell, we normalize the embedding so that lies entirely in the range (0,1).
    tx, ty = tsne[:, 0], tsne[:, 1]
    tx = (tx - np.min(tx)) / (np.max(tx) - np.min(tx))
    ty = (ty - np.min(ty)) / (np.max(ty) - np.min(ty))

    # Finally, we will compose a new RGB image where the set of images have been drawn according
    # to the t-SNE results. Adjust width and height to set the size in pixels of the full image,
    # and set max_dim to the pixel size (on the largest size) to scale images to.
    width = 4000
    height = 3000
    max_dim = 100

    full_image = Image.new('RGB', (width, height))
    for img, x, y in tqdm(zip(images, tx, ty)):
        tile = Image.open(img)
        rs = max(1, tile.width / max_dim, tile.height / max_dim)
        tile = tile.resize((int(tile.width / rs), int(tile.height / rs)), Image.ANTIALIAS)
        full_image.paste(tile, (int((width - max_dim) * x), int((height - max_dim) * y)))

    matplotlib.pyplot.figure(figsize=(16, 12))
    imshow(full_image)
    # show()

    # we can save the image to disk:
    full_image.save("../assets/example-tSNE-caltech101.jpg")
From the picture we can see that motorcycles, chairs, airplanes, and elephants each get clustered together, which shows that the VGG CNN has captured the high-dimensional detail of these objects; t-SNE makes this visible in an intuitive way.
# -*- coding: utf-8 -*-
import numpy as np
from skdata.mnist.views import OfficialImageClassification
from matplotlib import pyplot as plt
from tsne import bh_sne

# load up data
data = OfficialImageClassification(x_dtype="float32")
x_data = data.all_images
y_data = data.all_labels

# convert image data to float64 matrix. float64 is need for bh_sne
x_data = np.asarray(x_data).astype('float64')
x_data = x_data.reshape((x_data.shape[0], -1))

# For speed of computation, only run on a subset
n = 2000
x_data = x_data[:n]
y_data = y_data[:n]

# perform t-SNE embedding
vis_data = bh_sne(x_data)

# plot the result
vis_x = vis_data[:, 0]
vis_y = vis_data[:, 1]

plt.scatter(vis_x, vis_y, c=y_data, cmap=plt.cm.get_cmap("jet", 10))
plt.colorbar(ticks=range(10))
plt.clim(-0.5, 9.5)
plt.show()
The ten MNIST digits 0-9 are shown in ten different colors. t-SNE shows us that the different handwritten digits already have a distinguishable spatial structure in the high-dimensional pixel space, which to some extent explains why algorithms such as CNNs can classify the MNIST digits so accurately.
Author's note: only when the data itself contains distinguishing factors can a suitable model achieve accurate classification; in a classification task, how the input data is abstractly represented is sometimes just as important as, or even more important than, which model is chosen.
Relevant Link:
https://github.com/genekogan/image-tSNE
https://indico.io/blog/visualizing-with-t-sne/
https://github.com/oreillymedia/t-SNE-tutorial
https://drive.google.com/drive/folders/0B3WXSfqxKDkFYm9GMzlnemdEbEE
http://www.vision.caltech.edu/Image_Datasets/Caltech101/#Download
https://github.com/ml4a/ml4a-guides/blob/master/notebooks/image-tsne.ipynb
https://github.com/genekogan/ofxTSNE
http://ml4a.github.io/guides/ImageTSNEViewer/
http://ml4a.github.io/guides/ImageTSNELive/
https://github.com/ml4a/ml4a-ofx
import argparse import sys import numpy as np import json import os from os.path import isfile, join import keras from keras.preprocessing import image from keras.applications.imagenet_utils import decode_predictions, preprocess_input from keras.models import Model from sklearn.decomposition import PCA from sklearn.manifold import TSNE from scipy.spatial import distance from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True def process_arguments(args): parser = argparse.ArgumentParser(description='tSNE on audio') parser.add_argument('--images_path', action='store', help='path to directory of images') parser.add_argument('--output_path', action='store', help='path to where to put output json file') parser.add_argument('--num_dimensions', action='store', default=2, help='dimensionality of t-SNE points (default 2)') parser.add_argument('--perplexity', action='store', default=30, help='perplexity of t-SNE (default 30)') parser.add_argument('--learning_rate', action='store', default=150, help='learning rate of t-SNE (default 150)') params = vars(parser.parse_args(args)) return params def get_image(path, input_shape): img = image.load_img(path, target_size=input_shape) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) return x def analyze_images(images_path): # make feature_extractor model = keras.applications.VGG16(weights='imagenet', include_top=True) feat_extractor = Model(input=model.input, output=model.get_layer("fc2").output) input_shape = model.input_shape[1:3] # get images candidate_images = [f for f in os.listdir(images_path) if os.path.splitext(f)[1].lower() in ['.jpg','.png','.jpeg']] # analyze images and grab activations activations = [] images = [] for idx,image_path in enumerate(candidate_images): file_path = join(images_path,image_path) img = get_image(file_path, input_shape); if img is not None: print("getting activations for %s %d/%d" % (image_path,idx,len(candidate_images))) acts = feat_extractor.predict(img)[0] activations.append(acts) images.append(image_path) # run PCA firt print("Running PCA on %d images..." % len(activations)) features = np.array(activations) pca = PCA(n_components=300) pca.fit(features) pca_features = pca.transform(features) return images, pca_features def run_tsne(images_path, output_path, tsne_dimensions, tsne_perplexity, tsne_learning_rate): images, pca_features = analyze_images(images_path) print("Running t-SNE on %d images..." % len(images)) X = np.array(pca_features) tsne = TSNE(n_components=tsne_dimensions, learning_rate=tsne_learning_rate, perplexity=tsne_perplexity, verbose=2).fit_transform(X) # save data to json data = [] for i,f in enumerate(images): point = [ (tsne[i,k] - np.min(tsne[:,k]))/(np.max(tsne[:,k]) - np.min(tsne[:,k])) for k in range(tsne_dimensions) ] data.append({"path":os.path.abspath(join(images_path,images[i])), "point":point}) with open(output_path, 'w') as outfile: json.dump(data, outfile) if __name__ == '__main__': params = process_arguments(sys.argv[1:]) images_path = params['images_path'] output_path = params['output_path'] tsne_dimensions = int(params['num_dimensions']) tsne_perplexity = int(params['perplexity']) tsne_learning_rate = int(params['learning_rate']) run_tsne(images_path, output_path, tsne_dimensions, tsne_perplexity, tsne_learning_rate) print("finished saving %s" % output_path)
Search Baidu for "AV女優" (adult-film actresses) and download 1,000 images.
python tSNE-images.py --images_path ../data/av/ --output_path ../module/ImageTSNEViewer/av_points.json
Then lay them out on one large canvas with matplotlib:
# -*- coding: utf-8 -*-
import json
from matplotlib.pyplot import imshow
import matplotlib.pyplot
from PIL import Image

if __name__ == '__main__':
    # show on Display Board
    width = 4000
    height = 3000
    max_dim = 100
    full_image = Image.new('RGB', (width, height))

    # reading pre-trained image pointer
    with open('../module/ImageTSNEViewer/av_points.json', 'r') as f:
        data = json.load(f)

    for line in data:
        img = line['path']
        x, y = line['point'][0], line['point'][1]
        print img, x, y
        tile = Image.open(img)
        rs = max(1, tile.width / max_dim, tile.height / max_dim)
        tile = tile.resize((int(tile.width / rs), int(tile.height / rs)), Image.ANTIALIAS)
        full_image.paste(tile, (int((width - max_dim) * x), int((height - max_dim) * y)))

    matplotlib.pyplot.figure(figsize=(16, 12))
    imshow(full_image)

    # we can save the image to disk:
    full_image.save("../assets/example-tSNE-av.jpg")
Zooming in on the local details,
we can see that VGGNet has indeed captured the fine-grained, high-dimensional detail present in the images.