Data in tsv, csv, txt, or json format can usually be processed with torchtext's TabularDataset. Requirements on the data (a small sketch of the file layouts follows the list below):
- tsv: the first row holds the field names, separated by tabs; the remaining rows hold the data, with the fields again separated by tabs;
- csv: the first row holds the field names, the remaining rows hold the data;
- json: one dict per line, where the keys are the field names and the values are the data.
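As an illustration only (the file names here are made up), one row of the tsv and json layouts could be produced like this:

import json

# Hypothetical files, purely to illustrate the layouts described above.
with open('data/example.tsv', 'w', encoding='utf-8') as f:
    f.write("Phrase\tSentiment\n")         # first row: field names, tab-separated
    f.write("a very good movie\t4\n")      # remaining rows: data, tab-separated

with open('data/example.json', 'w', encoding='utf-8') as f:
    # one dict per line: keys are the field names, values are the data
    f.write(json.dumps({"Phrase": "a very good movie", "Sentiment": 4}) + "\n")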
This post uses the following tsv-format dataset:
sentiment-analysis-on-movie-reviews.zip
Dataset format: each row contains the fields PhraseId, SentenceId, Phrase, and Sentiment, separated by tabs.
Note: if some fields are missing from the test set, torchtext will run into problems when processing it, so make sure the train, val, and test sets all contain the fields that are to be processed.
Method 1: torchtext
Task: build a sentence-classification dataset from the movie-review tsv above
inputs: [Phrase token sequence]    target: [Sentiment label]
from torchtext.data import Field, TabularDataset, BucketIterator
import torch

batch_size = 6
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

tokenize_x = lambda x: x.split()
tokenize_y = lambda y: y

TEXT = Field(sequential=True, use_vocab=True, tokenize=tokenize_x, lower=True,
             batch_first=True, init_token='<BOS>', eos_token='<EOS>')
LABEL = Field(sequential=False, use_vocab=False, tokenize=tokenize_y,
              batch_first=True, init_token=None, eos_token=None)

# fields = {'english': ('en', ENGLISH), 'chinese': ('cn', CHINESE)}
# The first element of each tuple is the field name in the tsv file
fields = [("PhraseId", None), ("SentenceId", None), ("Phrase", TEXT), ("Sentiment", LABEL)]

train_data, test_data = TabularDataset.splits(path='data',
                                              train='movie-sentiment_train.tsv',
                                              test='movie-sentiment_test.tsv',
                                              format='tsv',
                                              skip_header=True,
                                              fields=fields)

TEXT.build_vocab(train_data, max_size=10000, min_freq=2)
VOCAB_SIZE = len(TEXT.vocab)

# Operations on the vocabulary
print("vocabulary size: ", VOCAB_SIZE)
print(TEXT.vocab.freqs)
print(TEXT.vocab.itos[:10])
for i, v in enumerate(TEXT.vocab.stoi):
    if i == 10:
        break
    print(v)
print(TEXT.vocab.stoi['apple'])
print('<BOS> index is ', TEXT.vocab.stoi['<BOS>'])
print('<EOS> index is ', TEXT.vocab.stoi['<EOS>'])

UNK_STR = TEXT.unk_token
PAD_STR = TEXT.pad_token
UNK_IDX = TEXT.vocab.stoi[UNK_STR]
PAD_IDX = TEXT.vocab.stoi[PAD_STR]
print(f'{UNK_STR} index is {UNK_IDX}')
print(f'{PAD_STR} index is {PAD_IDX}')

# Operations on the dataset
print(len(train_data))
print(train_data[0].__dict__.keys())
print(train_data[0].__dict__.values())
# vars() returns the attributes of an object
print(vars(train_data.examples[0]))
print(train_data[0].Phrase)
print(train_data[0].Sentiment)

"""
batch_sizes: Tuple of batch sizes to use for the different splits,
or None to use the same batch_size for all splits.
"""
train_iterator, test_iterator = BucketIterator.splits((train_data, test_data),
                                                      batch_size=32,
                                                      batch_sizes=None,
                                                      device=device,
                                                      repeat=False,
                                                      # shuffle=True,
                                                      sort_key=lambda x: len(x.Phrase),
                                                      sort=False,
                                                      sort_within_batch=True)

for batch in train_iterator:
    print(batch.Phrase.shape)
    print([TEXT.vocab.itos[idx] for idx in batch.Phrase[0]])
    print(batch.Sentiment)
    break
If only a single text file needs to be processed, drop the splits method and adjust the initialization parameters. The modified code is as follows:
fields = [("PhraseId", None), ("SentenceId", None), ("Phrase", TEXT), ("Sentiment", LABEL)]

train_data = TabularDataset(path='data/movie-sentiment_train.tsv',
                            format='tsv',
                            skip_header=True,
                            fields=fields)

train_iterator = BucketIterator(train_data,
                                batch_size=batch_size,
                                device=device,
                                shuffle=False,
                                repeat=False,
                                sort_key=lambda x: len(x.Phrase),
                                sort_within_batch=False)
Whether a field needs use_vocab=True, i.e. whether a vocabulary has to be built for it:
For the input data, a vocabulary always has to be built. For the labels, if they are numeric (though actually stored as strings), the iterator will normally cast them with int() into a LongTensor; if the labels are not numeric, a vocabulary has to be built so that the iterator can convert the field into a LongTensor.
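For example (a minimal sketch, assuming the labels were strings such as "positive"/"negative" instead of digits), the label field would then be declared with use_vocab=True and get its own vocabulary; torchtext's LabelField is essentially this preset:

# Sketch only: string labels need their own vocabulary.
LABEL_STR = Field(sequential=False, use_vocab=True, batch_first=True,
                  unk_token=None)      # closed label set, no <unk> entry needed
# fields = [..., ("Sentiment", LABEL_STR)]
# LABEL_STR.build_vocab(train_data)    # maps each label string to an integer index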
On the difference between passing a list and a dict as the fields argument of TabularDataset:
list
When fields is passed as a list, the entries must be constructed in the same order as the fields appear in the dataset. Advantage: the first row of the dataset does not have to contain the field names. Disadvantage: the train, test, and val datasets must have exactly the same fields.
Set TabularDataset's skip_header to True or False depending on whether the first row of the dataset contains the field names.
fields = [("PhraseId", None), ("SentenceId", None), ("Phrase", TEXT), ("Sentiment", LABEL)]
dict
When fields is passed as a dict, you can pick just the fields you need. Advantage: the train, test, and val datasets do not have to have exactly the same fields. Disadvantage: the first row of the dataset must contain the field names.
In this case TabularDataset's skip_header must be False, as in the sketch below.
fields = {'Phrase': ('Phrase', TEXT), 'Sentiment': ('Sentiment', LABEL)}
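Putting the two rules together, a dict-style call might look like this (reusing the TEXT/LABEL fields defined above):

fields = {'Phrase': ('Phrase', TEXT), 'Sentiment': ('Sentiment', LABEL)}
train_data = TabularDataset(path='data/movie-sentiment_train.tsv',
                            format='tsv',
                            skip_header=False,   # must stay False: the header row is used to match the dict keys
                            fields=fields)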
Sort and shuffle in BucketIterator (a sketch of the recommended settings follows this list):
shuffle: whether the order in which batches are drawn is shuffled; the default is recommended, i.e. shuffle the train set and leave the other sets alone;
sort_key=lambda x: len(x.Phrase): the key by which examples are sorted;
sort: sorts the whole dataset by sort_key (in ascending order); False is recommended;
sort_within_batch: sorts each batch by sort_key (in descending order); True is recommended.
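A minimal sketch of the recommended combination (reusing the names from the code above):

train_iterator = BucketIterator(train_data,
                                batch_size=batch_size,
                                device=device,
                                sort=False,                        # do not pre-sort the whole dataset
                                sort_key=lambda x: len(x.Phrase),  # bucket by phrase length
                                sort_within_batch=True)            # sort inside each batch, longest first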
Method 2: hand-written code
Task: build a translation dataset
inputs: [english, chinese]    target: [(english, en_len, chinese, cn_len), ...]
Steps:
- Tokenize into two nested lists (one for English, one for Chinese)
- Build a vocabulary for each language
- Replace the English and Chinese words with their vocabulary indices
- Build the batches
  - From the number of English sentences and the batch size, build groups of batch indices
  - Using those batch index groups, build the batch data and return a list with the length of each sentence
import torch
import numpy as np
import nltk
import jieba
from collections import Counter

UNK_IDX = 0
PAD_IDX = 1
batch_size = 64
train_file = 'data/translate_train.txt'
dev_file = 'data/translate_dev.txt'

"""
Data format: english \t chinese
Read the English-Chinese translation file, add 'BOS' and 'EOS' at the start and
end of each sentence, and return two lists.
"""
def load_data(in_file):
    cn = []
    en = []
    with open(in_file, 'r', encoding='utf-8') as f:
        for line in f:
            line = line.strip().split("\t")
            en.append(['BOS'] + nltk.word_tokenize(line[0].lower()) + ['EOS'])
            # cn.append(['BOS'] + [c for c in line[1]] + ['EOS'])
            cn.append(['BOS'] + jieba.lcut(line[1]) + ['EOS'])
    return en, cn

"""
Build the vocabulary.
"""
def build_dict(sentences, max_words=50000):
    word_count = Counter()
    for sentence in sentences:
        for s in sentence:
            word_count[s] += 1
    ls = word_count.most_common(max_words)
    total_words = len(ls) + 2
    word_dict = {w[0]: index for index, w in enumerate(ls, 2)}
    word_dict['UNK'] = UNK_IDX
    word_dict['PAD'] = PAD_IDX
    return word_dict, total_words

# Turn sentences into indices
def encode(en_sentences, cn_sentences, en_dict, cn_dict, sort_by_len=True):
    """
    Encode the sequences.
    """
    length = len(en_sentences)
    # Map each word to its index in the vocabulary
    out_en_sentences = [[en_dict.get(w, 0) for w in sent] for sent in en_sentences]
    out_cn_sentences = [[cn_dict.get(w, 0) for w in sent] for sent in cn_sentences]

    def len_argsort(seq):
        return sorted(range(len(seq)), key=lambda x: len(seq[x]))

    if sort_by_len:
        sorted_index = len_argsort(out_en_sentences)
        out_en_sentences = [out_en_sentences[i] for i in sorted_index]
        out_cn_sentences = [out_cn_sentences[i] for i in sorted_index]

    return out_en_sentences, out_cn_sentences

def get_minibatches(n, minibatch_size, shuffle=False):
    idx_list = np.arange(0, n, minibatch_size)  # start indices: [0, minibatch_size, 2*minibatch_size, ...]
    if shuffle:
        np.random.shuffle(idx_list)
    minibatches = []
    for idx in idx_list:
        minibatches.append(np.arange(idx, min(idx + minibatch_size, n)))
    return minibatches

def prepare_data(seqs, padding_idx):
    lengths = [len(seq) for seq in seqs]
    n_samples = len(seqs)
    max_len = np.max(lengths)

    x = np.full((n_samples, max_len), padding_idx).astype('int32')
    x_lengths = np.array(lengths).astype("int32")
    for idx, seq in enumerate(seqs):
        x[idx, :lengths[idx]] = seq
    return x, x_lengths  # x_mask

def gen_examples(en_sentences, cn_sentences, batch_size):
    minibatches = get_minibatches(len(en_sentences), batch_size)
    all_ex = []
    for minibatch in minibatches:
        mb_en_sentences = [en_sentences[t] for t in minibatch]
        mb_cn_sentences = [cn_sentences[t] for t in minibatch]
        mb_x, mb_x_len = prepare_data(mb_en_sentences, PAD_IDX)
        mb_y, mb_y_len = prepare_data(mb_cn_sentences, PAD_IDX)
        all_ex.append((mb_x, mb_x_len, mb_y, mb_y_len))
    return all_ex

train_en, train_cn = load_data(train_file)
dev_en, dev_cn = load_data(dev_file)

en_dict, en_total_words = build_dict(train_en)
cn_dict, cn_total_words = build_dict(train_cn)
inv_en_dict = {v: k for k, v in en_dict.items()}
inv_cn_dict = {v: k for k, v in cn_dict.items()}

train_en, train_cn = encode(train_en, train_cn, en_dict, cn_dict)
dev_en, dev_cn = encode(dev_en, dev_cn, en_dict, cn_dict)
print(" ".join([inv_cn_dict[i] for i in train_cn[100]]))
print(" ".join([inv_en_dict[i] for i in train_en[100]]))

train_data = gen_examples(train_en, train_cn, batch_size)
dev_data = gen_examples(dev_en, dev_cn, batch_size)
print(len(train_data))
print(train_data[0])