Notes on Common Python Learning Questions

1. Building a list with for...if

segs = [v for v in segs if not str(v).isdigit()]  # drop numeric tokens

http://www.javashuo.com/article/p-uqwrutlh-bh.html

Basic for...if syntax and examples:

http://www.javashuo.com/article/p-muwqnkba-br.html
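
A minimal sketch of the pattern, using a made-up token list:

# Keep only the non-numeric tokens from a made-up list
tokens = ['2019', 'hello', '42', 'world']
words = [t for t in tokens if not t.isdigit()]
print(words)  # ['hello', 'world']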

2. Usage notes on lambda, filter, map, and reduce in Python

http://www.javashuo.com/article/p-yugycuny-cc.html
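
A minimal, self-contained sketch of the four constructs on a made-up list of numbers:

from functools import reduce

nums = [1, 2, 3, 4, 5]
squares = list(map(lambda x: x * x, nums))        # [1, 4, 9, 16, 25]
evens = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4]
total = reduce(lambda a, b: a + b, nums)          # 15
print(squares, evens, total)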

3. pandas DataFrame: grouping, concatenation, and aggregation

https://blog.csdn.net/cymy001/article/details/78300900
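
A short sketch of concat plus groupby aggregation; the frames df1/df2 and their 'city'/'sales' columns are made up for illustration:

import pandas as pd

df1 = pd.DataFrame({'city': ['bj', 'bj', 'sh'], 'sales': [10, 20, 30]})
df2 = pd.DataFrame({'city': ['sh', 'gz'], 'sales': [5, 8]})

df = pd.concat([df1, df2], ignore_index=True)                      # concatenate the two frames
stats = df.groupby('city')['sales'].agg(['sum', 'mean', 'count'])  # group by city and aggregate
print(stats)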

4. jieba word segmentation: introduction and beginner examples

http://www.javashuo.com/article/p-waqxkwrt-cq.html

More advanced jieba usage: http://www.javashuo.com/article/p-rsnzvfnd-cz.html
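
A minimal getting-started sketch, using the example sentence from jieba's own README:

import jieba

text = '我来到北京清华大学'
print('/'.join(jieba.cut(text)))                # accurate (default) mode
print('/'.join(jieba.cut(text, cut_all=True)))  # full mode
print(jieba.lcut(text))                         # lcut returns a list directly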

5. Bag-of-words model

https://baike.baidu.com/item/%E8%AF%8D%E8%A2%8B%E6%A8%A1%E5%9E%8B/22776998?fr=aladdin
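
A minimal bag-of-words sketch with scikit-learn's CountVectorizer: each document is reduced to its word counts and word order is discarded (get_feature_names() was renamed get_feature_names_out() in newer scikit-learn):

from sklearn.feature_extraction.text import CountVectorizer

docs = ['the cat sat on the mat', 'the dog sat on the log']
cv = CountVectorizer()
bow = cv.fit_transform(docs)
print(cv.get_feature_names())  # learned vocabulary
print(bow.toarray())           # one row of counts per document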

6. Comparing the similarity of two documents with docsim/doc2vec/LSH

https://blog.csdn.net/vs412237401/article/details/52238248

https://blog.csdn.net/qq_16633405/article/details/80578804
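
A rough doc2vec sketch with gensim, assuming a recent release (the vector_size parameter was called size in older versions); the tiny corpus is made up and the similarity is computed by hand as a cosine:

import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [['machine', 'learning', 'is', 'fun'],
          ['deep', 'learning', 'is', 'fun'],
          ['the', 'weather', 'is', 'nice']]
tagged = [TaggedDocument(words=doc, tags=[i]) for i, doc in enumerate(corpus)]
model = Doc2Vec(tagged, vector_size=20, min_count=1, epochs=50)

v1 = model.infer_vector(corpus[0])
v2 = model.infer_vector(corpus[1])
print(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))  # cosine similarity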

7. Python file operations

https://blog.csdn.net/qq_37383691/article/details/76060972

w: open for writing (truncates an existing file; creates the file if it does not exist)
a: open in append mode (writes start at EOF; creates the file if needed)
r+: open for reading and writing
w+: open for reading and writing (see w)
a+: open for reading and writing (see a)
rb: open for reading in binary mode
wb: open for writing in binary mode (see w)
ab: open for appending in binary mode (see a)
rb+: open for reading and writing in binary mode (see r+)
wb+: open for reading and writing in binary mode (see w+)
ab+: open for reading and writing in binary mode (see a+)
fp.read([size]): read up to size bytes/characters, or the whole file if size is omitted
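
A short sketch of the most common modes, writing, appending to, and reading back a throwaway file (demo.txt is hypothetical); the with statement closes the file automatically:

with open('demo.txt', 'w', encoding='utf-8') as f:   # w: truncate/create and write
    f.write('first line\n')

with open('demo.txt', 'a', encoding='utf-8') as f:   # a: append at EOF
    f.write('appended line\n')

with open('demo.txt', 'r', encoding='utf-8') as f:   # r: read
    print(f.read())   # read the whole file; f.read(10) would read at most 10 characters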

8. Short-text similarity computation with LSHForest

  "LSH: locality-sensitive random projection forests in Python - LSHForest/sklearn (part 1)" introduces the basic concepts

  "Comparing the similarity of two documents with docsim/doc2vec/LSH"

  "Text similarity computation with LSHForest" includes example code and data
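
A rough sketch of nearest-neighbour lookup with LSHForest over TF-IDF vectors. Note that LSHForest only exists in older scikit-learn releases (deprecated in 0.19, removed in 0.21), so treat this as a legacy-API reference; the pre-segmented short texts are made up:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import LSHForest   # requires scikit-learn < 0.21

texts = ['手機 屏幕 碎 了', '手機 屏幕 壞 了', '今天 天氣 不錯']
X = TfidfVectorizer().fit_transform(texts)

lshf = LSHForest(random_state=42)
lshf.fit(X)
distances, indices = lshf.kneighbors(X[0], n_neighbors=2)  # approximate neighbours of the first text
print(distances, indices)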

9. Extracting industry keywords with TF-IDF

  Extracting industry keywords with TF-IDF
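
A minimal keyword-extraction sketch using jieba's built-in TF-IDF ranking (the sentence is only an illustration):

import jieba.analyse

text = '自然語言處理是人工智能和語言學領域交叉的分支學科'
keywords = jieba.analyse.extract_tags(text, topK=5, withWeight=True)  # TF-IDF ranked keywords
for word, weight in keywords:
    print(word, weight)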

10. scikit-learn

  Official documentation

11. Document classification with jieba, TfidfVectorizer, and LogisticRegression

  Document classification with jieba, TfidfVectorizer, and LogisticRegression
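
A toy end-to-end sketch, with jieba plugged into TfidfVectorizer as the tokenizer; the four-document corpus and its labels (1 = sports, 0 = finance) are made up:

import jieba
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ['球隊贏得了比賽', '球員完成了進球', '股票價格上漲', '銀行發布了財報']
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(tokenizer=jieba.lcut), LogisticRegression())
model.fit(docs, labels)
print(model.predict(['比賽中的進球']))  # expected to lean towards the sports class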

12. CountVectorizer and TfidfVectorizer

  CountVectorizer and TfidfVectorizer: detailed parameter reference
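
A side-by-side sketch on a made-up two-document corpus: CountVectorizer keeps raw term counts, while TfidfVectorizer re-weights them by inverse document frequency and normalises each row:

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ['apple banana apple', 'banana orange']

print(CountVectorizer().fit_transform(docs).toarray())   # raw counts
print(TfidfVectorizer().fit_transform(docs).toarray())   # TF-IDF weights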

13. Ways to merge multiple lists into one in Python

  1. Use the "+" operator: c = a + b
  2. Use the extend method: a.extend(b) (see the sketch below)

  Ways to merge multiple lists into one in Python
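
A minimal sketch of both approaches:

a = [1, 2, 3]
b = [4, 5]

c = a + b      # "+" builds a new list: [1, 2, 3, 4, 5]
a.extend(b)    # extend appends b's items to a in place
print(c, a)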

14. Checking whether a string starts or ends with a given substring in Python

  item.endswith('.mp4')

  item.startswith('demo')

15. Machine learning notes: feature extraction in text mining

16. Unsupervised text classification

  Article: http://blogspring.cn/view/234

  Source code: https://blog.csdn.net/lhxsir/article/details/83310136

  

import random
import jieba
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
# Load the stop-word list
stopwords=pd.read_csv('D://input_py//day07//stopwords.txt',index_col=False,quoting=3,sep="\t",names=['stopword'], encoding='utf-8')
stopwords=stopwords['stopword'].values
# Load the corpora
laogong_df = pd.read_csv('D://input_py//day07//beilaogongda.csv', encoding='utf-8', sep=',')
laopo_df = pd.read_csv('D://input_py//day07//beilaopoda.csv', encoding='utf-8', sep=',')
erzi_df = pd.read_csv('D://input_py//day07//beierzida.csv', encoding='utf-8', sep=',')
nver_df = pd.read_csv('D://input_py//day07//beinverda.csv', encoding='utf-8', sep=',')
# Drop rows containing NaN from each corpus
laogong_df.dropna(inplace=True)
laopo_df.dropna(inplace=True)
erzi_df.dropna(inplace=True)
nver_df.dropna(inplace=True)
# Convert the 'segment' column of each corpus to a plain list
laogong = laogong_df.segment.values.tolist()
laopo = laopo_df.segment.values.tolist()
erzi = erzi_df.segment.values.tolist()
nver = nver_df.segment.values.tolist()

# Tokenise and clean each line, appending the result to `sentences`
def preprocess_text(content_lines, sentences):
    for line in content_lines:
        try:
            segs = jieba.lcut(line)
            segs = [v for v in segs if not str(v).isdigit()]          # drop numeric tokens
            segs = list(filter(lambda x: x.strip(), segs))            # drop empty / whitespace-only tokens
            segs = list(filter(lambda x: len(x) > 1, segs))           # drop single-character tokens
            segs = list(filter(lambda x: x not in stopwords, segs))   # drop stop words
            sentences.append(" ".join(segs))
        except Exception:
            print(line)
            continue

sentences = []
preprocess_text(laogong, sentences)
preprocess_text(laopo, sentences)
preprocess_text(erzi, sentences)
preprocess_text(nver, sentences)

random.shuffle(sentences)
# Print the first 10 processed sentences to the console
for sentence in sentences[:10]:
    print(sentence)

# Turn the processed sentences into a TF-IDF weighted document-term matrix.
# TfidfVectorizer already combines counting and TF-IDF weighting, so a separate
# TfidfTransformer pass is unnecessary (stacking both would apply IDF twice).
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5)
tfidf = vectorizer.fit_transform(sentences)
# All terms in the learned vocabulary
word = vectorizer.get_feature_names()
# Dense matrix: weight[i][j] is the TF-IDF weight of term j in document i
weight = tfidf.toarray()
# Vocabulary size
print('Features length: ' + str(len(word)))

# K-means clustering of the Chinese text on the TF-IDF features
numClass = 4  # number of clusters
clf = KMeans(n_clusters=numClass, max_iter=10000, init="k-means++", tol=1e-6)  # init="random" also works
pca = PCA(n_components=10)  # reduce dimensionality before clustering
TnewData = pca.fit_transform(weight)  # project the TF-IDF matrix onto 10 components
s = clf.fit(TnewData)

# Visualise the clustering result in 2-D
def plot_cluster(result, newData, numClass):
    plt.figure(2)
    Lab = [[] for i in range(numClass)]
    index = 0
    for labi in result:
        Lab[labi].append(index)
        index += 1
    color = ['oy', 'ob', 'og', 'cs', 'ms', 'bs', 'ks', 'ys', 'yv', 'mv', 'bv', 'kv', 'gv', 'y^', 'm^', 'b^', 'k^',
             'g^'] * 3
    for i in range(numClass):
        x1 = []
        y1 = []
        for ind1 in newData[Lab[i]]:
            try:
                y1.append(ind1[1])
                x1.append(ind1[0])
            except:
                pass
        plt.plot(x1, y1, color[i])

    # Plot the cluster centres
    x1 = []
    y1 = []
    for ind1 in clf.cluster_centers_:
        try:
            y1.append(ind1[1])
            x1.append(ind1[0])
        except:
            pass
    plt.plot(x1, y1, "rv")  # centres as red triangles
    plt.show()

# Option 1: reduce the data to 2 dimensions with PCA and plot the clustering result
# pca = PCA(n_components=2)
# newData = pca.fit_transform(weight)
# result = list(clf.predict(TnewData))
# plot_cluster(result, newData, numClass)

# Option 2: reduce with PCA first, then embed into 2-D with t-SNE
from sklearn.manifold import TSNE
newData = PCA(n_components=4).fit_transform(weight)  # PCA down to 4 components
newData = TSNE(2).fit_transform(newData)             # t-SNE embedding into 2-D
result = list(clf.predict(TnewData))
plot_cluster(result, newData, numClass)

  

17. Clustering and visualising Chinese text with K-means and TF-IDF

18. Python jieba segmentation: keyword extraction, adding words, adjusting word frequencies, and defining a custom dictionary
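
A short sketch of the dictionary-tuning calls, reusing the example sentence from jieba's README (userdict.txt is a hypothetical dictionary file):

import jieba

sentence = '如果放到post中将出错'
print('/'.join(jieba.cut(sentence, HMM=False)))

jieba.suggest_freq(('中', '将'), True)        # adjust frequencies so 中/将 are split
print('/'.join(jieba.cut(sentence, HMM=False)))

jieba.add_word('出错', freq=10, tag='v')      # add a single word to the in-memory dictionary
# jieba.load_userdict('userdict.txt')         # or load a whole custom dictionary file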

19. Text classification with Naive Bayes and SVM

  SVM: a support vector machine (commonly abbreviated SVM, also known as a support vector network) is a supervised model used for classification and regression analysis.
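
A toy comparison sketch: both models are trained on the same TF-IDF features of a made-up, pre-segmented four-document corpus (labels 1 = sports, 0 = finance):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

docs = ['球隊 贏得 比賽', '球員 射門 得分', '股票 價格 上漲', '銀行 發布 財報']
labels = [1, 1, 0, 0]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

nb = MultinomialNB().fit(X, labels)    # Naive Bayes on TF-IDF features
svm = LinearSVC().fit(X, labels)       # linear SVM on the same features

X_test = vec.transform(['球隊 射門 比賽'])
print(nb.predict(X_test), svm.predict(X_test))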

  

  

