Sentence Splitting and Word Tokenization with NLTK

1. Take a paragraph as input and split it into sentences (using the Punkt sentence tokenizer). The original snippet was missing a return statement; the completed function:

import nltk
import nltk.data

def splitSentence(paragraph):
    # Load the pre-trained Punkt sentence tokenizer for English
    tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
    sentences = tokenizer.tokenize(paragraph)
    return sentences