IK Analyzer is an open-source Chinese word segmentation framework built on Lucene. Download: http://code.google.com/p/ik-analyzer/downloads/list
The following files need to be added to the project:
IKAnalyzer.cfg.xml
IKAnalyzer2012.jar
lucene-core-3.6.0.jar
stopword.dic
None of these files needs to be modified; they work as shipped.
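For reference, IKAnalyzer.cfg.xml only comes into play if you want to register extension dictionaries. A minimal sketch of its format follows; the ext.dic file name is a hypothetical user dictionary, and the entry keys are those used by IKAnalyzer2012:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- ext.dic is a hypothetical user dictionary; separate multiple files with ';' -->
    <entry key="ext_dict">ext.dic;</entry>
    <!-- stop-word dictionary shipped with the distribution -->
    <entry key="ext_stopwords">stopword.dic;</entry>
</properties>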
Sample code using IK Analyzer through Lucene's Analyzer API:
package com.haha.test;

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

import org.wltea.analyzer.lucene.IKAnalyzer;

public class Test2 {
    public static void main(String[] args) throws IOException {
        String text = "基於java語言開發的輕量級的中文分詞工具包";
        // Create the analyzer; true selects smart (coarse-grained) segmentation
        Analyzer anal = new IKAnalyzer(true);
        StringReader reader = new StringReader(text);
        // Tokenize; the field name is irrelevant here, so an empty string is passed
        TokenStream ts = anal.tokenStream("", reader);
        CharTermAttribute term = ts.getAttribute(CharTermAttribute.class);
        // Iterate over the tokens
        while (ts.incrementToken()) {
            System.out.print(term.toString() + "|");
        }
        reader.close();
        System.out.println();
    }
}
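The boolean passed to the IKAnalyzer constructor selects the segmentation mode: true enables smart (coarse-grained) segmentation, while false enables fine-grained segmentation, which emits more, overlapping tokens. A minimal sketch contrasting the two modes (the tokenize helper is ours, not part of IK):

package com.haha.test;

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

import org.wltea.analyzer.lucene.IKAnalyzer;

public class ModeDemo {
    // Hypothetical helper: run text through the given analyzer and print the tokens
    static void tokenize(Analyzer anal, String text) throws IOException {
        StringReader reader = new StringReader(text);
        TokenStream ts = anal.tokenStream("", reader);
        CharTermAttribute term = ts.getAttribute(CharTermAttribute.class);
        while (ts.incrementToken()) {
            System.out.print(term.toString() + "|");
        }
        System.out.println();
        reader.close();
    }

    public static void main(String[] args) throws IOException {
        String text = "基於java語言開發的輕量級的中文分詞工具包";
        tokenize(new IKAnalyzer(true), text);  // smart mode: fewer, longer tokens
        tokenize(new IKAnalyzer(false), text); // fine-grained mode: more tokens
    }
}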
The same segmentation implemented with IK's own IKSegmenter API, without going through Lucene's Analyzer interface:
package com.haha.test;

import java.io.IOException;
import java.io.StringReader;

import org.wltea.analyzer.core.IKSegmenter;
import org.wltea.analyzer.core.Lexeme;

public class Test3 {
    public static void main(String[] args) throws IOException {
        String text = "基於java語言開發的輕量級的中文分詞工具包";
        StringReader sr = new StringReader(text);
        // true selects smart mode, matching the Analyzer example above
        IKSegmenter ik = new IKSegmenter(sr, true);
        Lexeme lex = null;
        // next() returns null once the input is exhausted
        while ((lex = ik.next()) != null) {
            System.out.print(lex.getLexemeText() + "|");
        }
    }
}
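Besides the surface text, each Lexeme also carries character offsets into the input, which is useful for tasks such as highlighting. A small sketch, assuming the getBeginPosition/getEndPosition accessors that IKAnalyzer2012's Lexeme exposes:

package com.haha.test;

import java.io.IOException;
import java.io.StringReader;

import org.wltea.analyzer.core.IKSegmenter;
import org.wltea.analyzer.core.Lexeme;

public class Test4 {
    public static void main(String[] args) throws IOException {
        String text = "基於java語言開發的輕量級的中文分詞工具包";
        IKSegmenter ik = new IKSegmenter(new StringReader(text), true);
        Lexeme lex;
        while ((lex = ik.next()) != null) {
            // Print each token with its [begin, end) character offsets
            System.out.println(lex.getLexemeText() + " [" + lex.getBeginPosition()
                    + "," + lex.getEndPosition() + ")");
        }
    }
}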
Reference: https://blog.csdn.net/lijun7788/article/details/7719166