Indexing and Searching POI Data with Lucene

1. Introduction

On spatial data search, I previously wrote "Spatial Search with Solr", which covered indexing and searching GIS data with Solr.

Both Solr and Elasticsearch are built on top of Lucene, and both support spatial search. In some scenarios, however, we need to embed Lucene directly into an existing system to provide indexing and retrieval. This article shows how to use Lucene to index POI records that carry latitude/longitude coordinates, and how to search them.

2. Environment and Data

Lucene version: 5.3.1

POI database: Base_Station test data; each record consists mainly of an ID, latitude/longitude coordinates, and an address.

3. Implementation

Basic field definitions. The address field is tokenized, using the smartcn SmartChineseAnalyzer that ships with Lucene.

    private String indexPath = "D:/IndexPoiData";
    private IndexWriter indexWriter = null;
    private SmartChineseAnalyzer analyzer = new SmartChineseAnalyzer(true);

    private IndexSearcher indexSearcher = null;

    // Field Name
    private static final String IDFieldName = "id";
    private static final String AddressFieldName = "address";
    private static final String LatFieldName = "lat";
    private static final String LngFieldName = "lng";
    private static final String GeoFieldName = "geoField";
    
    // Spatial index and search
    private SpatialContext ctx;
    private SpatialStrategy strategy;

    public PoiIndexService() throws IOException {
        init();
    }

    public PoiIndexService(String indexPath) throws IOException {
        this.indexPath = indexPath;
        init();
    }
    
    protected void init() throws IOException {
        Directory directory = new SimpleFSDirectory(Paths.get(indexPath));
        IndexWriterConfig config = new IndexWriterConfig(analyzer);
        indexWriter = new IndexWriter(directory, config);

        DirectoryReader ireader = DirectoryReader.open(directory);
        indexSearcher = new IndexSearcher(ireader);

        // Typical geospatial context
        // These can also be constructed from SpatialContextFactory
        ctx = SpatialContext.GEO;

        int maxLevels = 11; // results in sub-meter precision for geohash
        // This can also be constructed from SpatialPrefixTreeFactory
        SpatialPrefixTree grid = new GeohashPrefixTree(ctx, maxLevels);

        strategy = new RecursivePrefixTreeStrategy(grid, GeoFieldName);
    }
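The `maxLevels = 11` comment ("sub-meter precision") can be checked with a little arithmetic: each geohash character encodes 5 bits, split between longitude and latitude, so the cell width at a given level follows directly. A standalone sketch in plain Java, with no Lucene dependency (the class name and constant are mine):

```java
// Approximate geohash cell width (meters at the equator) per precision level.
// Each geohash character encodes 5 bits, alternating longitude/latitude,
// with longitude receiving the extra bit at odd levels.
public class GeohashPrecision {
    static final double EARTH_CIRCUMFERENCE_M = 40_075_017.0; // equatorial circumference

    static double cellWidthMeters(int levels) {
        int totalBits = 5 * levels;
        int lonBits = (totalBits + 1) / 2;           // longitude gets the extra bit
        return EARTH_CIRCUMFERENCE_M / Math.pow(2, lonBits);
    }

    public static void main(String[] args) {
        for (int level : new int[]{5, 8, 11}) {
            System.out.printf("level %2d: ~%.2f m%n", level, cellWidthMeters(level));
        }
    }
}
```

At level 11 the cell width comes out to roughly 15 cm, which is why 11 levels is enough for sub-meter precision; lower levels trade precision for a smaller index.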

Indexing the data:

    public boolean indexPoiDataList(List<PoiData> dataList) {
        try {
            if (dataList != null && dataList.size() > 0) {
                List<Document> docs = new ArrayList<>();
                for (PoiData data : dataList) {
                    Document doc = new Document();
                    doc.add(new LongField(IDFieldName, data.getId(), Field.Store.YES));
                    doc.add(new DoubleField(LatFieldName, data.getLat(), Field.Store.YES));
                    doc.add(new DoubleField(LngFieldName, data.getLng(), Field.Store.YES));
                    doc.add(new TextField(AddressFieldName, data.getAddress(), Field.Store.YES));
                    Point point = ctx.makePoint(data.getLng(),data.getLat());
                    for (Field f : strategy.createIndexableFields(point)) {
                        doc.add(f);
                    }
                    docs.add(doc);
                }
                indexWriter.addDocuments(docs);
                indexWriter.commit();
                return true;
            }
            return false;
        } catch (Exception e) {
            log.error(e.toString());
            return false;
        }
    }

PoiData here is a plain POJO.
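A minimal sketch of what that POJO might look like, with field names inferred from the indexing code above (the original class is not shown in the article):

```java
// Plain value object holding one POI record: ID, coordinates, and address.
public class PoiData {
    private long id;
    private double lat;
    private double lng;
    private String address;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public double getLat() { return lat; }
    public void setLat(double lat) { this.lat = lat; }
    public double getLng() { return lng; }
    public void setLng(double lng) { this.lng = lng; }
    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }
}
```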

Searching for data within a circle, sorted by distance from nearest to farthest:

    public List<PoiData> searchPoiInCircle(double lng, double lat, double radius){
        List<PoiData> results= new ArrayList<>();
        Shape circle = ctx.makeCircle(lng, lat, DistanceUtils.dist2Degrees(radius, DistanceUtils.EARTH_MEAN_RADIUS_KM));
        SpatialArgs args = new SpatialArgs(SpatialOperation.Intersects, circle);
        Query query = strategy.makeQuery(args);
        Point pt = ctx.makePoint(lng, lat);
        ValueSource valueSource = strategy.makeDistanceValueSource(pt, DistanceUtils.DEG_TO_KM);//the distance (in km)
        Sort distSort = null;
        TopDocs docs = null;
        try {
            //false = asc dist
            distSort = new Sort(valueSource.getSortField(false)).rewrite(indexSearcher);
            docs = indexSearcher.search(query, 10, distSort);
        } catch (IOException e) {
            log.error(e.toString());
        }
        
        if(docs!=null){
            ScoreDoc[] scoreDocs = docs.scoreDocs;
            printDocs(scoreDocs);
            results = getPoiDatasFromDoc(scoreDocs);
        }
        
        return results;
    }
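The `DistanceUtils.dist2Degrees` call above converts a radius in kilometers into the central-angle degrees that the spatial context works in: degrees = toDegrees(distanceKm / earthRadiusKm). A plain-Java sketch of that conversion (the constant matches Lucene/Spatial4j's `DistanceUtils.EARTH_MEAN_RADIUS_KM`; the class name is mine):

```java
// Converts an arc length on the Earth's surface (km) into central-angle degrees,
// mirroring what DistanceUtils.dist2Degrees(dist, EARTH_MEAN_RADIUS_KM) computes.
public class DistanceConversion {
    static final double EARTH_MEAN_RADIUS_KM = 6371.0087714;

    static double dist2Degrees(double distKm) {
        return Math.toDegrees(distKm / EARTH_MEAN_RADIUS_KM);
    }

    public static void main(String[] args) {
        // One degree of arc on the mean-radius sphere is about 111.195 km.
        System.out.printf("10 km = %.5f degrees%n", dist2Degrees(10.0));
    }
}
```

The inverse factor also explains the `DistanceUtils.DEG_TO_KM` multiplier used for the distance sort, which maps the degree-valued distance source back to kilometers.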

    private List<PoiData> getPoiDatasFromDoc(ScoreDoc[] scoreDocs){
        List<PoiData> datas = new ArrayList<>();
        if (scoreDocs != null) {
                //System.out.println("Total: " + scoreDocs.length);
            for (int i = 0; i < scoreDocs.length; i++) {
                try {
                    Document hitDoc = indexSearcher.doc(scoreDocs[i].doc);
                    PoiData data = new PoiData();
                    data.setId(Long.parseLong((hitDoc.get(IDFieldName))));
                    data.setLng(Double.parseDouble(hitDoc.get(LngFieldName)));
                    data.setLat(Double.parseDouble(hitDoc.get(LatFieldName)));
                    data.setAddress(hitDoc.get(AddressFieldName));
                    datas.add(data);
                } catch (IOException e) {
                    log.error(e.toString());
                }
            }
        }
        
        return datas;
    }

Searching for data within a rectangle:

    public List<PoiData> searchPoiInRectangle(double minLng, double minLat, double maxLng, double maxLat) {
        List<PoiData> results= new ArrayList<>();
        Point lowerLeftPoint = ctx.makePoint(minLng, minLat);
        Point upperRightPoint = ctx.makePoint(maxLng, maxLat);
        Shape rect = ctx.makeRectangle(lowerLeftPoint, upperRightPoint);
        SpatialArgs args = new SpatialArgs(SpatialOperation.Intersects, rect);
        Query query = strategy.makeQuery(args);
        TopDocs docs = null;
        try {
            docs = indexSearcher.search(query, 10);
        } catch (IOException e) {
            log.error(e.toString());
        }
        
        if(docs!=null){
            ScoreDoc[] scoreDocs = docs.scoreDocs;
            printDocs(scoreDocs);
            results = getPoiDatasFromDoc(scoreDocs);
        }
        
        return results;
    } 
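`searchPoiInRectangle` expects explicit lower-left and upper-right corners. When you only have a center point and a radius, a rough bounding box can be derived first. This is a hypothetical helper, not part of the original code; it uses the flat-earth approximation of ~111.195 km per degree of latitude, with the longitude span widened by the cosine of the latitude:

```java
// Derives {minLng, minLat, maxLng, maxLat} around a center point,
// suitable as input to a rectangle search. Approximation only; do not
// use near the poles or across the antimeridian.
public class BoundingBox {
    static final double KM_PER_DEGREE = 111.195;

    static double[] boxAround(double lng, double lat, double radiusKm) {
        double dLat = radiusKm / KM_PER_DEGREE;
        double dLng = radiusKm / (KM_PER_DEGREE * Math.cos(Math.toRadians(lat)));
        return new double[]{lng - dLng, lat - dLat, lng + dLng, lat + dLat};
    }
}
```

A box query like this is cheaper than a circle query and is often used as a coarse first filter, with exact distance checked afterwards.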

Searching for POIs within a given range, combined with an address keyword query:

    public List<PoiData> searchPoiByRangeAndAddress(double lng, double lat, double range, String address) {
        List<PoiData> results = new ArrayList<>();
        SpatialArgs args = new SpatialArgs(SpatialOperation.Intersects,
                ctx.makeCircle(lng, lat, DistanceUtils.dist2Degrees(range, DistanceUtils.EARTH_MEAN_RADIUS_KM)));
        Query geoQuery = strategy.makeQuery(args);
        
        QueryBuilder builder = new QueryBuilder(analyzer);
        Query addQuery = builder.createPhraseQuery(AddressFieldName, address);
        
        BooleanQuery.Builder boolBuilder = new BooleanQuery.Builder();
        boolBuilder.add(addQuery, Occur.SHOULD);
        boolBuilder.add(geoQuery, Occur.MUST);
        
        Query query = boolBuilder.build();
        
        TopDocs docs = null;
        try {
            docs = indexSearcher.search(query, 10);
        } catch (IOException e) {
            log.error(e.toString());
        }
        
        if(docs!=null){
            ScoreDoc[] scoreDocs = docs.scoreDocs;
            printDocs(scoreDocs);
            results = getPoiDatasFromDoc(scoreDocs);
        }
        
        return results;
    }
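Note the clause types in the BooleanQuery above: the spatial clause is `MUST`, so only documents inside the circle can match at all, while the address clause is `SHOULD`, so it only boosts the ranking of documents that also match the keyword. A plain-Java sketch of those semantics over an in-memory list (a hypothetical illustration, not a Lucene API):

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class BoolSemantics {
    // MUST filters the candidate set; SHOULD only reorders it
    // (documents matching the optional term rank first).
    static List<String> search(List<String> docs, String mustTerm, String shouldTerm) {
        return docs.stream()
                .filter(d -> d.contains(mustTerm))                          // Occur.MUST
                .sorted(Comparator.comparing(d -> !d.contains(shouldTerm))) // Occur.SHOULD
                .collect(Collectors.toList());
    }
}
```

Had both clauses been `MUST`, POIs inside the range whose address lacks the keyword would be dropped entirely, which is usually not what a map search wants.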

4. On Tokenization

The address and description fields of a POI need to be tokenized before they can be searched effectively.

A quick comparison of several analyzers:

Original text (a mixed Chinese/English sample, kept in the original for the comparison below):

這是一個lucene中文分詞的例子,你能夠直接運行它!Chinese Analyer can analysis english text too.中國農業銀行(農行)和建設銀行(建行),江蘇南京江寧上元大街12號。東南大學是一所985高校。

Tokenization results:

smartcn SmartChineseAnalyzer

這\是\一個\lucen\中文\分\詞\的\例子\你\能夠\直接\運行\它\chines\analy\can\analysi\english\text\too\中國\農業\銀行\農行\和\建設\銀行\建行\江蘇\南京\江\寧\上\元\大街\12\號\東南\大學\是\一\所\985\高校\

MMSegAnalyzer ComplexAnalyzer

這是\一個\lucene\中文\分詞\的\例子\你\能夠\直接\運行\它\chinese\analyer\can\analysis\english\text\too\中國農業\銀行\農行\和\建設銀行\建\行\江蘇南京\江\寧\上\元\大街\12\號\東南大學\是一\所\985\高校\

IKAnalyzer

這是\一個\lucene\中文\分詞\的\例子\你\能夠\直接\運行\它\chinese\analyer\can\analysis\english\text\too.\中國農業銀行\農行\和\建設銀行\建行\江蘇\南京\江寧\上元\大街\12號\東南大學\是\一所\985\高校\

Observations:

1) smartcn fails to tokenize some English words correctly, and some Chinese words get split into single characters.

2) MMSegAnalyzer handles both English and Chinese correctly, but is inaccurate for place names like 「江寧」 and abbreviations like 「建行」. MMSegAnalyzer supports custom dictionaries, which can greatly improve segmentation accuracy.

3) IKAnalyzer handles both English and Chinese correctly and does fairly well on Chinese, but has minor issues, such as attaching the trailing period to the word "too". IKAnalyzer also supports custom dictionaries, but extending them requires modifying some of its source code.

Summary: Lucene's indexing and retrieval capabilities make it straightforward to build search over data that carries latitude/longitude coordinates and requires tokenized text matching.

 

The code is hosted on GitHub: https://github.com/luxiaoxun/Code4Java
