Solr: Schema Design

This article has moved to http://www.zhoujingen.cn/blog/8546.html

 

Solr stores data in a structured form and can build indexes on that data as it is stored; this structure is defined through schema.xml.

<?xml version="1.0" encoding="UTF-8" ?>
<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements.  See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->

<!--  
 This is the Solr schema file. This file should be named "schema.xml" and
 should be in the conf directory under the solr home
 (i.e. ./solr/conf/schema.xml by default) 
 or located where the classloader for the Solr webapp can find it.

 This example schema is the recommended starting point for users.
 It should be kept correct and concise, usable out-of-the-box.

 For more information, on how to customize this file, please see
 http://wiki.apache.org/solr/SchemaXml
-->

<schema name="example" version="1.5">
  <!-- attribute "name" is the name of this schema and is only used for display purposes.
       version="x.y" is Solr's version number for the schema syntax and 
       semantics.  It should not normally be changed by applications.

       1.0: multiValued attribute did not exist, all fields are multiValued 
            by nature
       1.1: multiValued attribute introduced, false by default 
       1.2: omitTermFreqAndPositions attribute introduced, true by default 
            except for text fields.
       1.3: removed optional field compress feature
       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
            behavior when a single string produces multiple tokens.  Defaults 
            to off for version >= 1.4
       1.5: omitNorms defaults to true for primitive field types 
            (int, float, boolean, string...)
     -->


   <!-- Valid attributes for fields:
     name: mandatory - the name for the field
     type: mandatory - the name of a field type from the 
       <types> fieldType section
     indexed: true if this field should be indexed (searchable or sortable)
     stored: true if this field should be retrievable
     docValues: true if this field should have doc values. Doc values are
       useful for faceting, grouping, sorting and function queries. Although not
       required, doc values will make the index faster to load, more
       NRT-friendly and more memory-efficient. They however come with some
       limitations: they are currently only supported by StrField, UUIDField
       and all Trie*Fields, and depending on the field type, they might
       require the field to be single-valued, be required or have a default
       value (check the documentation of the field type you're interested in
       for more information)
     multiValued: true if this field may contain multiple values per document
     omitNorms: (expert) set to true to omit the norms associated with
       this field (this disables length normalization and index-time
       boosting for the field, and saves some memory).  Only full-text
       fields or fields that need an index-time boost need norms.
       Norms are omitted for primitive (non-analyzed) types by default.
     termVectors: [false] set to true to store the term vector for a
       given field.
       When using MoreLikeThis, fields used for similarity should be
       stored for best performance.
     termPositions: Store position information with the term vector.  
       This will increase storage costs.
     termOffsets: Store offset information with the term vector. This 
       will increase storage costs.
     required: The field is required.  It will throw an error if the
       value does not exist
     default: a value that should be used if no value is specified
       when adding a document.
   -->

   <!-- field names should consist of alphanumeric or underscore characters only and
      not start with a digit.  This is not currently strictly enforced,
      but other field names will not have first class support from all components
      and back compatibility is not guaranteed.  Names with both leading and
      trailing underscores (e.g. _version_) are reserved.
   -->

   <!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml
      or Solr won't start. _version_ and update log are required for SolrCloud
   --> 
   <field name="_version_" type="long" indexed="true" stored="true"/>
   
   <!-- points to the root document of a block of nested documents. Required for nested
      document support, may be removed otherwise
   -->
   <field name="_root_" type="string" indexed="true" stored="false"/>

   <!-- Only remove the "id" field if you have a very good reason to. While not strictly
     required, it is highly recommended. A <uniqueKey> is present in almost all Solr 
     installations. See the <uniqueKey> declaration below where <uniqueKey> is set to "id".
     Do NOT change the type and apply index-time analysis to the <uniqueKey> as it will likely 
     make routing in SolrCloud and document replacement in general fail. Limited _query_ time
     analysis is possible as long as the indexing process is guaranteed to index the term
     in a compatible way. Any analysis applied to the <uniqueKey> should _not_ produce multiple
     tokens
   -->   
   <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" /> 

   <!-- Dynamic field definitions allow using convention over configuration
       for fields via the specification of patterns to match field names. 
       EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
       RESTRICTION: the glob-like pattern in the name attribute must have
       a "*" only at the start or the end.  -->
   
   <dynamicField name="*_i"  type="int"    indexed="true"  stored="true"/>
   <dynamicField name="*_is" type="int"    indexed="true"  stored="true"  multiValued="true"/>
   <dynamicField name="*_s"  type="string"  indexed="true"  stored="true" />
   <dynamicField name="*_ss" type="string"  indexed="true"  stored="true" multiValued="true"/>
   <dynamicField name="*_l"  type="long"   indexed="true"  stored="true"/>
   <dynamicField name="*_ls" type="long"   indexed="true"  stored="true"  multiValued="true"/>
   <dynamicField name="*_t"  type="text_general"    indexed="true"  stored="true"/>
   <dynamicField name="*_txt" type="text_general"   indexed="true"  stored="true" multiValued="true"/>
   <dynamicField name="*_en"  type="text_en"    indexed="true"  stored="true" multiValued="true"/>
   <dynamicField name="*_b"  type="boolean" indexed="true" stored="true"/>
   <dynamicField name="*_bs" type="boolean" indexed="true" stored="true"  multiValued="true"/>
   <dynamicField name="*_f"  type="float"  indexed="true"  stored="true"/>
   <dynamicField name="*_fs" type="float"  indexed="true"  stored="true"  multiValued="true"/>
   <dynamicField name="*_d"  type="double" indexed="true"  stored="true"/>
   <dynamicField name="*_ds" type="double" indexed="true"  stored="true"  multiValued="true"/>

   <!-- Type used to index the lat and lon components for the "location" FieldType -->
   <dynamicField name="*_coordinate"  type="tdouble" indexed="true"  stored="false" />

   <dynamicField name="*_dt"  type="date"    indexed="true"  stored="true"/>
   <dynamicField name="*_dts" type="date"    indexed="true"  stored="true" multiValued="true"/>
   <dynamicField name="*_p"  type="location" indexed="true" stored="true"/>

   <!-- some trie-coded dynamic fields for faster range queries -->
   <dynamicField name="*_ti" type="tint"    indexed="true"  stored="true"/>
   <dynamicField name="*_tl" type="tlong"   indexed="true"  stored="true"/>
   <dynamicField name="*_tf" type="tfloat"  indexed="true"  stored="true"/>
   <dynamicField name="*_td" type="tdouble" indexed="true"  stored="true"/>
   <dynamicField name="*_tdt" type="tdate"  indexed="true"  stored="true"/>

   <dynamicField name="*_c"   type="currency" indexed="true"  stored="true"/>

   <dynamicField name="ignored_*" type="ignored" multiValued="true"/>
   <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/>

   <dynamicField name="random_*" type="random" />

   <!-- uncomment the following to ignore any fields that don't already match an existing 
        field name or dynamic field, rather than reporting them as an error. 
        alternately, change the type="ignored" to some other type e.g. "text" if you want 
        unknown fields indexed and/or stored by default --> 
   <!--dynamicField name="*" type="ignored" multiValued="true" /-->

 <!-- Field to use to determine and enforce document uniqueness. 
      Unless this field is marked with required="false", it will be a required field
   -->
 <uniqueKey>id</uniqueKey>

  <!-- copyField commands copy one field to another at the time a document
        is added to the index.  It's used either to index the same field differently,
        or to add multiple fields to the same field for easier/faster searching.  -->

  <!--
   <copyField source="title" dest="text"/>
   <copyField source="body" dest="text"/>
  -->
  
    <!-- field type definitions. The "name" attribute is
       just a label to be used by field definitions.  The "class"
       attribute and any other attributes determine the real
       behavior of the fieldType.
         Class names starting with "solr" refer to java classes in a
       standard package such as org.apache.solr.analysis
    -->

    <!-- The StrField type is not analyzed, but indexed/stored verbatim.
       It supports doc values but in that case the field needs to be
       single-valued and either required or have a default value.
      -->
    <fieldType name="string" class="solr.StrField" sortMissingLast="true" />

    <!-- boolean type: "true" or "false" -->
    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>

    <!-- sortMissingLast and sortMissingFirst are optional attributes that are
         currently supported on types that are sorted internally as strings
         and on numeric types.
       This includes "string","boolean", and, as of 3.5 (and 4.x),
       int, float, long, date, double, including the "Trie" variants.
       - If sortMissingLast="true", then a sort on this field will cause documents
         without the field to come after documents with the field,
         regardless of the requested sort order (asc or desc).
       - If sortMissingFirst="true", then a sort on this field will cause documents
         without the field to come before documents with the field,
         regardless of the requested sort order.
       - If sortMissingLast="false" and sortMissingFirst="false" (the default),
         then default lucene sorting will be used which places docs without the
         field first in an ascending sort and last in a descending sort.
    -->    

    <!--
      Default numeric field types. For faster range queries, consider the tint/tfloat/tlong/tdouble types.

      These fields support doc values, but they require the field to be
      single-valued and either be required or have a default value.
    -->
    <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
    <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0"/>
    <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
    <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0"/>

    <!--
     Numeric field types that index each value at various levels of precision
     to accelerate range queries when the number of values between the range
     endpoints is large. See the javadoc for NumericRangeQuery for internal
     implementation details.

     Smaller precisionStep values (specified in bits) will lead to more tokens
     indexed per value, slightly larger index size, and faster range queries.
     A precisionStep of 0 disables indexing at different precision levels.
    -->
    <fieldType name="tint" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0"/>
    <fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" positionIncrementGap="0"/>
    <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" positionIncrementGap="0"/>
    <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0"/>

    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
         is a more restricted form of the canonical representation of dateTime
         http://www.w3.org/TR/xmlschema-2/#dateTime    
         The trailing "Z" designates UTC time and is mandatory.
         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
         All other components are mandatory.

         Expressions can also be used to denote calculations that should be
         performed relative to "NOW" to determine the value, ie...

               NOW/HOUR
                  ... Round to the start of the current hour
               NOW-1DAY
                  ... Exactly 1 day prior to now
               NOW/DAY+6MONTHS+3DAYS
                  ... 6 months and 3 days in the future from the start of
                      the current day
                      
         Consult the TrieDateField javadocs for more information.

         Note: For faster range queries, consider the tdate type
      -->
    <fieldType name="date" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0"/>

    <!-- A Trie based date field for faster date range queries and date faceting. -->
    <fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0"/>


    <!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->
    <fieldType name="binary" class="solr.BinaryField"/>

    <!-- The "RandomSortField" is not used to store or search any
         data.  You can declare fields of this type it in your schema
         to generate pseudo-random orderings of your docs for sorting 
         or function purposes.  The ordering is generated based on the field
         name and the version of the index. As long as the index version
         remains unchanged, and the same field name is reused,
         the ordering of the docs will be consistent.  
         If you want different pseudo-random orderings of documents,
         for the same version of the index, use a dynamicField and
         change the field name in the request.
     -->
    <fieldType name="random" class="solr.RandomSortField" indexed="true" />

    <!-- solr.TextField allows the specification of custom text analyzers
         specified as a tokenizer and a list of token filters. Different
         analyzers may be specified for indexing and querying.

         The optional positionIncrementGap puts space between multiple fields of
         this type on the same document, with the purpose of preventing false phrase
         matching across fields.

         For more info on customizing your analyzer chain, please see
         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
     -->

    <!-- One can also specify an existing Analyzer class that has a
         default constructor via the class attribute on the analyzer element.
         Example:
    <fieldType name="text_greek" class="solr.TextField">
      <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
    </fieldType>
    -->

    <!-- A text field that only splits on whitespace for exact matching of words -->
    <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      </analyzer>
    </fieldType>

    <!-- A general text field that has reasonable, generic
         cross-language defaults: it tokenizes with StandardTokenizer,
   removes stop words from case-insensitive "stopwords.txt"
   (empty by default), and down cases.  At query time only, it
   also applies synonyms. -->
    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
        <!-- in this example, we will only use synonyms at query time
        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        -->
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

    <!-- A text field with defaults appropriate for English: it
         tokenizes with StandardTokenizer, removes English stop words
         (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
         finally applies Porter's stemming.  The query time analyzer
         also applies synonyms from synonyms.txt. -->
    <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <!-- in this example, we will only use synonyms at query time
        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        -->
        <!-- Case insensitive stop word removal.
        -->
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
                />
        <filter class="solr.LowerCaseFilterFactory"/>
  <filter class="solr.EnglishPossessiveFilterFactory"/>
        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
  <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
        <filter class="solr.EnglishMinimalStemFilterFactory"/>
  -->
        <filter class="solr.PorterStemFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
                />
        <filter class="solr.LowerCaseFilterFactory"/>
  <filter class="solr.EnglishPossessiveFilterFactory"/>
        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
  <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
        <filter class="solr.EnglishMinimalStemFilterFactory"/>
  -->
        <filter class="solr.PorterStemFilterFactory"/>
      </analyzer>
    </fieldType>

    <!-- A text field with defaults appropriate for English, plus
   aggressive word-splitting and autophrase features enabled.
   This field is just like text_en, except it adds
   WordDelimiterFilter to enable splitting and matching of
   words on case-change, alpha numeric boundaries, and
   non-alphanumeric chars.  This means certain compound word
   cases will work, for example query "wi fi" will match
   document "WiFi" or "wi-fi".
        -->
    <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
      <analyzer type="index">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <!-- in this example, we will only use synonyms at query time
        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        -->
        <!-- Case insensitive stop word removal.
        -->
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
                />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
        <filter class="solr.PorterStemFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
                />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
        <filter class="solr.PorterStemFilterFactory"/>
      </analyzer>
    </fieldType>

    <!-- Less flexible matching, but less false matches.  Probably not ideal for product names,
         but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->
    <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
        <filter class="solr.EnglishMinimalStemFilterFactory"/>
        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
             possible with WordDelimiterFilter in conjunction with stemming. -->
        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
      </analyzer>
    </fieldType>

    <!-- Just like text_general except it reverses the characters of
   each token, to enable more efficient leading wildcard queries. -->
    <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
           maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

    <!-- This is an example of using the KeywordTokenizer along
         With various TokenFilterFactories to produce a sortable field
         that does not include some properties of the source text
      -->
    <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
      <analyzer>
        <!-- KeywordTokenizer does no actual tokenizing, so the entire
             input string is preserved as a single token
          -->
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <!-- The LowerCase TokenFilter does what you expect, which can be useful
             when you want your sorting to be case insensitive
          -->
        <filter class="solr.LowerCaseFilterFactory" />
        <!-- The TrimFilter removes any leading or trailing whitespace -->
        <filter class="solr.TrimFilterFactory" />
        <!-- The PatternReplaceFilter gives you the flexibility to use
             Java Regular expression to replace any sequence of characters
             matching a pattern with an arbitrary replacement string, 
             which may include back references to portions of the original
             string matched by the pattern.
             
             See the Java Regular Expression documentation for more
             information on pattern and replacement string syntax.
             
             http://docs.oracle.com/javase/7/docs/api/java/util/regex/package-summary.html
          -->
        <filter class="solr.PatternReplaceFilterFactory"
                pattern="([^a-z])" replacement="" replace="all"
        />
      </analyzer>
    </fieldType>

    <!-- lowercases the entire field value, keeping it as a single token.  -->
    <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory" />
      </analyzer>
    </fieldType>

    <!-- since fields of this type are by default not stored or indexed,
         any data added to them will be ignored outright.  --> 
    <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />

    <!-- This point type indexes the coordinates as separate fields (subFields)
      If subFieldType is defined, it references a type, and a dynamic field
      definition is created matching *___<typename>.  Alternately, if 
      subFieldSuffix is defined, that is used to create the subFields.
      Example: if subFieldType="double", then the coordinates would be
        indexed in fields myloc_0___double,myloc_1___double.
      Example: if subFieldSuffix="_d" then the coordinates would be indexed
        in fields myloc_0_d,myloc_1_d
      The subFields are an implementation detail of the fieldType, and end
      users normally should not need to know about them.
     -->
    <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>

    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
    <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>

    <!-- An alternative geospatial field type new to Solr 4.  It supports multiValued and polygon shapes.
      For more information about this and other Spatial fields new to Solr 4, see:
      http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
    -->
    <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
        geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />

    <!-- Spatial rectangle (bounding box) field. It supports most spatial predicates, and has
     special relevancy modes: score=overlapRatio|area|area2D (local-param to the query).  DocValues is recommended for
     relevancy. -->
    <fieldType name="bbox" class="solr.BBoxField"
               geo="true" distanceUnits="kilometers" numberType="_bbox_coord" />
    <fieldType name="_bbox_coord" class="solr.TrieDoubleField" precisionStep="8" docValues="true" stored="false"/>

   <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType
        Parameters:
          defaultCurrency: Specifies the default currency if none specified. Defaults to "USD"
          precisionStep:   Specifies the precisionStep for the TrieLong field used for the amount
          providerClass:   Lets you plug in other exchange provider backend:
                           solr.FileExchangeRateProvider is the default and takes one parameter:
                             currencyConfig: name of an xml file holding exchange rates
                           solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:
                             ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)
                             refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)
   -->
    <fieldType name="currency" class="solr.CurrencyField" precisionStep="8" defaultCurrency="USD" currencyConfig="currency.xml" />

</schema>

 

schema.xml lives in the solr/conf/ directory. It is similar to a database table definition: it declares the data types of the data to be indexed, and mainly consists of types, fields and a few other default settings. Solr's schema configuration is very flexible and rich; the sections below describe it in detail.

Basic schema configuration

Let's start with a simple schema configuration:

<?xml version="1.0" encoding="UTF-8" ?>
<schema name="user" version="1.5">
   <field name="_version_" type="long" indexed="true" stored="true"/>
   <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
   <field name="name" type="text_general" indexed="true" stored="true"/>
   <uniqueKey>id</uniqueKey>

   <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
   <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
   <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.StandardTokenizerFactory"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
      </analyzer>
    </fieldType>
</schema>

The root element of schema.xml is schema. It has a name attribute whose value can be anything you like, and there is not much more to say about it. Under the schema element the two main elements are field and fieldType: field defines a field, and fieldType defines a field type.

Standard field settings

Concrete fields (similar to columns in a database table) are defined as field elements inside the fields node. A field definition includes name, type (one of the previously defined fieldTypes), indexed (whether the field is indexed), stored (whether it is stored), multiValued (whether it can hold multiple values), and so on.
Example:

<fields> 
  <field name="id" type="integer" indexed="true" stored="true" required="true" /> 
  <field name="name" type="text" indexed="true" stored="true" /> 
  <field name="summary" type="text" indexed="true" stored="true" /> 
  <field name="author" type="string" indexed="true" stored="true" /> 
  <field name="date" type="date" indexed="false" stored="true" /> 
  <field name="content" type="text" indexed="true" stored="false" /> 
  <field name="keywords" type="keyword_text" indexed="true" stored="false" multiValued="true" /> 
  <field name="all" type="text" indexed="true" stored="false" multiValued="true"/> 
</fields> 

A field is one column of the structured document. The field attributes are described below (see the example sketch after this list):

  • name: the field name. The special field "_version_" is required and must be added.
  • type: the data type of the field; the type used must be defined by a fieldType.
  • default: the default value.
  • indexed: whether to build an index for this field.
  • stored: whether to store the original value (set it to false whenever the value does not need to be returned).
  • docValues: whether to add a docValues structure for this field. This benefits facet queries, grouping, sorting and function queries. Although not required, it speeds up index loading, is friendlier to NRT (near-real-time) search, and uses less memory. It does have limitations: docValues currently only supports StrField, UUIDField and the Trie*Field types, and the field value must be single-valued, not multi-valued.
  • sortMissingFirst/sortMissingLast: when sorting query results, documents that are missing this field are placed first/last regardless of the requested sort direction.
  • multiValued: whether the field can hold multiple values, for example all the friend IDs of a user. (Set it to true for any field that may hold multiple values, to avoid errors at indexing time.)
  • omitNorms: if set to true, length normalization of the field value and index-time boosting for this field are skipped, which saves memory. Only full-text fields, or fields whose boost you need to set at index time, should have this set to false. For primitive, non-analyzed types such as IntField, LongField and StrField the default is already true; otherwise the default is false.
  • required: the field must be present when a document is added, similar to NOT NULL in MySQL.
  • termVectors: set to true to store term vector information for this field; it is needed when you use MoreLikeThis, where it brings a performance improvement.
  • termPositions: whether to store term position information in the term vector. This increases index size, but highlighting relies on it and will not work without it.
  • termOffsets: whether to store term offset information. Highlighting needs it as well, and when you use SpanQuery this setting affects the matched result set.
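
As a rough illustration of the less common attributes above, the hypothetical fields below (names invented, reusing the "string" and "text_general" types defined in the example schema at the top of this article) declare a facet/sort-friendly field with docValues and a body field that stores term vectors for highlighting and MoreLikeThis:

<!-- hypothetical fields; "category" and "body" are not part of the stock schema -->
<!-- docValues for faceting/sorting; per the notes above it must be single-valued
     and either required or given a default value -->
<field name="category" type="string" indexed="true" stored="true"
       docValues="true" default="unknown"/>
<!-- term vectors with positions/offsets for highlighting and MoreLikeThis -->
<field name="body" type="text_general" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>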

Field definitions matter a great deal; a few practical tips are worth noting (a small sketch follows the list):

  1. For fields that are only searched and never returned in query results (especially large fields), set stored to false.
  2. For fields that are never searched and only returned as part of the results, set indexed to false.
  3. Remove any copyField declarations you do not need, and decide per field whether it should be stored.
  4. To keep the index small and searches efficient, set indexed to false on the individual text fields, copy them all into one catch-all text field with copyField, and search that field instead.
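
A minimal sketch of these tips with made-up field names (the catch-all field is called "text", matching the copyField examples later in this article):

<!-- tips 1 and 4: individual text fields are stored for display but not indexed;
     everything searchable is copied into one indexed, unstored catch-all field -->
<field name="title"   type="text_general" indexed="false" stored="true"/>
<field name="summary" type="text_general" indexed="false" stored="true"/>
<field name="text"    type="text_general" indexed="true"  stored="false" multiValued="true"/>
<copyField source="title"   dest="text"/>
<copyField source="summary" dest="text"/>
<!-- tip 2: display-only data is stored but not indexed -->
<field name="webUrl" type="string" indexed="false" stored="true"/>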

That covers the basics of field, but there are two further field concepts that are harder to grasp. They are Solr extensions that do not exist in Lucene: dynamicField (dynamic fields) and copyField (copy fields).

For a typical data structure you need to consider Chinese word segmentation on the one hand, and on the other hand whether each field should be indexed, analyzed (tokenized) and stored. The example below uses three kinds of fields:

  1. Fields that need to be analyzed, indexed and stored, e.g. the title and body of a web page.
  2. Fields that need to be indexed but not analyzed, and stored, e.g. the publication time of a page.
  3. Fields that need neither indexing nor analysis, but must be stored, e.g. the location of a referenced image.

There is no field that is neither indexed, nor analyzed, nor stored; such a field would be meaningless in Lucene.

Example configuration:

<?xml version="1.0" ?>
<schema name="news" version="1.1">
    <fields>
        <!-- the three fields below are analyzed, indexed and stored -->
        <!-- publisher -->
        <field name="webUser" type="text_mm4j" indexed="true" stored="true"/>
        <!-- title -->
        <field name="webTitle" type="text_mm4j" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true"/>
        <!-- body -->
        <field name="webContent" type="text_mm4j" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true"/>
 
        <!-- the fields below are indexed but not analyzed, and stored -->
        <!-- source id -->
        <field name="webId" type="int" indexed="true" stored="true"/>
        <!-- primary key ObjectID -->
        <field name="objectId" type="string" indexed="true" stored="true" required="true" multiValued="false" />
        <!-- forum type (txt/pic/video) -->
        <field name="webType" type="string" indexed="true" stored="true"/>
        <!-- publication time -->
        <field name="webTime" type="date" indexed="true" stored="true"/>
 
        <!-- the fields below are stored only -->
        <!-- site description -->
        <field name="webCommit" type="string" indexed="false" stored="true"/>
        <!-- URL -->
        <field name="webUrl" type="string" indexed="false" stored="true"/>
        <!-- generated page path -->
        <field name="webHtml" type="string" indexed="false" stored="true"/>
        <!-- video -->
        <field name="webVideo" type="string" indexed="false" stored="true"/>
        <!-- images -->
        <field name="webImage" type="string" indexed="false" stored="true" multiValued="true"/>
 
        <!-- the fields below distinguish the record type: indexed, not analyzed, stored -->
        <!-- index type: bbs/news/blog -->
        <field name="indexType" type="string" indexed="true" stored="true"/>
        <!-- catch-all copy field: indexed, not stored -->
        <field name="text" type="text_mm4j" indexed="true" stored="false" multiValued="true"/>
        <field name="_version_" type="long" indexed="true" stored="true"/>
    </fields>
 
    <copyField source="webUser" dest="text"/>
    <copyField source="webTitle" dest="text"/>
    <copyField source="webContent" dest="text"/>
 
    <uniqueKey>objectId</uniqueKey>
 
    <defaultSearchField>text</defaultSearchField>
 
    <solrQueryParser defaultOperator="OR"/>
 
    <types>
        <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
        <fieldtype name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
        <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
        <fieldType name="date" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0"/>
        <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
            <analyzer type="index">
                <tokenizer class="solr.StandardTokenizerFactory"/>
                <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
                <!-- in this example, we will only use synonyms at query time
                <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
                -->
                <filter class="solr.LowerCaseFilterFactory"/>
            </analyzer>
            <analyzer type="query">
                <tokenizer class="solr.StandardTokenizerFactory"/>
                <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
                <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
                <filter class="solr.LowerCaseFilterFactory"/>
            </analyzer>
        </fieldType>
        <fieldType name="text_ik" class="solr.TextField">
            <analyzer type="index" class="org.wltea.analyzer.lucene.IKAnalyzer"/>
            <analyzer type="query" class="org.wltea.analyzer.lucene.IKAnalyzer"/>
        </fieldType>
        <fieldType name="text_mm4j" class="solr.TextField" >
            <analyzer type="index">
                <tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="simple" dicPath="C:/solr/mm4jdic"/>
                <!--
                <tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="simple" dicPath="/usr/local/solr/mm4jdic"/>
                -->
                <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
            </analyzer>
            <analyzer type="query">
                <tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="simple" dicPath="C:/solr/mm4jdic"/>
                <!--
                <tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="simple" dicPath="/usr/local/solr/mm4jdic"/>
                -->
                <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
            </analyzer>
        </fieldType>
    </types>
</schema>
 

 

Dynamic fields: dynamicField

The attributes of a dynamic field are configured much like those of an ordinary field, so there is little to add. The only difference is the name attribute, which may contain a wildcard so that it matches several field names. The purpose of this design is to avoid constantly editing the field definitions in schema.xml just to add another field. For example, you already have a link_s field and one day you want to add a url_s field; without dynamic fields you would have to modify schema.xml, and since schema.xml changes only take effect after restarting Tomcat, that means interrupting the service, which is often unacceptable. Dynamic fields avoid this constant adding and editing of field definitions, provided your field names follow the naming pattern of a dynamic field you defined in advance.

<dynamicField name="*_i" type="int" indexed="true" stored="true"/>

Copy fields: copyField

It is recommended to create a copy field and copy all full-text fields into it, so that searches can be run against one unified field.

Suppose you have an article schema. At first the application only searches the article body; later you want searches to also cover the article title. Using copyField, you copy both the title and the body into one redundant field.

For example:

<field name="title" type="text_general" indexed="true" stored="true"/>
<field name="content" type="text_general" indexed="true" stored="true"/>
<copyField source="title" dest="text"/>
<copyField source="content" dest="text"/>

With a copy field you no longer have to write queries like title:張三 AND content:張三; you can simply search for "張三" and get back every document whose name or summary contains it. Everything that needs to be searched is gathered into one field, and you just make that field the default search field.

One thing to keep in mind: if you copy a single source field and that field is itself multi-valued, the destination field must be multi-valued as well, which is hardly surprising; but if you copy several source fields, then as soon as any one of them is multi-valued the destination field must also be multi-valued. Do remember this.
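
A small sketch of that rule with hypothetical field names: two sources are copied into one destination, and because one source is multi-valued the destination must be declared multiValued as well:

<field name="title" type="text_general" indexed="true" stored="true"/>
<field name="tags"  type="string"       indexed="true" stored="true" multiValued="true"/>
<!-- "tags" is multi-valued, so the destination must be multiValued="true" -->
<field name="text"  type="text_general" indexed="true" stored="false" multiValued="true"/>
<copyField source="title" dest="text"/>
<copyField source="tags"  dest="text"/>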

That covers field. Next is the fieldType element, which defines field types. Solr ships with built-in types such as StrField, BoolField, TrieIntField, TrieFloatField, TrieLongField, TrieDoubleField, TrieDateField, BinaryField, RandomSortField, TextField and more; see the Solr API documentation for the full list.

Standard fieldType settings

    <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
    <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0"/>
    <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
    <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0"/>
  • StrField: a string type that is not analyzed. It supports docValues, but when docValues is enabled the field must be single-valued and must either be present in every document or have a default value.
  • BoolField: a boolean type, true/false.
  • TrieIntField, TrieFloatField, TrieLongField, TrieDoubleField: the default numeric types. The precisionStep attribute is mainly relevant for numeric range queries: the smaller the precisionStep, the more tokens are indexed per value, which makes the index on disk somewhat larger but speeds up range queries. positionIncrementGap specifies the gap between multiple values when the field is multi-valued; it is meaningless for single-valued fields.
  • TrieDateField: a date type. Unfortunately it only accepts dates in the 1995-12-31T23:59:59Z format, which is rather inconvenient; for that reason I wrote a custom TrieCNDateField type that supports the yyyy-MM-dd HH:mm:ss format commonly used in China. See my previous post for the source code.
  • BinaryField: a type for Base64-encoded strings, i.e. binary data must be Base64-encoded before Solr can index it.
  • RandomSortField: a random-sort type; use it when you need pseudo-random ordering.
  • TextField: the most widely used type. It is analyzed, so it normally needs an analyzer configured. How exactly to configure the IK analyzer for it is not covered here.

fieldType describes a field's type in detail:

  • name: the name of the type, referenced by the type attribute of field.
  • class: the Java class that implements the type; Solr provides roughly twenty built-in types.
  • positionIncrementGap: when a field is multiValued="true", the size of the position gap inserted between its values (see the sketch after this list).
  • autoGeneratePhraseQueries: somewhat like synonym matching or auto-correction; for example "wi fi" can match wifi or wi-fi. Without this attribute you have to quote the phrase explicitly in the query, e.g. "wi fi".
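
The sketch below (field and sample values made up, reusing the text_general type from the examples above) illustrates what positionIncrementGap buys you on a multi-valued field: with a gap of 100, a phrase query cannot accidentally match across the boundary between two values:

<!-- a multi-valued text field; consecutive values are separated by 100 positions -->
<field name="comments" type="text_general" indexed="true" stored="true" multiValued="true"/>
<!-- suppose a document holds the values ["solr rocks", "schema design"]:
     the phrase query comments:"rocks schema" does NOT match, because "rocks"
     and "schema" end up more than 100 positions apart; with no gap they would
     be adjacent and the phrase could match across the value boundary -->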

 

The fieldType element also has a few additional attributes worth noting, such as sortMissingFirst and sortMissingLast:

 

  • sortMissingLast: if the field value is null, documents containing null are placed last when sorting on this field (see the sketch after this list).
  • sortMissingFirst: the counterpart of sortMissingLast; documents missing the field sort first.
  • docValues: whether this is a docValues type; docValues are generally used for sorting, grouping and faceting.
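
A brief sketch with field names of my own choosing: a string type with sortMissingLast, so that documents without a "category" value always appear at the end of the sorted results, whichever direction is requested:

<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
<field name="category" type="string" indexed="true" stored="true"/>
<!-- with sortMissingLast="true", both of these sorts put documents that have
     no "category" value at the end of the results:
       .../select?q=*:*&sort=category asc
       .../select?q=*:*&sort=category desc
-->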

 

The most important part of a fieldType definition is the analyzer that data of this type uses when being indexed and when being queried, including tokenization and filtering; where necessary, you define this analyzer yourself inside the fieldType.

For example:

<fieldType name="text" class="solr.TextField" positionIncrementGap="100"> 
      <analyzer type="index"> 
        <tokenizer class="solr.WhitespaceTokenizerFactory"/> 
        <!-- in this example, we will only use synonyms at query time 
        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/> 
        --> 
        <!-- Case insensitive stop word removal. 
             enablePositionIncrements=true ensures that a 'gap' is left to 
             allow for accurate phrase queries. 
        --> 
        <filter class="solr.StopFilterFactory" 
                ignoreCase="true" 
                words="stopwords.txt" 
                enablePositionIncrements="true" 
                /> 
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/> 
        <filter class="solr.LowerCaseFilterFactory"/> 
        <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/> 
        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/> 
      </analyzer> 
     …… 
</fieldType> 

In the index analyzer, solr.WhitespaceTokenizerFactory is used as the tokenizer, i.e. tokens are split on whitespace, and then the filters solr.StopFilterFactory, solr.WordDelimiterFilterFactory, solr.LowerCaseFilterFactory, solr.EnglishPorterFilterFactory and solr.RemoveDuplicatesTokenFilterFactory are applied. When a value of type text is added to the index, Solr first splits it on whitespace and then runs the tokens through each configured filter in turn; only what remains at the end is written to the index and becomes searchable. Solr's analysis package does not ship with a Chinese tokenizer.
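
To make the behaviour of this chain concrete, here is a hand-traced sketch of how one made-up input string might flow through the filters; the token streams are indicative only, not the exact output of any particular Solr version:

<!--
  input:                "Wi-Fi Routers are Running"
  WhitespaceTokenizer:   Wi-Fi | Routers | are | Running
  StopFilter:            Wi-Fi | Routers | Running           ("are" removed as a stop word)
  WordDelimiterFilter:   Wi | Fi | WiFi | Routers | Running  (split on '-', catenateWords adds "WiFi")
  LowerCaseFilter:       wi | fi | wifi | routers | running
  EnglishPorterFilter:   wi | fi | wifi | router | run       (Porter stemming)
  RemoveDuplicates:      unchanged here; it only drops duplicate tokens at the same position
-->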

The uniqueKey element

Finally there is the uniqueKey element, which configures the field that uniquely identifies a document. Solr uses this field to decide, during delta imports, whether a document has already been imported: if the id is the same, it will not be imported twice. Likewise, when you update the index, the uniqueKey field determines which document gets replaced. In short, it uniquely identifies a document, much like a primary key in a database table; the field named in uniqueKey must of course be defined with a field element beforehand.

schema.xml contains a uniqueKey setting; here the id field is used as the unique identifier of indexed documents, which is very important.

 <uniqueKey>id</uniqueKey> 

 

Schema design

  1. Decide which queries you need.
  2. Decide which entities each query needs.
  3. For each entity, denormalize all related data (see the sketch below).
  4. Leave out fields that never appear in query results.
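
A tiny sketch of step 3, with made-up fields: instead of modelling authors and posts as separate, joinable entities as you would in a relational database, the author's name is flattened (denormalized) into every post document so that a single query can answer "find posts by title, body or author name":

<!-- one document per post; author data is copied into it rather than joined -->
<field name="id"         type="string"       indexed="true" stored="true" required="true"/>
<field name="title"      type="text_general" indexed="true" stored="true"/>
<field name="content"    type="text_general" indexed="true" stored="false"/>
<field name="authorName" type="text_general" indexed="true" stored="true"/>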