What is schema.xml?

One of the configuration files that describes each Solr deployment is the schema.xml file. It defines one of the most important aspects of the deployment – the structure of the data index. The information contained in this file lets you control how Solr behaves when indexing data or when handling queries. Schema.xml is not only the structure of the index itself; it also holds detailed information about data types, which have a large influence on Solr's behavior and are usually treated with neglect. This entry will try to bring some insight into schema.xml.

The schema.xml file consists of several parts:

  • version,
  • type definitions,
  • field definitions,
  • copyField section,
  • additional definitions.

Version

The first thing we come across in the schema.xml file is the version. This tells Solr how to treat some of the attributes in the schema.xml file. The definition looks as follows:

<schema name="example" version="1.3">

Please note that this is not a version number for your own project. At this point Solr supports four versions of the schema.xml file:

  • 1.0 – the multiValued attribute does not exist; all fields are multivalued by default.
  • 1.1 – introduced the multiValued attribute; the default value is false.
  • 1.2 – introduced the omitTermFreqAndPositions attribute; the default value is true for all fields besides text fields.
  • 1.3 – removed the possibility of optional compression of fields.
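For example, under schema version 1.1 or later a field must opt into multiple values explicitly (the field name here is just an illustration):

<!-- multiValued defaults to false from schema version 1.1 on,
     so a field that should hold many values must say so explicitly -->
<field name="category" type="string" indexed="true" stored="true" multiValued="true"/>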

Type definitions

Type definitions can be logically divided into two separate sections – simple types and complex types. Simple types, as opposed to complex types, do not have a tokenizer and filters defined.

Simple types

The first thing we see in the schema.xml file after the version are the type definitions. Each type is described by a number of attributes defining its behavior. First, the attributes that describe every type and are mandatory:

  • name – the name of the type (required attribute).
  • class – the class that is responsible for the implementation. Please note that classes delivered with the standard Solr packages have names with the ‘solr.’ prefix.

Besides the two mentioned above, types can have the following optional attributes:

  • sortMissingLast – specifies how documents missing a value in a field based on this type are treated when sorting. When set to true, such documents will always be at the end of the results list regardless of sort order. The default value is false. The attribute can be used only for types that Lucene treats as strings.
  • sortMissingFirst – analogous to sortMissingLast. When set to true, documents without a value in a field of this type will always be at the first positions of the results list regardless of sort order. The default value is false. The attribute can be used only for types that Lucene treats as strings.
  • omitNorms – specifies whether field norms should be omitted.
  • omitTermFreqAndPositions – specifies whether term frequencies and term positions should be calculated.
  • indexed – specifies whether fields based on this type should be indexed, and thus searchable and sortable.
  • positionIncrementGap – specifies how many positions Lucene should skip between the values of a multivalued field (this prevents false phrase matches across values).

It is worth remembering that with the default settings of the sortMissingLast and sortMissingFirst attributes (both false), Lucene will place documents with blank field values at the beginning of the results list for ascending sorts, and at the end for descending sorts.

There is one more option for simple types, but only those based on the Trie*Field classes:

  • precisionStep – specifies the step, in bits, between the precisions at which each value is indexed. The lower the value, the faster queries based on numerical ranges become; this, however, also increases the size of the index, as more terms are indexed per value. Set the attribute to 0 to disable indexing at multiple precisions.

An example of a simple type definition:

<fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
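A numeric type based on a Trie*Field class looks similar; this sketch follows the example schema shipped with Solr (the type name tint is just a convention):

<!-- precisionStep="8" trades some index size for faster range queries;
     precisionStep="0" would index each value at a single precision only -->
<fieldType name="tint" class="solr.TrieIntField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>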

Complex types

In addition to simple types, the schema.xml file may include types consisting of a tokenizer and filters. The tokenizer is responsible for dividing the contents of the field into tokens, while the filters are responsible for further token analysis. For example, a type responsible for dealing with texts in Polish could consist of a tokenizer in charge of dividing text into words based on whitespace, commas, and periods. Filters for that type could be responsible for lowercasing the generated tokens, dividing tokens further (for example on the basis of dashes), and then reducing tokens to their base form.

Complex types, like simple types, have a name (the name attribute) and a class responsible for the implementation (the class attribute). They can also be characterized by the other attributes described for simple types (on the same basis). In addition, however, complex types can define a tokenizer and filters to be used at the indexing stage and at the query stage. As most of you know, for a given phase (indexing or query) there can be many filters defined, but only one tokenizer. For example, this is what the text type definition looks like in the example provided with Solr:

<fieldType name="text" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
   <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
      <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
      <filter class="solr.PorterStemFilterFactory"/>
   </analyzer>
   <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
      <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
      <filter class="solr.PorterStemFilterFactory"/>
   </analyzer>
</fieldType>

It is worth noting that there is an additional attribute for the text field type:

  • autoGeneratePhraseQueries

This attribute tells the query parser how to behave when a filter divides a token into several tokens. Some filters (such as WordDelimiterFilter) can divide one token into a set of tokens. Setting the attribute to true (the default value) will automatically generate phrase queries. This means that when WordDelimiterFilter divides the word “wi-fi” into the two tokens “wi” and “fi”, the query sent to Lucene will look like field:"wi fi", while with the attribute set to false the Lucene query will look like field:wi OR field:fi. However, please note that this attribute only behaves well with tokenizers based on whitespace.

Returning to the type definition. As you can see, I gave an example which has two main sections:

<analyzer type="index">

and

<analyzer type="query">

The first section defines the analysis used when indexing documents; the second section defines the analysis used for queries against fields based on this type. Note that if you want to use the same definition for both the indexing and query phases, you can replace the two sections with a single analyzer element without the type attribute. Our definition will then look like this:

<fieldType name="text" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
   <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
      <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
      <filter class="solr.PorterStemFilterFactory"/>
   </analyzer>
</fieldType>

As I mentioned, the definition of each complex type contains a tokenizer and a series of filters (though the filters are not mandatory). I will not describe every filter and tokenizer available in Solr. This information is available at the following address: http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters.

At the end, I wanted to add an important thing. Starting from Solr 1.4, the tokenizer does not need to be the first mechanism that analyzes the field. Solr 1.4 introduced CharFilters, which operate on the raw field value before the tokenizer and pass their result to it. It is worth knowing about, because it might come in useful.
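As a sketch (the type name html_text is hypothetical, but HTMLStripCharFilterFactory is a standard Solr class), a CharFilter is declared before the tokenizer inside the analyzer:

<!-- the charFilter strips HTML markup from the raw field value
     before the tokenizer ever sees it -->
<fieldType name="html_text" class="solr.TextField">
   <analyzer>
      <charFilter class="solr.HTMLStripCharFilterFactory"/>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
</fieldType>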

Multi-dimensional types

At the end, I left myself a little addition – a novelty in Solr 1.4 – multi-dimensional fields, i.e. fields consisting of a number of other fields. Generally speaking, the assumption behind this kind of field is simple – to store in Solr pairs, triples, or larger groups of related values, such as geographical point coordinates. In practice this is realized by means of dynamic fields, but let me not get into the implementation details. A sample type definition consisting of two fields:

<fieldType name="location" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>

In addition to the standard name and class attributes, there are two others:

  • dimension – the number of dimensions (used by the solr.PointType class).
  • subFieldSuffix – the suffix that will be appended to the dynamic fields created for this type. It is important to remember that a field based on the presented type will result in three fields in the index – the actual field (for example, one named mylocation) and two additional dynamic fields.
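To make this concrete, a field using this type needs a matching dynamic field definition for the generated subfields; a minimal sketch (assuming a double type is defined elsewhere in the schema):

<!-- the visible field, based on the location type defined above -->
<field name="mylocation" type="location" indexed="true" stored="true"/>
<!-- catches the two subfields generated with the "_d" suffix;
     they carry the actual coordinate values -->
<dynamicField name="*_d" type="double" indexed="true" stored="false"/>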

Field Definitions

Field definitions form another section of the schema.xml file – the section which, in theory, should interest us the most when designing a Solr index. As a rule, we find two kinds of field definitions here:

  1. Static Fields
  2. Dynamic Fields

These kinds of fields are treated differently by Solr. Static fields are available under exactly one name. Dynamic fields are available under many names – their names are actually simple patterns (a name starting or ending with the ‘*’ sign). Please note that Solr first tries to match a static field, then a dynamic field. In addition, if a field name matches more than one dynamic field definition, Solr will select the one with the longer name pattern.
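For illustration, the matching rules can be sketched like this (the field and pattern names are hypothetical, and the int and float types are assumed to be defined elsewhere):

<!-- an exact static name always wins -->
<field name="price_i" type="int" indexed="true" stored="true"/>
<!-- for other names ending in "_i", the longer pattern wins:
     "unit_price_i" matches "*_price_i", plain "count_i" matches "*_i" -->
<dynamicField name="*_price_i" type="float" indexed="true" stored="true"/>
<dynamicField name="*_i" type="int" indexed="true" stored="true"/>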

Returning to the definition of the fields (both static and dynamic), they consist of the following attributes:

  • name – the name of the field (required attribute).
  • type – type of field, which is one of the pre-defined types (required attribute).
  • indexed – whether the field is to be indexed (set to true if you want to search or sort on this field).
  • stored – whether to store the original values (set to true if you want to retrieve the original value of the field).
  • omitNorms – whether norms should be omitted (set to true for fields that do not need full-text scoring, such as identifiers; fields used for full-text search usually keep their norms).
  • termVectors – set to true when you want to keep so-called term vectors. The default value is false. Some features require setting this parameter to true (e.g. MoreLikeThis or FastVectorHighlighting).
  • termPositions – set to true if you want to keep term positions together with the term vector. Setting this to true will increase the size of the index.
  • termOffsets – set to true if you want to keep term offsets together with the term vector. Setting this to true will increase the size of the index.
  • default – the default value given to the field when the indexed document does not provide one.

Here are some example field definitions:

<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="includes" type="text" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" />
<field name="timestamp" type="date" indexed="true" stored="true" default="NOW" multiValued="false"/>
<dynamicField name="*_i" type="int" indexed="true" stored="true"/>

And finally, an additional thing to remember. In addition to the attributes listed above, in a field definition we can override attributes that have been defined for the type (e.g. whether a field is to be multiValued – see the example above for the field called timestamp). Sometimes this functionality can be useful when you need a specific field whose behavior differs slightly from its type (as in the example – only the multiValued attribute). Of course, keep in mind the limitations imposed on the individual attributes associated with types.

CopyField section

In short, this section is responsible for copying the contents of one field to other fields. We define the field whose value should be copied and the destination field. Please note that copying takes place before the field value is analyzed. An example copyField definition:

<copyField source="category" dest="text"/>

For the sake of accuracy, the attributes mean:

  • source – the source field,
  • dest – the destination field.
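A common use is to aggregate several fields into one catch-all search field; the source attribute also accepts a wildcard (the field names here are just an illustration):

<!-- copy a specific field and every field ending in "_t"
     into a single catch-all field named "text" -->
<copyField source="name" dest="text"/>
<copyField source="*_t" dest="text"/>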

Additional definitions

1. Unique key definition

The definition of a unique key makes it possible to unambiguously identify a document. Defining a unique key is not necessary, but it is recommended. A sample definition:

<uniqueKey>id</uniqueKey>

2. Default search field definition

This section is responsible for defining the default search field, which Solr uses when the query does not specify any field. A sample definition:

<defaultSearchField>content</defaultSearchField>

3. Default logical operator definition

This section defines the default logical operator that will be used in queries. A sample definition looks as follows:

<solrQueryParser defaultOperator="OR" />

Possible values are: OR and AND.

4. Defining similarity

Finally, we can define the similarity implementation that will be used. It is rather a topic for another post, but you should know that, if necessary, you can change the default similarity (in Solr trunk there are currently already two similarity classes). A sample definition is as follows:

<similarity class="pl.solr.similarity.CustomSimilarity" />

A few words at the end

The information presented above should give some insight into what the schema.xml file is and what the different sections in this file correspond to. Soon I will try to write about what you should avoid when designing an index.


This entry was posted on Monday, August 16th, 2010 at 16:07 and is filed under About Solr.
