Solr 4.1: Stored fields compression

Although Lucene and Solr 4.0 are still very fresh, we decided that it's time to take a look at the changes coming in the 4.1 version. One of those changes will be stored fields compression, which decreases the index size when we use such fields. Let's take a look and see how that works.

Some theory

If our index consists of many stored fields, they can consume most of the space when compared to the other information in the index. How do you know how much space the stored fields take? It's easy – just go to the directory that holds your index and check how much space the files with the .fdt extension take. Although stored fields don't influence search performance directly, your I/O subsystem and its cache can be forced to work much harder because of the larger amount of data on disk. Because of that your queries can take longer to execute and you may need more time to index your data.
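As a quick illustration, summing the sizes of the .fdt files can be done in a few lines of Java (the index path in main is only an assumption – point it at your own index directory):

```java
import java.io.File;

public class FdtSize {

    // Sum the sizes of all .fdt (stored fields data) files in an index directory.
    static long fdtBytes(File indexDir) {
        long total = 0;
        File[] files = indexDir.listFiles();
        if (files != null) {
            for (File f : files) {
                if (f.getName().endsWith(".fdt")) {
                    total += f.length();
                }
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical path – adjust to your Solr data directory.
        File indexDir = new File(args.length > 0 ? args[0] : "solr/collection1/data/index");
        System.out.println(fdtBytes(indexDir) + " bytes in .fdt files");
    }
}
```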

With the incoming release of Lucene 4.1, stored fields will be compressed using the LZ4 algorithm, which should decrease the size of the index when we use a high number of stored fields, but also shouldn't be CPU demanding when it comes to compression and decompression.

Test data

For the tests of the discussed functionality we've used Polish Wikipedia articles data from 2012.11.10. The unpacked XML file was about 4.7GB on disk.

Index structure

We’ve used the following index structure to index the above data:
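A minimal sketch of such field definitions for the Wikipedia data (the field names and types here are assumptions, not necessarily the exact ones used in the test) might look like this:

```xml
<fields>
  <field name="id" type="string" indexed="true" stored="true" required="true"/>
  <field name="title" type="text_general" indexed="true" stored="true"/>
  <field name="text" type="text_general" indexed="true" stored="true"/>
</fields>
```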

DIH configuration

We’ve used the following DIH configuration in order to index Wikipedia data:
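A sketch of a DataImportHandler configuration for a Wikipedia XML dump, based on the well-known DIH example (the file path and field list are assumptions), could look like this:

```xml
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8"/>
  <document>
    <entity name="page"
            processor="XPathEntityProcessor"
            stream="true"
            forEach="/mediawiki/page/"
            url="/data/plwiki-20121110-pages-articles.xml">
      <field column="id" xpath="/mediawiki/page/id"/>
      <field column="title" xpath="/mediawiki/page/title"/>
      <field column="text" xpath="/mediawiki/page/revision/text"/>
    </entity>
  </document>
</dataConfig>
```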

Indexing time

In both cases the indexing time was very similar for the same number of documents (there were 1.301.394 documents after indexing). In the case of Solr 4.0 indexing took 14 minutes and 33 seconds; in the case of Solr 4.1 it took 14 minutes and 43 seconds. As you can see Solr 4.1 was slightly slower, but because the tests were run on a laptop, we can assume that the indexing performance is very similar.

Index size

The size of the index is what interests us the most in this case. In the case of Solr 4.0 the index created from the Wikipedia data was about 5.1GB (5.464.809.863 bytes). In the case of Solr 4.1 the index weighed approximately 3.24GB (3.480.457.399 bytes). So, comparing the index created by Solr 4.0 to the one created by Solr 4.1, we got an index that is about 36% smaller.

Wrapping up

You can clearly see that the gain from compressing stored fields is quite big. Although we need additional CPU cycles to handle compression, we benefit from less pressure on the I/O subsystem, and the gain will usually be greater than the loss of a few CPU cycles. After seeing this I'm not surprised that stored fields compression is turned on by default in Lucene 4.1 and thus in Solr 4.1 too. However, if you would like to turn off that behavior, you'll need to implement your own codec – one that doesn't use compression – at least for now. You don't need to fork the Lucene code to do that, which again shows how powerful flexible indexing is.

15 thoughts on “Solr 4.1: Stored fields compression”

  • 19 November 2012 at 19:40

    Thanks for introducing this new feature in 4.1.

    I downloaded 4.1, but it seems the index size is similar to 3.6 – 11.6 GB.

    So I am guessing that in order to use fields compression, I need to configure Solr to make it use this feature, is that right? If so, can you provide us an example configuration?

    Thanks very much 🙂

    • 19 November 2012 at 19:59

      I suppose that is because you are still using the Lucene 4.0 index format. Look at your solrconfig.xml file and find the following:

      <luceneMatchVersion>LUCENE_40</luceneMatchVersion>

      And change it to:

      <luceneMatchVersion>LUCENE_41</luceneMatchVersion>

  • 20 November 2012 at 19:40

    Thanks for your reply. I checked my solrconfig.xml file and the value of luceneMatchVersion is LUCENE_41.

    I also checked, and it seems CompressingStoredFieldsFormat is already set as the default, so no change in solrconfig.xml should be needed.

    The index size of 69696 emails decreased from 5.83 GB in 3.6 to 5.26 GB in Solr 4.1 – only 9.7% smaller.

    Also I found some strange files in Solr 4.1:

    Some files named like _4e_Lucene41_0.doc, 24 files, totally 239 MB.

    Do you have any idea what these files are for?

    Thanks a lot 🙂

  • 20 November 2012 at 21:00

    Yep, it's turned on by default, but only from Lucene 4.1 – that's why I asked about the match version in solrconfig.xml.

    As for the 10% smaller index – that depends on how many stored fields you have and how much data is in them. So if you don't have much data in stored fields, there will be a minimal difference in size between the 3.6 and 4.1 indices.

    As for the files – those are files written with the Lucene41 codec.

  • 21 November 2012 at 16:01

    Thanks, and sorry to ask you another question 🙂
    It mentions that we can set the compression mode:

    We can also see these three modes in CompressingStoredFieldsFormat's CompressionMode.

    How can we set the compression mode in Solr 4.1?
    I want to set the mode to HIGH_COMPRESSION.

  • 23 November 2012 at 01:45

    @gr0: Nice post! I’m very happy that people find stored fields compression useful!

    @Jeffery: To do this, you need to create a custom codec. Here is how you could do it with the current state of Lucene trunk (you might need to modify the code a bit for the release):

    public class MyCustomCodec extends FilterCodec {

      private static final String CODEC_NAME = "MyCustomCodec";
      private static final Codec DELEGATE = new Lucene41Codec();
      private static final String STORED_FIELDS_FORMAT_NAME = "MyCustomStoredFields";
      private static final CompressionMode COMPRESSION_MODE = CompressionMode.HIGH_COMPRESSION;
      private static final int CHUNK_SIZE = 1 << 16;

      private final StoredFieldsFormat storedFieldsFormat;

      public MyCustomCodec() {
        // Delegate everything to Lucene41Codec except the stored fields format.
        super(CODEC_NAME, DELEGATE);
        this.storedFieldsFormat = new CompressingStoredFieldsFormat(
            STORED_FIELDS_FORMAT_NAME, COMPRESSION_MODE, CHUNK_SIZE);
      }

      @Override
      public StoredFieldsFormat storedFieldsFormat() {
        return storedFieldsFormat;
      }
    }

    But please note that CompressingStoredFieldsFormat is experimental and might change in incompatible ways in the next releases.

  • 16 January 2013 at 14:55

    If we want to use the 4.1 compression, do we have to re-index the existing 4.0 indexes?

    • 16 January 2013 at 15:08

      No, you don't. The transition to the new stored fields format will come with time as your segments are merged. However, if your index is not changed often, then the best way would be to reindex your data.

  • 23 January 2013 at 17:59

    We experimented with compressing things manually in Solr 3.x some time ago. Basically we were using Solr as a key-value store for big blobs of XML. There were only a few fields: id, timestamp, and blob.

    Using gzip compression on the blob, our index size was about 200GB, where the raw XML dump we indexed from was around 50GB (gzip compressed). Each blob was around 5KB on average (uncompressed). So there is a bit of a problem here, as you can see: simply compressing each blob isn't nearly as efficient as one would hope.

    The reason for this is that the dictionary used for compression in gzip is stored along with the blob, and it is specific only to that blob.

    Java allows you to use a custom dictionary, and so we generated our own based on frequency counts for things like tag names, namespace URIs, and value strings. This cut our index size nearly in half, to about 100GB, relative to simply gzipping each blob. That's still substantial, but it includes the indexes and it got us in a pretty good state.
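    The shared-dictionary approach can be sketched with java.util.zip like this (the dictionary contents below are a made-up stand-in for the real frequency-based one):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DictCompressDemo {

    // Made-up dictionary: strings assumed to occur in almost every blob.
    static final byte[] DICT =
        "<record><id></id><timestamp></timestamp><blob></blob>".getBytes(StandardCharsets.UTF_8);

    static byte[] compress(byte[] input) {
        Deflater def = new Deflater();
        def.setDictionary(DICT); // shared dictionary - not stored with each blob
        def.setInput(input);
        def.finish();
        byte[] buf = new byte[input.length * 2 + 64];
        int len = def.deflate(buf);
        def.end();
        byte[] out = new byte[len];
        System.arraycopy(buf, 0, out, 0, len);
        return out;
    }

    static byte[] decompress(byte[] compressed, int originalLength) throws Exception {
        Inflater inf = new Inflater();
        inf.setInput(compressed);
        byte[] out = new byte[originalLength];
        int n = inf.inflate(out);
        // The stream signals that it was written with a preset dictionary.
        if (n == 0 && inf.needsDictionary()) {
            inf.setDictionary(DICT);
            inf.inflate(out);
        }
        inf.end();
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] blob = "<record><id>42</id><timestamp>2013-01-23</timestamp></record>"
            .getBytes(StandardCharsets.UTF_8);
        byte[] compressed = compress(blob);
        System.out.println(blob.length + " -> " + compressed.length + " bytes");
    }
}
```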

    Maybe in a future version a more pluggable solution could be provided so that people can customize how their data is compressed?

    • 23 January 2013 at 21:53

      Actually it is a pluggable solution – stored fields compression is part of the new, default Lucene 4.1 codec. So if you want to change it, you can develop your own codec and use it for indexing. Of course it would be perfect if we could just plug in a new compression method without developing a whole codec, but maybe that will come in the future.

  • 25 April 2013 at 06:20

    Here I am using DSE 3.0. I have a column name="DateTime" type="timestamp" in a Cassandra column family. I have to index it in Solr and I want to run a range query (fq) on DateTime, like this: DateTime:[2011-04-12 TO 2011-05-23]. Do you have any idea how it should be indexed in Solr? Please help me.

    • 27 April 2013 at 12:36

      Use the date type to index it, like the one used in example schema provided with Solr:
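      The relevant definitions from the Solr example schema look roughly like this (the DateTime field declaration is an assumption for your case):

```xml
<fieldType name="date" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0"/>
<field name="DateTime" type="date" indexed="true" stored="true"/>
```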

      However, please remember that you’ll need to provide the date like this:

      DateTime:[2011-04-12T00:00:00.000Z TO 2011-05-24T00:00:00.000Z]

      You can find more information here:

  • 5 July 2014 at 19:16

    Can you please let me know the steps for configuring compression of index fields (.pos, .fdt file data) in Solr version 4.6?

    • 6 July 2014 at 08:16

      In Solr 4.6 stored fields compression is turned on by default and you don’t have to configure it.

