Data Import Handler – How to import data from SQL databases (part 3)

In the previous episodes (part 1 and part 2) we imported data from a database in both ways: full and incremental. Today it is time for a short summary.

Setting up the dataSource

Recall the line with our setup:

<dataSource
    driver="org.postgresql.Driver"
    url="jdbc:postgresql://localhost:5432/wikipedia"
    user="wikipedia"
    password="secret" />

The snippet does not show all the attributes that can appear. For completeness, let’s list them all:

  • name – the name of the source – you can define many different data sources and refer to a particular one via the dataSource attribute of the “entity” tag
  • driver – JDBC driver class name
  • url – JDBC database url
  • user – database user name (if not defined or empty, the connection to the database is made without a user/password pair)
  • password – user password
  • jndiName – instead of providing driver/url/user/password, you can specify the JNDI name under which the data source implementation (javax.sql.DataSource) is published by the container (e.g. Jetty/Tomcat) – see the sketch after this list
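
For illustration, a hypothetical configuration combining both approaches – the names wiki-db and container-db as well as the JNDI path are invented for this sketch:

<!-- a regular JDBC source, referenced by entities as dataSource="wiki-db" -->
<dataSource
    name="wiki-db"
    driver="org.postgresql.Driver"
    url="jdbc:postgresql://localhost:5432/wikipedia"
    user="wikipedia"
    password="secret" />

<!-- a source obtained from the container via JNDI; the path is an example -->
<dataSource
    name="container-db"
    jndiName="java:comp/env/jdbc/wikipedia" />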

Advanced attributes (a combined sketch follows the list):

  • batchSize (default: 500) – sets the maximum number of records (or rather, a hint for the driver) retrieved from the database in a single fetch. Changing this parameter can help when queries return too many results at once. It may also not help, since the implementation of this mechanism depends on the JDBC driver.
  • convertType (default: false) – applies an additional conversion from the field type returned by the database to the field type defined in schema.xml. The default value seems safer, because it does not cause extra, magical conversions. However, in special cases (e.g. BLOB fields) this conversion is one way of solving the problem.
  • maxRows (default: 0 – no limit) – sets the maximum number of results returned by a query to the database.
  • readOnly – sets the database connection to read-only mode. In principle this means the driver may be able to perform additional optimizations. At the same time it silently (!) sets transactionIsolation to TRANSACTION_READ_UNCOMMITTED, holdability to CLOSE_CURSORS_AT_COMMIT, and autoCommit to true.
  • autoCommit – sets whether the transaction is automatically committed after each query.
  • transactionIsolation (TRANSACTION_READ_UNCOMMITTED, TRANSACTION_READ_COMMITTED, TRANSACTION_REPEATABLE_READ, TRANSACTION_SERIALIZABLE, TRANSACTION_NONE) – sets the transaction isolation level (i.e. the visibility of data changed within a transaction)
  • holdability (CLOSE_CURSORS_AT_COMMIT, HOLD_CURSORS_OVER_COMMIT) – defines whether result sets (ResultSet) are closed when the transaction is committed
  • any other attribute – importantly, arbitrary additional attributes may appear here. All of them are forwarded by DIH to the JDBC driver, which lets you trigger behavior specific to a particular JDBC driver.
  • type – the type of the source. The default value (JdbcDataSource) is sufficient here, so the attribute can be omitted (I will return to it when discussing non-SQL sources)
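
A sketch combining several of the advanced attributes – the values are arbitrary examples, not recommendations:

<!-- illustrative values only; tune them for your driver and data volume -->
<dataSource
    driver="org.postgresql.Driver"
    url="jdbc:postgresql://localhost:5432/wikipedia"
    user="wikipedia"
    password="secret"
    batchSize="1000"
    convertType="true"
    transactionIsolation="TRANSACTION_READ_COMMITTED" />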

The “entity” element

Let us now turn to the description of the “entity” element.

As a reminder:

<entity name="page" query="SELECT page_id as id, page_title as name from page">
    <entity name="revision" query="select rev_id from revision where rev_page=${page.id}">
        <entity name="pagecontent" query="select old_text as text from pagecontent where old_id=${revision.rev_id}">
        </entity>
    </entity>
</entity>

And all the attributes:

Primary:

  • name – the name of the entity
  • query – SQL query used to retrieve data associated with that entity.
  • deltaQuery – the query responsible for returning the IDs of records that have changed since the last crawl (full or incremental); the last crawl time is provided by DIH in the variable ${dataimporter.last_index_time}. Solr uses this query to find the records that have changed (see the sketch after this list).
  • parentDeltaQuery – the query returning the parent entity records for a changed child record. With these queries Solr is able to retrieve all the data that make up a document, regardless of the entity it originates from. This is necessary because the indexing engine cannot modify already indexed data – the entire document must be reindexed, regardless of the fact that some of its data has not changed.
  • deletedPkQuery – provides identifiers of deleted items.
  • deltaImportQuery – the query retrieving the data for a given record identified by the ID that is available in the DIH variable ${dataimporter.delta.id}.
  • dataSource – the name of the data source to use when several sources are defined (see the name attribute of dataSource)
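
A minimal sketch of how the delta queries might fit together for the “page” entity from our example; note that the modification-timestamp column page_touched is an assumption about the database schema, not something shown earlier:

<!-- page_touched is assumed to hold the record's last modification time -->
<entity name="page" pk="id"
    query="SELECT page_id as id, page_title as name from page"
    deltaQuery="select page_id as id from page
                where page_touched &gt; '${dataimporter.last_index_time}'"
    deltaImportQuery="SELECT page_id as id, page_title as name from page
                      where page_id = ${dataimporter.delta.id}">
</entity>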

and advanced (a short sketch follows the list):

  • processor – SqlEntityProcessor by default. A component whose job is to feed data from the source into the crawl. In the case of databases, the default implementation is usually sufficient
  • transformer – the data retrieved from the source can be further modified before being passed on to the crawl. In particular, a transformer may return additional records, which makes it a very powerful tool
  • rootEntity – defaults to true for the entity element placed directly below the document element. It marks the element treated as the root, i.e. the one whose records are used to create new documents in the index
  • threads – the number of threads used to process the entity
  • onError (abort, skip, continue) – the way to respond to problems: stop the import (abort, the default behavior), skip the document (skip), or ignore the error (continue)
  • preImportDeleteQuery – used instead of “*:*” to delete data from the index before a full import (note: this is a query against the index, not against the database) – makes sense only in the root entity element
  • postImportDeleteQuery – executed after a full import (like preImportDeleteQuery, a query against the index) – makes sense only in the root entity element
  • pk – the primary key (of the database record, not to be confused with the unique key of the document) – it matters only in incremental indexing, if we let DIH guess deltaImportQuery based on query
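
A sketch of a root entity using a few of the advanced attributes; the values, including the index delete query, are illustrative only:

<!-- onError="skip" drops problematic documents instead of aborting;
     the delete query below is a made-up example against the index -->
<entity name="page"
    pk="id"
    rootEntity="true"
    onError="skip"
    preImportDeleteQuery="name:obsolete*"
    query="SELECT page_id as id, page_title as name from page">
</entity>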

In the text above the word “guess” appeared. DIH tries to streamline the work by adopting reasonable defaults. For example, as mentioned above, during an incremental import it is able to try to determine deltaImportQuery on its own. Actually, in earlier versions that was the only behavior; it was then realized that the generated queries do not always work. Hence, I suggest caution and a limited-trust policy 🙂

Another convenience is the ability to omit explicit field definitions when the column names returned by the query match the names of fields in schema.xml. (Hands up: who noticed that the example above is not a copy of the one from part two, but uses exactly that mechanism?)
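
For comparison, a minimal sketch of the same entity written without SQL aliases, mapping columns to schema fields explicitly with “field” elements:

<entity name="page" query="SELECT page_id, page_title from page">
    <!-- explicit column-to-field mapping instead of aliasing in SQL -->
    <field column="page_id" name="id" />
    <field column="page_title" name="name" />
</entity>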

Yet another example of how flexible DIH is: notice that by using the variable:

${dataimporter.last_index_time}

in the query of a full-import definition, we can write a full import that, once an import has already been carried out, effectively behaves like an incremental import! I suspect this functionality “came about” a little by accident 🙂
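
A sketch of such a “full import that becomes incremental”, again assuming a page_touched modification column in the database:

<!-- after the first import, only rows changed since the previous run match -->
<entity name="page"
    query="SELECT page_id as id, page_title as name from page
           where page_touched &gt; '${dataimporter.last_index_time}'">
</entity>

Once an import has been carried out, only the rows changed since the previous run are fetched – exactly the behavior described above.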
