Lucene 4 Essentials for Text Search and Indexing

Here’s a short-ish introduction to the Lucene search engine, showing how to use the current API to develop search over a collection of texts. Most of this post is excerpted from Text Processing in Java, Chapter 7, Text Search with Lucene.

Lucene Overview

Apache Lucene is a search library written in Java. It’s popular in both academic and commercial settings due to its performance, configurability, and generous licensing terms. The Lucene home page is http://lucene.apache.org.

Lucene provides search over documents. A document is essentially a collection of fields, where a field consists of a field name that is a string and one or more field values. Lucene does not in any way constrain document structures. Each field is constrained to hold only one kind of data: binary, numeric, or text. There are two ways to store text data: string fields store the entire item as one string; text fields store the data as a series of tokens. Lucene provides many ways to break a piece of text into tokens as well as hooks that allow you to write custom tokenizers. Lucene has a highly expressive search API that takes a search query and returns a set of documents ranked by relevancy, with the documents most similar to the query having the highest score.

The Lucene API consists of a core library and many contributed libraries. The top-level package is org.apache.lucene, which is abbreviated as oal in this article. As of Lucene 4, the Lucene distribution contains approximately two dozen package-specific jars, e.g.: lucene-core-4.7.0.jar, lucene-analyzers-common-4.7.0.jar, lucene-misc-4.7.0.jar. This cuts down on the size of an application at a small cost to the complexity of the build file.

A Lucene Index is an Inverted Index

Lucene manages an index over a dynamic collection of documents and provides very rapid updates to the index as documents are added to and deleted from the collection. An index may store a heterogeneous set of documents, with any number of different fields that may vary by document in arbitrary ways. Lucene indexes terms, which means that Lucene search is search over terms. A term combines a field name with a token. The terms created from the non-text fields in the document are pairs consisting of the field name and field value. The terms created from text fields are pairs of field name and token.

The Lucene index provides a mapping from terms to documents.  This is called an inverted index because it reverses the usual mapping of a document to the terms it contains.  The inverted index provides the mechanism for scoring search results: if a number of search terms all map to the same document, then that document is likely to be relevant.

Here are three entries from an index over part of The Federalist Papers, a collection of 85 political essays that contains roughly 190,000 word instances over a vocabulary of about 8,000 words.  A field called text holds the contents of each essay, which have been tokenized into words, all lowercase, no punctuation.  The inverted index entries for the terms consisting of field name text and tokens abilities, able, and abolish are:

term            documents
text:abilities  2, 17, 21, 22, 35, 64, 76
text:able       1, 3, 4, 7, 8, 9, 10, 11, 12, 15, 16, 21, 23, 24,
                25, 27, 29, 30, 33, 34, 35, 36, 37, 38, 41, 43, 46, 49,
                51, 52, 58, 62, 63, 64, 67, 68, 70, 71, 75, 78, 85
text:abolish    10, 14, 26, 31, 39, 45, 49, 70, 71, 78, 81

Note that the document numbers here are Lucene’s internal references to the document.  These ids are not stable; Lucene manages the document id as it manages the index and the internal numbering may change as documents are added to and deleted from the index.

Lucene Versions and Version Numbers

The current Apache Lucene Java release is version 4.7, where 4 is the major version number and 7 is the minor version number.  The Apache odometer rolled over to 4.6.0 in November 2013 and just hit 4.7.0 on February 26, 2014.  At the time that I wrote the Lucene chapter of Text Processing in Java, the current version was 4.5.  Minor version updates maintain backwards compatibility for the given major version; therefore, all the example programs in the book compile and run under version 4.7 as well.

The behavior of many Lucene components has changed over time.  In particular, the index file format is subject to change from release to release as different methods of indexing and compressing the data are implemented.  To address this, the enum oal.util.Version was introduced in Lucene 3.  A Version instance identifies the major and minor versions of Lucene.  For example, LUCENE_45 identifies version 4.5.  The Lucene version is supplied to the constructors of the components in an application.  As of Lucene 4.7, older versions have been deprecated, so the compiler issues a warning when older versions are specified.  This means that the examples from the book, which specify version 4.5, generate a compiler warning when compiled under version 4.7.
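For example, here is a minimal sketch of threading a version constant through to component constructors, using the standard analyzer:

Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_47);
IndexWriterConfig config
    = new IndexWriterConfig(Version.LUCENE_47, analyzer);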

There’s no requirement that all components in an application be of the same version; however, for components used for both search and indexing, it is critical that the Lucene version is the same in the code that is called at indexing time and the code that is called at search time.  A Lucene index knows what version of Lucene was used to create it (via the Lucene Version enum constant).  Lucene is backward compatible with respect to searching and maintaining old index versions because it includes classes that can read and write all versions of the index up through the current release.

Lucene Indexes Fields

Conceptually, Lucene provides indexing and search over documents, but implementation-wise, all indexing and search is carried out over fields.  A document is a collection of fields.  Each field has three parts: name, type, and value.  At search time, the supplied field name restricts the search to particular fields.

For example, a MEDLINE citation can be represented as a series of fields: one field for the title of the article, another field for the name of the journal in which it was published, another field for the authors of the article, a pub-date field for the date of publication, a field for the text of the article’s abstract, and another field for the list of topic keywords drawn from Medical Subject Headings (MeSH).  Each of these fields is given a different name, and at search time, the client could specify that it was searching for authors or titles or both, potentially restricting to a date range and set of journals by constructing search terms for the appropriate fields and values.

The Lucene API for fields has changed across the major versions of Lucene as the functionality and organization of the underlying Lucene index have evolved.  Lucene 4 introduces a new interface oal.index.IndexableField, which is implemented by class oal.document.Field.  Lucene 4 also introduces datatype-specific subclasses of Field that encapsulate indexing and storage details for common use cases.  For example, to index integer values, use class oal.document.IntField, and to index simple unanalyzed strings (keywords), use oal.document.StringField.  These so-called sugar subclasses are all declared final.
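Putting these together, here is a sketch of the MEDLINE citation described above, built with the sugar subclasses (the field names and values are invented for illustration):

Document citation = new Document();
citation.add(new TextField("title",
    "Aspirin use and cardiovascular outcomes", Field.Store.YES));
citation.add(new StringField("journal", "N Engl J Med", Field.Store.YES));
citation.add(new TextField("authors", "Smith J, Jones K", Field.Store.YES));
citation.add(new IntField("pub-date", 20140226, Field.Store.YES));
citation.add(new TextField("abstract",
    "We examined the association between ...", Field.Store.NO));
// a field may have multiple values; each MeSH keyword is its own term
citation.add(new StringField("mesh", "Aspirin", Field.Store.YES));
citation.add(new StringField("mesh", "Cardiovascular Diseases", Field.Store.YES));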

The field type is an object that implements oal.index.IndexableFieldType.  Values may be text, binary, or numeric.  The value of a field can be indexed for search or stored for retrieval or both.  The value of an indexed field is processed into terms that are stored in the inverted index.  The raw value of a stored field is stored in the index in a non-inverted manner.  Storing the raw values allows you to retrieve them at search time but may consume substantial space.

Indexing and storage options are specified via setter methods on oal.document.FieldType.  These include the method setIndexed(boolean), which specifies whether or not to index a field, and the method setTokenized(boolean), which specifies whether or not the value should be tokenized.  The method setOmitNorms(boolean) controls how Lucene computes term frequency.  Lucene’s default behavior is to represent term frequency as a proportion by computing the ratio of the number of times a term occurs to the total number of terms in the document, instead of storing a simple term frequency count.  To do this calculation it stores a normalizing factor for each field that is indexed.  Calling method setOmitNorms with value true turns this off and the raw term frequency is used instead.

Some indexing choices are interdependent.  Lucene checks the values on a FieldType object at Field construction time and throws an IllegalArgumentException if the FieldType has inconsistent values.  For example, a field must be either indexed or stored or both, so indexed and/or stored must be true.  If indexed is false, then stored must be true and all other indexing options should be set to false.  The following code fragment defines a custom FieldType and then creates a Field of this type:

FieldType myFieldType = new FieldType();
myFieldType.setIndexed(true);
myFieldType.setOmitNorms(true);
myFieldType.setIndexOptions(IndexOptions.DOCS_AND_FREQS);
myFieldType.setStored(false);
myFieldType.setTokenized(true);
myFieldType.freeze();
Field myField = new Field("field name",
                          "field value",
                          myFieldType);

The source code of the subclasses of Field provides good examples of how to define a FieldType.

Analyzing Text into Tokens

Search and indexing over text fields require processing text data into tokens.  The package oal.analysis contains the base classes for tokenizing and indexing text.  Processing may consist of a sequence of transformations, e.g., whitespace tokenization, case normalization, stop-listing, and stemming.

The abstract class oal.analysis.TokenStream breaks the incoming text into a sequence of tokens that are retrieved using an iterator-like pattern.  TokenStream has two subclasses: oal.analysis.Tokenizer and oal.analysis.TokenFilter.  A Tokenizer takes a java.io.Reader as input, whereas a TokenFilter takes another oal.analysis.TokenStream as input.  This allows us to chain together a tokenizer and a series of token filters, such that the initial Tokenizer gets its input from a reader and each filter operates on tokens from the preceding TokenStream in the chain.

An oal.analysis.Analyzer supplies the indexing and searching processes with TokenStreams on a per-field basis.  It maps field names to tokenizers and may also supply a default analyzer for unknown field names.  Lucene includes many analysis modules that provide concrete implementations of different kinds of analyzers.  As of Lucene 4, these modules are bundled into separate jarfiles.  There are several dozen language-specific analysis packages, from oal.analysis.ar for Arabic to oal.analysis.tr for Turkish.  The package oal.analysis.core provides several general-purpose analyzers, tokenizers, and tokenizer factory classes.
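For per-field behavior, the analyzers-common module provides oal.analysis.miscellaneous.PerFieldAnalyzerWrapper.  Here is a sketch that uses a keyword analyzer (oal.analysis.core.KeywordAnalyzer) for a hypothetical category field and a standard analyzer for everything else:

Map<String,Analyzer> fieldAnalyzers = new HashMap<String,Analyzer>();
fieldAnalyzers.put("category", new KeywordAnalyzer());
Analyzer perFieldAnalyzer
    = new PerFieldAnalyzerWrapper(new StandardAnalyzer(Version.LUCENE_45),
                                  fieldAnalyzers);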

The abstract class oal.analysis.Analyzer contains methods used to extract terms from input text.  Concrete subclasses of Analyzer must override the method createComponents, which returns an object of the nested class TokenStreamComponents that defines the tokenization process and provides access to the initial and final components of the processing pipeline.  The initial component is a Tokenizer that handles the input source.  The final component is an instance of TokenFilter, and it is the TokenStream returned by the method Analyzer.tokenStream(String,Reader).  Here is an example of a custom Analyzer that tokenizes its inputs into individual words with all letters lowercase.

Analyzer analyzer = new Analyzer() { 
    @Override protected TokenStreamComponents 
    createComponents(String fieldName, Reader reader) {
        Tokenizer source = 
            new StandardTokenizer(VERSION,reader);
        TokenStream filter = 
            new LowerCaseFilter(VERSION,source);
        return new TokenStreamComponents(source, filter);
    }
};

Note that the constructors for the oal.analysis.standard.StandardTokenizer and oal.analysis.core.LowerCaseFilter objects require a Version argument.  Further note that package oal.analysis.standard is distributed in the jarfile lucene-analyzers-common-4.x.y.jar, where x and y are the minor version and release number.
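To see the iterator-like pattern in action, here is a sketch that runs the analyzer above over a sample string and prints each token.  The field name is arbitrary because this analyzer treats all fields the same way; CharTermAttribute lives in oal.analysis.tokenattributes.

TokenStream ts = analyzer.tokenStream("text",
                                      new StringReader("The Quick BROWN fox."));
CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
ts.reset();                        // required before the first incrementToken()
while (ts.incrementToken()) {
    System.out.println(termAtt.toString());   // prints: the, quick, brown, fox
}
ts.end();
ts.close();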

Indexing Documents

Document indexing consists of first constructing a document that contains the fields to be indexed or stored, then adding that document to the index.  The key classes involved in indexing are oal.index.IndexWriter, which is responsible for adding documents to an index, and oal.store.Directory, which is the storage abstraction used for the index itself.  Directories provide an interface that’s similar to an operating system’s file system.  A Directory contains any number of sub-indexes called segments.  Maintaining the index as a set of segments allows Lucene to rapidly update and delete documents from the index.

The following example shows how to create a Lucene index given a directory containing a set of data files.  The data in this example is taken from the 20 Newsgroups corpus, a set of roughly 20,000 messages posted to 20 different newsgroups.  This code is excerpted from the example program in section 7.11 of Text Processing in Java that shows how to do document classification using Lucene.  We’ll get to document classification in a later post.  Right now we just want to go over how to build the index.

void buildIndex(File indexDir, File dataDir) 
    throws IOException, FileNotFoundException {

    Directory fsDir = FSDirectory.open(indexDir);
    IndexWriterConfig iwConf 
        = new IndexWriterConfig(VERSION,mAnalyzer);
    iwConf.setOpenMode(IndexWriterConfig.OpenMode.CREATE);
    IndexWriter indexWriter
        = new IndexWriter(fsDir,iwConf);

    File[] groupsDir = dataDir.listFiles();
    for (File group : groupsDir) {
        String groupName = group.getName();
        File[] posts = group.listFiles();
        for (File postFile : posts) {
            String number = postFile.getName();
            NewsPost post = parse(postFile, groupName, number);
            Document d = new Document();
            d.add(new StringField("category",
                                  post.group(),Store.YES));
            d.add(new TextField("text",
                                post.subject(),Store.NO));
            d.add(new TextField("text",
                                post.body(),Store.NO));
            indexWriter.addDocument(d);
        }
    }
    indexWriter.commit();
    indexWriter.close();
}

The method buildIndex walks over the training data directory and parses the newsgroup messages into a NewsPost object, which is a simple domain model of a newsgroup post, consisting of the newsgroup name, subject, body, and filename (a string of numbers).  We treat each post as a Lucene document consisting of two fields: a StringField named category for the newsgroup name and a TextField named text that holds the message subject and message body.  Method buildIndex takes two java.io.File arguments: the directory for the Lucene index and a directory for one set of newsgroup posts where the directory name is the same as the Usenet newsgroup name, e.g.: rec.sport.baseball.

The first four statements in buildIndex create an oal.store.FSDirectory that stores the index as a file-system directory and an IndexWriter that does the work of analyzing documents and adding them to the index.  The behavior of the IndexWriter is specified by an IndexWriterConfig object.  Documents are indexed via the IndexWriter's default Analyzer.  At the outset of the program we define a static final Lucene Version enum constant named VERSION and a Lucene Analyzer named mAnalyzer.  We supply these as arguments to the IndexWriterConfig constructor.  We call the setOpenMode method with the enum constant IndexWriterConfig.OpenMode.CREATE, which causes the index writer to create a new index or overwrite an existing one.

A for loop iterates over all files in the data directory.  Each file is parsed into a Java NewsPost; then a corresponding Lucene document is created and fields are added to it.  This document has two fields: category and text.  Newsgroup names are stored as simple unanalyzed strings.  The field named text stores both the message subject and message body.  A document may have multiple values for a given field.  Search over that field will be over all values for that field; however, phrase search over that field will not match across different field values.  When the IndexWriter.addDocument(Document) method is called, the document is added to the index and the constituent fields are indexed and stored accordingly.

The final two statements invoke the IndexWriter's commit and close methods, respectively.  The commit method writes all pending changes to the directory and syncs all referenced index files so that the changes are visible to index readers.  Lucene now implements a two-phase commit so that if the commit succeeds, the changes to the index will survive a crash or power loss.  A commit may be expensive, and part of performance tuning is determining when to commit as well as how often and how aggressively to merge the segments of the index.
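Segments are also what make incremental updates and deletes cheap.  Here is a sketch, assuming each document carries a unique identifier in a StringField named id; the newsgroup example above has no such field, so both the field and the variable newDoc are invented for illustration:

// delete every document whose id field holds the term "post-12345"
indexWriter.deleteDocuments(new Term("id", "post-12345"));
// atomically delete the old version and add newDoc (a freshly built Document)
indexWriter.updateDocument(new Term("id", "post-12346"), newDoc);
indexWriter.commit();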

Document Search and Search Ranking

The Lucene search API takes a search query and returns a set of documents ranked by relevancy with documents most similar to the query having the highest score.  Lucene provides a highly configurable hybrid form of search that combines exact boolean searches with softer, more relevance-ranking-oriented vector-space search methods.  All searches are field specific, because Lucene indexes terms and a term is composed of a field name and a token.

Search Queries

Lucene specifies a language in which queries may be expressed.  For instance, computer NOT java produces a query that specifies the term computer must appear in the default field and the term java must not appear.  Queries may specify fields, as in text:java, which requires the term java to appear in the text field of a document.

The query syntax includes basic term and field specifications; modifiers for wildcard, fuzzy, proximity, and range searches; operators for requiring a term to be present or absent; and boolean operators for combining queries.  Finally, sub-queries may be boosted by providing numeric values to raise or lower their prominence relative to other parts of the query.  The full syntax specification is in the package-level javadoc for package oal.queryparser.classic.  A bonus feature of Text Processing in Java is a quick one-page reference guide to Lucene’s search query syntax (see Figure 7.2).
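Here are a few illustrative query strings in that syntax (the field names are illustrative, drawn from the examples in this post):

text:java                         term query against the text field
text:"fast search"                phrase query
jav*                              wildcard
java~                             fuzzy match
"lucene index"~5                  proximity: terms within five positions
date:[20130101 TO 20140101]       range query over a date field
+java -coffee                     required and prohibited terms
text:java^2 text:lucene           boost the first clause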

Queries may be constructed programmatically using the dozen or so built-in implementations of the oal.search.Query abstract base class.  The most basic kind of query is a search for a single token on a single field, i.e., a single term.  This query is implemented in Lucene’s oal.search.TermQuery class.  A term query is constructed from a Term object, which is constructed from a field name and text for the term, both specified as strings.
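For example, here is a sketch of building the query computer NOT java programmatically instead of via the query parser:

Query computer = new TermQuery(new Term("text", "computer"));
Query java = new TermQuery(new Term("text", "java"));
BooleanQuery query = new BooleanQuery();
query.add(computer, BooleanClause.Occur.SHOULD);   // should match
query.add(java, BooleanClause.Occur.MUST_NOT);     // must not match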

Search Scoring

Lucene’s default search scoring algorithm weights results using TF-IDF, term frequency-inverse document frequency.  Term frequency means that high-frequency terms within a document have higher weight than do low-frequency terms.  Inverse document frequency means that terms that occur frequently across many documents in a collection are less likely to be meaningful descriptors of any given document and are therefore down-weighted.  This effectively filters out common words.
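In its simplest textbook form, the TF-IDF weight of a term t in a document d over a collection of N documents is tf(t,d) × log(N / df(t)), where tf(t,d) is the count of t in d and df(t) is the number of documents containing t.  Lucene’s actual formula (documented in class oal.search.similarities.TFIDFSimilarity) adds length normalization and boosts, but the intuition is the same.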

As of Lucene 4, the API provides alternative scoring algorithms and a pluggable architecture that allows developers to build their own custom scoring models.
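For example, Lucene 4 ships with a BM25 implementation in oal.search.similarities.BM25Similarity.  Here is a sketch of plugging it in at search time; the same similarity should also be set on the IndexWriterConfig at indexing time so that index-time norms and search-time scoring agree:

IndexSearcher searcher = new IndexSearcher(reader);
searcher.setSimilarity(new BM25Similarity());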

Search Example

The following program illustrates the basic sequence of search operations.  This program is a simplified version of the Lucene search example program included with the book.  When run from the command line, this program takes three arguments: the path to a Lucene index; a query string; the maximum number of results to return.  It runs the specified query against the specified index and prints out the rank, score, and internal document id for all documents that match the query.

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

import java.io.File;
import java.io.IOException;

public class LuceneSearch {
    public static void main(String[] args) 
        throws ParseException, CorruptIndexException,
               IOException {

        File indexDir = new File(args[0]);
        String query = args[1];
        int maxHits = Integer.parseInt(args[2]);

        Directory fsDir = FSDirectory.open(indexDir);
        DirectoryReader reader = DirectoryReader.open(fsDir);
        IndexSearcher searcher = new IndexSearcher(reader);

        Analyzer stdAn 
            = new StandardAnalyzer(Version.LUCENE_45);
        QueryParser parser 
            = new QueryParser(Version.LUCENE_45,"text",stdAn);
        Query q = parser.parse(query);

        TopDocs hits = searcher.search(q,maxHits);
        ScoreDoc[] scoreDocs = hits.scoreDocs;
        System.out.println("hits=" + scoreDocs.length);
        System.out.println("Hits (rank,score,docId)");
        for (int n = 0; n < scoreDocs.length; ++n) {
            ScoreDoc sd = scoreDocs[n];
            float score = sd.score;
            int docId = sd.doc;
            System.out.printf("%3d %4.2f %d\n",
                              n, score, docId);
        }
        reader.close();
    }
}

The main method is declared to throw a Lucene corrupt index exception if the index isn’t well formed, a Lucene parse exception if the query isn’t well formed, and a general Java I/O exception if there is a problem reading from or writing to the index directory.  It starts off by reading in command-line arguments, then it creates a Lucene directory, index reader, index searcher, and query parser, and then it uses the query parser to parse the query.

Lucene uses instances of the aptly named IndexReader to read data from an index; in this example, we use an instance of class oal.index.DirectoryReader.  The oal.search.IndexSearcher class performs the actual search.  Every index searcher wraps an index reader to get a handle on the indexed data.  Once we have an index searcher, we can supply queries to it and enumerate results in order of their score.

The class oal.queryparser.classic.QueryParserBase provides methods to parse a query string into an oal.search.Query object, throwing an oal.queryparser.classic.ParseException if the query string is malformed.  All tokens are analyzed using the specified Analyzer.  This example is designed to search an index for Lucene version 4.5 that was created using an analyzer of class oal.analysis.standard.StandardAnalyzer and that contains a text field named text.

It is important to use the same analyzer in the query parser as is used in the creation of the index.  If they don’t match, queries that should succeed will fail whenever the two analyzers produce different tokenizations of the same input.  For instance, if we apply stemming to the contents of the field text during indexing and reduce the word codes to code, but we don’t apply stemming to the query, then a search for the word codes in field text will fail.  The search term is text:codes but the index contains only the term text:code.  Note that package oal.queryparser.classic is distributed in the jarfile lucene-queryparser-4.x.y.jar.

The actual search is done via a call to the IndexSearcher's search method, which takes two arguments: the query and an upper bound on the number of hits to return.  It returns an instance of the Lucene class oal.search.TopDocs.  The TopDocs result provides access to an array of search results.  Each result is an instance of the Lucene class oal.search.ScoreDoc, which encapsulates a document reference with a floating-point score.  The array of search results is sorted in decreasing order of score, with higher scores representing better matches.  For each ScoreDoc object, we get the score from the public member variable score and the document reference number (Lucene’s internal identifier for the doc) from the public member variable doc.  The document identifier can be used to retrieve the document from the searcher.  Here, we’re just using the id as a diagnostic to illustrate the basics of search, so we just print the number, even though it is only an internal reference.  Finally, we close the IndexReader.
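To retrieve stored field values instead of just printing internal ids, here is a sketch that fetches the category field stored by the buildIndex method above:

Document doc = searcher.doc(docId);
String category = doc.get("category");
System.out.println(docId + " -> " + category);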

Discussion and Further Reading

This post tries to cover the essentials of Lucene 4 in a very short amount of space.  To do this, it keeps the examples minimal, and links to the Lucene javadocs substitute for detailed explanations of how classes and methods behave.  Putting these code fragments together into a full application is left as an exercise for the reader.

Back when Lucene was at version 2.4, I wrote my first blog post titled “Lucene 2.4 in 60 Seconds.”  A more accurate title would have been “Lucene 2.4 in 60 Seconds or More.”  Likewise, a more accurate title for this post would be “The Essential Essentials of Text Search and Indexing with Lucene 4,” but that’s just not very snappy.  For more on the essentials of search and indexing with Lucene, please check out Chapter Seven of Text Processing in Java (and all the other chapters as well).

To cover all of Lucene would take an entire book, and there are many good ones out there. My favorite is Lucene in Action, Second Edition, which, alas, only covers version 3.
