Using Luke the Lucene Index Browser to develop Search Queries


Luke is a GUI tool written in Java that allows you to browse the contents of a Lucene index, examine individual documents, and run queries over the index. Whether you’re developing with PyLucene, Lucene.NET, or Lucene Core, Luke is your friend.

Downloading and running Luke

Downloads are available from http://code.google.com/p/luke/. You can download the source or just the standalone binary .jar file. Right now we’re using the just-released Luke 4.0.0-ALPHA standalone binary. To run Luke, just fire up the jar file from the command line:

> java -jar lukeall-4.0.0-ALPHA.jar

Once launched, Luke opens with a dialog menu to open an index. We’re using an index called fed-papers.idx, which is an index over the Federalist Papers, a set of 85 op-ed pieces written by James Madison, Alexander Hamilton, and/or John Jay. This is taken from the example in our recently updated Lucene 3 tutorial. To build the index yourself, download the data and accompanying source code and follow the instructions in the tutorial. Each of the 85 papers is treated as a document. The text of the document has been indexed using Lucene’s StandardAnalyzer and is stored in a field named text.
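
For reference, here is a minimal sketch of the kind of indexing code the tutorial covers; it is not the tutorial’s code verbatim, and the class name and inline sample text are ours. It targets the Lucene 3.6 API and shows the relevant point: each paper’s text is analyzed with the StandardAnalyzer and also stored, in a field named text.

  import java.io.File;

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.index.IndexWriterConfig;
  import org.apache.lucene.store.FSDirectory;
  import org.apache.lucene.util.Version;

  // Minimal indexing sketch: one paper's text goes into a stored, analyzed "text" field.
  public class IndexOnePaper {
      public static void main(String[] args) throws Exception {
          StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);
          IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_36, analyzer);
          IndexWriter writer = new IndexWriter(FSDirectory.open(new File("fed-papers.idx")), config);
          Document doc = new Document();
          doc.add(new Field("text",
                  "FEDERALIST No. 1 ... To the People of the State of New York: ...",
                  Field.Store.YES, Field.Index.ANALYZED));
          writer.addDocument(doc);
          writer.close();
      }
  }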

The following is a screenshot of the Luke GUI upon first opening the index. We’ve circled the main controls on the top left corner of the GUI. Luke opens with the Overview tab pane.

All 85 documents have a similar structure. They start with an identifying number, e.g. FEDERALIST No. 1. The opening salutation is: To the People of the State of New York:. Looking at the top terms in the index, we see that there are 85 documents which contain the words {federalist, people, state, york} as well as common function words. The StandardAnalyzer has stop-listed other function words, including {to, of, the, no}. Otherwise, these too would be among the top terms with a document frequency of 85.

Exploring Document Indexing

The Document tab pane lets you examine individual documents in the index. In the screenshot below we have selected the 47th document in the index, FEDERALIST No. 48. Circled in red are the series of controls used to get to this point. First we chose the Document tab pane, and then used the “Browse by document number” control to choose document 47. The bottom half of the window shows the document fields.

Clicking on the “Reconstruct &amp; Edit” control opens a new pop-up window which allows us to inspect the document contents on a field-by-field basis. Below we show side-by-side screenshots of the text field. On the left we see the raw text as stored; on the right we see the result of tokenization and indexing via the Lucene StandardAnalyzer. The StandardAnalyzer chains together a StandardTokenizer, StandardFilter, LowerCaseFilter, and StopFilter. Tokenization happens first: it strips punctuation and assigns each token a position. The tokens are then lower-cased and stop-listed.

We have circled the text of the opening salutation: To the People of the State of New York:. It consists of 9 words followed by 1 punctuation symbol. The tokenizer strips out the final colon. The stop-list filter removes the tokens at positions {1, 2, 4, 5, 7}. The text of tokens which have been indexed is displayed as is. Tokens which were stop-listed are missing from the index, so Luke displays the string null followed by the number of consecutive stop-listed tokens.
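
To reproduce outside of Luke what the right-hand panel shows, a short analysis loop like the following sketch (Lucene 3.6 API, class name ours) prints each surviving token along with its position; the position increments are where the stop-listed gaps are recorded.

  import java.io.StringReader;

  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
  import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
  import org.apache.lucene.util.Version;

  // Sketch: run the StandardAnalyzer over the salutation and print each surviving
  // token with its (0-based) position; stop-listed tokens show up only as gaps.
  public class ShowTokens {
      public static void main(String[] args) throws Exception {
          StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);
          TokenStream ts = analyzer.tokenStream("text",
                  new StringReader("To the People of the State of New York:"));
          CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
          PositionIncrementAttribute posIncr = ts.addAttribute(PositionIncrementAttribute.class);
          ts.reset();
          int position = -1;
          while (ts.incrementToken()) {
              position += posIncr.getPositionIncrement();  // increments > 1 mark stop-listed gaps
              System.out.println(position + "\t" + term.toString());
          }
          ts.end();
          ts.close();
      }
  }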

Exploring Search

The Search tab pane packs a lot of controls. The annotated screenshot below shows the results of running a search over the index.

In the top right quadrant is an embedded tab pane (tabs circled in red). On the Analysis tab, we have selected the StandardAnalyzer from a pull-down list of Analyzers and selected the default search field from a pull-down list of all fields in the index. Given this, Luke constructs the QueryParser used to parse queries entered into the text box in the upper left quadrant. We entered the words: “Powers of the Judiciary” into this text box, circled in red. Directly below it is the parsed query, also circled in red. Clicking on the “Search” control runs this search over the index. The bottom pane displays the ranked results.

Luke passes the input string from the search text box to the QueryParser which parses it using the specified analyzer and creates a set of search terms, one term per token, over the specified default field. In this example, the StandardAnalyzer tokenizes, lowercases, and stop-lists these words, resulting in two search terms text:powers and text:judiciary. The result of this search pulls back all documents which contain the words powers and/or judiciary.
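
The same parse can be reproduced in code. The following sketch (Lucene 3.6 API, class name ours) builds a QueryParser over the text field with the StandardAnalyzer, parses the query string, and runs the search:

  import java.io.File;

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.queryParser.QueryParser;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.search.Query;
  import org.apache.lucene.search.TopDocs;
  import org.apache.lucene.store.FSDirectory;
  import org.apache.lucene.util.Version;

  // Sketch: parse "Powers of the Judiciary" against the default field "text" and search.
  public class RunParsedQuery {
      public static void main(String[] args) throws Exception {
          IndexReader reader = IndexReader.open(FSDirectory.open(new File("fed-papers.idx")));
          IndexSearcher searcher = new IndexSearcher(reader);
          QueryParser parser = new QueryParser(Version.LUCENE_36, "text",
                  new StandardAnalyzer(Version.LUCENE_36));
          Query query = parser.parse("Powers of the Judiciary");
          System.out.println(query);              // text:powers text:judiciary
          TopDocs hits = searcher.search(query, 10);
          System.out.println(hits.totalHits + " matching documents");
          searcher.close();
          reader.close();
      }
  }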

To drill down on Lucene search, we use the Explain Structure control, circled in red on the following screenshot.

When clicked, this pops up another pane which shows all details of the query constructed from the search expression. The structure of the query used in this example is:

lucene.BooleanQuery
  clauses=2, maxClauses=1024
  Clause 0: SHOULD
    lucene.TermQuery
      Term: field='text' text='powers'
  Clause 1: SHOULD
    lucene.TermQuery
      Term: field='text' text='judiciary'
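
Built by hand rather than by the QueryParser, the same query is just a BooleanQuery with two SHOULD-ed TermQuery clauses; a sketch against the Lucene 3.6 API (class name ours):

  import org.apache.lucene.index.Term;
  import org.apache.lucene.search.BooleanClause;
  import org.apache.lucene.search.BooleanQuery;
  import org.apache.lucene.search.TermQuery;

  // Sketch: the same BooleanQuery built by hand.
  public class BuildBooleanQuery {
      public static void main(String[] args) {
          BooleanQuery query = new BooleanQuery();
          query.add(new TermQuery(new Term("text", "powers")), BooleanClause.Occur.SHOULD);
          query.add(new TermQuery(new Term("text", "judiciary")), BooleanClause.Occur.SHOULD);
          System.out.println(query);              // text:powers text:judiciary
      }
  }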

The search expression can be any legal search string according to the Lucene Query Parser Syntax. To search for the phrase powers of the judiciary we need to enclose it in double quotes. But this new search produces no results, which is clearly wrong, since this phrase occurs in the title of FEDERALIST No. 80 (see search results above).

In the query details we see that the input has been parsed into text:"powers judiciary". The Explain Structure feature returns the following:

lucene.PhraseQuery, slop=0
  pos: [0,1]
  Term 0: field='text' text='powers'
  Term 1: field='text' text='judiciary'

The problem here is that the token positions specified don’t allow for the intervening stop-listed words. To correct this, we need to adjust Luke’s default settings on the QueryParser. The screenshot below shows both the controls changed and the query results.

First we go to the QueryParser tab in the set of search controls in the top right quadrant, then we click the checkbox labeled Enable positionIncrements. Now the parsed query looks like this: text:"powers ? ? judiciary", which translates to the following programmatic query:

lucene.PhraseQuery, slop=0
  pos: [0,3]
  Term 0: field='text' text='powers'
  Term 1: field='text' text='judiciary'
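
In code, the same fix is one setter call on the QueryParser before parsing; a sketch of it (Lucene 3.6 API, class name ours):

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.queryParser.QueryParser;
  import org.apache.lucene.util.Version;

  // Sketch: make the parser account for stop-listed positions, as Luke's checkbox does.
  public class ParsePhraseQuery {
      public static void main(String[] args) throws Exception {
          QueryParser parser = new QueryParser(Version.LUCENE_36, "text",
                  new StandardAnalyzer(Version.LUCENE_36));
          parser.setEnablePositionIncrements(true);
          System.out.println(parser.parse("\"Powers of the Judiciary\""));
          // prints: text:"powers ? ? judiciary"
      }
  }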

Finally, we select the single search result, and click on the Explain control (circled in red). This pops up another window which explains the query score (outlined in red). Here is the text of the explanation:

0.0847  weight(text:"powers ? ? judiciary" in 79) [DefaultSimilarity], result of:
  0.0847  fieldWeight in 79, product of:
    1.0000  tf(freq=1.0), with freq of:
      1.0000  phraseFreq=1.0
    3.6129  idf(), sum of:
      1.3483  idf(docFreq=59, maxDocs=85)
      2.2646  idf(docFreq=23, maxDocs=85)
    0.0234  fieldNorm(doc=79)
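
Luke’s Explain control corresponds to IndexSearcher.explain(). The following sketch (Lucene 3.6 API, class name ours) prints the same kind of breakdown for each hit of the phrase query:

  import java.io.File;

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.queryParser.QueryParser;
  import org.apache.lucene.search.Explanation;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.search.Query;
  import org.apache.lucene.search.ScoreDoc;
  import org.apache.lucene.search.TopDocs;
  import org.apache.lucene.store.FSDirectory;
  import org.apache.lucene.util.Version;

  // Sketch: print the score explanation for every hit of the phrase query.
  public class ExplainScore {
      public static void main(String[] args) throws Exception {
          IndexReader reader = IndexReader.open(FSDirectory.open(new File("fed-papers.idx")));
          IndexSearcher searcher = new IndexSearcher(reader);
          QueryParser parser = new QueryParser(Version.LUCENE_36, "text",
                  new StandardAnalyzer(Version.LUCENE_36));
          parser.setEnablePositionIncrements(true);
          Query query = parser.parse("\"Powers of the Judiciary\"");
          TopDocs hits = searcher.search(query, 10);
          for (ScoreDoc hit : hits.scoreDocs) {
              Explanation explanation = searcher.explain(query, hit.doc);
              System.out.println(explanation.toString());
          }
          searcher.close();
          reader.close();
      }
  }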

Using the Lucene XML Query Parser

The Lucene API contains many kinds of queries beyond those generated by the QueryParser. You can use Luke to develop these queries as well, via the Lucene XML Query Parser. It is almost impossible to find any on-line documentation on this most excellent contribution to Lucene by Mark Harwood. Luckily, it is distributed with Lucene source tgz files. For Lucene 3.6 the path to this documentation is: lucene-3.6.0/contrib/xml-query-parser/docs/index.html.

Recast as a Lucene XML query, our original search text:powers text:judiciary becomes:

  <BooleanQuery fieldName="text">
      <Clause occurs="should">
          <TermQuery>powers</TermQuery>
      </Clause>
      <Clause occurs="should">
          <TermQuery>judiciary</TermQuery>
      </Clause>
  </BooleanQuery>

The screenshot below shows the results of pasting the above XML into the search expression textbox with the QueryParser control Use XML Query Parser turned on. (Note: currently this works in Luke 3.5 but not in Luke 4.0-ALPHA. We hate the bleeding edge).

The XML rewrites to exactly the same query as our first query.
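
You can also run the same XML from your own code via the contrib module’s CoreParser. The sketch below (class name ours) assumes the Lucene 3.6 lucene-xml-query-parser contrib jar is on the classpath and uses the CoreParser(defaultField, analyzer) constructor from that release:

  import java.io.ByteArrayInputStream;

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.search.Query;
  import org.apache.lucene.util.Version;
  import org.apache.lucene.xmlparser.CoreParser;

  // Sketch: turn the XML query into an ordinary Lucene Query object.
  public class ParseXmlQuery {
      public static void main(String[] args) throws Exception {
          String xml =
              "<BooleanQuery fieldName=\"text\">"
            + "  <Clause occurs=\"should\"><TermQuery>powers</TermQuery></Clause>"
            + "  <Clause occurs=\"should\"><TermQuery>judiciary</TermQuery></Clause>"
            + "</BooleanQuery>";
          CoreParser parser = new CoreParser("text",
                  new StandardAnalyzer(Version.LUCENE_36));
          Query query = parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
          System.out.println(query);              // text:powers text:judiciary
      }
  }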

Big deal, you might be saying right about now. That’s a whole lotta chopping for not much kindling. Au contraire! One of the reasons for developing the XML Query Syntax is this:

To bridge the growing gap between Lucene query/filtering functionality and the set of functionality accessible through the standard Lucene QueryParser syntax.

For example, a while back, Mark Miller blogged about Lucene SpanQueries. These are similar to, but not the same as, PhraseQuery. We can use Luke to compare and contrast the two. Here’s our earlier phrase query Powers of the Judiciary recast as a span query.

  <SpanNear slop="3" inOrder="true" fieldName="text">
      <SpanTerm>powers</SpanTerm>
      <SpanTerm>judiciary</SpanTerm>
  </SpanNear>
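
For comparison, here is the same span query built programmatically; a sketch against the Lucene 3.6 span query API (class name ours):

  import org.apache.lucene.index.Term;
  import org.apache.lucene.search.spans.SpanNearQuery;
  import org.apache.lucene.search.spans.SpanQuery;
  import org.apache.lucene.search.spans.SpanTermQuery;

  // Sketch: "powers" followed by "judiciary" within a slop of 3 positions, in order.
  public class BuildSpanQuery {
      public static void main(String[] args) {
          SpanQuery[] clauses = new SpanQuery[] {
              new SpanTermQuery(new Term("text", "powers")),
              new SpanTermQuery(new Term("text", "judiciary"))
          };
          SpanNearQuery query = new SpanNearQuery(clauses, 3, true);
          System.out.println(query);    // spanNear([text:powers, text:judiciary], 3, true)
      }
  }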

The screenshot below shows the results of running this query.

FEDERALIST No. 48 scores slightly better on this query than does FEDERALIST No. 80. Understanding why is left as an exercise to the reader.

Responses to “Using Luke the Lucene Index Browser to develop Search Queries”


  1. Mark Says:

    Re Luke, XML QP and Lucene 4.0 issues – see https://groups.google.com/forum/?fromgroups#!topic/luke-discuss/Uuy4V_HQllQ for a fix. Glad you find it useful :)


  2. irurumuhammad Says:

    hello sir.. I’m new to Lucene, and now I’m using Lucene 4.6.0

    I have already built an index with Lucene, and I want to check my index with Luke, but when I point Luke at my index directory I get an error and can’t read my index..

    Do you have any idea why my index can’t be read with Luke? And do you know how to solve it? Thanks in advance
