This book teaches you how to master the subtle art of multilingual text processing and prevent text data corruption. Data corruption is the all-too-common problem of words garbled into strings of question marks, black diamonds, or random glyphs. In Japanese this is called mojibake (“character change”), written 文字化け, but in your browser it might look like this: ����� or this: æ–‡å—åŒ–ã‘. When that happens, pinpointing the source of the error can be surprisingly difficult and time-consuming. The information and example programs in this book make it easy.
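The classic mojibake shown above, UTF-8 bytes misread under a legacy single-byte encoding, takes only a few lines of Java to reproduce. The sketch below (an illustration, not an excerpt from the book) encodes 文字化け as UTF-8 and then decodes the bytes as windows-1252:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class Mojibake {
    public static void main(String[] args) {
        String original = "文字化け";  // "mojibake" in Japanese

        // Encode correctly: each character becomes three UTF-8 bytes.
        byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);

        // The bug: decode those UTF-8 bytes as if they were windows-1252.
        // Every byte maps to one (wrong) character, so four kanji and kana
        // balloon into a run of Latin-looking glyphs.
        String garbled = new String(utf8, Charset.forName("windows-1252"));

        System.out.println(garbled);  // begins with "æ–‡" instead of "文"
    }
}
```

Running the decode step in reverse, with the charsets swapped, is equally common and produces question marks or replacement characters instead.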
This book also provides an introduction to natural language processing using Lucene and Solr. It covers the tools and techniques necessary for managing large collections of text data, whether they come from news feeds, databases, or legacy documents. Each chapter contains executable programs that can also be used for text data forensics. Topics covered include:
- Unicode code points
- Character encodings from ASCII and Big5 to UTF-8 and UTF-32LE
- Character normalization using International Components for Unicode (ICU)
- Java I/O, including working directly with zip, gzip, and tar files
- Regular expressions in Java
- Transporting text data via HTTP
- Parsing and generating XML, HTML, and JSON
- Using Lucene 4 for natural language search and text classification
- Search, spelling correction, and clustering with Solr 4
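As a taste of the first topic, the distinction between Unicode code points and Java `char` values shows up as soon as text leaves the Basic Multilingual Plane. The following sketch (an illustration under that assumption, not code from the book) uses a supplementary character, which UTF-16 stores as a surrogate pair:

```java
public class CodePointDemo {
    public static void main(String[] args) {
        // U+1D541 MATHEMATICAL DOUBLE-STRUCK CAPITAL J is outside the
        // Basic Multilingual Plane, so UTF-16 encodes it as two char units.
        String s = "\uD835\uDD41";  // the single character 𝕁

        System.out.println(s.length());                      // 2 char units
        System.out.println(s.codePointCount(0, s.length())); // 1 code point
        System.out.printf("U+%04X%n", s.codePointAt(0));     // U+1D541
    }
}
```

Code that indexes strings by `char`, rather than by code point, silently splits such characters in half, which is one way well-formed input turns into mojibake downstream.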
Other books on text processing presuppose much of the material covered in this book. They gloss over the details of transforming text from one format to another and assume perfect input data. The messy reality of raw text will have you reaching for this book again and again.