Package com.lucene.analysis

API and code to convert text into indexable tokens.

Class Summary

Analyzer: An Analyzer builds TokenStreams, which analyze text.
LetterTokenizer: A LetterTokenizer is a Tokenizer that divides text at non-letters.
LowerCaseFilter: Normalizes token text to lower case.
LowerCaseTokenizer: LowerCaseTokenizer performs the function of LetterTokenizer and LowerCaseFilter together.
PorterStemFilter: Transforms the token stream according to the Porter stemming algorithm.
PorterStemmer: Stemmer implementing the Porter stemming algorithm; transforms a word into its root form.
SimpleAnalyzer: An Analyzer that filters LetterTokenizer with LowerCaseFilter.
StopAnalyzer: Filters LetterTokenizer with LowerCaseFilter and StopFilter.
StopFilter: Removes stop words from a token stream.
Token: A Token is an occurrence of a term in the text of a field.
TokenFilter: A TokenFilter is a TokenStream whose input is another TokenStream.
Tokenizer: A Tokenizer is a TokenStream whose input is a Reader.
TokenStream: A TokenStream enumerates the sequence of tokens, either from the fields of a document or from query text.
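The classes above follow a decorator pattern: a Tokenizer produces the initial stream from a Reader, and each TokenFilter wraps another TokenStream to transform its output. The sketch below illustrates that pattern in self-contained Java; the class names and the simplified String-based next() method are illustrative only, not the actual com.lucene.analysis API.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

// A TokenStream enumerates tokens; next() returns null when exhausted.
interface TokenStream {
    String next() throws IOException;
}

// A Tokenizer is a TokenStream whose input is a Reader.
// This one divides text at non-letters, like LetterTokenizer.
class SimpleLetterTokenizer implements TokenStream {
    private final Reader input;
    SimpleLetterTokenizer(Reader input) { this.input = input; }
    public String next() throws IOException {
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = input.read()) != -1) {
            if (Character.isLetter((char) c)) {
                sb.append((char) c);
            } else if (sb.length() > 0) {
                return sb.toString();      // non-letter ends the token
            }
        }
        return sb.length() > 0 ? sb.toString() : null;
    }
}

// A TokenFilter is a TokenStream whose input is another TokenStream.
// This one normalizes token text to lower case, like LowerCaseFilter.
class SimpleLowerCaseFilter implements TokenStream {
    private final TokenStream input;
    SimpleLowerCaseFilter(TokenStream input) { this.input = input; }
    public String next() throws IOException {
        String t = input.next();
        return t == null ? null : t.toLowerCase();
    }
}

public class AnalysisSketch {
    public static void main(String[] args) throws IOException {
        // Compose the pipeline the same way SimpleAnalyzer would:
        // LetterTokenizer wrapped by LowerCaseFilter.
        TokenStream ts = new SimpleLowerCaseFilter(
            new SimpleLetterTokenizer(new StringReader("The Quick, Brown FOX!")));
        for (String t; (t = ts.next()) != null; ) {
            System.out.println(t);   // the / quick / brown / fox
        }
    }
}
```

Because filters take a TokenStream rather than a Tokenizer, any number of them can be stacked in any order, which is exactly how the Analyzer subclasses in this package differ from one another.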
 

Package com.lucene.analysis Description

API and code to convert text into indexable tokens.
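As a concrete illustration of converting text into indexable tokens, the tokenize, lower-case, and stop-filter stages that StopAnalyzer chains together can be sketched as one plain-Java method. The helper name, the regex-based splitting, and the tiny stop set are assumptions for this sketch, not the package's own implementation (StopAnalyzer uses its own built-in stop list).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Set;

public class StopFilterSketch {
    // Illustrative stop set only; the real StopAnalyzer has a larger English list.
    static final Set<String> STOP =
        new HashSet<>(Arrays.asList("a", "an", "and", "the", "of", "to"));

    // Split at non-word characters, lower-case each token, and drop
    // stop words: a simplified stand-in for StopAnalyzer's pipeline.
    static List<String> analyze(String text) {
        List<String> out = new ArrayList<>();
        for (String t : text.split("\\W+")) {
            if (t.isEmpty()) continue;
            String norm = t.toLowerCase(Locale.ROOT);
            if (!STOP.contains(norm)) out.add(norm);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(analyze("The Art of Computer Programming"));
        // prints [art, computer, programming]
    }
}
```

The indexable output keeps only the terms likely to matter for search; everything removed here (case, punctuation, stop words) would otherwise inflate the index without improving matching.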