Uses of Class
org.apache.lucene.analysis.TokenStream
Packages that use TokenStream

org.apache.lucene.analysis
    Text analysis.
org.apache.lucene.analysis.standard
    Fast, general-purpose grammar-based tokenizer; StandardTokenizer implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29.
org.apache.lucene.document
    The logical representation of a Document for indexing and searching.
org.apache.lucene.index
    Code to maintain and access indices.
org.apache.lucene.util
    Some utility classes.
org.apache.lucene.util.graph
    Utility classes for working with token streams as graphs.
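All of the uses listed below rely on the same consumer contract defined by TokenStream itself: obtain a stream, call reset(), loop over incrementToken() while reading attributes, then call end() and close(). The following is a minimal sketch of that workflow, assuming Lucene 9.x's lucene-core on the classpath; the field name "body" and the sample text are arbitrary placeholders.

    import java.io.IOException;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class ConsumeTokenStream {
      public static void main(String[] args) throws IOException {
        try (Analyzer analyzer = new StandardAnalyzer();
             TokenStream ts = analyzer.tokenStream("body", "The Quick Brown Fox")) {
          CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
          ts.reset();                      // required before the first incrementToken()
          while (ts.incrementToken()) {    // advance to the next token
            System.out.println(term);      // prints each lowercased term on its own line
          }
          ts.end();                        // record end-of-stream state (e.g. final offset)
        }                                  // try-with-resources closes the stream and analyzer
      }
    }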
Uses of TokenStream in org.apache.lucene.analysis
Subclasses of TokenStream in org.apache.lucene.analysis

final class CachingTokenFilter
    This class can be used if the token attributes of a TokenStream are intended to be consumed more than once.
class FilteringTokenFilter
    Abstract base class for TokenFilters that may remove tokens.
class GraphTokenFilter
    An abstract TokenFilter that exposes its input stream as a graph.
class LowerCaseFilter
    Normalizes token text to lower case.
class StopFilter
    Removes stop words from a token stream.
class TokenFilter
    A TokenFilter is a TokenStream whose input is another TokenStream.
class Tokenizer
    A Tokenizer is a TokenStream whose input is a Reader.

Fields in org.apache.lucene.analysis declared as TokenStream

protected final TokenStream TokenFilter.input
    The source of tokens for this filter.
protected final TokenStream Analyzer.TokenStreamComponents.sink
    Sink tokenstream, such as the outer tokenfilter decorating the chain.

Methods in org.apache.lucene.analysis that return TokenStream

abstract TokenStream TokenFilterFactory.create(TokenStream input)
    Transform the specified input TokenStream.
TokenStream Analyzer.TokenStreamComponents.getTokenStream()
    Returns the sink TokenStream.
protected TokenStream Analyzer.normalize(String fieldName, TokenStream in)
    Wrap the given TokenStream in order to apply normalization filters.
protected final TokenStream AnalyzerWrapper.normalize(String fieldName, TokenStream in)
TokenStream TokenFilterFactory.normalize(TokenStream input)
    Normalize the specified input TokenStream. While the default implementation returns the input unchanged, filters that should be applied at normalization time can delegate to the create method.
final TokenStream Analyzer.tokenStream(String fieldName, Reader reader)
    Returns a TokenStream suitable for fieldName, tokenizing the contents of reader.
final TokenStream Analyzer.tokenStream(String fieldName, String text)
    Returns a TokenStream suitable for fieldName, tokenizing the contents of text.
static TokenStream AutomatonToTokenStream.toTokenStream(Automaton automaton)
    Converts an automaton into a TokenStream.
TokenStream TokenFilter.unwrap()
protected TokenStream AnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in)
    Wraps / alters the given TokenStream for normalization purposes, taken from the wrapped Analyzer, to form new components.
protected final TokenStream DelegatingAnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis with parameters of type TokenStream

abstract TokenStream TokenFilterFactory.create(TokenStream input)
    Transform the specified input TokenStream.
protected TokenStream Analyzer.normalize(String fieldName, TokenStream in)
    Wrap the given TokenStream in order to apply normalization filters.
protected final TokenStream AnalyzerWrapper.normalize(String fieldName, TokenStream in)
TokenStream TokenFilterFactory.normalize(TokenStream input)
    Normalize the specified input TokenStream. While the default implementation returns the input unchanged, filters that should be applied at normalization time can delegate to the create method.
Automaton TokenStreamToAutomaton.toAutomaton(TokenStream in)
    Pulls the graph (including PositionLengthAttribute) from the provided TokenStream, and creates the corresponding automaton where arcs are bytes (or Unicode code points if unicodeArcs = true) from each term.
protected TokenStream AnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in)
    Wraps / alters the given TokenStream for normalization purposes, taken from the wrapped Analyzer, to form new components.
protected final TokenStream DelegatingAnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis with parameters of type TokenStream

CachingTokenFilter(TokenStream input)
    Create a new CachingTokenFilter around input.
FilteringTokenFilter(TokenStream in)
    Create a new FilteringTokenFilter.
GraphTokenFilter(TokenStream input)
    Create a new GraphTokenFilter.
LowerCaseFilter(TokenStream in)
    Create a new LowerCaseFilter, that normalizes token text to lower case.
StopFilter(TokenStream in, CharArraySet stopWords)
    Constructs a filter which removes words from the input TokenStream that are named in the Set.
protected TokenFilter(TokenStream input)
    Construct a token stream filtering the given input.
TokenStreamComponents(Consumer<Reader> source, TokenStream result)
    Creates a new Analyzer.TokenStreamComponents instance.
TokenStreamComponents(Tokenizer tokenizer, TokenStream result)
    Creates a new Analyzer.TokenStreamComponents instance.
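The classes above compose into analysis chains: a Tokenizer produces the initial TokenStream and TokenFilters wrap it. The following is a sketch of a custom Analyzer wired together with Analyzer.TokenStreamComponents, using the LowerCaseFilter and StopFilter listed above; the class name SimpleLowercaseAnalyzer and the stop-word list are arbitrary examples.

    import java.util.List;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.CharArraySet;
    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.StopFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.standard.StandardTokenizer;

    public class SimpleLowercaseAnalyzer extends Analyzer {
      private static final CharArraySet STOP_WORDS =
          new CharArraySet(List.of("a", "an", "the"), true);

      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new StandardTokenizer();       // a Tokenizer is a TokenStream over a Reader
        TokenStream sink = new LowerCaseFilter(source);   // a TokenFilter is a TokenStream over another TokenStream
        sink = new StopFilter(sink, STOP_WORDS);          // drops the stop words above
        return new TokenStreamComponents(source, sink);   // TokenStreamComponents(Tokenizer, TokenStream)
      }

      @Override
      protected TokenStream normalize(String fieldName, TokenStream in) {
        return new LowerCaseFilter(in);                   // apply the same case folding at normalization time
      }
    }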
Uses of TokenStream in org.apache.lucene.analysis.standard
Subclasses of TokenStream in org.apache.lucene.analysis.standard

final class StandardTokenizer
    A grammar-based tokenizer constructed with JFlex.

Methods in org.apache.lucene.analysis.standard that return TokenStream

protected TokenStream StandardAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.standard with parameters of type TokenStream

protected TokenStream StandardAnalyzer.normalize(String fieldName, TokenStream in)
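StandardTokenizer can also be driven directly, without an Analyzer, by handing it a Reader. A minimal sketch; the sample text is arbitrary.

    import java.io.IOException;
    import java.io.StringReader;
    import org.apache.lucene.analysis.standard.StandardTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class StandardTokenizerDemo {
      public static void main(String[] args) throws IOException {
        try (StandardTokenizer tokenizer = new StandardTokenizer()) {
          tokenizer.setReader(new StringReader("Lucene's StandardTokenizer follows UAX #29."));
          CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
          tokenizer.reset();
          while (tokenizer.incrementToken()) {
            System.out.println(term);      // one Unicode word-break token per line
          }
          tokenizer.end();
        }
      }
    }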
Uses of TokenStream in org.apache.lucene.document
Methods in org.apache.lucene.document that return TokenStream

TokenStream FeatureField.tokenStream(Analyzer analyzer, TokenStream reuse)
TokenStream Field.tokenStream(Analyzer analyzer, TokenStream reuse)
TokenStream ShapeDocValuesField.tokenStream(Analyzer analyzer, TokenStream reuse)
    TokenStreams are not yet supported.
TokenStream Field.tokenStreamValue()
    The TokenStream for this field to be used when indexing, or null.

Methods in org.apache.lucene.document with parameters of type TokenStream

void Field.setTokenStream(TokenStream tokenStream)
    Expert: sets the token stream to be used for indexing.
TokenStream FeatureField.tokenStream(Analyzer analyzer, TokenStream reuse)
TokenStream Field.tokenStream(Analyzer analyzer, TokenStream reuse)
TokenStream ShapeDocValuesField.tokenStream(Analyzer analyzer, TokenStream reuse)
    TokenStreams are not yet supported.

Constructors in org.apache.lucene.document with parameters of type TokenStream

Field(String name, TokenStream tokenStream, IndexableFieldType type)
    Create field with TokenStream value.
TextField(String name, TokenStream stream)
    Creates a new un-stored TextField with TokenStream value.
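A field can carry a pre-built TokenStream instead of plain text, which bypasses the IndexWriter's analyzer for that field. The following is a sketch using the TextField(String, TokenStream) constructor listed above; the field name and text are placeholders, and the stream here simply comes from a StandardAnalyzer.

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.TextField;

    public class PreAnalyzedFieldDemo {
      public static void main(String[] args) {
        Analyzer analyzer = new StandardAnalyzer();
        // Analyze the text up front rather than letting IndexWriter run the analyzer.
        TokenStream stream = analyzer.tokenStream("body", "pre-analyzed field content");

        Document doc = new Document();
        // Un-stored field whose indexed value is the TokenStream itself.
        doc.add(new TextField("body", stream));
        // When doc is passed to IndexWriter.addDocument, the writer consumes
        // (and closes) this stream instead of analyzing a string value.
      }
    }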
Uses of TokenStream in org.apache.lucene.index
Methods in org.apache.lucene.index that return TokenStream

TokenStream IndexableField.tokenStream(Analyzer analyzer, TokenStream reuse)
    Creates the TokenStream used for indexing this field.

Methods in org.apache.lucene.index with parameters of type TokenStream

TokenStream IndexableField.tokenStream(Analyzer analyzer, TokenStream reuse)
    Creates the TokenStream used for indexing this field.
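IndexableField.tokenStream is what the indexing chain calls for each field; the reuse argument lets the analyzer recycle the previous field's components. The sketch below calls it directly for illustration, assuming a TextField with a string value; the field name and text are placeholders.

    import java.io.IOException;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexableField;

    public class FieldTokenStreamDemo {
      public static void main(String[] args) throws IOException {
        try (Analyzer analyzer = new StandardAnalyzer()) {
          IndexableField field = new TextField("body", "text to index", Field.Store.NO);
          // IndexWriter calls this for every indexed field; passing the previous
          // field's stream as "reuse" allows component reuse, and null is always safe.
          try (TokenStream ts = field.tokenStream(analyzer, /* reuse */ null)) {
            ts.reset();
            while (ts.incrementToken()) {
              // read attributes here, e.g. CharTermAttribute
            }
            ts.end();
          }
        }
      }
    }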
Uses of TokenStream in org.apache.lucene.util
Methods in org.apache.lucene.util with parameters of type TokenStream

protected Query QueryBuilder.analyzeBoolean(String field, TokenStream stream)
    Creates simple boolean query from the cached tokenstream contents.
protected Query QueryBuilder.analyzeGraphBoolean(String field, TokenStream source, BooleanClause.Occur operator)
    Creates a boolean query from a graph token stream.
protected Query QueryBuilder.analyzeGraphPhrase(TokenStream source, String field, int phraseSlop)
    Creates graph phrase query from the tokenstream contents.
protected Query QueryBuilder.analyzeMultiBoolean(String field, TokenStream stream, BooleanClause.Occur operator)
    Creates complex boolean query from the cached tokenstream contents.
protected Query QueryBuilder.analyzeMultiPhrase(String field, TokenStream stream, int slop)
    Creates complex phrase query from the cached tokenstream contents.
protected Query QueryBuilder.analyzePhrase(String field, TokenStream stream, int slop)
    Creates simple phrase query from the cached tokenstream contents.
protected Query QueryBuilder.analyzeTerm(String field, TokenStream stream)
    Creates simple term query from the cached tokenstream contents.
protected Query QueryBuilder.createFieldQuery(TokenStream source, BooleanClause.Occur operator, String field, boolean quoted, int phraseSlop)
    Creates a query from a token stream.
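The protected analyze* methods above are the internals behind QueryBuilder's public factory methods, which analyze query text into a TokenStream and then build a Query of the matching shape. A sketch using the public API, assuming a StandardAnalyzer; the field name and query text are placeholders.

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.util.QueryBuilder;

    public class QueryBuilderDemo {
      public static void main(String[] args) {
        QueryBuilder builder = new QueryBuilder(new StandardAnalyzer());

        // Analyzes "quick brown fox" and, depending on the token stream's shape,
        // dispatches internally to analyzeTerm / analyzeBoolean / analyzeGraph*.
        Query conjunction =
            builder.createBooleanQuery("body", "quick brown fox", BooleanClause.Occur.MUST);

        // Same analysis, but positions are kept so a (multi-)phrase query is built.
        Query phrase = builder.createPhraseQuery("body", "quick brown fox");

        System.out.println(conjunction);
        System.out.println(phrase);
      }
    }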
Uses of TokenStream in org.apache.lucene.util.graph
Methods in org.apache.lucene.util.graph that return types with arguments of type TokenStream

Iterator<TokenStream> GraphTokenStreamFiniteStrings.getFiniteStrings()
    Get all finite strings from the automaton.
Iterator<TokenStream> GraphTokenStreamFiniteStrings.getFiniteStrings(int startState, int endState)
    Get all finite strings that start at startState and end at endState.

Constructors in org.apache.lucene.util.graph with parameters of type TokenStream

GraphTokenStreamFiniteStrings(TokenStream in)
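GraphTokenStreamFiniteStrings treats a token stream as a graph (a stream containing multi-token synonyms has side paths) and getFiniteStrings() enumerates each path as its own TokenStream. The following is a rough sketch of that enumeration, caching the source tokens first in the way QueryBuilder does; the analyzer, field name, and text are placeholders, and a plain StandardAnalyzer stream yields only a single path.

    import java.io.IOException;
    import java.util.Iterator;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.CachingTokenFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.util.graph.GraphTokenStreamFiniteStrings;

    public class GraphPathsDemo {
      public static void main(String[] args) throws IOException {
        try (Analyzer analyzer = new StandardAnalyzer();
             TokenStream source = analyzer.tokenStream("body", "fast wifi network");
             CachingTokenFilter cached = new CachingTokenFilter(source)) {
          cached.reset();
          while (cached.incrementToken()) {
            // first pass only fills the cache so the stream can be replayed
          }

          cached.reset();  // rewind the cached tokens for the graph builder
          GraphTokenStreamFiniteStrings graph = new GraphTokenStreamFiniteStrings(cached);

          Iterator<TokenStream> paths = graph.getFiniteStrings();
          while (paths.hasNext()) {
            TokenStream path = paths.next();
            CharTermAttribute term = path.addAttribute(CharTermAttribute.class);
            path.reset();
            StringBuilder sb = new StringBuilder();
            while (path.incrementToken()) {
              sb.append(term).append(' ');
            }
            path.end();
            System.out.println(sb.toString().trim());  // one line per path through the graph
          }
        }
      }
    }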