Comments on CodeBits - Tested Complex Code!: Installation and Configuration of LocalSolr

David Craft (2009-12-14):
I've recently got LocalSolr talking nicely to Solr 1.4. Here's my step-by-step:
http://craftyfella.blogspot.com/2009/12/installing-localsolr-onto-solr-14.html

Anonymous (2009-04-14):
Hello,
I have configure localsolr using given step...Hello,<br />I have configure localsolr using given steps, unfortunately it is not working for me..<br />it not giving any exception in log.<br />below is the schema.xml and solrconfig.xml<br /><B>schema.xml</B> <?xml version="1.0" encoding="UTF-8" ?><br /> <!--<br /> Licensed to the Apache Software Foundation (ASF) under one or more<br /> contributor license agreements. See the NOTICE file distributed with<br /> this work for additional information regarding copyright ownership.<br /> The ASF licenses this file to You under the Apache License, Version 2.0<br /> (the "License"); you may not use this file except in compliance with<br /> the License. You may obtain a copy of the License at<br /> http://www.apache.org/licenses/LICENSE-2.0<br /> Unless required by applicable law or agreed to in writing, software<br /> distributed under the License is distributed on an "AS IS" BASIS,<br /> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br /> See the License for the specific language governing permissions and<br /> limitations under the License.<br /> --><br /><!-- <br /> This is the Solr schema file. This file should be named "schema.xml" and<br /> should be in the conf directory under the solr home<br /> (i.e. 
./solr/conf/schema.xml by default) <br /> or located where the classloader for the Solr webapp can find it.<br /> This example schema is the recommended starting point for users.<br /> It should be kept correct and concise, usable out-of-the-box.<br /> For more information, on how to customize this file, please see<br /> http://wiki.apache.org/solr/SchemaXml<br /> NOTE: this schema includes many optional features and should not<br /> be used for benchmarking.<br /> --><br /><schema name="example" version="1.2"><br /> <!-- attribute "name" is the name of this schema and is only used for display purposes.<br /> Applications should change this to reflect the nature of the search collection.<br /> version="1.2" is Solr's version number for the schema syntax and semantics. It should<br /> not normally be changed by applications.<br /> 1.0: multiValued attribute did not exist, all fields are multiValued by nature<br /> 1.1: multiValued attribute introduced, false by default <br /> 1.2: omitTf attribute introduced, true by default --><br /> <types><br /> <!-- field type definitions. The "name" attribute is<br /> just a label to be used by field definitions. The "class"<br /> attribute and any other attributes determine the real<br /> behavior of the fieldType.<br /> Class names starting with "solr" refer to java classes in the<br /> org.apache.solr.analysis package.<br /> --><br /> <!-- The StrField type is not analyzed, but indexed/stored verbatim. 
<br /> - StrField and TextField support an optional compressThreshold which<br /> limits compression (if enabled in the derived fields) to values which<br /> exceed a certain size (in characters).<br /> --><br /> <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/><br /> <!-- boolean type: "true" or "false" --><br /> <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/><br /> <!-- The optional sortMissingLast and sortMissingFirst attributes are<br /> currently supported on types that are sorted internally as strings.<br /> - If sortMissingLast="true", then a sort on this field will cause documents<br /> without the field to come after documents with the field,<br /> regardless of the requested sort order (asc or desc).<br /> - If sortMissingFirst="true", then a sort on this field will cause documents<br /> without the field to come before documents with the field,<br /> regardless of the requested sort order.<br /> - If sortMissingLast="false" and sortMissingFirst="false" (the default),<br /> then default lucene sorting will be used which places docs without the<br /> field first in an ascending sort and last in a descending sort.<br /> --> <br /><br /> <!-- numeric field types that store and index the text<br /> value verbatim (and hence don't support range queries, since the<br /> lexicographic ordering isn't equal to the numeric ordering) --><br /> <fieldType name="integer" class="solr.IntField" omitNorms="true"/><br /> <fieldType name="long" class="solr.LongField" omitNorms="true"/><br /> <fieldType name="float" class="solr.FloatField" omitNorms="true"/><br /> <fieldType name="double" class="solr.DoubleField" omitNorms="true"/><br /> <br /> <!-- Numeric field types that manipulate the value into<br /> a string value that isn't human-readable in its internal form,<br /> but with a lexicographic ordering the same as the numeric ordering,<br /> so that range queries work correctly. 
--><br /> <fieldType name="sint" class="solr.SortableIntField" sortMissingLast="true" omitNorms="true"/><br /> <fieldType name="slong" class="solr.SortableLongField" sortMissingLast="true" omitNorms="true"/><br /> <fieldType name="sfloat" class="solr.SortableFloatField" sortMissingLast="true" omitNorms="true"/><br /> <fieldType name="sdouble" class="solr.SortableDoubleField" sortMissingLast="true" omitNorms="true"/><br /><br /> <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and<br /> is a more restricted form of the canonical representation of dateTime<br /> http://www.w3.org/TR/xmlschema-2/#dateTime <br /> The trailing "Z" designates UTC time and is mandatory.<br /> Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z<br /> All other components are mandatory.<br /> Expressions can also be used to denote calculations that should be<br /> performed relative to "NOW" to determine the value, ie...<br /> NOW/HOUR<br /> ... Round to the start of the current hour<br /> NOW-1DAY<br /> ... Exactly 1 day prior to now<br /> NOW/DAY+6MONTHS+3DAYS<br /> ... 6 months and 3 days in the future from the start of<br /> the current day<br /> <br /> Consult the DateField javadocs for more information.<br /> --><br /> <fieldType name="date" class="solr.DateField" sortMissingLast="true" omitNorms="true"/><br /> <!--<br /> Numeric field types that manipulate the value into trie encoded strings which are not<br /> human readable in the internal form. Range searches on such fields use the fast Trie Range Queries<br /> which are much faster than range searches on the SortableNumberField types.<br /> For the fast range search to work, trie fields must be indexed. Trie fields are <b>not</b> sortable<br /> in numerical order. Also, they cannot be used in function queries. If one needs sorting as well as<br /> fast range search, one should create a copy field specifically for sorting. 
Same workaround is<br /> suggested for using trie fields in function queries as well.<br /> For each number being added to this field, multiple terms are generated as per the algorithm described in<br /> org.apache.lucene.search.trie package description. The possible number of terms depend on the precisionStep<br /> attribute and increase dramatically with higher precision steps (factor 2**precisionStep). The default<br /> value of precisionStep is 8.<br /> <br /> Note that if you use a precisionStep of 32 for int/float and 64 for long/double, then multiple terms<br /> will not be generated, range search will be no faster than any other number field,<br /> but sorting will be possible.<br /> --><br /> <fieldType name="tint" class="solr.TrieField" type="integer" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /> <fieldType name="tfloat" class="solr.TrieField" type="float" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /> <fieldType name="tlong" class="solr.TrieField" type="long" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /> <fieldType name="tdouble" class="solr.TrieField" type="double" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /> <fieldType name="tdouble4" class="solr.TrieField" type="double" precisionStep="4" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /> <!--<br /> This date field manipulates the value into a trie encoded strings for fast range searches. They follow the<br /> same format and semantics as the normal DateField and support the date math syntax except that they are<br /> not sortable and cannot be used in function queries.<br /> --><br /> <fieldType name="tdate" class="solr.TrieField" type="date" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /><br /> <!-- The "RandomSortField" is not used to store or search any<br /> data. 
You can declare fields of this type it in your schema<br /> to generate psuedo-random orderings of your docs for sorting <br /> purposes. The ordering is generated based on the field name <br /> and the version of the index, As long as the index version<br /> remains unchanged, and the same field name is reused,<br /> the ordering of the docs will be consistent. <br /> If you want differend psuedo-random orderings of documents,<br /> for the same version of the index, use a dynamicField and<br /> change the name<br /> --><br /> <fieldType name="random" class="solr.RandomSortField" indexed="true" /><br /> <!-- solr.TextField allows the specification of custom text analyzers<br /> specified as a tokenizer and a list of token filters. Different<br /> analyzers may be specified for indexing and querying.<br /> The optional positionIncrementGap puts space between multiple fields of<br /> this type on the same document, with the purpose of preventing false phrase<br /> matching across fields.<br /> For more info on customizing your analyzer chain, please see<br /> http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters<br /> --><br /> <!-- One can also specify an existing Analyzer class that has a<br /> default constructor via the class attribute on the analyzer element<br /> <fieldType name="text_greek" class="solr.TextField"><br /> <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/><br /> </fieldType><br /> --><br /> <!-- A text field that only splits on whitespace for exact matching of words --><br /> <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100"><br /> <analyzer><br /> <tokenizer class="solr.WhitespaceTokenizerFactory"/><br /> </analyzer><br /> </fieldType><br /> <!-- A text field that uses WordDelimiterFilter to enable splitting and matching of<br /> words on case-change, alpha numeric boundaries, and non-alphanumeric chars,<br /> so that a query of "wifi" or "wi fi" could match a document containing "Wi-Fi".<br /> 
Synonyms and stopwords are customized by external files, and stemming is enabled.<br /> Duplicate tokens at the same position (which may result from Stemmed Synonyms or<br /> WordDelim parts) are removed.<br /> --><br /> <fieldType name="text" class="solr.TextField" positionIncrementGap="100"><br /> <analyzer type="index"><br /> <tokenizer class="solr.WhitespaceTokenizerFactory"/><br /> <!-- in this example, we will only use synonyms at query time<br /> <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/><br /> --><br /> <!-- Case insensitive stop word removal.<br /> add enablePositionIncrements=true in both the index and query<br /> analyzers to leave a 'gap' for more accurate phrase queries.<br /> --><br /> <filter class="solr.StopFilterFactory"<br /> ignoreCase="true"<br /> words="stopwords.txt"<br /> enablePositionIncrements="true"<br /> /><br /> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/><br /> <filter class="solr.LowerCaseFilterFactory"/><br /> <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/><br /> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/><br /> </analyzer><br /> <analyzer type="query"><br /> <tokenizer class="solr.WhitespaceTokenizerFactory"/><br /> <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/><br /> <filter class="solr.StopFilterFactory"<br /> ignoreCase="true"<br /> words="stopwords.txt"<br /> enablePositionIncrements="true"<br /> /><br /> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/><br /> <filter class="solr.LowerCaseFilterFactory"/><br /> <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/><br 
/> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/><br /> </analyzer><br /> </fieldType><br /> <!-- Less flexible matching, but less false matches. Probably not ideal for product names,<br /> but may be good for SKUs. Can insert dashes in the wrong place and still match. --><br /> <fieldType name="textTight" class="solr.TextField" positionIncrementGap="100" ><br /> <analyzer><br /> <tokenizer class="solr.WhitespaceTokenizerFactory"/><br /> <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/><br /> <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/><br /> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/><br /> <filter class="solr.LowerCaseFilterFactory"/><br /> <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/><br /> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/><br /> </analyzer><br /> </fieldType><br /> <!--<br /> Setup simple analysis for spell checking<br /> --><br /> <fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100" ><br /> <analyzer><br /> <tokenizer class="solr.StandardTokenizerFactory"/><br /> <filter class="solr.LowerCaseFilterFactory"/><br /> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/><br /> </analyzer><br /> </fieldType><br /> <!-- charFilter + "CharStream aware" WhitespaceTokenizer --><br /> <!--<br /> <fieldType name="textCharNorm" class="solr.TextField" positionIncrementGap="100" ><br /> <analyzer><br /> <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/><br /> <tokenizer class="solr.CharStreamAwareWhitespaceTokenizerFactory"/><br /> </analyzer><br /> </fieldType><br /> --><br /> <!-- This is an example of using the KeywordTokenizer along<br /> With various TokenFilterFactories to produce a sortable field<br /> that does not include 
some properties of the source text<br /> --><br /> <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true"><br /> <analyzer><br /> <!-- KeywordTokenizer does no actual tokenizing, so the entire<br /> input string is preserved as a single token<br /> --><br /> <tokenizer class="solr.KeywordTokenizerFactory"/><br /> <!-- The LowerCase TokenFilter does what you expect, which can be<br /> when you want your sorting to be case insensitive<br /> --><br /> <filter class="solr.LowerCaseFilterFactory" /><br /> <!-- The TrimFilter removes any leading or trailing whitespace --><br /> <filter class="solr.TrimFilterFactory" /><br /> <!-- The PatternReplaceFilter gives you the flexibility to use<br /> Java Regular expression to replace any sequence of characters<br /> matching a pattern with an arbitrary replacement string, <br /> which may include back refrences to portions of the orriginal<br /> string matched by the pattern.<br /> <br /> See the Java Regular Expression documentation for more<br /> infomation on pattern and replacement string syntax.<br /> <br /> http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/package-summary.html<br /> --><br /> <filter class="solr.PatternReplaceFilterFactory"<br /> pattern="([^a-z])" replacement="" replace="all"<br /> /><br /> </analyzer><br /> </fieldType><br /> <br /> <fieldtype name="phonetic" stored="false" indexed="true" class="solr.TextField" ><br /> <analyzer><br /> <tokenizer class="solr.StandardTokenizerFactory"/><br /> <filter class="solr.DoubleMetaphoneFilterFactory" inject="false"/><br /> </analyzer><br /> </fieldtype> <br /><br /> <!-- since fields of this type are by default not stored or indexed, any data added to <br /> them will be ignored outright <br /> --> <br /> <fieldtype name="ignored" stored="false" indexed="false" class="solr.StrField" /> <br /> </types><br /><br /> <fields> <br /> <!-- general --><br /> <field name="id" type="integer" indexed="true" stored="true" 
required="true"/><br /> <field name="name" type="alphaOnlySort" indexed="true" stored="true" required="true"/><br /> <field name="text" type="alphaOnlySort" indexed="true" stored="true" required="true"/><br /> <field name="lat" type="sdouble" indexed="true" stored="true"/><br /> <field name="lng" type="sdouble" indexed="true" stored="true"/><br /> <dynamicField name="_local*" type="sdouble" indexed="true" stored="true"/> <br /> </fields><br /> <!-- Field to use to determine and enforce document uniqueness. <br /> Unless this field is marked with required="false", it will be a required field<br /> --><br /> <!-- field to use to determine and enforce document uniqueness. --><br /> <uniqueKey>id</uniqueKey><br /> <!-- field for the QueryParser to use when an explicit fieldname is absent --><br /> <defaultSearchField>name</defaultSearchField><br /> <!-- SolrQueryParser configuration: defaultOperator="AND|OR" --><br /> <solrQueryParser defaultOperator="OR"/><br /> </schema><br /><B>solrconfig.xml</B> <?xml version="1.0" encoding="UTF-8" ?><br /> <!--<br /> Licensed to the Apache Software Foundation (ASF) under one or more<br /> contributor license agreements. See the NOTICE file distributed with<br /> this work for additional information regarding copyright ownership.<br /> The ASF licenses this file to You under the Apache License, Version 2.0<br /> (the "License"); you may not use this file except in compliance with<br /> the License. 
You may obtain a copy of the License at<br /> http://www.apache.org/licenses/LICENSE-2.0<br /> Unless required by applicable law or agreed to in writing, software<br /> distributed under the License is distributed on an "AS IS" BASIS,<br /> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br /> See the License for the specific language governing permissions and<br /> limitations under the License.<br /> --><br /><config><br /> <!-- Set this to 'false' if you want solr to continue working after it has <br /> encountered an severe configuration error. In a production environment, <br /> you may want solr to keep working even if one handler is mis-configured.<br /> You may also set this to false using by setting the system property:<br /> -Dsolr.abortOnConfigurationError=false<br /> --><br /> <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError><br /> <!-- Used to specify an alternate directory to hold all index data<br /> other than the default ./data under the Solr home.<br /> If replication is in use, this should match the replication configuration. --><br /> <!-- dataDir>/mnt/htdocs/apache-tomcat-6.0.18/solr/data</dataDir --><br /><br /> <indexDefaults><br /> <!-- Values here affect all index writers and act as a default unless overridden. 
--><br /> <useCompoundFile>false</useCompoundFile><br /> <mergeFactor>10000</mergeFactor><br /> <!--<br /> If both ramBufferSizeMB and maxBufferedDocs is set, then Lucene will flush based on whichever limit is hit first.<br /> --><br /> <!--<maxBufferedDocs>1000</maxBufferedDocs>--><br /> <!-- Tell Lucene when to flush documents to disk.<br /> Giving Lucene more memory for indexing means faster indexing at the cost of more RAM<br /> If both ramBufferSizeMB and maxBufferedDocs is set, then Lucene will flush based on whichever limit is hit first.<br /> --><br /> <ramBufferSizeMB>512</ramBufferSizeMB><br /> <maxMergeDocs>2147483647</maxMergeDocs><br /> <maxFieldLength>10000</maxFieldLength><br /> <writeLockTimeout>1000</writeLockTimeout><br /> <commitLockTimeout>10000</commitLockTimeout><br /> <!--<br /> Expert: Turn on Lucene's auto commit capability.<br /> This causes intermediate segment flushes to write a new lucene<br /> index descriptor, enabling it to be opened by an external<br /> IndexReader.<br /> NOTE: Despite the name, this value does not have any relation to Solr's autoCommit functionality<br /> --><br /> <!--<luceneAutoCommit>false</luceneAutoCommit>--><br /> <!--<br /> Expert:<br /> The Merge Policy in Lucene controls how merging is handled by Lucene. The default in 2.3 is the LogByteSizeMergePolicy, previous<br /> versions used LogDocMergePolicy.<br /> LogByteSizeMergePolicy chooses segments to merge based on their size. The Lucene 2.2 default, LogDocMergePolicy chose when<br /> to merge based on number of documents<br /> Other implementations of MergePolicy must have a no-argument constructor<br /> --><br /> <!--<mergePolicy>org.apache.lucene.index.LogByteSizeMergePolicy</mergePolicy>--><br /> <!--<br /> Expert:<br /> The Merge Scheduler in Lucene controls how merges are performed. The ConcurrentMergeScheduler (Lucene 2.3 default)<br /> can perform merges in the background using separate threads. 
The SerialMergeScheduler (Lucene 2.2 default) does not.<br /> --><br /> <!--<mergeScheduler>org.apache.lucene.index.ConcurrentMergeScheduler</mergeScheduler>--><br /> <!--<br /> This option specifies which Lucene LockFactory implementation to use.<br /> <br /> single = SingleInstanceLockFactory - suggested for a read-only index<br /> or when there is no possibility of another process trying<br /> to modify the index.<br /> native = NativeFSLockFactory<br /> simple = SimpleFSLockFactory<br /> (For backwards compatibility with Solr 1.2, 'simple' is the default<br /> if not specified.)<br /> --><br /> <lockType>single</lockType><br /> </indexDefaults><br /> <mainIndex><br /> <!-- options specific to the main on-disk lucene index --><br /> <useCompoundFile>false</useCompoundFile><br /> <ramBufferSizeMB>512</ramBufferSizeMB><br /> <mergeFactor>10</mergeFactor><br /> <!-- Deprecated --><br /> <!--<maxBufferedDocs>1000</maxBufferedDocs>--><br /> <maxMergeDocs>2147483647</maxMergeDocs><br /> <maxFieldLength>10000</maxFieldLength><br /> <!-- If true, unlock any held write or commit locks on startup. <br /> This defeats the locking mechanism that allows multiple<br /> processes to safely access a lucene index, and should be<br /> used with care.<br /> This is not needed if lock type is 'none' or 'single'<br /> --><br /> <unlockOnStartup>false</unlockOnStartup><br /> <!--<br /> Custom deletion policies can specified here. 
The class must<br /> implement org.apache.lucene.index.IndexDeletionPolicy.<br /> http://lucene.apache.org/java/2_3_2/api/org/apache/lucene/index/IndexDeletionPolicy.html<br /> The standard Solr IndexDeletionPolicy implementation supports deleting<br /> index commit points on number of commits, age of commit point and<br /> optimized status.<br /> The latest commit point should always be preserved regardless<br /> of the criteria.<br /> --><br /> <deletionPolicy class="solr.SolrDeletionPolicy"><br /> <!-- Keep only optimized commit points --><br /> <str name="keepOptimizedOnly">false</str><br /> <!-- The maximum number of commit points to be kept --><br /> <str name="maxCommitsToKeep">1</str><br /> <!--<br /> Delete all commit points once they have reached the given age.<br /> Supports DateMathParser syntax e.g.<br /> <br /> <str name="maxCommitAge">30MINUTES</str><br /> <str name="maxCommitAge">1DAY</str><br /> --><br /> </deletionPolicy><br /> </mainIndex><br /> <!-- Enables JMX if and only if an existing MBeanServer is found, use <br /> this if you want to configure JMX through JVM parameters. Remove<br /> this to disable exposing Solr configuration and statistics to JMX.<br /> If you want to connect to a particular server, specify the agentId<br /> e.g. <jmx agentId="myAgent" /><br /> If you want to start a new MBeanServer, specify the serviceUrl<br /> e.g <jmx serviceUrl="service:jmx:rmi:///jndi/rmi://localhost:9999/solr" /><br /> For more details see http://wiki.apache.org/solr/SolrJmx<br /> --><br /> <jmx /><br /> <!-- the default high-performance update handler --><br /> <updateHandler class="solr.DirectUpdateHandler2"><br /> <!-- A prefix of "solr." 
for class names is an alias that<br /> causes solr to search appropriate packages, including<br /> org.apache.solr.(search|update|request|core|analysis)<br /> --><br /> <!-- Perform a <commit/> automatically under certain conditions:<br /> maxDocs - number of updates since last commit is greater than this<br /> maxTime - oldest uncommited update (in ms) is this long ago<br /> <autoCommit> <br /> <maxDocs>10000</maxDocs><br /> <maxTime>1000</maxTime> <br /> </autoCommit><br /> --><br /> <!-- The RunExecutableListener executes an external command.<br /> exe - the name of the executable to run<br /> dir - dir to use as the current working directory. default="."<br /> wait - the calling thread waits until the executable returns. default="true"<br /> args - the arguments to pass to the program. default=nothing<br /> env - environment variables to set. default=nothing<br /> --><br /> <!-- A postCommit event is fired after every commit or optimize command<br /> <listener event="postCommit" class="solr.RunExecutableListener"><br /> <str name="exe">solr/bin/snapshooter</str><br /> <str name="dir">.</str><br /> <bool name="wait">true</bool><br /> <arr name="args"> <str>arg1</str> <str>arg2</str> </arr><br /> <arr name="env"> <str>MYVAR=val1</str> </arr><br /> </listener><br /> --><br /> <!-- A postOptimize event is fired only after every optimize command, useful<br /> in conjunction with index distribution to only distribute optimized indicies <br /> <listener event="postOptimize" class="solr.RunExecutableListener"><br /> <str name="exe">snapshooter</str><br /> <str name="dir">solr/bin</str><br /> <bool name="wait">true</bool><br /> </listener><br /> --><br /> </updateHandler><br /><br /> <query><br /> <!-- Maximum number of clauses in a boolean query... can affect<br /> range or prefix queries that expand to big boolean<br /> queries. An exception is thrown if exceeded. 
--><br /> <maxBooleanClauses>1024</maxBooleanClauses><br /><br /> <!-- There are two implementations of cache available for Solr,<br /> LRUCache, based on a synchronized LinkedHashMap, and<br /> FastLRUCache, based on a ConcurrentHashMap. FastLRUCache has faster gets<br /> and slower puts in single threaded operation and thus is generally faster<br /> than LRUCache when the hit ratio of the cache is high (> 75%), and may be<br /> faster under other scenarios on multi-cpu systems. --><br /> <!-- Cache used by SolrIndexSearcher for filters (DocSets),<br /> unordered sets of *all* documents that match a query.<br /> When a new searcher is opened, its caches may be prepopulated<br /> or "autowarmed" using data from caches in the old searcher.<br /> autowarmCount is the number of items to prepopulate. For LRUCache,<br /> the autowarmed items will be the most recently accessed items.<br /> Parameters:<br /> class - the SolrCache implementation LRUCache or FastLRUCache<br /> size - the maximum number of entries in the cache<br /> initialSize - the initial capacity (number of entries) of<br /> the cache. (seel java.util.HashMap)<br /> autowarmCount - the number of entries to prepopulate from<br /> and old cache.<br /> --><br /> <filterCache<br /> class="solr.FastLRUCache"<br /> size="512"<br /> initialSize="512"<br /> autowarmCount="128"/><br /> <!-- Cache used to hold field values that are quickly accessible<br /> by document id. The fieldValueCache is created by default<br /> even if not configured here.<br /> <fieldValueCache<br /> class="solr.FastLRUCache"<br /> size="512"<br /> autowarmCount="128"<br /> showItems="32"<br /> /><br /> --><br /> <!-- queryResultCache caches results of searches - ordered lists of<br /> document ids (DocList) based on a query, a sort, and the range<br /> of documents requested. 
--><br /> <queryResultCache<br /> class="solr.LRUCache"<br /> size="512"<br /> initialSize="512"<br /> autowarmCount="32"/><br /> <!-- documentCache caches Lucene Document objects (the stored fields for each document).<br /> Since Lucene internal document ids are transient, this cache will not be autowarmed. --><br /> <documentCache<br /> class="solr.LRUCache"<br /> size="512"<br /> initialSize="512"<br /> autowarmCount="0"/><br /> <!-- If true, stored fields that are not requested will be loaded lazily.<br /> This can result in a significant speed improvement if the usual case is to<br /> not load all stored fields, especially if the skipped fields are large compressed<br /> text fields.<br /> --><br /> <enableLazyFieldLoading>true</enableLazyFieldLoading><br /> <!-- Example of a generic cache. These caches may be accessed by name<br /> through SolrIndexSearcher.getCache(),cacheLookup(), and cacheInsert().<br /> The purpose is to enable easy caching of user/application level data.<br /> The regenerator argument should be specified as an implementation<br /> of solr.search.CacheRegenerator if autowarming is desired. --><br /> <!--<br /> <cache name="myUserCache"<br /> class="solr.LRUCache"<br /> size="4096"<br /> initialSize="1024"<br /> autowarmCount="1024"<br /> regenerator="org.mycompany.mypackage.MyRegenerator"<br /> /><br /> --><br /> <!-- An optimization that attempts to use a filter to satisfy a search.<br /> If the requested sort does not include score, then the filterCache<br /> will be checked for a filter matching the query. If found, the filter<br /> will be used as the source of document ids, and then the sort will be<br /> applied to that.<br /> <useFilterForSortedQuery>true</useFilterForSortedQuery><br /> --><br /> <!-- An optimization for use with the queryResultCache. When a search<br /> is requested, a superset of the requested number of document ids<br /> are collected. 
For example, if a search for a particular query<br /> requests matching documents 10 through 19, and queryWindowSize is 50,<br /> then documents 0 through 49 will be collected and cached. Any further<br /> requests in that range can be satisfied via the cache. --><br /> <queryResultWindowSize>50</queryResultWindowSize><br /> <!-- Maximum number of documents to cache for any entry in the<br /> queryResultCache. --><br /> <queryResultMaxDocsCached>200</queryResultMaxDocsCached><br /> <!-- This entry enables an int hash representation for filters (DocSets)<br /> when the number of items in the set is less than maxSize. For smaller<br /> sets, this representation is more memory efficient, more efficient to<br /> iterate over, and faster to take intersections. --><br /> <HashDocSet maxSize="3000" loadFactor="0.75"/><br /> <!-- a newSearcher event is fired whenever a new searcher is being prepared<br /> and there is a current searcher handling requests (aka registered). --><br /> <!-- QuerySenderListener takes an array of NamedList and executes a<br /> local query request for each NamedList in sequence. --><br /> <listener event="newSearcher" class="solr.QuerySenderListener"><br /> <arr name="queries"><br /> <lst> <str name="q">solr</str> <str name="start">0</str> <str name="rows">10</str> </lst><br /> <lst> <str name="q">rocks</str> <str name="start">0</str> <str name="rows">10</str> </lst><br /> <lst><str name="q">static newSearcher warming query from solrconfig.xml</str></lst><br /> </arr><br /> </listener><br /> <!-- a firstSearcher event is fired whenever a new searcher is being<br /> prepared but there is no current registered searcher to handle<br /> requests or to gain autowarming data from. 
--><br /> <listener event="firstSearcher" class="solr.QuerySenderListener"><br /> <arr name="queries"><br /> <lst> <str name="q">fast_warm</str> <str name="start">0</str> <str name="rows">10</str> </lst><br /> <lst><str name="q">static firstSearcher warming query from solrconfig.xml</str></lst><br /> </arr><br /> </listener><br /> <!-- If a search request comes in and there is no current registered searcher,<br /> then immediately register the still warming searcher and use it. If<br /> "false" then all requests will block until the first searcher is done<br /> warming. --><br /> <useColdSearcher>false</useColdSearcher><br /> <!-- Maximum number of searchers that may be warming in the background<br /> concurrently. An error is returned if this limit is exceeded. Recommend<br /> 1-2 for read-only slaves, higher for masters w/o cache warming. --><br /> <maxWarmingSearchers>2</maxWarmingSearchers><br /> </query><br /> <!-- <br /> Let the dispatch filter handle /select?qt=XXX<br /> handleSelect=true will use consistent error handling for /select and /update<br /> handleSelect=false will use solr1.1 style error formatting<br /> --><br /> <requestDispatcher handleSelect="true" ><br /> <!--Make sure your system has some authentication before enabling remote streaming! 
--><br /> <requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048000" /><br /> <!-- Set HTTP caching related parameters (for proxy caches and clients).<br /> <br /> To get the behaviour of Solr 1.2 (ie: no caching related headers)<br /> use the never304="true" option and do not specify a value for<br /> <cacheControl><br /> --><br /> <!-- <httpCaching never304="true"> --><br /> <httpCaching lastModifiedFrom="openTime"<br /> etagSeed="Solr"><br /> <!-- lastModFrom="openTime" is the default, the Last-Modified value<br /> (and validation against If-Modified-Since requests) will all be<br /> relative to when the current Searcher was opened.<br /> You can change it to lastModFrom="dirLastMod" if you want the<br /> value to exactly correspond to when the physical index was last<br /> modified.<br /> etagSeed="..." is an option you can change to force the ETag<br /> header (and validation against If-None-Match requests) to be<br /> different even if the index has not changed (ie: when making<br /> significant changes to your config file)<br /> lastModifiedFrom and etagSeed are both ignored if you use the<br /> never304="true" option.<br /> --><br /> <!-- If you include a <cacheControl> directive, it will be used to<br /> generate a Cache-Control header, as well as an Expires header<br /> if the value contains "max-age="<br /> By default, no Cache-Control header is generated.<br /> You can use the <cacheControl> option even if you have set<br /> never304="true"<br /> --><br /> <!-- <cacheControl>max-age=30, public</cacheControl> --><br /> </httpCaching><br /> </requestDispatcher><br /><br /> <!-- requestHandler plugins... incoming queries will be dispatched to the<br /> correct handler based on the path or the qt (query type) param.<br /> Names starting with a '/' are accessed with a path equal to the <br /> registered name. 
Names without a leading '/' are accessed with:<br /> http://host/app/select?qt=name<br /> If no qt is defined, the requestHandler that declares default="true"<br /> will be used.<br /> --><br /> <requestHandler name="standard" class="solr.SearchHandler" default="true"><br /> <!-- default values for query parameters --><br /> <lst name="defaults"><br /> <str name="echoParams">explicit</str><br /> <!--<br /> <int name="rows">10</int><br /> <str name="fl">*</str><br /> <str name="version">2.1</str><br /> --><br /> </lst><br /> </requestHandler><br /><!-- Please refer to http://wiki.apache.org/solr/SolrReplication for details on configuring replication --><br /> <!--Master config--><br /> <!--<br /> <requestHandler name="/replication" class="solr.ReplicationHandler" ><br /> <lst name="master"><br /> <str name="replicateAfter">commit</str><br /> <str name="confFiles">schema.xml,stopwords.txt</str><br /> </lst><br /> </requestHandler><br /> --><br /> <!-- Slave config--><br /> <!--<br /> <requestHandler name="/replication" class="solr.ReplicationHandler"><br /> <lst name="slave"><br /> <str name="masterUrl">http://localhost:8983/solr/replication</str><br /> <str name="pollInterval">00:00:60</str> <br /> </lst><br /> </requestHandler><br /> --><br /> <!-- DisMaxRequestHandler allows easy searching across multiple fields<br /> for simple user-entered phrases. Its implementation is now<br /> just the standard SearchHandler with a default query type<br /> of "dismax". 
<br /> see http://wiki.apache.org/solr/DisMaxRequestHandler<br /> --><br /> <requestHandler name="dismax" class="solr.SearchHandler" ><br /> <lst name="defaults"><br /> <str name="defType">dismax</str><br /> <str name="echoParams">explicit</str><br /> <float name="tie">0.01</float><br /> <str name="qf"><br /> text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4<br /> </str><br /> <str name="pf"><br /> text^0.2 features^1.1 name^1.5 manu^1.4 manu_exact^1.9<br /> </str><br /> <str name="bf"><br /> ord(popularity)^0.5 recip(rord(price),1,1000,1000)^0.3<br /> </str><br /> <str name="fl"><br /> id,name,price,score<br /> </str><br /> <str name="mm"><br /> 2&lt;-1 5&lt;-2 6&lt;90%<br /> </str><br /> <int name="ps">100</int><br /> <str name="q.alt">*:*</str><br /> <!-- example highlighter config, enable per-query with hl=true --><br /> <str name="hl.fl">text features name</str><br /> <!-- for this field, we want no fragmenting, just highlighting --><br /> <str name="f.name.hl.fragsize">0</str><br /> <!-- instructs Solr to return the field itself if no query terms are<br /> found --><br /> <str name="f.name.hl.alternateField">name</str><br /> <str name="f.text.hl.fragmenter">regex</str> <!-- defined below --><br /> </lst><br /> </requestHandler><br /> <!-- Note how you can register the same handler multiple times with<br /> different names (and different init parameters)<br /> --><br /> <requestHandler name="partitioned" class="solr.SearchHandler" ><br /> <lst name="defaults"><br /> <str name="defType">dismax</str><br /> <str name="echoParams">explicit</str><br /> <str name="qf">text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0</str><br /> <str name="mm">2&lt;-1 5&lt;-2 6&lt;90%</str><br /> <!-- This is an example of using Date Math to specify a constantly<br /> moving date range in a config...<br /> --><br /> <str name="bq">incubationdate_dt:[* TO NOW/DAY-1MONTH]^2.2</str><br /> </lst><br /> <!-- In addition to defaults, "appends" params can be specified<br /> to 
identify values which should be appended to the list of<br /> multi-val params from the query (or the existing "defaults").<br /> In this example, the param "fq=instock:true" will be appended to<br /> any query time fq params the user may specify, as a mechanism for<br /> partitioning the index, independent of any user selected filtering<br /> that may also be desired (perhaps as a result of faceted searching).<br /> NOTE: there is *absolutely* nothing a client can do to prevent these<br /> "appends" values from being used, so don't use this mechanism<br /> unless you are sure you always want it.<br /> --><br /> <lst name="appends"><br /> <str name="fq">inStock:true</str><br /> </lst><br /> <!-- "invariants" are a way of letting the Solr maintainer lock down<br /> the options available to Solr clients. Any params values<br /> specified here are used regardless of what values may be specified<br /> in either the query, the "defaults", or the "appends" params.<br /> In this example, the facet.field and facet.query params are fixed,<br /> limiting the facets clients can use. 
Faceting is not turned on by<br /> default - but if the client does specify facet=true in the request,<br /> these are the only facets they will be able to see counts for;<br /> regardless of what other facet.field or facet.query params they<br /> may specify.<br /> NOTE: there is *absolutely* nothing a client can do to prevent these<br /> "invariants" values from being used, so don't use this mechanism<br /> unless you are sure you always want it.<br /> --><br /> <lst name="invariants"><br /> <str name="facet.field">cat</str><br /> <str name="facet.field">manu_exact</str><br /> <str name="facet.query">price:[* TO 500]</str><br /> <str name="facet.query">price:[500 TO *]</str><br /> </lst><br /> </requestHandler><br /><br /> <!--<br /> Search components are registered to SolrCore and used by Search Handlers<br /> <br /> By default, the following components are available:<br /> <br /> <searchComponent name="query" class="org.apache.solr.handler.component.QueryComponent" /><br /> <searchComponent name="facet" class="org.apache.solr.handler.component.FacetComponent" /><br /> <searchComponent name="mlt" class="org.apache.solr.handler.component.MoreLikeThisComponent" /><br /> <searchComponent name="highlight" class="org.apache.solr.handler.component.HighlightComponent" /><br /> <searchComponent name="stats" class="org.apache.solr.handler.component.StatsComponent" /><br /> <searchComponent name="debug" class="org.apache.solr.handler.component.DebugComponent" /><br /> <br /> Default configuration in a requestHandler would look like:<br /> <arr name="components"><br /> <str>query</str><br /> <str>facet</str><br /> <str>mlt</str><br /> <str>highlight</str><br /> <str>stats</str><br /> <str>debug</str><br /> </arr><br /> If you register a searchComponent to one of the standard names, that will be used instead.<br /> To insert components before or after the 'standard' components, use:<br /> <br /> <arr name="first-components"><br /> <str>myFirstComponentName</str><br /> 
</arr><br /> <br /> <arr name="last-components"><br /> <str>myLastComponentName</str><br /> </arr><br /> --><br /> <!-- The spell check component can return a list of alternative spelling<br /> suggestions. --><br /> <searchComponent name="spellcheck" class="solr.SpellCheckComponent"><br /> <str name="queryAnalyzerFieldType">textSpell</str><br /> <lst name="spellchecker"><br /> <str name="name">default</str><br /> <str name="field">spell</str><br /> <str name="spellcheckIndexDir">./spellchecker1</str><br /> </lst><br /> <lst name="spellchecker"><br /> <str name="name">jarowinkler</str><br /> <str name="field">spell</str><br /> <!-- Use a different Distance Measure --><br /> <str name="distanceMeasure">org.apache.lucene.search.spell.JaroWinklerDistance</str><br /> <str name="spellcheckIndexDir">./spellchecker2</str><br /> </lst><br /> <lst name="spellchecker"><br /> <str name="classname">solr.FileBasedSpellChecker</str><br /> <str name="name">file</str><br /> <str name="sourceLocation">spellings.txt</str><br /> <str name="characterEncoding">UTF-8</str><br /> <str name="spellcheckIndexDir">./spellcheckerFile</str><br /> </lst><br /> </searchComponent><br /> <!-- A request handler utilizing the spellcheck component. <br /> ################################################################################################<br /> NOTE: This is purely as an example. The whole purpose of the SpellCheckComponent is to hook it into<br /> the request handler that handles (i.e. 
the standard or dismax SearchHandler)<br /> queries such that a separate request is not needed to get suggestions.<br /> IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!<br /> ################################################################################################<br /> --><br /> <requestHandler name="/spellCheckCompRH" class="solr.SearchHandler"><br /> <lst name="defaults"><br /> <!-- omp = Only More Popular --><br /> <str name="spellcheck.onlyMorePopular">false</str><br /> <!-- exr = Extended Results --><br /> <str name="spellcheck.extendedResults">false</str><br /> <!-- The number of suggestions to return --><br /> <str name="spellcheck.count">1</str><br /> </lst><br /> <arr name="last-components"><br /> <str>spellcheck</str><br /> </arr><br /> </requestHandler><br /> <searchComponent name="tvComponent" class="org.apache.solr.handler.component.TermVectorComponent"/><br /> <!-- A Req Handler for working with the tvComponent. This is purely as an example.<br /> You will likely want to add the component to your already specified request handlers. 
--><br /> <requestHandler name="tvrh" class="org.apache.solr.handler.component.SearchHandler"><br /> <lst name="defaults"><br /> <bool name="tv">true</bool><br /> </lst><br /> <arr name="last-components"><br /> <str>tvComponent</str><br /> </arr><br /> </requestHandler><br /><!--<br /> <requestHandler name="/update/extract" class="org.apache.solr.handler.extraction.ExtractingRequestHandler"><br /> <lst name="defaults"><br /> <str name="ext.map.Last-Modified">last_modified</str><br /> <bool name="ext.ignore.und.fl">true</bool><br /> </lst><br /> </requestHandler><br /> --><br /> <br /> <searchComponent name="termsComp" class="org.apache.solr.handler.component.TermsComponent"/><br /> <requestHandler name="/autoSuggest" class="org.apache.solr.handler.component.SearchHandler"><br /> <arr name="components"><br /> <str>termsComp</str><br /> </arr><br /> </requestHandler><br /><br /> <!-- a search component that enables you to configure the top results for<br /> a given query regardless of the normal lucene scoring.--><br /> <searchComponent name="elevator" class="solr.QueryElevationComponent" ><br /> <!-- pick a fieldType to analyze queries --><br /> <str name="queryFieldType">string</str><br /> <str name="config-file">elevate.xml</str><br /> </searchComponent><br /> <!-- a request handler utilizing the elevator component --><br /> <requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy"><br /> <lst name="defaults"><br /> <str name="echoParams">explicit</str><br /> </lst><br /> <arr name="last-components"><br /> <str>elevator</str><br /> </arr><br /> </requestHandler><br /><br /> <!-- Update request handler. <br /> Note: Since solr1.1 requestHandlers requires a valid content type header if posted in<br /> the body. 
For example, curl now requires: -H 'Content-type:text/xml; charset=utf-8'<br /> The response format differs from solr1.1 formatting and returns a standard error code.<br /> To enable solr1.1 behavior, remove the /update handler or change its path<br /> --><br /> <requestHandler name="/update" class="solr.XmlUpdateRequestHandler" /><br /><br /> <requestHandler name="/update/javabin" class="solr.BinaryUpdateRequestHandler" /><br /> <!--<br /> Analysis request handler. Since Solr 1.3. Use to return how a document is analyzed. Useful<br /> for debugging and as a token server for other types of applications<br /> --><br /> <requestHandler name="/analysis" class="solr.AnalysisRequestHandler" /><br /><br /> <!-- CSV update handler, loaded on demand --><br /> <requestHandler name="/update/csv" class="solr.CSVRequestHandler" startup="lazy" /><br /><br /> <!-- <br /> Admin Handlers - This will register all the standard admin RequestHandlers. Adding <br /> this single handler is equivalent to registering:<br /> <br /> <requestHandler name="/admin/luke" class="org.apache.solr.handler.admin.LukeRequestHandler" /><br /> <requestHandler name="/admin/system" class="org.apache.solr.handler.admin.SystemInfoHandler" /><br /> <requestHandler name="/admin/plugins" class="org.apache.solr.handler.admin.PluginInfoHandler" /><br /> <requestHandler name="/admin/threads" class="org.apache.solr.handler.admin.ThreadDumpHandler" /><br /> <requestHandler name="/admin/properties" class="org.apache.solr.handler.admin.PropertiesRequestHandler" /><br /> <requestHandler name="/admin/file" class="org.apache.solr.handler.admin.ShowFileRequestHandler" ><br /> <br /> If you wish to hide files under ${solr.home}/conf, explicitly register the ShowFileRequestHandler using:<br /> <requestHandler name="/admin/file" class="org.apache.solr.handler.admin.ShowFileRequestHandler" ><br /> <lst name="invariants"><br /> <str name="hidden">synonyms.txt</str> <br /> <str name="hidden">anotherfile.txt</str> <br /> 
</lst><br /> </requestHandler><br /> --><br /> <requestHandler name="/admin/" class="org.apache.solr.handler.admin.AdminHandlers" /><br /> <!-- ping/healthcheck --><br /> <requestHandler name="/admin/ping" class="PingRequestHandler"><br /> <lst name="defaults"><br /> <str name="qt">standard</str><br /> <str name="q">solrpingquery</str><br /> <str name="echoParams">all</str><br /> </lst><br /> </requestHandler><br /> <!-- Echo the request contents back to the client --><br /> <requestHandler name="/debug/dump" class="solr.DumpRequestHandler" ><br /> <lst name="defaults"><br /> <str name="echoParams">explicit</str> <!-- for all params (including the default etc) use: 'all' --><br /> <str name="echoHandler">true</str><br /> </lst><br /> </requestHandler><br /> <highlighting><br /> <!-- Configure the standard fragmenter --><br /> <!-- This could most likely be commented out in the "default" case --><br /> <fragmenter name="gap" class="org.apache.solr.highlight.GapFragmenter" default="true"><br /> <lst name="defaults"><br /> <int name="hl.fragsize">100</int><br /> </lst><br /> </fragmenter><br /> <!-- A regular-expression-based fragmenter (f.i., for sentence extraction) --><br /> <fragmenter name="regex" class="org.apache.solr.highlight.RegexFragmenter"><br /> <lst name="defaults"><br /> <!-- slightly smaller fragsizes work better because of slop --><br /> <int name="hl.fragsize">70</int><br /> <!-- allow 50% slop on fragment sizes --><br /> <float name="hl.regex.slop">0.5</float><br /> <!-- a basic sentence pattern --><br /> <str name="hl.regex.pattern">[-\w ,/\n\"']{20,200}</str><br /> </lst><br /> </fragmenter><br /> <!-- Configure the standard formatter --><br /> <formatter name="html" class="org.apache.solr.highlight.HtmlFormatter" default="true"><br /> <lst name="defaults"><br /> <str name="hl.simple.pre"><![CDATA[<em>]]></str><br /> <str name="hl.simple.post"><![CDATA[</em>]]></str><br /> </lst><br /> </formatter><br /> </highlighting><br /> <!-- An example dedup 
update processor that creates the "id" field on the fly<br /> based on the hash code of some other fields. This example has overwriteDupes<br /> set to false since we are using the id field as the signatureField and Solr<br /> will maintain uniqueness based on that anyway. --><br /> <!--<br /> <updateRequestProcessorChain name="dedupe"><br /> <processor class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory"><br /> <bool name="enabled">true</bool><br /> <str name="signatureField">id</str><br /> <bool name="overwriteDupes">false</bool><br /> <str name="fields">name,features,cat</str><br /> <str name="signatureClass">org.apache.solr.update.processor.Lookup3Signature</str><br /> </processor><br /> <processor class="solr.LogUpdateProcessorFactory" /><br /> <processor class="solr.RunUpdateProcessorFactory" /><br /> </updateRequestProcessorChain><br /> --><br /><br /> <!-- queryResponseWriter plugins... query responses will be written using the<br /> writer specified by the 'wt' request parameter matching the name of a registered<br /> writer.<br /> The "default" writer is the default and will be used if 'wt' is not specified <br /> in the request. 
XMLResponseWriter will be used if nothing is specified here.<br /> The json, python, and ruby writers are also available by default.<br /> <queryResponseWriter name="xml" class="org.apache.solr.request.XMLResponseWriter" default="true"/><br /> <queryResponseWriter name="json" class="org.apache.solr.request.JSONResponseWriter"/><br /> <queryResponseWriter name="python" class="org.apache.solr.request.PythonResponseWriter"/><br /> <queryResponseWriter name="ruby" class="org.apache.solr.request.RubyResponseWriter"/><br /> <queryResponseWriter name="php" class="org.apache.solr.request.PHPResponseWriter"/><br /> <queryResponseWriter name="phps" class="org.apache.solr.request.PHPSerializedResponseWriter"/><br /> <queryResponseWriter name="custom" class="com.example.MyResponseWriter"/><br /> --><br /> <!-- XSLT response writer transforms the XML output by any xslt file found<br /> in Solr's conf/xslt directory. Changes to xslt files are checked for<br /> every xsltCacheLifetimeSeconds. <br /> --><br /> <queryResponseWriter name="xslt" class="org.apache.solr.request.XSLTResponseWriter"><br /> <int name="xsltCacheLifetimeSeconds">5</int><br /> </queryResponseWriter><br /> <queryResponseWriter name="php" class="org.apache.solr.request.PHPResponseWriter"/><br /> <queryResponseWriter name="phps" class="org.apache.solr.request.PHPSerializedResponseWriter"/><br /> <!-- example of registering a query parser<br /> <queryParser name="lucene" class="org.apache.solr.search.LuceneQParserPlugin"/><br /> --><br /> <!-- example of registering a custom function parser <br /> <valueSourceParser name="myfunc" class="com.mycompany.MyValueSourceParser" /><br /> --><br /> <!-- config for the admin interface --><br /> <admin><br /> <defaultQuery>solr</defaultQuery><br /> <!-- configure a healthcheck file for servers behind a loadbalancer<br /> <healthcheck type="file">server-enabled</healthcheck><br /> --><br /> </admin><br /><br /> <updateRequestProcessor><br /> <factory name="standard" 
class="solr.ChainedUpdateProcessorFactory" default="true"><br /> <chain class="com.pjaol.search.solr.update.LocalUpdateProcessorFactory"><br /> <str name="latField">lat</str><br /> <str name="lngField">lng</str><br /> <int name="startTier">9</int><br /> <int name="endTier">17</int><br /> </chain><br /> <chain class="solr.LogUpdateProcessorFactory" ><br /> <!-- <int name="maxNumToLog">100</int> --><br /> </chain><br /> <chain class="solr.RunUpdateProcessorFactory" /><br /> </factory><br /> </updateRequestProcessor><br /> <requestHandler name="geo" class="com.pjaol.search.solr.LocalSolrRequestHandler"><br /> <!-- Custom latitude longitude fields, below are the defaults if not otherwise<br /> specified --><br /> <str name="latField">lat</str><br /> <str name="lngField">lng</str><br /> </requestHandler><br /> </config><br />Can anyone help me?<br />Thanks,<br /> JimmyUnknownhttps://www.blogger.com/profile/00494352916110665409noreply@blogger.comtag:blogger.com,1999:blog-3207985.post-56818441116187891762009-04-14T11:10:00.000-04:00Hello,
I have configured LocalSolr using the given step...Hello,<br />I have configured LocalSolr using the given steps, but unfortunately it is not working for me.<br />It is not giving any exception in the log.<br />Below are the schema.xml and solrconfig.xml.<br /><B>schema.xml</B> <?xml version="1.0" encoding="UTF-8" ?><br /> <!--<br /> Licensed to the Apache Software Foundation (ASF) under one or more<br /> contributor license agreements. See the NOTICE file distributed with<br /> this work for additional information regarding copyright ownership.<br /> The ASF licenses this file to You under the Apache License, Version 2.0<br /> (the "License"); you may not use this file except in compliance with<br /> the License. You may obtain a copy of the License at<br /> http://www.apache.org/licenses/LICENSE-2.0<br /> Unless required by applicable law or agreed to in writing, software<br /> distributed under the License is distributed on an "AS IS" BASIS,<br /> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br /> See the License for the specific language governing permissions and<br /> limitations under the License.<br /> --><br /><!-- <br /> This is the Solr schema file. This file should be named "schema.xml" and<br /> should be in the conf directory under the solr home<br /> (i.e. 
./solr/conf/schema.xml by default) <br /> or located where the classloader for the Solr webapp can find it.<br /> This example schema is the recommended starting point for users.<br /> It should be kept correct and concise, usable out-of-the-box.<br /> For more information, on how to customize this file, please see<br /> http://wiki.apache.org/solr/SchemaXml<br /> NOTE: this schema includes many optional features and should not<br /> be used for benchmarking.<br /> --><br /><schema name="example" version="1.2"><br /> <!-- attribute "name" is the name of this schema and is only used for display purposes.<br /> Applications should change this to reflect the nature of the search collection.<br /> version="1.2" is Solr's version number for the schema syntax and semantics. It should<br /> not normally be changed by applications.<br /> 1.0: multiValued attribute did not exist, all fields are multiValued by nature<br /> 1.1: multiValued attribute introduced, false by default <br /> 1.2: omitTf attribute introduced, true by default --><br /> <types><br /> <!-- field type definitions. The "name" attribute is<br /> just a label to be used by field definitions. The "class"<br /> attribute and any other attributes determine the real<br /> behavior of the fieldType.<br /> Class names starting with "solr" refer to java classes in the<br /> org.apache.solr.analysis package.<br /> --><br /> <!-- The StrField type is not analyzed, but indexed/stored verbatim. 
<br /> - StrField and TextField support an optional compressThreshold which<br /> limits compression (if enabled in the derived fields) to values which<br /> exceed a certain size (in characters).<br /> --><br /> <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/><br /> <!-- boolean type: "true" or "false" --><br /> <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/><br /> <!-- The optional sortMissingLast and sortMissingFirst attributes are<br /> currently supported on types that are sorted internally as strings.<br /> - If sortMissingLast="true", then a sort on this field will cause documents<br /> without the field to come after documents with the field,<br /> regardless of the requested sort order (asc or desc).<br /> - If sortMissingFirst="true", then a sort on this field will cause documents<br /> without the field to come before documents with the field,<br /> regardless of the requested sort order.<br /> - If sortMissingLast="false" and sortMissingFirst="false" (the default),<br /> then default lucene sorting will be used which places docs without the<br /> field first in an ascending sort and last in a descending sort.<br /> --> <br /><br /> <!-- numeric field types that store and index the text<br /> value verbatim (and hence don't support range queries, since the<br /> lexicographic ordering isn't equal to the numeric ordering) --><br /> <fieldType name="integer" class="solr.IntField" omitNorms="true"/><br /> <fieldType name="long" class="solr.LongField" omitNorms="true"/><br /> <fieldType name="float" class="solr.FloatField" omitNorms="true"/><br /> <fieldType name="double" class="solr.DoubleField" omitNorms="true"/><br /> <br /> <!-- Numeric field types that manipulate the value into<br /> a string value that isn't human-readable in its internal form,<br /> but with a lexicographic ordering the same as the numeric ordering,<br /> so that range queries work correctly. 
--><br /> <fieldType name="sint" class="solr.SortableIntField" sortMissingLast="true" omitNorms="true"/><br /> <fieldType name="slong" class="solr.SortableLongField" sortMissingLast="true" omitNorms="true"/><br /> <fieldType name="sfloat" class="solr.SortableFloatField" sortMissingLast="true" omitNorms="true"/><br /> <fieldType name="sdouble" class="solr.SortableDoubleField" sortMissingLast="true" omitNorms="true"/><br /><br /> <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and<br /> is a more restricted form of the canonical representation of dateTime<br /> http://www.w3.org/TR/xmlschema-2/#dateTime <br /> The trailing "Z" designates UTC time and is mandatory.<br /> Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z<br /> All other components are mandatory.<br /> Expressions can also be used to denote calculations that should be<br /> performed relative to "NOW" to determine the value, ie...<br /> NOW/HOUR<br /> ... Round to the start of the current hour<br /> NOW-1DAY<br /> ... Exactly 1 day prior to now<br /> NOW/DAY+6MONTHS+3DAYS<br /> ... 6 months and 3 days in the future from the start of<br /> the current day<br /> <br /> Consult the DateField javadocs for more information.<br /> --><br /> <fieldType name="date" class="solr.DateField" sortMissingLast="true" omitNorms="true"/><br /> <!--<br /> Numeric field types that manipulate the value into trie encoded strings which are not<br /> human readable in the internal form. Range searches on such fields use the fast Trie Range Queries<br /> which are much faster than range searches on the SortableNumberField types.<br /> For the fast range search to work, trie fields must be indexed. Trie fields are <b>not</b> sortable<br /> in numerical order. Also, they cannot be used in function queries. If one needs sorting as well as<br /> fast range search, one should create a copy field specifically for sorting. 
Same workaround is<br /> suggested for using trie fields in function queries as well.<br /> For each number being added to this field, multiple terms are generated as per the algorithm described in<br /> org.apache.lucene.search.trie package description. The possible number of terms depend on the precisionStep<br /> attribute and increase dramatically with higher precision steps (factor 2**precisionStep). The default<br /> value of precisionStep is 8.<br /> <br /> Note that if you use a precisionStep of 32 for int/float and 64 for long/double, then multiple terms<br /> will not be generated, range search will be no faster than any other number field,<br /> but sorting will be possible.<br /> --><br /> <fieldType name="tint" class="solr.TrieField" type="integer" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /> <fieldType name="tfloat" class="solr.TrieField" type="float" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /> <fieldType name="tlong" class="solr.TrieField" type="long" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /> <fieldType name="tdouble" class="solr.TrieField" type="double" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /> <fieldType name="tdouble4" class="solr.TrieField" type="double" precisionStep="4" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /> <!--<br /> This date field manipulates the value into a trie encoded strings for fast range searches. They follow the<br /> same format and semantics as the normal DateField and support the date math syntax except that they are<br /> not sortable and cannot be used in function queries.<br /> --><br /> <fieldType name="tdate" class="solr.TrieField" type="date" omitNorms="true" positionIncrementGap="0" indexed="true" stored="false" /><br /><br /> <!-- The "RandomSortField" is not used to store or search any<br /> data. 
You can declare fields of this type in your schema<br /> to generate pseudo-random orderings of your docs for sorting <br /> purposes. The ordering is generated based on the field name <br /> and the version of the index. As long as the index version<br /> remains unchanged, and the same field name is reused,<br /> the ordering of the docs will be consistent. <br /> If you want different pseudo-random orderings of documents,<br /> for the same version of the index, use a dynamicField and<br /> change the name<br /> --><br /> <fieldType name="random" class="solr.RandomSortField" indexed="true" /><br /> <!-- solr.TextField allows the specification of custom text analyzers<br /> specified as a tokenizer and a list of token filters. Different<br /> analyzers may be specified for indexing and querying.<br /> The optional positionIncrementGap puts space between multiple fields of<br /> this type on the same document, with the purpose of preventing false phrase<br /> matching across fields.<br /> For more info on customizing your analyzer chain, please see<br /> http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters<br /> --><br /> <!-- One can also specify an existing Analyzer class that has a<br /> default constructor via the class attribute on the analyzer element<br /> <fieldType name="text_greek" class="solr.TextField"><br /> <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/><br /> </fieldType><br /> --><br /> <!-- A text field that only splits on whitespace for exact matching of words --><br /> <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100"><br /> <analyzer><br /> <tokenizer class="solr.WhitespaceTokenizerFactory"/><br /> </analyzer><br /> </fieldType><br /> <!-- A text field that uses WordDelimiterFilter to enable splitting and matching of<br /> words on case-change, alpha numeric boundaries, and non-alphanumeric chars,<br /> so that a query of "wifi" or "wi fi" could match a document containing "Wi-Fi".<br /> 
Synonyms and stopwords are customized by external files, and stemming is enabled.<br /> Duplicate tokens at the same position (which may result from Stemmed Synonyms or<br /> WordDelim parts) are removed.<br /> --><br /> <fieldType name="text" class="solr.TextField" positionIncrementGap="100"><br /> <analyzer type="index"><br /> <tokenizer class="solr.WhitespaceTokenizerFactory"/><br /> <!-- in this example, we will only use synonyms at query time<br /> <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/><br /> --><br /> <!-- Case insensitive stop word removal.<br /> add enablePositionIncrements=true in both the index and query<br /> analyzers to leave a 'gap' for more accurate phrase queries.<br /> --><br /> <filter class="solr.StopFilterFactory"<br /> ignoreCase="true"<br /> words="stopwords.txt"<br /> enablePositionIncrements="true"<br /> /><br /> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/><br /> <filter class="solr.LowerCaseFilterFactory"/><br /> <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/><br /> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/><br /> </analyzer><br /> <analyzer type="query"><br /> <tokenizer class="solr.WhitespaceTokenizerFactory"/><br /> <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/><br /> <filter class="solr.StopFilterFactory"<br /> ignoreCase="true"<br /> words="stopwords.txt"<br /> enablePositionIncrements="true"<br /> /><br /> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/><br /> <filter class="solr.LowerCaseFilterFactory"/><br /> <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/><br 
/> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/><br /> </analyzer><br /> </fieldType><br /> <!-- Less flexible matching, but fewer false matches. Probably not ideal for product names,<br /> but may be good for SKUs. Can insert dashes in the wrong place and still match. --><br /> <fieldType name="textTight" class="solr.TextField" positionIncrementGap="100" ><br /> <analyzer><br /> <tokenizer class="solr.WhitespaceTokenizerFactory"/><br /> <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/><br /> <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/><br /> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/><br /> <filter class="solr.LowerCaseFilterFactory"/><br /> <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/><br /> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/><br /> </analyzer><br /> </fieldType><br /> <!--<br /> Set up simple analysis for spell checking<br /> --><br /> <fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100" ><br /> <analyzer><br /> <tokenizer class="solr.StandardTokenizerFactory"/><br /> <filter class="solr.LowerCaseFilterFactory"/><br /> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/><br /> </analyzer><br /> </fieldType><br /> <!-- charFilter + "CharStream aware" WhitespaceTokenizer --><br /> <!--<br /> <fieldType name="textCharNorm" class="solr.TextField" positionIncrementGap="100" ><br /> <analyzer><br /> <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/><br /> <tokenizer class="solr.CharStreamAwareWhitespaceTokenizerFactory"/><br /> </analyzer><br /> </fieldType><br /> --><br /> <!-- This is an example of using the KeywordTokenizer along<br /> with various TokenFilterFactories to produce a sortable field<br /> that does not include 
some properties of the source text<br /> --><br /> <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true"><br /> <analyzer><br /> <!-- KeywordTokenizer does no actual tokenizing, so the entire<br /> input string is preserved as a single token<br /> --><br /> <tokenizer class="solr.KeywordTokenizerFactory"/><br /> <!-- The LowerCase TokenFilter does what you expect, which can be nice<br /> when you want your sorting to be case insensitive<br /> --><br /> <filter class="solr.LowerCaseFilterFactory" /><br /> <!-- The TrimFilter removes any leading or trailing whitespace --><br /> <filter class="solr.TrimFilterFactory" /><br /> <!-- The PatternReplaceFilter gives you the flexibility to use<br /> Java regular expressions to replace any sequence of characters<br /> matching a pattern with an arbitrary replacement string, <br /> which may include back references to portions of the original<br /> string matched by the pattern.<br /> <br /> See the Java Regular Expression documentation for more<br /> information on pattern and replacement string syntax.<br /> <br /> http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/package-summary.html<br /> --><br /> <filter class="solr.PatternReplaceFilterFactory"<br /> pattern="([^a-z])" replacement="" replace="all"<br /> /><br /> </analyzer><br /> </fieldType><br /> <br /> <fieldtype name="phonetic" stored="false" indexed="true" class="solr.TextField" ><br /> <analyzer><br /> <tokenizer class="solr.StandardTokenizerFactory"/><br /> <filter class="solr.DoubleMetaphoneFilterFactory" inject="false"/><br /> </analyzer><br /> </fieldtype> <br /><br /> <!-- since fields of this type are by default not stored or indexed, any data added to <br /> them will be ignored outright <br /> --> <br /> <fieldtype name="ignored" stored="false" indexed="false" class="solr.StrField" /> <br /> </types><br /><br /> <fields> <br /> <!-- general --><br /> <field name="id" type="integer" indexed="true" stored="true" 
required="true"/><br /> <field name="name" type="alphaOnlySort" indexed="true" stored="true" required="true"/><br /> <field name="text" type="alphaOnlySort" indexed="true" stored="true" required="true"/><br /> <field name="lat" type="sdouble" indexed="true" stored="true"/><br /> <field name="lng" type="sdouble" indexed="true" stored="true"/><br /> <dynamicField name="_local*" type="sdouble" indexed="true" stored="true"/> <br /> </fields><br /> <!-- Field to use to determine and enforce document uniqueness. <br /> Unless this field is marked with required="false", it will be a required field<br /> --><br /> <!-- field to use to determine and enforce document uniqueness. --><br /> <uniqueKey>id</uniqueKey><br /> <!-- field for the QueryParser to use when an explicit fieldname is absent --><br /> <defaultSearchField>name</defaultSearchField><br /> <!-- SolrQueryParser configuration: defaultOperator="AND|OR" --><br /> <solrQueryParser defaultOperator="OR"/><br /> </schema><br /><B>solrconfig.xml</B> <?xml version="1.0" encoding="UTF-8" ?><br /> <!--<br /> Licensed to the Apache Software Foundation (ASF) under one or more<br /> contributor license agreements. See the NOTICE file distributed with<br /> this work for additional information regarding copyright ownership.<br /> The ASF licenses this file to You under the Apache License, Version 2.0<br /> (the "License"); you may not use this file except in compliance with<br /> the License. 
You may obtain a copy of the License at<br /> http://www.apache.org/licenses/LICENSE-2.0<br /> Unless required by applicable law or agreed to in writing, software<br /> distributed under the License is distributed on an "AS IS" BASIS,<br /> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br /> See the License for the specific language governing permissions and<br /> limitations under the License.<br /> --><br /><config><br /> <!-- Set this to 'false' if you want solr to continue working after it has <br /> encountered a severe configuration error. In a production environment, <br /> you may want solr to keep working even if one handler is mis-configured.<br /> You may also set this to false by setting the system property:<br /> -Dsolr.abortOnConfigurationError=false<br /> --><br /> <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError><br /> <!-- Used to specify an alternate directory to hold all index data<br /> other than the default ./data under the Solr home.<br /> If replication is in use, this should match the replication configuration. --><br /> <!-- dataDir>/mnt/htdocs/apache-tomcat-6.0.18/solr/data</dataDir --><br /><br /> <indexDefaults><br /> <!-- Values here affect all index writers and act as a default unless overridden. 
--><br /> <useCompoundFile>false</useCompoundFile><br /> <mergeFactor>10000</mergeFactor><br /> <!--<br /> If both ramBufferSizeMB and maxBufferedDocs are set, then Lucene will flush based on whichever limit is hit first.<br /> --><br /> <!--<maxBufferedDocs>1000</maxBufferedDocs>--><br /> <!-- Tell Lucene when to flush documents to disk.<br /> Giving Lucene more memory for indexing means faster indexing at the cost of more RAM<br /> If both ramBufferSizeMB and maxBufferedDocs are set, then Lucene will flush based on whichever limit is hit first.<br /> --><br /> <ramBufferSizeMB>512</ramBufferSizeMB><br /> <maxMergeDocs>2147483647</maxMergeDocs><br /> <maxFieldLength>10000</maxFieldLength><br /> <writeLockTimeout>1000</writeLockTimeout><br /> <commitLockTimeout>10000</commitLockTimeout><br /> <!--<br /> Expert: Turn on Lucene's auto commit capability.<br /> This causes intermediate segment flushes to write a new lucene<br /> index descriptor, enabling it to be opened by an external<br /> IndexReader.<br /> NOTE: Despite the name, this value does not have any relation to Solr's autoCommit functionality<br /> --><br /> <!--<luceneAutoCommit>false</luceneAutoCommit>--><br /> <!--<br /> Expert:<br /> The Merge Policy in Lucene controls how merging is handled by Lucene. The default in 2.3 is the LogByteSizeMergePolicy, previous<br /> versions used LogDocMergePolicy.<br /> LogByteSizeMergePolicy chooses segments to merge based on their size. The Lucene 2.2 default, LogDocMergePolicy, chose when<br /> to merge based on the number of documents.<br /> Other implementations of MergePolicy must have a no-argument constructor<br /> --><br /> <!--<mergePolicy>org.apache.lucene.index.LogByteSizeMergePolicy</mergePolicy>--><br /> <!--<br /> Expert:<br /> The Merge Scheduler in Lucene controls how merges are performed. The ConcurrentMergeScheduler (Lucene 2.3 default)<br /> can perform merges in the background using separate threads. 
The SerialMergeScheduler (Lucene 2.2 default) does not.<br /> --><br /> <!--<mergeScheduler>org.apache.lucene.index.ConcurrentMergeScheduler</mergeScheduler>--><br /> <!--<br /> This option specifies which Lucene LockFactory implementation to use.<br /> <br /> single = SingleInstanceLockFactory - suggested for a read-only index<br /> or when there is no possibility of another process trying<br /> to modify the index.<br /> native = NativeFSLockFactory<br /> simple = SimpleFSLockFactory<br /> (For backwards compatibility with Solr 1.2, 'simple' is the default<br /> if not specified.)<br /> --><br /> <lockType>single</lockType><br /> </indexDefaults><br /> <mainIndex><br /> <!-- options specific to the main on-disk lucene index --><br /> <useCompoundFile>false</useCompoundFile><br /> <ramBufferSizeMB>512</ramBufferSizeMB><br /> <mergeFactor>10</mergeFactor><br /> <!-- Deprecated --><br /> <!--<maxBufferedDocs>1000</maxBufferedDocs>--><br /> <maxMergeDocs>2147483647</maxMergeDocs><br /> <maxFieldLength>10000</maxFieldLength><br /> <!-- If true, unlock any held write or commit locks on startup. <br /> This defeats the locking mechanism that allows multiple<br /> processes to safely access a lucene index, and should be<br /> used with care.<br /> This is not needed if lock type is 'none' or 'single'<br /> --><br /> <unlockOnStartup>false</unlockOnStartup><br /> <!--<br /> Custom deletion policies can be specified here. 
The class must<br /> implement org.apache.lucene.index.IndexDeletionPolicy.<br /> http://lucene.apache.org/java/2_3_2/api/org/apache/lucene/index/IndexDeletionPolicy.html<br /> The standard Solr IndexDeletionPolicy implementation supports deleting<br /> index commit points on number of commits, age of commit point and<br /> optimized status.<br /> The latest commit point should always be preserved regardless<br /> of the criteria.<br /> --><br /> <deletionPolicy class="solr.SolrDeletionPolicy"><br /> <!-- Keep only optimized commit points --><br /> <str name="keepOptimizedOnly">false</str><br /> <!-- The maximum number of commit points to be kept --><br /> <str name="maxCommitsToKeep">1</str><br /> <!--<br /> Delete all commit points once they have reached the given age.<br /> Supports DateMathParser syntax e.g.<br /> <br /> <str name="maxCommitAge">30MINUTES</str><br /> <str name="maxCommitAge">1DAY</str><br /> --><br /> </deletionPolicy><br /> </mainIndex><br /> <!-- Enables JMX if and only if an existing MBeanServer is found; use <br /> this if you want to configure JMX through JVM parameters. Remove<br /> this to disable exposing Solr configuration and statistics to JMX.<br /> If you want to connect to a particular server, specify the agentId<br /> e.g. <jmx agentId="myAgent" /><br /> If you want to start a new MBeanServer, specify the serviceUrl<br /> e.g. <jmx serviceUrl="service:jmx:rmi:///jndi/rmi://localhost:9999/solr" /><br /> For more details see http://wiki.apache.org/solr/SolrJmx<br /> --><br /> <jmx /><br /> <!-- the default high-performance update handler --><br /> <updateHandler class="solr.DirectUpdateHandler2"><br /> <!-- A prefix of "solr." 
for class names is an alias that<br /> causes solr to search appropriate packages, including<br /> org.apache.solr.(search|update|request|core|analysis)<br /> --><br /> <!-- Perform a <commit/> automatically under certain conditions:<br /> maxDocs - number of updates since last commit is greater than this<br /> maxTime - oldest uncommitted update (in ms) is this long ago<br /> <autoCommit> <br /> <maxDocs>10000</maxDocs><br /> <maxTime>1000</maxTime> <br /> </autoCommit><br /> --><br /> <!-- The RunExecutableListener executes an external command.<br /> exe - the name of the executable to run<br /> dir - dir to use as the current working directory. default="."<br /> wait - the calling thread waits until the executable returns. default="true"<br /> args - the arguments to pass to the program. default=nothing<br /> env - environment variables to set. default=nothing<br /> --><br /> <!-- A postCommit event is fired after every commit or optimize command<br /> <listener event="postCommit" class="solr.RunExecutableListener"><br /> <str name="exe">solr/bin/snapshooter</str><br /> <str name="dir">.</str><br /> <bool name="wait">true</bool><br /> <arr name="args"> <str>arg1</str> <str>arg2</str> </arr><br /> <arr name="env"> <str>MYVAR=val1</str> </arr><br /> </listener><br /> --><br /> <!-- A postOptimize event is fired only after every optimize command, useful<br /> in conjunction with index distribution to only distribute optimized indices <br /> <listener event="postOptimize" class="solr.RunExecutableListener"><br /> <str name="exe">snapshooter</str><br /> <str name="dir">solr/bin</str><br /> <bool name="wait">true</bool><br /> </listener><br /> --><br /> </updateHandler><br /><br /> <query><br /> <!-- Maximum number of clauses in a boolean query... can affect<br /> range or prefix queries that expand to big boolean<br /> queries. An exception is thrown if exceeded. 
--><br /> <maxBooleanClauses>1024</maxBooleanClauses><br /><br /> <!-- There are two implementations of cache available for Solr,<br /> LRUCache, based on a synchronized LinkedHashMap, and<br /> FastLRUCache, based on a ConcurrentHashMap. FastLRUCache has faster gets<br /> and slower puts in single threaded operation and thus is generally faster<br /> than LRUCache when the hit ratio of the cache is high (> 75%), and may be<br /> faster under other scenarios on multi-cpu systems. --><br /> <!-- Cache used by SolrIndexSearcher for filters (DocSets),<br /> unordered sets of *all* documents that match a query.<br /> When a new searcher is opened, its caches may be prepopulated<br /> or "autowarmed" using data from caches in the old searcher.<br /> autowarmCount is the number of items to prepopulate. For LRUCache,<br /> the autowarmed items will be the most recently accessed items.<br /> Parameters:<br /> class - the SolrCache implementation LRUCache or FastLRUCache<br /> size - the maximum number of entries in the cache<br /> initialSize - the initial capacity (number of entries) of<br /> the cache. (see java.util.HashMap)<br /> autowarmCount - the number of entries to prepopulate from<br /> an old cache.<br /> --><br /> <filterCache<br /> class="solr.FastLRUCache"<br /> size="512"<br /> initialSize="512"<br /> autowarmCount="128"/><br /> <!-- Cache used to hold field values that are quickly accessible<br /> by document id. The fieldValueCache is created by default<br /> even if not configured here.<br /> <fieldValueCache<br /> class="solr.FastLRUCache"<br /> size="512"<br /> autowarmCount="128"<br /> showItems="32"<br /> /><br /> --><br /> <!-- queryResultCache caches results of searches - ordered lists of<br /> document ids (DocList) based on a query, a sort, and the range<br /> of documents requested. 
--><br /> <queryResultCache<br /> class="solr.LRUCache"<br /> size="512"<br /> initialSize="512"<br /> autowarmCount="32"/><br /> <!-- documentCache caches Lucene Document objects (the stored fields for each document).<br /> Since Lucene internal document ids are transient, this cache will not be autowarmed. --><br /> <documentCache<br /> class="solr.LRUCache"<br /> size="512"<br /> initialSize="512"<br /> autowarmCount="0"/><br /> <!-- If true, stored fields that are not requested will be loaded lazily.<br /> This can result in a significant speed improvement if the usual case is to<br /> not load all stored fields, especially if the skipped fields are large compressed<br /> text fields.<br /> --><br /> <enableLazyFieldLoading>true</enableLazyFieldLoading><br /> <!-- Example of a generic cache. These caches may be accessed by name<br /> through SolrIndexSearcher.getCache(),cacheLookup(), and cacheInsert().<br /> The purpose is to enable easy caching of user/application level data.<br /> The regenerator argument should be specified as an implementation<br /> of solr.search.CacheRegenerator if autowarming is desired. --><br /> <!--<br /> <cache name="myUserCache"<br /> class="solr.LRUCache"<br /> size="4096"<br /> initialSize="1024"<br /> autowarmCount="1024"<br /> regenerator="org.mycompany.mypackage.MyRegenerator"<br /> /><br /> --><br /> <!-- An optimization that attempts to use a filter to satisfy a search.<br /> If the requested sort does not include score, then the filterCache<br /> will be checked for a filter matching the query. If found, the filter<br /> will be used as the source of document ids, and then the sort will be<br /> applied to that.<br /> <useFilterForSortedQuery>true</useFilterForSortedQuery><br /> --><br /> <!-- An optimization for use with the queryResultCache. When a search<br /> is requested, a superset of the requested number of document ids<br /> are collected. 
For example, if a search for a particular query<br /> requests matching documents 10 through 19, and queryWindowSize is 50,<br /> then documents 0 through 49 will be collected and cached. Any further<br /> requests in that range can be satisfied via the cache. --><br /> <queryResultWindowSize>50</queryResultWindowSize><br /> <!-- Maximum number of documents to cache for any entry in the<br /> queryResultCache. --><br /> <queryResultMaxDocsCached>200</queryResultMaxDocsCached><br /> <!-- This entry enables an int hash representation for filters (DocSets)<br /> when the number of items in the set is less than maxSize. For smaller<br /> sets, this representation is more memory efficient, more efficient to<br /> iterate over, and faster to take intersections. --><br /> <HashDocSet maxSize="3000" loadFactor="0.75"/><br /> <!-- a newSearcher event is fired whenever a new searcher is being prepared<br /> and there is a current searcher handling requests (aka registered). --><br /> <!-- QuerySenderListener takes an array of NamedList and executes a<br /> local query request for each NamedList in sequence. --><br /> <listener event="newSearcher" class="solr.QuerySenderListener"><br /> <arr name="queries"><br /> <lst> <str name="q">solr</str> <str name="start">0</str> <str name="rows">10</str> </lst><br /> <lst> <str name="q">rocks</str> <str name="start">0</str> <str name="rows">10</str> </lst><br /> <lst><str name="q">static newSearcher warming query from solrconfig.xml</str></lst><br /> </arr><br /> </listener><br /> <!-- a firstSearcher event is fired whenever a new searcher is being<br /> prepared but there is no current registered searcher to handle<br /> requests or to gain autowarming data from. 
--><br /> <listener event="firstSearcher" class="solr.QuerySenderListener"><br /> <arr name="queries"><br /> <lst> <str name="q">fast_warm</str> <str name="start">0</str> <str name="rows">10</str> </lst><br /> <lst><str name="q">static firstSearcher warming query from solrconfig.xml</str></lst><br /> </arr><br /> </listener><br /> <!-- If a search request comes in and there is no current registered searcher,<br /> then immediately register the still warming searcher and use it. If<br /> "false" then all requests will block until the first searcher is done<br /> warming. --><br /> <useColdSearcher>false</useColdSearcher><br /> <!-- Maximum number of searchers that may be warming in the background<br /> concurrently. An error is returned if this limit is exceeded. Recommend<br /> 1-2 for read-only slaves, higher for masters w/o cache warming. --><br /> <maxWarmingSearchers>2</maxWarmingSearchers><br /> </query><br /> <!-- <br /> Let the dispatch filter handle /select?qt=XXX<br /> handleSelect=true will use consistent error handling for /select and /update<br /> handleSelect=false will use solr1.1 style error formatting<br /> --><br /> <requestDispatcher handleSelect="true" ><br /> <!--Make sure your system has some authentication before enabling remote streaming! 
--><br /> <requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048000" /><br /> <!-- Set HTTP caching related parameters (for proxy caches and clients).<br /> <br /> To get the behaviour of Solr 1.2 (ie: no caching related headers)<br /> use the never304="true" option and do not specify a value for<br /> <cacheControl><br /> --><br /> <!-- <httpCaching never304="true"> --><br /> <httpCaching lastModifiedFrom="openTime"<br /> etagSeed="Solr"><br /> <!-- lastModFrom="openTime" is the default, the Last-Modified value<br /> (and validation against If-Modified-Since requests) will all be<br /> relative to when the current Searcher was opened.<br /> You can change it to lastModFrom="dirLastMod" if you want the<br /> value to exactly correspond to when the physical index was last<br /> modified.<br /> etagSeed="..." is an option you can change to force the ETag<br /> header (and validation against If-None-Match requests) to be<br /> different even if the index has not changed (ie: when making<br /> significant changes to your config file)<br /> lastModifiedFrom and etagSeed are both ignored if you use the<br /> never304="true" option.<br /> --><br /> <!-- If you include a <cacheControl> directive, it will be used to<br /> generate a Cache-Control header, as well as an Expires header<br /> if the value contains "max-age="<br /> By default, no Cache-Control header is generated.<br /> You can use the <cacheControl> option even if you have set<br /> never304="true"<br /> --><br /> <!-- <cacheControl>max-age=30, public</cacheControl> --><br /> </httpCaching><br /> </requestDispatcher><br /><br /> <!-- requestHandler plugins... incoming queries will be dispatched to the<br /> correct handler based on the path or the qt (query type) param.<br /> Names starting with a '/' are accessed with a path equal to the <br /> registered name. 
Names without a leading '/' are accessed with:<br /> http://host/app/select?qt=name<br /> If no qt is defined, the requestHandler that declares default="true"<br /> will be used.<br /> --><br /> <requestHandler name="standard" class="solr.SearchHandler" default="true"><br /> <!-- default values for query parameters --><br /> <lst name="defaults"><br /> <str name="echoParams">explicit</str><br /> <!--<br /> <int name="rows">10</int><br /> <str name="fl">*</str><br /> <str name="version">2.1</str><br /> --><br /> </lst><br /> </requestHandler><br /><!-- Please refer to http://wiki.apache.org/solr/SolrReplication for details on configuring replication --><br /> <!--Master config--><br /> <!--<br /> <requestHandler name="/replication" class="solr.ReplicationHandler" ><br /> <lst name="master"><br /> <str name="replicateAfter">commit</str><br /> <str name="confFiles">schema.xml,stopwords.txt</str><br /> </lst><br /> </requestHandler><br /> --><br /> <!-- Slave config--><br /> <!--<br /> <requestHandler name="/replication" class="solr.ReplicationHandler"><br /> <lst name="slave"><br /> <str name="masterUrl">http://localhost:8983/solr/replication</str><br /> <str name="pollInterval">00:00:60</str> <br /> </lst><br /> </requestHandler><br /> --><br /> <!-- DisMaxRequestHandler allows easy searching across multiple fields<br /> for simple user-entered phrases. Its implementation is now<br /> just the standard SearchHandler with a default query type<br /> of "dismax". 
<br /> see http://wiki.apache.org/solr/DisMaxRequestHandler<br /> --><br /> <requestHandler name="dismax" class="solr.SearchHandler" ><br /> <lst name="defaults"><br /> <str name="defType">dismax</str><br /> <str name="echoParams">explicit</str><br /> <float name="tie">0.01</float><br /> <str name="qf"><br /> text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4<br /> </str><br /> <str name="pf"><br /> text^0.2 features^1.1 name^1.5 manu^1.4 manu_exact^1.9<br /> </str><br /> <str name="bf"><br /> ord(popularity)^0.5 recip(rord(price),1,1000,1000)^0.3<br /> </str><br /> <str name="fl"><br /> id,name,price,score<br /> </str><br /> <str name="mm"><br /> 2&lt;-1 5&lt;-2 6&lt;90%<br /> </str><br /> <int name="ps">100</int><br /> <str name="q.alt">*:*</str><br /> <!-- example highlighter config, enable per-query with hl=true --><br /> <str name="hl.fl">text features name</str><br /> <!-- for this field, we want no fragmenting, just highlighting --><br /> <str name="f.name.hl.fragsize">0</str><br /> <!-- instructs Solr to return the field itself if no query terms are<br /> found --><br /> <str name="f.name.hl.alternateField">name</str><br /> <str name="f.text.hl.fragmenter">regex</str> <!-- defined below --><br /> </lst><br /> </requestHandler><br /> <!-- Note how you can register the same handler multiple times with<br /> different names (and different init parameters)<br /> --><br /> <requestHandler name="partitioned" class="solr.SearchHandler" ><br /> <lst name="defaults"><br /> <str name="defType">dismax</str><br /> <str name="echoParams">explicit</str><br /> <str name="qf">text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0</str><br /> <str name="mm">2&lt;-1 5&lt;-2 6&lt;90%</str><br /> <!-- This is an example of using Date Math to specify a constantly<br /> moving date range in a config...<br /> --><br /> <str name="bq">incubationdate_dt:[* TO NOW/DAY-1MONTH]^2.2</str><br /> </lst><br /> <!-- In addition to defaults, "appends" params can be specified<br /> to 
identify values which should be appended to the list of<br /> multi-val params from the query (or the existing "defaults").<br /> In this example, the param "fq=instock:true" will be appended to<br /> any query time fq params the user may specify, as a mechanism for<br /> partitioning the index, independent of any user selected filtering<br /> that may also be desired (perhaps as a result of faceted searching).<br /> NOTE: there is *absolutely* nothing a client can do to prevent these<br /> "appends" values from being used, so don't use this mechanism<br /> unless you are sure you always want it.<br /> --><br /> <lst name="appends"><br /> <str name="fq">inStock:true</str><br /> </lst><br /> <!-- "invariants" are a way of letting the Solr maintainer lock down<br /> the options available to Solr clients. Any params values<br /> specified here are used regardless of what values may be specified<br /> in either the query, the "defaults", or the "appends" params.<br /> In this example, the facet.field and facet.query params are fixed,<br /> limiting the facets clients can use. 
Faceting is not turned on by<br /> default - but if the client does specify facet=true in the request,<br /> these are the only facets they will be able to see counts for;<br /> regardless of what other facet.field or facet.query params they<br /> may specify.<br /> NOTE: there is *absolutely* nothing a client can do to prevent these<br /> "invariants" values from being used, so don't use this mechanism<br /> unless you are sure you always want it.<br /> --><br /> <lst name="invariants"><br /> <str name="facet.field">cat</str><br /> <str name="facet.field">manu_exact</str><br /> <str name="facet.query">price:[* TO 500]</str><br /> <str name="facet.query">price:[500 TO *]</str><br /> </lst><br /> </requestHandler><br /><br /> <!--<br /> Search components are registered to SolrCore and used by Search Handlers<br /> <br /> By default, the following components are available:<br /> <br /> <searchComponent name="query" class="org.apache.solr.handler.component.QueryComponent" /><br /> <searchComponent name="facet" class="org.apache.solr.handler.component.FacetComponent" /><br /> <searchComponent name="mlt" class="org.apache.solr.handler.component.MoreLikeThisComponent" /><br /> <searchComponent name="highlight" class="org.apache.solr.handler.component.HighlightComponent" /><br /> <searchComponent name="stats" class="org.apache.solr.handler.component.StatsComponent" /><br /> <searchComponent name="debug" class="org.apache.solr.handler.component.DebugComponent" /><br /> <br /> Default configuration in a requestHandler would look like:<br /> <arr name="components"><br /> <str>query</str><br /> <str>facet</str><br /> <str>mlt</str><br /> <str>highlight</str><br /> <str>stats</str><br /> <str>debug</str><br /> </arr><br /> If you register a searchComponent to one of the standard names, that will be used instead.<br /> To insert components before or after the 'standard' components, use:<br /> <br /> <arr name="first-components"><br /> <str>myFirstComponentName</str><br /> 
</arr><br /> <br /> <arr name="last-components"><br /> <str>myLastComponentName</str><br /> </arr><br /> --><br /> <!-- The spell check component can return a list of alternative spelling<br /> suggestions. --><br /> <searchComponent name="spellcheck" class="solr.SpellCheckComponent"><br /> <str name="queryAnalyzerFieldType">textSpell</str><br /> <lst name="spellchecker"><br /> <str name="name">default</str><br /> <str name="field">spell</str><br /> <str name="spellcheckIndexDir">./spellchecker1</str><br /> </lst><br /> <lst name="spellchecker"><br /> <str name="name">jarowinkler</str><br /> <str name="field">spell</str><br /> <!-- Use a different Distance Measure --><br /> <str name="distanceMeasure">org.apache.lucene.search.spell.JaroWinklerDistance</str><br /> <str name="spellcheckIndexDir">./spellchecker2</str><br /> </lst><br /> <lst name="spellchecker"><br /> <str name="classname">solr.FileBasedSpellChecker</str><br /> <str name="name">file</str><br /> <str name="sourceLocation">spellings.txt</str><br /> <str name="characterEncoding">UTF-8</str><br /> <str name="spellcheckIndexDir">./spellcheckerFile</str><br /> </lst><br /> </searchComponent><br /> <!-- A request handler utilizing the spellcheck component. <br /> ################################################################################################<br /> NOTE: This is purely as an example. The whole purpose of the SpellCheckComponent is to hook it into<br /> the request handler that handles (i.e. 
the standard or dismax SearchHandler)<br /> queries such that a separate request is not needed to get suggestions.<br /> IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!<br /> ################################################################################################<br /> --><br /> <requestHandler name="/spellCheckCompRH" class="solr.SearchHandler"><br /> <lst name="defaults"><br /> <!-- omp = Only More Popular --><br /> <str name="spellcheck.onlyMorePopular">false</str><br /> <!-- exr = Extended Results --><br /> <str name="spellcheck.extendedResults">false</str><br /> <!-- The number of suggestions to return --><br /> <str name="spellcheck.count">1</str><br /> </lst><br /> <arr name="last-components"><br /> <str>spellcheck</str><br /> </arr><br /> </requestHandler><br /> <searchComponent name="tvComponent" class="org.apache.solr.handler.component.TermVectorComponent"/><br /> <!-- A Req Handler for working with the tvComponent. This is purely an example.<br /> You will likely want to add the component to your already specified request handlers. 
--><br /> <requestHandler name="tvrh" class="org.apache.solr.handler.component.SearchHandler"><br /> <lst name="defaults"><br /> <bool name="tv">true</bool><br /> </lst><br /> <arr name="last-components"><br /> <str>tvComponent</str><br /> </arr><br /> </requestHandler><br /><!--<br /> <requestHandler name="/update/extract" class="org.apache.solr.handler.extraction.ExtractingRequestHandler"><br /> <lst name="defaults"><br /> <str name="ext.map.Last-Modified">last_modified</str><br /> <bool name="ext.ignore.und.fl">true</bool><br /> </lst><br /> </requestHandler><br /> --><br /> <br /> <searchComponent name="termsComp" class="org.apache.solr.handler.component.TermsComponent"/><br /> <requestHandler name="/autoSuggest" class="org.apache.solr.handler.component.SearchHandler"><br /> <arr name="components"><br /> <str>termsComp</str><br /> </arr><br /> </requestHandler><br /><br /> <!-- a search component that enables you to configure the top results for<br /> a given query regardless of the normal lucene scoring.--><br /> <searchComponent name="elevator" class="solr.QueryElevationComponent" ><br /> <!-- pick a fieldType to analyze queries --><br /> <str name="queryFieldType">string</str><br /> <str name="config-file">elevate.xml</str><br /> </searchComponent><br /> <!-- a request handler utilizing the elevator component --><br /> <requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy"><br /> <lst name="defaults"><br /> <str name="echoParams">explicit</str><br /> </lst><br /> <arr name="last-components"><br /> <str>elevator</str><br /> </arr><br /> </requestHandler><br /><br /> <!-- Update request handler. <br /> Note: Since solr1.1 requestHandlers requires a valid content type header if posted in<br /> the body. 
For example, curl now requires: -H 'Content-type:text/xml; charset=utf-8'<br /> The response format differs from solr1.1 formatting and returns a standard error code.<br /> To enable solr1.1 behavior, remove the /update handler or change its path<br /> --><br /> <requestHandler name="/update" class="solr.XmlUpdateRequestHandler" /><br /><br /> <requestHandler name="/update/javabin" class="solr.BinaryUpdateRequestHandler" /><br /> <!--<br /> Analysis request handler. Since Solr 1.3. Use to return how a document is analyzed. Useful<br /> for debugging and as a token server for other types of applications<br /> --><br /> <requestHandler name="/analysis" class="solr.AnalysisRequestHandler" /><br /><br /> <!-- CSV update handler, loaded on demand --><br /> <requestHandler name="/update/csv" class="solr.CSVRequestHandler" startup="lazy" /><br /><br /> <!-- <br /> Admin Handlers - This will register all the standard admin RequestHandlers. Adding <br /> this single handler is equivalent to registering:<br /> <br /> <requestHandler name="/admin/luke" class="org.apache.solr.handler.admin.LukeRequestHandler" /><br /> <requestHandler name="/admin/system" class="org.apache.solr.handler.admin.SystemInfoHandler" /><br /> <requestHandler name="/admin/plugins" class="org.apache.solr.handler.admin.PluginInfoHandler" /><br /> <requestHandler name="/admin/threads" class="org.apache.solr.handler.admin.ThreadDumpHandler" /><br /> <requestHandler name="/admin/properties" class="org.apache.solr.handler.admin.PropertiesRequestHandler" /><br /> <requestHandler name="/admin/file" class="org.apache.solr.handler.admin.ShowFileRequestHandler" ><br /> <br /> If you wish to hide files under ${solr.home}/conf, explicitly register the ShowFileRequestHandler using:<br /> <requestHandler name="/admin/file" class="org.apache.solr.handler.admin.ShowFileRequestHandler" ><br /> <lst name="invariants"><br /> <str name="hidden">synonyms.txt</str> <br /> <str name="hidden">anotherfile.txt</str> <br /> 
</lst><br /> </requestHandler><br /> --><br /> <requestHandler name="/admin/" class="org.apache.solr.handler.admin.AdminHandlers" /><br /> <!-- ping/healthcheck --><br /> <requestHandler name="/admin/ping" class="PingRequestHandler"><br /> <lst name="defaults"><br /> <str name="qt">standard</str><br /> <str name="q">solrpingquery</str><br /> <str name="echoParams">all</str><br /> </lst><br /> </requestHandler><br /> <!-- Echo the request contents back to the client --><br /> <requestHandler name="/debug/dump" class="solr.DumpRequestHandler" ><br /> <lst name="defaults"><br /> <str name="echoParams">explicit</str> <!-- for all params (including the default etc) use: 'all' --><br /> <str name="echoHandler">true</str><br /> </lst><br /> </requestHandler><br /> <highlighting><br /> <!-- Configure the standard fragmenter --><br /> <!-- This could most likely be commented out in the "default" case --><br /> <fragmenter name="gap" class="org.apache.solr.highlight.GapFragmenter" default="true"><br /> <lst name="defaults"><br /> <int name="hl.fragsize">100</int><br /> </lst><br /> </fragmenter><br /> <!-- A regular-expression-based fragmenter (f.i., for sentence extraction) --><br /> <fragmenter name="regex" class="org.apache.solr.highlight.RegexFragmenter"><br /> <lst name="defaults"><br /> <!-- slightly smaller fragsizes work better because of slop --><br /> <int name="hl.fragsize">70</int><br /> <!-- allow 50% slop on fragment sizes --><br /> <float name="hl.regex.slop">0.5</float><br /> <!-- a basic sentence pattern --><br /> <str name="hl.regex.pattern">[-\w ,/\n\"']{20,200}</str><br /> </lst><br /> </fragmenter><br /> <!-- Configure the standard formatter --><br /> <formatter name="html" class="org.apache.solr.highlight.HtmlFormatter" default="true"><br /> <lst name="defaults"><br /> <str name="hl.simple.pre"><![CDATA[<em>]]></str><br /> <str name="hl.simple.post"><![CDATA[</em>]]></str><br /> </lst><br /> </formatter><br /> </highlighting><br /> <!-- An example dedup 
update processor that creates the "id" field on the fly<br /> based on the hash code of some other fields. This example has overwriteDupes<br /> set to false since we are using the id field as the signatureField and Solr<br /> will maintain uniqueness based on that anyway. --><br /> <!--<br /> <updateRequestProcessorChain name="dedupe"><br /> <processor class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory"><br /> <bool name="enabled">true</bool><br /> <str name="signatureField">id</str><br /> <bool name="overwriteDupes">false</bool><br /> <str name="fields">name,features,cat</str><br /> <str name="signatureClass">org.apache.solr.update.processor.Lookup3Signature</str><br /> </processor><br /> <processor class="solr.LogUpdateProcessorFactory" /><br /> <processor class="solr.RunUpdateProcessorFactory" /><br /> </updateRequestProcessorChain><br /> --><br /><br /> <!-- queryResponseWriter plugins... query responses will be written using the<br /> writer specified by the 'wt' request parameter matching the name of a registered<br /> writer.<br /> The "default" writer is the default and will be used if 'wt' is not specified <br /> in the request. 
XMLResponseWriter will be used if nothing is specified here.<br /> The json, python, and ruby writers are also available by default.<br /> <queryResponseWriter name="xml" class="org.apache.solr.request.XMLResponseWriter" default="true"/><br /> <queryResponseWriter name="json" class="org.apache.solr.request.JSONResponseWriter"/><br /> <queryResponseWriter name="python" class="org.apache.solr.request.PythonResponseWriter"/><br /> <queryResponseWriter name="ruby" class="org.apache.solr.request.RubyResponseWriter"/><br /> <queryResponseWriter name="php" class="org.apache.solr.request.PHPResponseWriter"/><br /> <queryResponseWriter name="phps" class="org.apache.solr.request.PHPSerializedResponseWriter"/><br /> <queryResponseWriter name="custom" class="com.example.MyResponseWriter"/><br /> --><br /> <!-- XSLT response writer transforms the XML output by any xslt file found<br /> in Solr's conf/xslt directory. Changes to xslt files are checked for<br /> every xsltCacheLifetimeSeconds. <br /> --><br /> <queryResponseWriter name="xslt" class="org.apache.solr.request.XSLTResponseWriter"><br /> <int name="xsltCacheLifetimeSeconds">5</int><br /> </queryResponseWriter><br /> <queryResponseWriter name="php" class="org.apache.solr.request.PHPResponseWriter"/><br /> <queryResponseWriter name="phps" class="org.apache.solr.request.PHPSerializedResponseWriter"/><br /> <!-- example of registering a query parser<br /> <queryParser name="lucene" class="org.apache.solr.search.LuceneQParserPlugin"/><br /> --><br /> <!-- example of registering a custom function parser <br /> <valueSourceParser name="myfunc" class="com.mycompany.MyValueSourceParser" /><br /> --><br /> <!-- config for the admin interface --><br /> <admin><br /> <defaultQuery>solr</defaultQuery><br /> <!-- configure a healthcheck file for servers behind a loadbalancer<br /> <healthcheck type="file">server-enabled</healthcheck><br /> --><br /> </admin><br /><br /> <updateRequestProcessor><br /> <factory name="standard" 
class="solr.ChainedUpdateProcessorFactory" default="true"><br /> <chain class="com.pjaol.search.solr.update.LocalUpdateProcessorFactory"><br /> <str name="latField">lat</str><br /> <str name="lngField">lng</str><br /> <int name="startTier">9</int><br /> <int name="endTier">17</int><br /> </chain><br /> <chain class="solr.LogUpdateProcessorFactory" ><br /> <!-- <int name="maxNumToLog">100</int> --><br /> </chain><br /> <chain class="solr.RunUpdateProcessorFactory" /><br /> </factory><br /> </updateRequestProcessor><br /> <requestHandler name="geo" class="com.pjaol.search.solr.LocalSolrRequestHandler"><br /> <!-- Custom latitude longitude fields, below are the defaults if not otherwise<br /> specified --><br /> <str name="latField">lat</str><br /> <str name="lngField">lng</str><br /> </requestHandler><br /> </config><br />Can anyone help me?<br />Thanks,<br /> AmitUnknownhttps://www.blogger.com/profile/00494352916110665409noreply@blogger.com
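A quick way to sanity-check a setup like the one in the comment above is to query the registered "geo" handler directly and see whether it responds at all. The sketch below builds such a query URL. The host/port, the `qt=geo` routing, and the `lat`/`long`/`radius` parameter names are assumptions based on LocalSolr's commonly documented defaults and the handler name in the pasted config; adjust them to match your deployment.

```python
# Sketch: build a query URL for a LocalSolr handler registered as "geo".
# Host, port, and parameter names are assumptions -- verify against your
# own solrconfig.xml and LocalSolr version before relying on them.
from urllib.parse import urlencode


def geo_query_url(host="localhost", port=8983,
                  q="*:*", lat=37.7752, lng=-122.4232, radius=10):
    """Return a /select URL routed through the 'geo' request handler."""
    params = urlencode({
        "qt": "geo",       # handler name from the <requestHandler name="geo"> entry
        "q": q,
        "lat": lat,        # should match the latField configured in the chain
        "long": lng,       # LocalSolr's conventional longitude query param
        "radius": radius,  # search radius (miles, in LocalSolr's defaults)
    })
    return f"http://{host}:{port}/solr/select?{params}"


if __name__ == "__main__":
    print(geo_query_url())
```

If this handler returns nothing while a plain `/select` query finds documents, a likely cause is that the documents were indexed before the `LocalUpdateProcessorFactory` chain was added: the chain computes the tier fields at index time, so existing documents must be re-indexed for geo queries to match them.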