Google runs a web site called AllForGood which helps people find opportunities to volunteer with various organizations in their neighbourhood. Initially, the site's search was fed by Google's crawlers crawling a number of volunteering web pages. However, when designing a more tightly integrated, more real-time search, they turned to Apache Solr. On their blog they
I was in SF, California last week to dig deeper into Solr and to meet with its core developers (now working for Lucid Imagination). It was a great week with many new connections and new insights.
First of all, as you may know, Lucid Imagination is currently the primary commercial marketer, promoter and evangelist for Solr. They bring to the scene what an open source offering often lacks: a professional and polished image, nice packaging and, not least, commercial support. That is great for the future adoption of Solr with customers who need just that kind of safety. As Lucid's Norwegian partner, Cominvent AS now offers all of this in Norway as well.
The open source search server Solr from the Apache Foundation has become a mature technology ready for prime time. Recent releases have added features which previously were found only in commercial offerings, such as:

- Automatic replication for large installations with distributed search
- A Java API (SolrJ)
- Conversion of Office documents
- Full faceted search
- Advanced tokenization, highlighting and stemming

Apache
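To give a taste of how little is needed for faceted search: it is switched on with a couple of request parameters. Assuming a schema with a field named `manufacturer` (an illustrative field name, not from any particular schema), a query like

```
http://localhost:8983/solr/select?q=camera&facet=true&facet.field=manufacturer
```

returns the matching documents plus a count of hits per manufacturer, which a front end can render as drill-down navigation.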
Some of you have read my previous post The state of open source search. In this post I will go through the process of downloading, installing, configuring and using Apache Solr to index some sample XML data and search it. This is the first post in a series, where each new post will explore some new
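As a preview of the indexing step, Solr accepts documents over HTTP in a simple XML format: an `<add>` element wrapping one or more `<doc>` elements, each listing its fields by name. A minimal sketch (the field names `id` and `title` are illustrative and must match fields defined in your schema) could look like:

```xml
<add>
  <doc>
    <field name="id">doc-1</field>
    <field name="title">My first Solr document</field>
  </doc>
</add>
```

Saved to a file, such a document can be posted to Solr's update handler, for example with the post.jar tool shipped in the example distribution, followed by a commit to make it searchable.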