Given:
Possibly relevant info, for example:
Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On Demand, uses Lucene. He has thousands of On Demand clients, and each client gets its own database.
They use one index per client and store it in the client's database. I'm not sure of the details, or whether this required a serious modification to Lucene.
The Question:
How would you set up Lucene search so that each client can only search within its own database?
How would you set up the index(es)?
Where do you store the index(es)?
Would you need to add a filter to all search queries?
If a client cancelled, how would you delete their part of the index? (This may be trivial; I'm not sure yet.)
Possible Solutions:
Make an index for each client (database)
Have a single, gigantic index with a database_name field, and always include database_name as a filter. (A sketch of this filtered approach follows this list.)
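For concreteness, here's a minimal Java sketch of the second option with stock Lucene (not FogBugz's modified version; the paths, sample text, and class name are made up for illustration). Every document is tagged with a database_name, every search wraps the user's query with a server-added tenant clause, and cancelling a client is a single delete-by-term:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

import java.io.IOException;
import java.nio.file.Paths;

public class SharedIndexDemo {
    public static void main(String[] args) throws IOException {
        Directory dir = FSDirectory.open(Paths.get("/tmp/shared-index")); // hypothetical path

        // Indexing: tag every document with the client it belongs to.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new StringField("database_name", "client_42", Field.Store.NO)); // exact value, not tokenized
            doc.add(new TextField("body", "cannot reproduce crash on startup", Field.Store.YES));
            writer.addDocument(doc);

            // Cancellation: drop everything belonging to one client in a single call.
            // writer.deleteDocuments(new Term("database_name", "client_42"));
        }

        // Search: wrap whatever the user asked for in a mandatory tenant clause.
        Query userQuery = new TermQuery(new Term("body", "crash"));
        Query query = new BooleanQuery.Builder()
                .add(userQuery, BooleanClause.Occur.MUST)
                .add(new TermQuery(new Term("database_name", "client_42")), BooleanClause.Occur.FILTER)
                .build();

        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            TopDocs hits = new IndexSearcher(reader).search(query, 10);
            System.out.println("hits for client_42: " + hits.totalHits);
        }
    }
}
```

The Occur.FILTER clause requires a match but skips scoring, so it stays cheap; the important discipline is that the tenant clause is appended by the server and never comes from user input.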
One last thing:
I would also accept an answer that uses Solr (the search server built on Lucene). Perhaps it's better suited to this problem; I'm not sure.
You summoned me from the FogBugz StackExchange. My name is Jude, I'm the current search architect for FogBugz.
Here's a rough outline of how the FogBugz On Demand search architecture is set up[1]: each client gets their own database, and that client's Lucene index is stored inside the same database, via a fairly substantial modification to Lucene's storage backend.
There are a few benefits to this setup. Managing accounts is quite simple, since a client's data and their index are stored in the same place. There are some negatives too, though, such as a set of really pesky edge-case searches that fall below our minimum performance standards. In retrospect, our search was cool and well done for its time. If I were doing it again, however, I would discourage this approach.
Put simply: unless your search domain is very special, or you're willing to dedicate a developer to blazingly fast search, you're probably going to be outperformed by an excellent product like ElasticSearch, Solr, or Xapian.
If I were doing this today, I would probably use ElasticSearch, Solr, or Xapian for a database-backed full-text search solution. Which one depends on your auxiliary needs (platform, type of queries, extensibility, tolerance for one set of quirks over another, etc.).
On the topic of one large index versus many(!) scattered indices: both can work. The decision really lies with what kind of architecture you're looking to build and what kind of performance you need. You can be pretty flexible if you decide that a 2-second search response is reasonable, but once you start saying that anything over 200 ms is unacceptable, your options disappear quickly. While maintaining a single large search index for all of your clients can be vastly more efficient than handling lots of small indices, it's not necessarily faster (as you pointed out).

I personally feel that, in a secure environment, the benefit of keeping client data separated is not to be underestimated: when an index gets corrupted, it won't bring all search to a halt; silly little bugs won't expose sensitive data; accounts stay modular, so it's easier to extract a set of them and plop them onto a new server; and so on. (A sketch of the per-client layout follows.)
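For contrast, here's a minimal sketch of the index-per-client layout (the base path, clientId naming, and class are again made up for illustration). Isolation comes from the directory structure itself, so no query filter is needed, and cancelling an account is just removing a directory:

```java
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.FSDirectory;

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

/** One Lucene index per client, each under its own directory, e.g. /indexes/<clientId>/. */
class PerClientIndexes {
    private final Path base;

    PerClientIndexes(Path base) {
        this.base = base;
    }

    /**
     * Isolation comes from the directory itself: no tenant filter is needed,
     * and a corrupted index only takes down this one client's search.
     * clientId must be validated before being used as a path segment.
     */
    IndexSearcher searcherFor(String clientId) throws IOException {
        return new IndexSearcher(DirectoryReader.open(FSDirectory.open(base.resolve(clientId))));
    }

    /** Cancelling an account: recursively delete its index directory. */
    void drop(String clientId) throws IOException {
        Path dir = base.resolve(clientId);
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder()) // delete children before parents
                .forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
        }
    }
}
```

This is also what makes accounts easy to move: extracting a set of them onto a new server really is copying their directories over.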
I'm not sure if that answered your question, but I hope that I at least satisfied your curiosity :-)
[1]: In 2013, FogBugz began powering its search and filtering capabilities with ElasticSearch. We like it.