Prologue:
I'm experimenting with a "multi-tenant" flat file database system. On application start, prior to starting the server, each flat file database (which I'm calling a journal) is converted into a large JavaScript object in memory. From there the application starts its service.
The application's runtime behavior is to serve requests from many different databases (one db per domain). All reads come from the in-memory object alone, while any create/update/delete both modifies the in-memory object and streams the change to the journal.
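For concreteness, the write path looks roughly like this (a simplified sketch; the `journals/` layout and the JSON-lines record format are just placeholders, not my actual implementation):

```js
// Simplified sketch of the write path (assumes Node.js and an existing
// `journals/` directory; names and the JSON-lines format are placeholders).
const fs = require('fs');

const dbs = new Map();      // domain -> in-memory database object
const journals = new Map(); // domain -> append-only journal stream

function openDb(domain) {
  dbs.set(domain, {}); // in the real app, loaded from the flat file at startup
  journals.set(domain, fs.createWriteStream(`journals/${domain}.log`, { flags: 'a' }));
}

// Reads hit memory only; writes mutate memory and append to the journal.
function set(domain, key, value) {
  dbs.get(domain)[key] = value;
  journals.get(domain).write(JSON.stringify({ key, value }) + '\n');
}

function get(domain, key) {
  return dbs.get(domain)[key];
}
```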
Question: If I have N of these database objects in memory, already loaded from flat files (say averaging around 1 MB each), what limitations would I run into by having N write streams open?
If you are using streams that have an open file handle behind them, then your limit on how many you can have open at once will likely be governed by the process limit on open file handles, which varies by OS and (in some cases) by how that OS is configured. Each open stream also consumes some memory, both for the stream object itself and for the read/write buffers associated with it.
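On Linux, for example, you can inspect the per-process limit with `ulimit -n`. If you exceed it, Node surfaces it as `EMFILE` errors (or `ENFILE` when the system-wide table is exhausted) on the streams. A rough way to see the failure mode, assuming append-mode file streams as in your question (file names are illustrative):

```js
// Sketch: open many append-mode streams and watch for fd-limit errors.
// The file names are illustrative; assumes a `journals/` directory exists.
const fs = require('fs');

const N = 5000; // push this past `ulimit -n` to see the failure mode
const streams = [];

for (let i = 0; i < N; i++) {
  const s = fs.createWriteStream(`journals/db-${i}.log`, { flags: 'a' });
  s.on('error', (err) => {
    // EMFILE: per-process fd limit hit; ENFILE: system-wide table exhausted
    console.error(`stream ${i} failed: ${err.code}`);
  });
  streams.push(s);
}
```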
If you are using some sort of custom stream that just reads/writes to memory, not to files, then there would be no file handle involved and you would just be limited by the memory consumed by the stream objects and their buffers. You could likely have thousands of these with no issues.
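As a sketch of that second approach, a custom Writable that buffers to memory needs no file descriptor at all (an illustration, not a drop-in journal implementation):

```js
// Illustration only: a Writable that buffers to memory, so no fd is used.
const { Writable } = require('stream');

class MemoryJournal extends Writable {
  constructor() {
    super();
    this.chunks = []; // buffered writes live on the JS heap
  }
  _write(chunk, encoding, callback) {
    this.chunks.push(chunk); // cost is memory only, no file handle
    callback();
  }
}

const journal = new MemoryJournal();
journal.write('{"key":"a","value":1}\n');
// You can create thousands of these; each costs only its object plus
// whatever it has buffered.
```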
Some reference posts:
Node.js and open files limit in Linux