If you're using the MMAPv1 storage engine, an update can cause a document to move on disk if the document grows in size. But, can growing or shrinking documents still cause performance issues when using the WiredTiger storage engine? If so, what's the recommended way to handle this?
The collection that triggered this question contains documents that may change between x kB and 3x kB in size.
I just read the WiredTiger documentation on the MongoDB site: https://docs.mongodb.com/manual/core/wiredtiger/
It contains some valuable information on how WiredTiger works (the whole page is worth reading):
WiredTiger uses MultiVersion Concurrency Control (MVCC). At the start of an operation, WiredTiger provides a point-in-time snapshot of the data to the transaction. A snapshot presents a consistent view of the in-memory data.
When writing to disk, WiredTiger writes all the data in a snapshot [...]
MongoDB configures WiredTiger to create checkpoints (i.e. write the snapshot data to disk) at intervals of 60 seconds or 2 gigabytes of journal data.
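The checkpoint rule quoted above can be sketched as a simple trigger condition. This is just an illustration of the "60 seconds or 2 GB of journal data, whichever comes first" policy from the docs, not WiredTiger's actual implementation; the names are made up for the example.

```python
# Hedged sketch of the checkpoint policy described in the MongoDB docs:
# a checkpoint is due once 60 seconds have elapsed OR once 2 GB of journal
# data has accumulated since the last one, whichever comes first.
# (Function and constant names are illustrative, not WiredTiger internals.)

CHECKPOINT_INTERVAL_S = 60
CHECKPOINT_JOURNAL_BYTES = 2 * 1024 ** 3  # 2 GiB of journal data

def checkpoint_due(seconds_since_last, journal_bytes_since_last):
    """Return True when either default threshold has been reached."""
    return (seconds_since_last >= CHECKPOINT_INTERVAL_S
            or journal_bytes_since_last >= CHECKPOINT_JOURNAL_BYTES)
```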
So WiredTiger records every operation (insert, update, delete) against an in-memory snapshot, and that snapshot data is persisted to disk as a checkpoint every 60 seconds or every 2 GB of journal data. If the server crashes, it recovers from the last checkpoint and replays the journal. It's close to the way an event log works. Since WiredTiger does not update data in place (it writes the new version of the document, and the space used by the old one is reclaimed later), there is no extra penalty for replacing a document with a larger one, unlike MMAPv1's document moves.
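To make the "no in-place rewrite" point concrete, here is a toy copy-on-write store in Python. It is a deliberately simplified sketch, not WiredTiger's real on-disk format: each update appends a new version of the document, so a document growing from ~1 kB to ~3 kB is never moved or rewritten in place; the stale version is simply reclaimed later during compaction (loosely analogous to a checkpoint).

```python
# Toy copy-on-write store (illustration only, NOT WiredTiger's real format).
# Updates append a new version instead of rewriting the old one in place,
# so a growing document incurs no "document move" like MMAPv1's.

class CopyOnWriteStore:
    def __init__(self):
        self._log = []      # append-only list of (doc_id, payload) versions
        self._latest = {}   # doc_id -> index of the current version in _log

    def write(self, doc_id, payload):
        """Append a new version; the previous version is left untouched."""
        self._log.append((doc_id, payload))
        self._latest[doc_id] = len(self._log) - 1

    def read(self, doc_id):
        return self._log[self._latest[doc_id]][1]

    def compact(self):
        """Drop stale versions, keeping only the latest of each document
        (roughly analogous to space being reclaimed at a checkpoint)."""
        live = set(self._latest.values())
        new_log, remap = [], {}
        for i, entry in enumerate(self._log):
            if i in live:
                remap[i] = len(new_log)
                new_log.append(entry)
        self._log = new_log
        self._latest = {d: remap[i] for d, i in self._latest.items()}

store = CopyOnWriteStore()
store.write("doc1", "x" * 1024)         # ~1 kB document
store.write("doc1", "x" * 3 * 1024)     # grows to ~3 kB: appended, not moved
assert len(store._log) == 2             # both versions exist until compaction
store.compact()
assert len(store._log) == 1             # stale version reclaimed
assert len(store.read("doc1")) == 3 * 1024
```

The design point is the same one the answer makes: because writes are versioned rather than in-place, the size difference between the old and new document doesn't matter at write time; it only affects how much space is reclaimed later.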
Of course, if you want to be certain, benchmark it with your own workload; that's the only way to be 100% sure.