In our application we use ReadDirectoryChangesW() to get notified when a file in a watched folder changes, e.g. to trigger a new examination of the file.
When a new file is copied into the watched folder, everything works fine initially: I get notified of all the changes I subscribed to.
But at some point the notifications simply stop, even though the file keeps growing and is written as expected.
After observing the problem a couple of times, I noticed that the notifications seem to stop once the copied file reaches or exceeds 2 GB (presumably 2^31 bytes).
No other operations are done on the watched folder and no errors are returned via the specified completion routine.
This is done using a local NTFS mount on Windows 11.
The folder is not shared.
Looking at the docs I could find no indication that there is an inherent limitation to ReadDirectoryChangesW().
I already tried switching to ReadDirectoryChangesExW(), but to no avail.
So, is there an inherent limitation to this API? Is there a known way to circumvent it?
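For context, this is roughly how the watch is registered (a reduced sketch; the path, filter mask, and the watchFolder/FileIOCompletionRoutine names are placeholders, and error handling is omitted):

// Simplified sketch of how we register the watch.
void watchFolder() {
    HANDLE hDir = CreateFileW(L"C:\\Watched\\", FILE_LIST_DIRECTORY,
                              FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                              NULL, OPEN_EXISTING,
                              FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED, NULL);

    char buffer[64 * 1024];
    OVERLAPPED overlapped = {0};
    DWORD bytesReturned = 0;

    ReadDirectoryChangesW(hDir, buffer, sizeof(buffer), FALSE,
                          FILE_NOTIFY_CHANGE_SIZE | FILE_NOTIFY_CHANGE_LAST_WRITE,
                          &bytesReturned, &overlapped, FileIOCompletionRoutine);

    // The completion routine fires while this thread sits in an alertable wait.
    SleepEx(INFINITE, TRUE);
}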
Setting up a minimal reproducible example let me answer my own question:
TLDR
No, there does not seem to be an inherent limitation to ReadDirectoryChangesW() with regard to files growing beyond 2 GB.
The long(er) story
Here is an example that reproduces my problem - kind of.
Running the code, most of the time no events are received for the file written by the test.
Sometimes I get events, sometimes not. Sometimes they stop, sometimes not.
What I can say is that I definitely got updates beyond 2 GB, so that part is clarified, I guess.
What is interesting is that enabling the commented-out FlushFileBuffers() call in the write loop makes me receive an update for every single write.
So it looks like I have a caching issue after all.
#include <atomic>
#include <chrono>
#include <cstring>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <string>
#include <thread>
#include <windows.h>
std::atomic<bool> keepRunning(true);
VOID CALLBACK FileIOCompletionRoutine(DWORD dwErrorCode, DWORD dwNumberOfBytesTransfered, LPOVERLAPPED lpOverlapped) {
    if (dwErrorCode == ERROR_SUCCESS) {
        // The notification buffer pointer was stashed in hEvent (unused by the system
        // when a completion routine is supplied); cast it back to FILE_NOTIFY_INFORMATION.
        FILE_NOTIFY_INFORMATION* fni = (FILE_NOTIFY_INFORMATION*)lpOverlapped->hEvent;
        std::wstring filename(fni->FileName, fni->FileNameLength / sizeof(WCHAR));

        // Create a wide string version of the full file path
        std::wstring filePath = L"D:\\TEST\\";
        filePath += filename;

        auto t = std::time(nullptr);
        auto tm = *std::localtime(&t);
        std::cout << std::put_time(&tm, "%d-%m-%Y %H-%M-%S");
        std::wcout << "\tFile " << filePath << " has been modified.";

        // Open the file
        HANDLE hFile = CreateFileW(filePath.c_str(), GENERIC_READ,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE,
                                   NULL, OPEN_EXISTING, FILE_ATTRIBUTE_READONLY, NULL);
        if (hFile == INVALID_HANDLE_VALUE) {
            std::cout << " Failed to open file for file size with " << GetLastError() << ".\n";
            return;
        }

        // Get the file size
        LARGE_INTEGER fileSize;
        if (!GetFileSizeEx(hFile, &fileSize)) {
            std::cout << " Failed to get file size.\n";
            CloseHandle(hFile);
            return;
        }
        std::wcout << L" New size: " << fileSize.QuadPart / (1024 * 1024) << L" MiB (" << fileSize.QuadPart << " bytes).\n";
        CloseHandle(hFile);
    } else {
        std::cout << "An error occurred: " << dwErrorCode << "\n";
    }
}
void monitorDirectory() {
    // Open the directory containing the file
    auto watched = L"D:\\TEST\\";
    HANDLE hDir = CreateFileW(watched, FILE_LIST_DIRECTORY,
                              FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                              NULL, OPEN_EXISTING,
                              FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED, NULL);
    if (hDir == INVALID_HANDLE_VALUE) {
        std::cout << "Failed to open directory.\n";
        return;
    }
    std::wcout << L"Watching directory " << watched << "\n";

    OVERLAPPED overlapped = {0};
    char buffer[64 * 1024];
    overlapped.hEvent = buffer;  // hEvent is free for user data here; it carries the buffer pointer
    DWORD bytesReturned;

    // Start watching the directory
    while (keepRunning) {
        if (!ReadDirectoryChangesW(hDir, buffer, sizeof(buffer), FALSE,
                                   FILE_NOTIFY_CHANGE_SIZE | FILE_NOTIFY_CHANGE_LAST_WRITE | FILE_NOTIFY_CHANGE_ATTRIBUTES | FILE_NOTIFY_CHANGE_LAST_ACCESS,
                                   &bytesReturned, &overlapped, FileIOCompletionRoutine)) {
            std::cout << "Failed to setup directory watch.\n";
            CloseHandle(hDir);
            return;
        }
        // Wait (alertably) for changes
        SleepEx(INFINITE, TRUE);
    }
    CloseHandle(hDir);
    std::wcout << L"Watch ended on " << watched << "\n";
}
int main() {
    std::thread monitorThread(monitorDirectory);

    // Create a file
    HANDLE hFile = CreateFileW(L"D:\\TEST\\file2.txt", GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        std::cout << "Failed to create file.\n";
        return 1;
    }

    // Write one 10 MiB chunk per second for 60 seconds
    const size_t writeSize = 10 * 1024 * 1024; // 10 MiB per write
    char* data = new char[writeSize];
    memset(data, 0, writeSize); // Fill the data with zeros
    DWORD bytesWritten;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 60; i++) {
        if (!WriteFile(hFile, data, static_cast<DWORD>(writeSize), &bytesWritten, NULL)) {
            std::cout << "Failed to write to file.\n";
            CloseHandle(hFile);
            delete[] data;
            return 1;
        }
        //FlushFileBuffers(hFile);
        auto t = std::time(nullptr);
        auto tm = *std::localtime(&t);
        std::cout << std::put_time(&tm, "%d-%m-%Y %H-%M-%S");
        std::wcout << "\t" << bytesWritten / (1024 * 1024) << L" MiB written" << std::endl;

        // Sleep to control the write rate
        auto end = std::chrono::steady_clock::now();
        auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
        if (duration < 1000) {
            std::this_thread::sleep_for(std::chrono::milliseconds(1000 - duration));
        }
        start = std::chrono::steady_clock::now();
    }
    CloseHandle(hFile);
    delete[] data;

    // Stop the monitoring thread (it re-checks keepRunning once its alertable wait is interrupted)
    keepRunning = false;
    monitorThread.join();
    return 0;
}
EDIT:
After some more testing, FlushFileBuffers() is the only solid way I found to consistently get notifications for growing files while using buffered I/O.
Alternatively, unbuffered I/O can be used for writing the files via FILE_FLAG_NO_BUFFERING, if the writing of the monitored files can be controlled (see the sketch below).
FILE_FLAG_WRITE_THROUGH will only partly do the job of FlushFileBuffers() and was not sufficient in my case.
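For completeness, here is a rough sketch of the two writer-side options that gave me reliable notifications. This is illustrative only: the path, the helper names (writeAndFlush, openUnbuffered), and the 4096-byte alignment are placeholders; the required alignment for unbuffered I/O is the volume's sector size.

#include <windows.h>
#include <malloc.h>

// Option 1: keep buffered I/O, but flush after each write so the size/last-write
// change becomes visible to the directory watch (this is what the commented-out
// FlushFileBuffers() call in the repro above does).
void writeAndFlush(HANDLE hFile, const char* data, DWORD size) {
    DWORD written = 0;
    WriteFile(hFile, data, size, &written, NULL);
    FlushFileBuffers(hFile);
}

// Option 2: bypass the cache entirely with FILE_FLAG_NO_BUFFERING. Writes must
// then use buffers, sizes, and file offsets aligned to the volume sector size.
HANDLE openUnbuffered(const wchar_t* path) {
    return CreateFileW(path, GENERIC_WRITE, FILE_SHARE_READ, NULL, CREATE_ALWAYS,
                       FILE_FLAG_NO_BUFFERING, NULL);
}

// Aligned buffer for option 2 (4096 assumed as sector size here):
//   void* buf = _aligned_malloc(writeSize, 4096);  // writeSize must be a multiple of 4096
//   ... WriteFile(hRaw, buf, (DWORD)writeSize, &written, NULL); ...
//   _aligned_free(buf);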
EDIT 2:
Turns out the 2 GB part of the issue was present in our application after all, and it was a plain bug: the version of _wstat() used to process the file event simply cannot handle file sizes beyond 32 bits.
Using _wstat64() solved the issue.
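For illustration, the difference boils down to something like this (a minimal sketch, not our actual code; the path handling and the fileSizeForEvent helper name are made up):

#include <sys/types.h>
#include <sys/stat.h>

long long fileSizeForEvent(const wchar_t* path) {
    // _wstat(): st_size is a 32-bit _off_t in the variant we were using,
    // so it cannot represent files of 2 GiB and larger:
    //   struct _stat st32;
    //   _wstat(path, &st32);

    // _wstat64(): st_size is a 64-bit __int64, so large files are reported correctly.
    struct _stat64 st64;
    if (_wstat64(path, &st64) != 0) {
        return -1;  // stat failed
    }
    return st64.st_size;  // safe beyond 2 GB
}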