I have a problem with concatenating huge CStrings in a legacy code base. The CStrings can contain base64-encoded files, so they can be very large. At several points these CStrings are concatenated like this:
result += CString(_T("Some smaller String")) + Huge_CString + _T("Some smaller String");
This leads to several allocations, which produce huge memory peaks. Moreover, this is done in parallel on multiple threads for different files; if they all come together, I eventually get "Memory Exceptions".
What is the best way to handle this? Reducing the number of allocations alone would already help. Right now I'm not looking for the perfect solution, just for a way to reduce the peaks.
I will suggest essentially the same thing as Remy Lebeau, but using some different functions. I'm not sure which version of MFC/ATL introduced CString::Preallocate, so it is possible you are stuck with a version that does not have it.
CString result(_T("Initial string "));
CString prefix(_T("Prefix string:"));
CString suffix(_T(":Suffix string"));
CString bigString(_T("This really isn't very big."));

// Reserve the final size up front so the appends below
// do not trigger repeated reallocations.
auto totalLength = result.GetLength() + prefix.GetLength() + bigString.GetLength() + suffix.GetLength();
result.Preallocate(totalLength);

result += prefix.GetString();
result += bigString.GetString();
result += suffix.GetString();
The calls to CString::GetString may or may not be useful; you will likely get the same allocation behavior by simply appending each CString directly to result.