Corrected the false implications:
Golang's GC uses fragmentation-prevention strategies, which enable a program to run for a very long time (if not forever).
But it seems C code (cgo or SWIG) has no means of getting any benefit from these strategies.
Is that true? Won't C code benefit from Golang's fragmentation prevention, and will it eventually suffer fragmentation?
If that's false, how?
Also, what happens to any DLL code loaded by the C code (e.g. Windows DLLs)?
(The question has been updated to correct my wrong assumptions.)
I'm afraid you might be confusing things on multiple levels here.
First, calling into C from production-grade Go code is usually a no-go right from the start: it is slow, roughly as slow as making a system call, because for the most part it really works like one. The runtime has to switch from the Go stack to a C stack, and the OS thread that happened to be executing the Go code making the cgo call stays locked to that call for its whole duration, even if something on the C side blocks.
That is not to say you must avoid calling out to C, but it means you need to think this through up front and measure. Maybe set up a pool of worker goroutines onto which to fan out the tasks that need to make C calls, as sketched below.
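Here is a minimal sketch of that fan-out pattern; callIntoC is an invented stand-in for whatever C.xxx(...) call your cgo bindings actually expose:

```go
package main

import (
	"fmt"
	"sync"
)

// callIntoC stands in for the real cgo call; the name is made up for this sketch.
func callIntoC(task int) int {
	return task * 2
}

func main() {
	const workers = 8 // roughly, how many C calls you allow in flight at once

	tasks := make(chan int)
	results := make(chan int)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range tasks {
				// Each cgo call pins an OS thread for its duration,
				// so capping the worker count caps the thread cost.
				results <- callIntoC(t)
			}
		}()
	}

	go func() {
		for i := 0; i < 100; i++ {
			tasks <- i
		}
		close(tasks)
	}()

	go func() {
		wg.Wait()
		close(results)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("sum:", sum)
}
```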
Second, your memory concerns might be well unfounded; let me explain.
Fragmenting virtual memory should be a non-issue on contemporary systems
usually used to run Go programs (I mean amd64
and the like).
That is pretty much because allocating virtual memory does not force the OS
to actually allocate physical memory pages — the latter happens only
when the virtual memory gets used (that is, accessed at an address
happening to point into an allocated virtual memory region).
So, whether you want it or not, you do have that physical memory fragmentation problem
anyway, and it gets sorted out
at the OS and CPU level using multi-level address translation
tables (and TLB caches).
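If you want to see this for yourself, here is a rough, Linux-only sketch that compares VmSize (virtual) with VmRSS (resident, i.e. physical) from /proc/self/status: the virtual size jumps as soon as the memory is reserved, while the resident size only grows once the pages are actually written.

```go
// A rough, Linux-only demonstration (reads /proc/self/status).
package main

import (
	"bufio"
	"fmt"
	"os"
	"runtime"
	"strings"
)

// vmStats returns the VmSize (virtual) and VmRSS (resident) values
// from the kernel's view of this process.
func vmStats() (size, rss string) {
	f, err := os.Open("/proc/self/status")
	if err != nil {
		return "?", "?"
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		switch {
		case strings.HasPrefix(line, "VmSize:"):
			size = strings.TrimSpace(strings.TrimPrefix(line, "VmSize:"))
		case strings.HasPrefix(line, "VmRSS:"):
			rss = strings.TrimSpace(strings.TrimPrefix(line, "VmRSS:"))
		}
	}
	return size, rss
}

func main() {
	size, rss := vmStats()
	fmt.Println("before:   VmSize =", size, " VmRSS =", rss)

	buf := make([]byte, 1<<30) // reserve 1 GiB of virtual address space

	size, rss = vmStats()
	fmt.Println("reserved: VmSize =", size, " VmRSS =", rss) // RSS barely moves

	for i := range buf { // touching the pages forces physical allocation
		buf[i] = 1
	}

	size, rss = vmStats()
	fmt.Println("touched:  VmSize =", size, " VmRSS =", rss) // now RSS has grown
	runtime.KeepAlive(buf)
}
```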
Third, you appear to be falling into a common trap of speculating about how things will perform under load instead of writing a highly simplified model program and inspecting how it behaves under the estimated production load. That is, you think a problem with allocating C memory will occur and then fancy the whole thing will not work.
I would say your worries are unfounded — given the amount of production code written in C and C++ and working under hardcore loads.
And finally, C and C++ programmers trod the pathways to high-performance memory management a long time ago. A typical solution is using custom pool allocators for the objects which exhibit the most allocation/deallocation churn under the typical load. With this approach, the memory allocated on your C side stays mostly stable for the lifetime of your program.
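For illustration, here is a toy cgo sketch of such a pool: a fixed slab carved into a free list on the C side. The pool_* names and sizes are invented for this example; the point is that the C-side footprint is fixed up front and never churns.

```go
package main

/*
#include <stddef.h>

// A toy fixed-size pool: a free list over a single pre-allocated slab.
// All names here (pool_init, pool_get, pool_put) are made up for this sketch.
#define POOL_OBJS 1024
#define OBJ_SIZE  64

static char  slab[POOL_OBJS * OBJ_SIZE];
static void *freelist[POOL_OBJS];
static int   top = -1;

static void pool_init(void) {
    for (int i = 0; i < POOL_OBJS; i++)
        freelist[++top] = slab + i * OBJ_SIZE;
}

static void *pool_get(void)   { return top < 0 ? NULL : freelist[top--]; }
static void  pool_put(void *p) { freelist[++top] = p; }
*/
import "C"

import "fmt"

func main() {
	C.pool_init()

	// Objects churn through the pool, but the slab itself never moves
	// and never grows: the C-side memory stays stable for the whole run.
	obj := C.pool_get()
	fmt.Printf("got object at %p\n", obj)
	C.pool_put(obj)
}
```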
TL;DR
Write a model program, put the estimated load on it and see how it behaves. Then analyze what the problems with the memory are, if any, and only then start attacking them.
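As a starting point, a skeletal benchmark along these lines lets you push load through the C boundary and watch the memory numbers; doCWork is a placeholder for your real cgo call.

```go
// Put this in a *_test.go file and run `go test -bench=.`
package model

import (
	"runtime"
	"testing"
)

// doCWork is a placeholder for the real C.xxx(...) call made through cgo.
func doCWork() {}

func BenchmarkCCallUnderLoad(b *testing.B) {
	b.ReportAllocs()
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			doCWork()
		}
	})

	// After the run, dump coarse memory numbers to see whether
	// anything actually grows the way you feared.
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	b.Logf("HeapSys=%d MiB HeapAlloc=%d MiB", ms.HeapSys>>20, ms.HeapAlloc>>20)
}
```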