Quote: "Edit: I should also mention that this happens even under the simplest circumstances; in total 4 packets of at most 64 bytes each are sent between ONE client and the server, so it wouldn't be that any memory is allocated on an as-needed basis under long extents of time."
MikeNet hardly ever allocates memory on the fly; everything is allocated during instance initialization and reused. The reason for this is that memory allocation and deallocation are expensive, whereas reusing and copying blocks of memory is comparatively cheap. Everything allocated during instance initialization (mnStartServer, mnConnect, mnStartBroadcast) is deallocated by mnFinish in order to prevent memory leaks; we could hold the memory in a memory pool, but I never implemented that.
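To illustrate the pattern (this is a hedged sketch of the general allocate-once/reuse-forever idea, not MikeNet's actual internals; the `Instance` class and its methods are invented for illustration): all per-client buffers are created up front when the instance starts, the hot path only hands out pointers into that fixed memory, and everything is released in one go when the instance is destroyed, mirroring what mnFinish does.

```cpp
#include <cstddef>
#include <vector>

// Sketch only: all per-client buffers are allocated once at instance
// startup and reused; nothing is allocated or freed while the instance
// is running.
class Instance {
public:
    Instance(std::size_t maxClients,
             std::size_t tcpBufSize,
             std::size_t udpBufSize)
        : tcp_(maxClients, std::vector<unsigned char>(tcpBufSize)),
          udp_(maxClients, std::vector<unsigned char>(udpBufSize)) {}

    // Receiving reuses the client's fixed buffer; no allocation here.
    unsigned char* tcpBuffer(std::size_t client) { return tcp_[client].data(); }
    unsigned char* udpBuffer(std::size_t client) { return udp_[client].data(); }

    // Total bytes reserved for client buffers.
    std::size_t totalBytes() const {
        std::size_t n = 0;
        for (const auto& b : tcp_) n += b.size();
        for (const auto& b : udp_) n += b.size();
        return n;
    }

private:
    std::vector<std::vector<unsigned char>> tcp_, udp_;
};  // destructor frees everything at once, mirroring mnFinish
```

With 50 clients at the default 1024-byte buffer sizes, `totalBytes()` comes to 102,400 bytes, which matches the rough figure below.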
The four important values that determine memory usage are:
- Maximum number of clients when setting up server.
- TCP buffer size, set via mnSetProfileBufferSizes (default 1024 bytes).
- UDP buffer size, set via mnSetProfileBufferSizes (default 1024 bytes).
- UDP mode.
If you set the maximum number of clients to 50 then at least 50 TCP buffers and 50 UDP buffers will be created, so at the default sizes you are looking at about 100 KB (50 x (1024 + 1024) = 102,400 bytes; in practice it will be more, because there are additional buffers for TCP).
Also note that the "per client" and "per client, per operation" UDP modes are particularly good at eating up memory.
If the maximum number of clients is x, then per client mode creates x*x UDP buffers. If you also have y operations, then per client, per operation mode creates x*x*y buffers. CATCH_ALL and CATCH_ALL_NO don't incur this additional overhead, so they use only x buffers.
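The buffer counts above can be expressed as a small helper (a sketch for illustration only: the `UdpMode` enum and `udpBufferCount` function are invented here and merely mirror the mode names used in this post, not MikeNet's actual API):

```cpp
#include <cstddef>

// Hypothetical helper: number of UDP buffers created for a given mode,
// where maxClients is x and operations is y, as described in the post.
enum class UdpMode { CatchAll, PerClient, PerClientPerOperation };

std::size_t udpBufferCount(UdpMode mode,
                           std::size_t maxClients,
                           std::size_t operations = 1) {
    switch (mode) {
        case UdpMode::CatchAll:
            return maxClients;                               // x
        case UdpMode::PerClient:
            return maxClients * maxClients;                  // x*x
        case UdpMode::PerClientPerOperation:
            return maxClients * maxClients * operations;     // x*x*y
    }
    return 0;
}
```

For 50 clients, per client mode already means 2,500 UDP buffers (about 2.5 MB at the default 1024-byte size), and adding 4 operations in per client, per operation mode pushes that to 10,000 buffers.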
It all boils down to CPU time being prioritized over memory consumption in MikeNet: we allocate fixed memory at startup and do not allocate or free anything for the lifetime of the instance, even if we only end up using 1 byte of the 1024 allocated (for example).
The big advantage of this design is that clients joining and leaving is almost completely free, because the area of memory is already reserved and ready for them to jump into; when they leave, it is kept for the next client.
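Why join/leave costs essentially nothing under this design can be sketched with a free list of slot indices (again an illustrative sketch, not MikeNet's code; `SlotPool`, `join`, and `leave` are names invented here): the list is built once at startup, joining pops an index, and leaving pushes it back, so neither operation touches the allocator.

```cpp
#include <cstddef>
#include <vector>

// Sketch: a free list of preallocated client slots. join() and leave()
// allocate and free nothing; they just hand slot indices back and forth.
class SlotPool {
public:
    explicit SlotPool(std::size_t maxClients) {
        free_.reserve(maxClients);  // the only allocation, done up front
        for (std::size_t i = maxClients; i > 0; --i)
            free_.push_back(i - 1);
    }

    // A client joins: pop a reserved slot. Returns false if the server
    // is full. O(1), no allocation.
    bool join(std::size_t& slot) {
        if (free_.empty()) return false;
        slot = free_.back();
        free_.pop_back();
        return true;
    }

    // A client leaves: the slot is kept for the next client. O(1).
    void leave(std::size_t slot) { free_.push_back(slot); }

private:
    std::vector<std::size_t> free_;
};
```

A departed client's slot, together with all its preallocated buffers, is immediately handed to the next client that connects.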
Quote: "I can get that cleaning up all the UDP message queues might take a while, but I find it weird that deallocation takes (significantly) longer than the allocation at startup."
What OS are you using? I noticed this a while ago and ran some tests; I think Microsoft sorted this out after XP. Interestingly, deallocation is significantly cheaper from Windows 7 (maybe Vista) onwards.