Was writing C++ code that deals with very large files in memory. Sometimes the files in question are larger than physical memory. I wrote my C++ code to catch memory allocation failures and terminate gracefully.
But that is not what happens. On modern Unixes, the kernel's OOM (out-of-memory) killer will kill the process before it gets a chance to terminate gracefully.
How do folks deal with this?
https://stackoverflow.com/questions/58935003/detecting-that-a-child-process-was-killed-because-the-os-is-out-of-memory
#linux #macOS #cpp #cplusplus
=> More information about this toot | More toots from sumanthvepa@mastodon.social
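For illustration, a minimal sketch of the catch-and-exit pattern the toot describes. The buffer and file size here are hypothetical; as the toot notes, on Linux with default overcommit the handler often never runs, because the allocation "succeeds" and the OOM killer sends SIGKILL once the pages are actually touched.

```cpp
#include <iostream>
#include <new>      // std::bad_alloc
#include <vector>

int main() {
    // Hypothetical size larger than physical memory.
    const std::size_t file_size = 1'000'000'000'000;
    try {
        // Value-initialization touches every page, so under overcommit
        // the OOM killer may strike here instead of bad_alloc throwing.
        std::vector<char> buffer(file_size);
        // ... process the file in buffer ...
    } catch (const std::bad_alloc&) {
        std::cerr << "out of memory, exiting gracefully\n";
        return 1;
    }
    return 0;
}
```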
@sumanthvepa Usually, when a file's size starts to approach 5% of memory or more, I tend to mmap it and let the pager deal with it.
=> More information about this toot | More toots from dascandy@infosec.exchange
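A minimal sketch of the mmap approach suggested above (POSIX). The filename "big.dat" and the read-only, sum-every-byte access pattern are assumptions for illustration.

```cpp
#include <cstddef>
#include <cstdio>
#include <fcntl.h>     // open
#include <sys/mman.h>  // mmap, munmap
#include <sys/stat.h>  // fstat
#include <unistd.h>    // close

int main() {
    int fd = open("big.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }
    const std::size_t len = static_cast<std::size_t>(st.st_size);

    // Map the whole file. The kernel pages data in on access and can
    // evict clean pages under memory pressure, so the mapping can be
    // far larger than physical RAM.
    void* p = mmap(nullptr, len, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    const char* data = static_cast<const char*>(p);
    std::size_t checksum = 0;
    for (std::size_t i = 0; i < len; ++i)
        checksum += static_cast<unsigned char>(data[i]);
    std::printf("bytes: %zu, checksum: %zu\n", len, checksum);

    munmap(p, len);
    close(fd);
    return 0;
}
```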
@dascandy @sumanthvepa Agreed, mmapping. The kernel will unmap pages not accessed for a while, and map them back in if accessed again. It's honestly pretty cool.
=> More information about this toot | More toots from dgentry@hachyderm.io
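You can also hint the pager about your access pattern so it pages more effectively. A sketch using posix_madvise, assuming `p` and `len` come from an mmap call like the one above:

```cpp
#include <cstddef>
#include <sys/mman.h>  // posix_madvise

void advise_sequential(void* p, std::size_t len) {
    // Tell the pager we will read the mapping front to back, so it can
    // read ahead aggressively and drop pages behind the read cursor.
    posix_madvise(p, len, POSIX_MADV_SEQUENTIAL);
}
```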
@dascandy Interesting. I should explore using mmap.
=> More information about this toot | More toots from sumanthvepa@mastodon.social