Was writing C++ code that deals with very large files in memory. Sometimes the files in question are larger than physical memory. I wrote my C++ code to catch memory allocation failures and terminate gracefully, but that is not what happens. On modern Unixes, the kernel's OOM (out-of-memory) killer terminates the process before it ever gets a chance to shut down gracefully.
How do folks deal with this?
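For context, here is a minimal sketch (not my actual code) of the kind of allocation-failure handling I mean: catch std::bad_alloc around a large allocation and exit cleanly. The 1 TiB size is just an arbitrary illustration.

```cpp
#include <cstdlib>
#include <iostream>
#include <new>
#include <vector>

int main() {
    try {
        // Attempt a very large allocation (size is arbitrary for illustration).
        std::vector<char> buffer(1ULL << 40);  // ~1 TiB
        // ... work with the buffer ...
    } catch (const std::bad_alloc& e) {
        // Intended graceful path: report the failure and exit with an error code.
        std::cerr << "Allocation failed: " << e.what() << '\n';
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```

The problem is that on Linux with memory overcommit enabled, the allocation often appears to succeed, so this catch never runs; the OOM killer terminates the process later, once pages are actually touched.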
https://stackoverflow.com/questions/58935003/detecting-that-a-child-process-was-killed-because-the-os-is-out-of-memory
#linux #macOS #cpp #cplusplus