Ancestors

Toot

Written by sumanthvepa on 2024-12-03 at 11:26

Was writing C++ code that deals with very large files in memory. Sometimes the files in question are larger than physical memory. I wrote my C++ code to catch memory allocation failures and terminate gracefully.

But that is not what happens. On modern Unixes, the kernel's OOM (out-of-memory) killer terminates the process before it gets a chance to shut down gracefully.

How do folks deal with this?

https://stackoverflow.com/questions/58935003/detecting-that-a-child-process-was-killed-because-the-os-is-out-of-memory

#linux #macOS #cpp #cplusplus

=> More information about this toot | More toots from sumanthvepa@mastodon.social

Descendants

Written by Peter Bindels on 2024-12-03 at 12:57

@sumanthvepa Usually, when the size of a file starts to approach 5% of your memory or more, I tend to mmap it. Let the pager deal with it.

=> More information about this toot | More toots from dascandy@infosec.exchange

Written by Denton Gentry on 2024-12-03 at 14:33

@dascandy @sumanthvepa Agreed, mmapping. The kernel will unmap pages not accessed for a while, and map them back in if accessed again. It's honestly pretty cool.
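A minimal sketch of the mmap approach suggested above (the function name, file path, and newline-counting task are illustrative, not from the thread): the file is mapped read-only, the kernel pages data in on access and can evict clean pages under memory pressure, so files larger than physical RAM are handled transparently.

```cpp
// Build: g++ -std=c++17 -O2 mmap_file.cpp -o mmap_file
#include <cstddef>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map a whole file read-only and scan it without ever allocating a
// buffer the size of the file.
std::size_t count_newlines(const char* path) {
  int fd = open(path, O_RDONLY);
  if (fd < 0) { perror("open"); return 0; }

  struct stat st;
  if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 0; }

  void* data = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  close(fd);  // The mapping stays valid after the descriptor is closed.
  if (data == MAP_FAILED) { perror("mmap"); return 0; }

  const char* bytes = static_cast<const char*>(data);
  std::size_t newlines = 0;
  for (off_t i = 0; i < st.st_size; ++i)
    if (bytes[i] == '\n') ++newlines;

  munmap(data, st.st_size);
  return newlines;
}
```

For sequential scans like this, `madvise(data, st.st_size, MADV_SEQUENTIAL)` can additionally hint the kernel to read ahead and drop pages behind the scan.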

=> More information about this toot | More toots from dgentry@hachyderm.io

Written by sumanthvepa on 2024-12-03 at 20:03

@dascandy Interesting. I should explore using mmap.

=> More information about this toot | More toots from sumanthvepa@mastodon.social

Proxy Information
Original URL
gemini://mastogem.picasoft.net/thread/113588646124271905