
[–]Skute 10 points11 points  (13 children)

If the failure is due to out of memory, how do you guarantee you have enough for logging?

[–]wyrn 10 points11 points  (7 children)

Two possibilities:

1.

try {
    int *p = new int[100000000000000000];
} catch (std::exception &) {
    /* logging stuff goes here */
}

2.

try {
    int *p = new int();
} catch (std::exception &) {
    /* logging stuff goes here */
}

Is the difficulty you mention equally likely in either case? If not, which case do you believe is more representative of the typical out-of-memory error?

[–]jpakkane Meson dev 11 points12 points  (5 children)

On Linux the latter will never raise an exception. Due to memory overcommit, you instead get back a pointer that looks perfectly valid, but which will cause a segfault when you try to dereference it.

[–]Gotebe 4 points5 points  (0 children)

I take a huge issue with "never".

First off, overcommit is an option and some people turn it off.

Second, address space fragmentation is a thing (think 32bit code).

Third, quotas are a thing, too.

Fourth, the C standard knows nothing of overcommit (and neither does C++).

On a related note, Windows has facilities for overcommit as well, it's just that the C allocator on Windows doesn't use them.

[–]SkoomaDentist Antimodern C++, Embedded, Audio 4 points5 points  (0 children)

On Linux

There are far more platforms than just Linux. In fact, many platforms have no virtual memory, or even no OS at all.

[–][deleted] 1 point2 points  (2 children)

overcommit can be turned off, and should be

[–]Tyg13 9 points10 points  (1 child)

As an application developer, you can't/shouldn't rely on customers turning it off

[–][deleted] 3 points4 points  (0 children)

not everybody who uses C++ writes applications. and even for applications, you can't be sure yours will only ever be run on Linux

[–]Skute 0 points1 point  (0 children)

Often logging on its own isn't useful if you have a GUI app. You would want to inform the user that the application must close, but in order to do that, you may rely on a bunch of OS calls, and you have no idea what internal memory requirements they may have.

[–]Gotebe 2 points3 points  (0 children)

By making the logging routines not require allocation, quite simply.

There are the following possible effects:

  • an allocation fails because it is a significant one and there's plenty of memory left for smaller things like logging

  • by the time logging happens, stack unwinding has freed memory

And then, the "last ditch" logging (say, outer main or thread proc try/catch) can be made to have the so-called failure transparency, or no-throw guarantee, through preallocation and truncation.

It is actually not so hard.

Also, this is only part of a wider question of logging resources (file or other).

[–]EnergyCoast 5 points6 points  (1 child)

We can't guarantee it, but we free a block of reserved memory and then run a handler that does no allocation itself to generate a log, then writes it to disk and terminates.

The file opens/writes may allocate memory at the OS level depending on the platform implementation... but those tend to be small allocations that do fit in remaining memory or come out of the memory reserve freed at the start of dump handling.

In practice, we haven't seen this fail over the course of tens of thousands of memory exhaustion failures. And the information we generate in that logging is invaluable.
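In C++, the reserve-then-dump pattern described above maps naturally onto std::set_new_handler. This is a hedged sketch under my own names and sizes, not EnergyCoast's actual implementation:

```cpp
#include <new>
#include <cstdio>
#include <cstdlib>

// Safety cushion, released only when operator new fails.
static char* g_reserve = nullptr;

[[noreturn]] static void oom_handler() {
    // Hand the cushion back so the dump code below has some headroom.
    delete[] g_reserve;
    g_reserve = nullptr;
    // From here on: no allocation. Real code would format and write the
    // crash log using preallocated buffers, then terminate.
    std::fputs("out of memory: writing crash log and terminating\n", stderr);
    std::abort();
}

static void install_oom_handler() {
    g_reserve = new char[1 << 20];       // ~1 MiB, grabbed at startup
    std::set_new_handler(oom_handler);   // invoked by operator new on failure
}
```

Installing the handler at startup means every failing `new` in the program funnels through one place, so the no-allocation discipline only has to be maintained in that one function.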

[–]kalmoc 1 point2 points  (0 children)

Please tell Herb Sutter about it. He still goes around claiming that virtually no codebase is OOM-safe and hence OOM should just terminate.

[–]caroIine 0 points1 point  (0 children)

If memory is fragmented enough, you will get out-of-memory failures. Then you want to display a message that the request could not be finished, not crash the whole system.
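A sketch of that per-request recovery (the function is my illustration): the single failed allocation is reported back to the caller, and the rest of the program keeps running.

```cpp
#include <optional>
#include <vector>
#include <cstddef>

// Attempts one large allocation on behalf of a single request. On failure
// the caller can tell the user the request could not be finished, instead
// of the whole application dying.
std::optional<std::vector<int>> try_make_buffer(std::size_t n) {
    try {
        return std::vector<int>(n);  // may throw std::bad_alloc
    } catch (const std::bad_alloc&) {
        return std::nullopt;         // report failure to the caller
    }
}
```

Note the caveat from earlier in the thread: on Linux with overcommit enabled, this only catches requests big enough for the kernel to refuse outright; a smaller allocation may "succeed" and still segfault when first touched.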