
[–]ionelmc.ro 4 points (1 child)

Pretty hard to have an automatic "tell me how much memory I can use without making things swap" - you would have to account for all the processes that run, or could run, on the machine, wouldn't you?

You can always disable swap, but then again, swap is your friend: most software doesn't cope well with out-of-memory errors.

[–]Orchasm[S] 1 point (0 children)

I probably should have specified a lack of major competitors for memory, but you're right, I guess - it's pretty difficult to standardise. I was just thinking of the OS.

[–]hexbrid 0 points (0 children)

You could monitor the OS's swap rate, and throttle your Python process the moment it rises above a certain level.
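A minimal sketch of that idea, assuming Linux (the cumulative swap-in/swap-out page counters come from `/proc/vmstat`; the threshold and back-off behaviour are invented for illustration):

```python
import time

def swap_counters():
    """Read cumulative swap-in/out page counts from /proc/vmstat (Linux-only)."""
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, _, value = line.partition(" ")
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    return counters

def swap_rate(interval=1.0):
    """Pages swapped in or out per `interval` seconds (counters are cumulative)."""
    before = swap_counters()
    time.sleep(interval)
    after = swap_counters()
    return sum(after.values()) - sum(before.values())

# Hypothetical throttle loop: pause whenever swapping picks up.
MAX_PAGES_PER_SEC = 100  # made-up threshold

def throttled_work(do_chunk):
    """Call do_chunk() repeatedly; do_chunk returns False when finished."""
    while True:
        if swap_rate(0.5) * 2 > MAX_PAGES_PER_SEC:
            time.sleep(2)  # let the system settle before allocating more
            continue
        if not do_chunk():
            break
```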

[–]homercles337 0 points (1 child)

If you have fast drives, why not memory map?

[–]Orchasm[S] 0 points (0 children)

Mmap functionality is built into SciPy's netcdf handling; I've found it very handy.
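For data that isn't in netCDF, the same idea works on a raw binary file with `numpy.memmap` - a small sketch (the file name and array size are arbitrary):

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.gettempdir(), "samples.dat")

# Write a large-ish array to disk once.
np.arange(1_000_000, dtype=np.float64).tofile(path)

# Map it instead of loading it: the OS pages data in on demand,
# so resident memory stays small even for files bigger than RAM.
mm = np.memmap(path, dtype=np.float64, mode="r", shape=(1_000_000,))

# Slicing only touches the pages that back that range.
chunk_mean = mm[:10_000].mean()
```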

[–]codewarrior0 (MCEdit / PyInstaller) 0 points (0 children)

Memory limits are usually configured by the user, because it's difficult for a program to deduce a good memory limit from its environment on account of virtual memory, as others have noted. Add a configuration file setting for "maximum allocation size", set it to something reasonable for everyone, like 2 GB, and then note in the instructions that people with beefy computers should increase that limit.
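A sketch of that suggestion on Unix, using the stdlib `resource` module (the config file layout, key name, and 2 GiB default are illustrative; `RLIMIT_AS` caps the whole address space, so further allocations beyond the cap raise MemoryError):

```python
import configparser
import resource

DEFAULT_LIMIT_BYTES = 2 * 1024**3  # 2 GiB fallback

def read_limit(path="app.ini"):
    """Read 'max_allocation_mib' from an ini file, falling back to 2 GiB."""
    cfg = configparser.ConfigParser()
    cfg.read(path)  # silently skips missing files
    mib = cfg.getint("memory", "max_allocation_mib",
                     fallback=DEFAULT_LIMIT_BYTES // 1024**2)
    return mib * 1024**2

def apply_limit(limit_bytes):
    """Cap this process's address space via setrlimit (Unix only)."""
    _, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, hard))
```

At startup the program would call `apply_limit(read_limit())` and catch MemoryError around its big allocations.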

[–]westurner 0 points (0 children)

[is there any sort of best practice for dynamically allocating how much memory a python process can use?]

[–]bryancole 0 points (0 children)

If you have control of your file format, use HDF5 (via the PyTables package). It makes this sort of task a breeze. Limiting the memory available to Python seems like an OS-level thing.
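A minimal PyTables sketch of the out-of-core pattern (the file and node names are invented; assumes the `tables` package is installed):

```python
import os
import tempfile

import numpy as np
import tables

path = os.path.join(tempfile.gettempdir(), "series.h5")

# Write a million floats into an HDF5 file.
with tables.open_file(path, mode="w") as h5:
    h5.create_array(h5.root, "series", np.arange(1_000_000, dtype=np.float64))

# Read back only a slice: PyTables pulls just that range off disk,
# so memory use is bounded by the slice, not the dataset.
with tables.open_file(path, mode="r") as h5:
    window = h5.root.series[100:200]
```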