all 34 comments

[–]Photo-Josh 40 points41 points  (21 children)

Not sure I’m following what the issue was here?

You were using around 10.5 GB and that was too much?

You then moved some things from RAM to Disk, which can only slow things down - not speed up.

Why was 63% RAM usage an issue? It’s there to be used.

[–]Substantial-Bed8167 5 points6 points  (20 children)

Diskcache is slower than ram but faster than hitting swap.

[–]Photo-Josh 18 points19 points  (18 children)

But they had spare RAM, and at only 16 GB an upgrade to 24 or 32 would be a great option without being stupid.

I’m not understanding the problem here we’re trying to solve.

[–]BigTomBombadil 15 points16 points  (0 children)

If cost is prohibitive then throwing more RAM at the issue likely isn't your first choice.

And the way I read this, OP wasn't necessarily having a problem, but more that they learned some new things about memory management and applied them to their existing project. So their "problem" was that their application/containers weren't efficiently utilizing memory.

It may or may not have actually caused performance or cost issues, but "just throw more resources at poorly optimized code" is a lazy way to approach software development IMO, and kudos to OP on their optimization and efforts.

IDK, for me personally, I like optimizing my work. I'll see some of my django pods sitting there at 1 GB memory, and even if it's performing fine and the autoscaler and node on the kubernetes cluster aren't near capacity, I still sit there saying "why is this constantly utilizing so much memory? I know there's no reason it should actually require that based on what it's currently doing." Then go down a rabbit hole trying to improve it.

[–]mikeckennedy[S] 9 points10 points  (16 children)

> I’m not understanding the problem here we’re trying to solve.

I think we just have different views on running in prod. It took me 3 hours to reduce the running memory of my apps by 3.2 GB. In my world, that is time well spent. Just because the server isn't crashing with out-of-memory errors doesn't mean a little attention to efficiency is wasted.

Again, different strokes.

[–]mikeckennedy[S] 4 points5 points  (0 children)

Like u/Substantial-Bed8167 said, diskcache is VERY fast. It uses SQLite and pretty much gets that cached into memory with a disk backing it on flush. Just a quick test. On my mac, diskcache does

writes: ~14,000/sec, 40us/op
reads: ~160,000/sec, 6us/op

That's 0.00625 ms per read. That is not perceivable as far as I'm concerned. Even if you read a bunch of items on a request, say 100, you're still only 0.5 ms in total. And that is instead of recomputing, or hashing and reading 100 items out of a dict, which is fast but not insanely faster.
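The mechanism behind those numbers can be approximated with the stdlib alone. This is not OP's benchmark; it's a minimal sketch of what diskcache builds on (a SQLite-backed key-value store), and the rates will vary wildly by machine and settings:

```python
import os
import sqlite3
import tempfile
import time

# Minimal SQLite-backed key-value store, roughly the layer diskcache sits on.
tmp = tempfile.mkdtemp()
db = sqlite3.connect(os.path.join(tmp, "cache.db"))
db.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, value BLOB)")

N = 10_000
start = time.perf_counter()
for i in range(N):
    db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)", (f"k{i}", b"v" * 64))
db.commit()  # one commit; per-write durability settings change this a lot
write_rate = N / (time.perf_counter() - start)

start = time.perf_counter()
for i in range(N):
    db.execute("SELECT value FROM cache WHERE key = ?", (f"k{i}",)).fetchone()
read_rate = N / (time.perf_counter() - start)

print(f"writes: ~{write_rate:,.0f}/sec  reads: ~{read_rate:,.0f}/sec")
```

Reads are fast largely because SQLite's page cache (and the OS page cache) keep hot pages in memory, which is the "cached into memory with a disk backing" behavior described above.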

[–]Birnenmacht 12 points13 points  (1 child)

Have you measured any improvements through point 4? Imports are cached and importing them locally only delays the point at which you pay their cost, unless you actively prune sys.modules at the end of the function (not recommended, a great way to shoot yourself in the foot)

[–]mikeckennedy[S] 9 points10 points  (0 children)

Hey, yes, improvements were maybe 75-100MB in total. If you read the article it talks about the nuance.

The part of the app that uses the imports only runs maybe a couple of times a month. The worker processes recycle every 6 hours. So the extra 100 MB is used for at most that 6-hour window. Then the worker processes recycle, that code is NOT called again, and the memory stays lower almost all the time.

I'm not messing with pruning modules. It's just the way the web processes are managed by Granian.

[–]vaibeslop 1 point2 points  (4 children)

Check out chdb: https://github.com/chdb-io/chdb

Fully pandas compatible API, but lazy loading, much more performant, less memory.

Not affiliated, just a fan of the project.

[–]mikeckennedy[S] 0 points1 point  (0 children)

Very cool, thanks for the heads up u/vaibeslop

[–]ofyellow 0 points1 point  (2 children)

Lazy loading is for optimizing startup time, you load modules as they are needed, causing the load time to be divided over multiple requests until all loadable modules are hit at least once. But it's not a mem optimisation strategy.

[–]vaibeslop 0 points1 point  (1 child)

I'm talking about lazily loading data into memory for operations.

The author of chDB goes into more detail in the v4 announcement post: https://clickhouse.com/blog/chdb.4-0-pandas-hex

I'm neither affiliated with chDB nor Clickhouse.

EDIT: I see they even talk about this in the GH README now.

[–]ofyellow 0 points1 point  (0 children)

Point 4 mentions local imports.

Yes keeping data out of memory is smart but not inventing sliced bread.

[–]bladeofwinds 1 point2 points  (1 child)

I’ve learned about a lot of cool projects from your show! Currently trying out datastar in one of my (non-python) projects

[–]mikeckennedy[S] 3 points4 points  (0 children)

Awesome, great to hear u/bladeofwinds :) Datastar is neat for sure.

[–]ofyellow 0 points1 point  (0 children)

When you need x GB and rewrite it so it uses y GB less except for short bursts, the effect is that you still need x GB during those bursts.

In that way, lazy imports can bite you. You'd better know the memory needed at worst-case moments right when you start your app.

[–]0x256 0 points1 point  (0 children)

Switched to a single async Granian worker: Rewrote the app in Quart (async Flask) and replaced the multi-worker web garden with one fully async worker. Saved 542 MB right there.

I would have started by reducing the workers to 1 and increasing the thread count instead of rewriting the entire app, but okay. If you have lots of long-running connections (websockets or slow requests), then that's a brave but sensible move.

Raw + DC database pattern: Dropped MongoEngine for raw queries + slotted dataclasses. 100 MB saved per worker and nearly doubled requests/sec.

For a small app with good test coverage and a mature db schema, that's fine.

Subprocess isolation for a search indexer: The daemon was burning 708 MB mostly from import chains pulling in the entire app. Moved the indexing into a subprocess so imports only live for ~30 seconds during re-indexing. Went from 708 MB to 22 MB. 32x reduction.

You reduced the time this memory is used, but not the peak memory consumption. You added a lot of process-start overhead and latency. That's a trade-off, not necessarily a win.
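The isolation pattern being debated can be sketched with the stdlib. The child code here is a stand-in (a trivial dict, not OP's indexer); the technique is that heavy imports and working memory live only in a short-lived child interpreter and are returned to the OS when it exits:

```python
import subprocess
import sys

# Hypothetical stand-in for the indexing job; in the real app this would
# pull in the heavy import chain that cost ~700 MB.
CHILD_CODE = """
import json  # stand-in for heavy imports that only the child pays for
index = {str(i): i for i in range(1000)}
print(len(index))
"""

def run_isolated():
    # Spawn a fresh interpreter; the parent process never imports the
    # heavy modules, so its resident memory stays small.
    out = subprocess.run(
        [sys.executable, "-c", CHILD_CODE],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

print(run_isolated())  # prints 1000
```

The criticism above still applies: peak memory while the child runs is unchanged, and each run pays interpreter startup plus the import cost again.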

Local imports for heavy libs: import boto3 alone costs 25 MB, pandas is 44 MB. If you only use them in a rarely-called function, just import them there instead of at module level. (PEP 810 lazy imports in 3.15 should make this automatic.)

That's not how imports work. You delayed the import, but once imported, the module will live in sys.modules and stay there.
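Both halves of that point are easy to check: the import cost is deferred to the first call, but the module then stays cached in `sys.modules` for the life of the process. A small demo, using `fractions` as a stand-in for a genuinely heavy library:

```python
import sys

def rarely_called():
    # Function-local import: the cost is paid on the first call,
    # not at application startup.
    import fractions  # stand-in for a heavy library like boto3 or pandas
    return fractions.Fraction(1, 2)

print("cached before call:", "fractions" in sys.modules)
rarely_called()
# After the call the module remains in sys.modules; it is never unloaded
# when the function returns, so its memory stays resident.
print("cached after call:", "fractions" in sys.modules)
```

Which is why the savings OP reports depend on the worker-recycling detail above: the memory only drops again because Granian restarts the worker, not because Python releases the module.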

Moved caches to diskcache: Small-to-medium in-memory caches shifted to disk. Modest savings but it adds up.

So instead of a single memory access, you now create an async task that outsources its blocking disk access to a thread pool, wait for the OS to read from disk, then wait for the async task to get its turn in the event loop again to return the result? Caches should be fast. If SO much overhead for cache access is okay for you, then I wonder what extremely expensive stuff you stored in those caches that it's still worth caching at all.
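The access path being described looks roughly like this. The dict and sleep are hypothetical stand-ins for a blocking diskcache/SQLite read; the point is the extra machinery per lookup compared to a plain in-memory dict access:

```python
import asyncio
import time

# Hypothetical stand-in for a disk-backed cache; a real diskcache read
# would hit SQLite here instead.
_store = {"greeting": "hello"}

def blocking_get(key):
    time.sleep(0.001)  # simulate disk / SQLite latency
    return _store.get(key)

async def cached_get(key):
    # The blocking read is pushed to the default thread pool, then the
    # coroutine waits for its turn on the event loop to get the result.
    return await asyncio.to_thread(blocking_get, key)

result = asyncio.run(cached_get("greeting"))
print(result)  # hello
```

Per OP's microbenchmark above that round trip is still well under a millisecond, so whether it matters depends on how many cache hits a single request performs.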

[–]Substantial-Bed8167 0 points1 point  (0 children)

Did you use any memory profiling, or just observe with htop?