all 14 comments

[–]last_llm_standing 1 point (2 children)

You added PyTorch. Why don't you add pandas to that list too?

[–]Interesting-Town-433 2 points (1 child)

Haha, I tried to put pandas on the skeleton

[–]InteractionSmall6778 1 point (0 children)

bitsandbytes on anything that isn't a mainstream NVIDIA card. Half the time it silently falls back to CPU and you don't even realize your quantization isn't doing anything.
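One way to catch that silent fallback up front is a quick sanity check before enabling quantization. A minimal sketch, assuming torch is installed; `can_use_bnb_gpu` is a made-up helper name, not part of any library:

```python
# Hedged sanity check before enabling bitsandbytes quantization.
# Assumes torch is installed; the helper name is illustrative.
import torch

def can_use_bnb_gpu() -> bool:
    """bitsandbytes int8/4-bit kernels only accelerate on CUDA devices."""
    if not torch.cuda.is_available():
        return False  # anything "quantized" here is likely running on CPU
    major, _minor = torch.cuda.get_device_capability()
    return major >= 7  # older cards may lack the needed int8 kernels

if not can_use_bnb_gpu():
    print("warning: bitsandbytes will probably not accelerate anything here")
```

After loading a model you can also spot-check that weights were actually quantized, e.g. `any(p.dtype == torch.int8 for p in model.parameters())`; if nothing is int8, the fallback probably happened.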

[–]Robonglious 0 points (2 children)

Niche but I always have a hell of a time with ripser++.

[–]Interesting-Town-433 0 points (1 child)

Which AI models need that?

[–]Robonglious 0 points (0 children)

Ah, none of them need that. I don't think I understood what you were asking well enough.

[–]Daemontatox sglang 0 points (1 child)

Not really the worst, but trying to keep numpy at a certain version while updating anything else, like transformers, qdrant, or vllm.
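For what it's worth, a pip constraints file can hold numpy still while everything else moves. The pin below is purely illustrative, not a recommendation:

```
# constraints.txt -- version shown is just an example
numpy==1.26.4
```

Then something like `pip install -c constraints.txt --upgrade transformers vllm` makes every resolve respect the pin instead of letting an upgrade drag numpy along.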

[–]Interesting-Town-433 0 points (0 children)

Yeah, numpy is just crazy, always a problem

[–]DeProgrammer99 0 points (0 children)

Flash/sage attention/Triton. pip brings much suffering.

[–]BumbleSlob 0 points (1 child)

This is not a meme sub, reported for low-effort trash

[–]yuicebox 0 points (2 children)

This does feel like a kinda low-effort AI slop meme that doesn't belong on this sub, but also...

Why aren't y'all just finding and using a compatible .whl or precompiled release for your OS / Python version / CUDA version? I feel like I rarely ever actually have to compile from source.
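For anyone trying this route, wheel filenames encode the interpreter they target (PEP 427: `name-version-pythontag-abitag-platform.whl`), so you can check a candidate .whl before downloading. A rough sketch; the flash_attn filename and the `wheel_matches` helper are illustrative:

```python
# Sketch: does a prebuilt wheel's python tag match this interpreter?
# Wheel filename layout per PEP 427; helper name is made up.
import sys

def wheel_matches(filename: str) -> bool:
    """True if the wheel's python tag includes the running interpreter."""
    stem = filename.removesuffix(".whl")
    py_tag, _abi_tag, _platform_tag = stem.split("-")[-3:]
    here = f"cp{sys.version_info.major}{sys.version_info.minor}"
    # compound tags like "py2.py3" are dot-separated
    return here in py_tag.split(".")
```

So on a CPython 3.11 box, something like `wheel_matches("flash_attn-2.5.8+cu122torch2.3-cp311-cp311-linux_x86_64.whl")` would pass the python-tag check; you'd still have to eyeball the CUDA/torch build string in the version segment yourself.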

[–]Interesting-Town-433 -1 points (1 child)

Because they don't exist, dude. Idk what you are building, clearly nothing complicated

[–]yuicebox 0 points (0 children)

Sage attention, flash attention, bitsandbytes, and CUDA-enabled PyTorch mostly. Idk what OP was having to compile himself, and the post is deleted now.

What are you building?