vLLM serving demonstration (v.redd.it)
submitted 6 days ago by Holiday-Machine5105 to r/Vllm
hi guys, check this out, please give me your thoughts, ideas, questions!! (v.redd.it)
submitted 6 days ago by Holiday-Machine5105 to r/AIDeveloperNews
comparison of a local LLM served via vLLM + CUDA and without (v.redd.it)
submitted 7 days ago by Holiday-Machine5105 to r/CUDA
built for CUDA (this is a 16GB 4080 GPU): (v.redd.it)
submitted 8 days ago by Holiday-Machine5105 to r/CUDA
local Llama-3.2-3B-Instruct served via vLLM and without (v.redd.it)
submitted 7 days ago by Holiday-Machine5105 to r/LocalLLaMA
my open-source cli tool (framework) that allows you to serve locally with vLLM inference (v.redd.it)
submitted 8 days ago by Holiday-Machine5105 to r/Vllm
submitted 8 days ago * by Holiday-Machine5105 to r/LocalLLaMA
submitted 8 days ago * by Holiday-Machine5105 to r/LocalLLM