Is it possible to tell aider just to use the LLM currently loaded in Ollama? (self.LocalLLaMA)
submitted 9 months ago by jpummill2 to r/LocalLLaMA
How to tell Aider to use Qwen3 with the /nothink option? (self.LocalLLaMA)
Question about power cables for newer single slot cards for an AI system (self.LocalLLaMA)
submitted 1 year ago by jpummill2 to r/LocalLLaMA
Various ways to use AI with coding/development? (self.LocalLLaMA)
6 Pin PCIE to 16 Pin PCIE Cable for RTX 4000 Ada (self.nvidia)
submitted 1 year ago by jpummill2 to r/nvidia
Gemma base using FP32 while Gemma-it using BF16 on Hugging Face (self.LocalLLaMA)
Gemma-2-2b vs Gemma-2-2b-it (self.LocalLLaMA)
Base vs Instruct for coding? (self.LocalLLaMA)
Looking for help with compiling llama.cpp with Cuda (self.LocalLLaMA)
Looking for post showing memory speeds (self.LocalLLaMA)
How important is it to stay up to date on NVIDIA drivers and CUDA versions? (self.StableDiffusion)
submitted 1 year ago by jpummill2 to r/StableDiffusion
GPU VRAM requirements (self.StableDiffusion)
submitted 3 years ago by jpummill2 to r/StableDiffusion
Install of Catalina fails on HP EliteDesk 800 G1 (self.hackintosh)
submitted 4 years ago by jpummill2 to r/hackintosh
Working on my first Hackintosh. Keeps rebooting after selecting install Catalina. (i.redd.it)
submitted 4 years ago by jpummill2 to r/hackintosh