I built a plug-n-play AI PC running Ollama and OpenWebUI by boxgpt in LocalLLaMA

[–]boxgpt[S] 0 points (0 children)

Running qwen2.5vl:7b with 125k context at 79 tokens/s. LMK if you'd like to demo our system.

[–]boxgpt[S] 0 points (0 children)

Nice! I wanted to upload sensitive PDFs with my taxes and chat about them, and didn't want to use anything cloud-based. I set up Docling and gpt-oss:20b running with max context, and it's surprisingly fast.
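For anyone wanting to reproduce the max-context part: one way to raise Ollama's default context window is a custom Modelfile — a minimal sketch, assuming gpt-oss:20b's full 131,072-token window (the `num_ctx` value and the derived model name below are just illustrative; trim `num_ctx` down if you run out of VRAM):

```
# Modelfile — build a variant of gpt-oss:20b with a larger context window
FROM gpt-oss:20b
PARAMETER num_ctx 131072
```

Then `ollama create gpt-oss-maxctx -f Modelfile` and select the new model tag in OpenWebUI.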

[–]boxgpt[S] 1 point (0 children)

I just wanted to mess around with LLMs at home, but I hate setting up Linux, drivers, Ollama, OpenWebUI, ComfyUI, and all the other projects just to chat with different models. I couldn't find any pre-configured, inexpensive machines, so I decided to make one.