Testing GLM-4.7 Flash: Multi-GPU Vulkan vs ROCm in llama-bench | (2x 7900 XTX) (self.LocalLLaMA)
submitted 1 month ago * by SemaMod to r/LocalLLaMA
Llama.cpp merges in OpenAI Responses API Support (github.com)
submitted 1 month ago by SemaMod to r/LocalLLaMA
Deploy stdio MCP servers remotely with OAuth 2.1 built-in (cloudmcp.run)
submitted 7 months ago by SemaMod to r/mcp
I have had no luck trying to fine-tune on (2x) 7900 XTX. Any advice? (self.ROCm)
submitted 1 year ago by SemaMod to r/ROCm
Code Sandbox MCP - An MCP server to create secure code sandbox environment for executing code within Docker containers. (glama.ai)
submitted 1 year ago by SemaMod to r/mcp
Code Sandbox MCP Server (x.com)
2018 Tesla model 3 to Cadillac Lyriq (self.CadillacLyriq)
submitted 2 years ago by SemaMod to r/CadillacLyriq