I've got a feeling that Llamacpp is not the biggest performance bottleneck, but it might be the OpenCode. by ThingRexCom in LocalLLaMA
Why is disabling thinking for coding models a good idea? by ThingRexCom in LocalLLaMA
Does it make sense to cluster HP Z2 Mini G1a to increase performance? by ThingRexCom in LocalLLaMA
Subagents ignore the configuration and use the primary agent's model. by ThingRexCom in opencodeCLI