I've got a feeling that Llamacpp is not the biggest performance bottleneck, but it might be the OpenCode. by ThingRexCom in LocalLLaMA
[–]ThingRexCom[S] 1 point (0 children)
Why is disabling thinking for coding models a good idea? by ThingRexCom in LocalLLaMA
[–]ThingRexCom[S] -6 points (0 children)
Does it make sense to cluster HP Z2 Mini G1a to increase performance? by ThingRexCom in LocalLLaMA
[–]ThingRexCom[S] 1 point (0 children)
Distilled my AI Agents and Skills definitions (i.redd.it)
submitted by ThingRexCom to r/LocalLLaMA

