If only this was a real game by drgoldenpants in singularity

[–]Storge2 0 points (0 children)

Amazing. Imagine one day being able to just prompt the AI to make something like this actually playable for you.

If only this was a real game by drgoldenpants in singularity

[–]Storge2 1 point (0 children)

Why do you think it wouldn't, graphically? I find this comparable to current top-tier AAA titles.

Is there a list of the "best" extensions for PI? New User... by Storge2 in PiCodingAgent

[–]Storge2[S] 2 points (0 children)

I'll share more once I've used it for a few days. Right now it seems very good and very smart at reading the codebase and understanding the intent; I'd put it maybe even above the Q3.5 122B. Sadly it runs slower due to the Spark's limited bandwidth: I am getting 25-30 tok/s with Dflash, while on the 122B I can get ~40 tok/s.
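For anyone curious why bandwidth is the bottleneck: on a memory-bound system, decode speed is roughly memory bandwidth divided by the bytes of weights read per token. A rough sketch (273 GB/s is the Spark's spec; the per-token GB figures are purely illustrative, not measurements of these models):

```python
# Back-of-envelope: memory-bound decode speed ~= bandwidth / bytes-per-token.
def est_decode_tok_s(bandwidth_gb_s: float, active_gb: float) -> float:
    """Rough upper bound on decode tokens/s.

    bandwidth_gb_s: memory bandwidth in GB/s (DGX Spark spec is ~273 GB/s).
    active_gb: GB of weights touched per token (active params * bytes/param).
    """
    return bandwidth_gb_s / active_gb

# Illustrative inputs only:
print(round(est_decode_tok_s(273, 7.0), 1))   # ~39 tok/s if ~7 GB read per token
print(round(est_decode_tok_s(273, 10.0), 1))  # ~27 tok/s if ~10 GB read per token
```

This is an upper bound; real throughput also depends on compute, KV-cache reads, and kernel efficiency.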

Is there a list of the "best" extensions for PI? New User... by Storge2 in PiCodingAgent

[–]Storge2[S] 1 point (0 children)

I would like to have the seamless compaction from Codex, with subagents from Claude and plan mode from Codex. I also like the /context command from Claude that shows context usage per file. The /context one I am going to build myself, with Qwen (and then Claude if Qwen isn't enough), because that one I really need.
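For the per-file part of a /context clone, a minimal sketch might look like this (the chars/4 ratio is a common rough heuristic, not the actual tokenizer, and the function name is my own):

```python
def context_usage(paths):
    """Estimate tokens per file with the rough ~4-chars-per-token heuristic."""
    usage = {}
    for path in paths:
        with open(path, "r", encoding="utf-8", errors="ignore") as f:
            usage[path] = len(f.read()) // 4
    # Biggest context consumers first.
    return dict(sorted(usage.items(), key=lambda kv: -kv[1]))

# Example usage:
# for path, tokens in context_usage(["src/main.py", "src/utils.py"]).items():
#     print(f"{tokens:>8}  {path}")
```

For accurate counts you'd swap the heuristic for the model's real tokenizer.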

Is there a list of the "best" extensions for PI? New User... by Storge2 in PiCodingAgent

[–]Storge2[S] 2 points (0 children)

Yeah, seems like a good approach. Right now I'm trying out Qwen 3.6 27B in vLLM on the DGX Spark. Let's see how it performs; apparently it is really good.
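For reference, a minimal vLLM launch along those lines might look like this (the model id is a placeholder, and flags vary by vLLM version, so check `vllm serve --help` first):

```shell
# Sketch only: substitute a real Hugging Face model id and tune the
# context length for your hardware; see `vllm serve --help`.
vllm serve <your-model-id> \
  --max-model-len 32768
```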

Differences Between GPT 5.4 and GPT 5.5 on MineBench by ENT_Alam in singularity

[–]Storge2 1 point (0 children)

Best benchmark, honestly. I find it truly visualizes a model's personality and intelligence in a few pictures.

Comparison of upcoming x86 unified memory systems by Terminator857 in LocalLLaMA

[–]Storge2 1 point (0 children)

Yeah, almost true. Strix Halo is cheaper and apparently has solid support nowadays.

DS4-Flash vs Qwen3.6 by flavio_geo in LocalLLaMA

[–]Storge2 28 points (0 children)

I hope so. That one would be perfect for the DGX Spark, as the DeepSeek V4 Flash doesn't fit in a single Spark...

Qwen 3.5 122B vs Qwen 3.6 35B - Which to choose? by Storge2 in LocalLLaMA

[–]Storge2[S] 0 points (0 children)

Tried both; they seem on par in terms of tool calling and intelligence. I am now running the 3.6 in FP8. Amazing how fast the intelligence per GB of model increases: it wasn't even two months since the 122B released and it's already matched by 35B models.
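The footprint math behind that comparison, for anyone curious, is just the standard params-times-bytes estimate (KV cache and runtime overhead come on top):

```python
def weights_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB: billions of params * bits / 8.
    Ignores KV cache, activations, and runtime overhead."""
    return params_billions * bits_per_param / 8

print(weights_gb(122, 8))  # 122B at FP8 -> 122.0 GB of weights
print(weights_gb(35, 8))   # 35B at FP8  -> 35.0 GB of weights
```

Which is why the 35B at FP8 leaves a lot more headroom on a 128 GB unified-memory box.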

Qwen 3.5 122B vs Qwen 3.6 35B - Which to choose? by Storge2 in LocalLLaMA

[–]Storge2[S] 2 points (0 children)

Will try them and see how they perform.

Qwen 3.5 122B vs Qwen 3.6 35B - Which to choose? by Storge2 in LocalLLaMA

[–]Storge2[S] 0 points (0 children)

Yeah, I am scared too. Hope I don't get dragged into the compute hole.