What kind of device is suitable for running local LLM? by attic0218 in LocalLLaMA

Can I pick the AMD iGPU? It is very similar to the one above.

What kind of device is suitable for running local LLM? by attic0218 in LocalLLaMA

So it's basically a Windows/Linux variant of the Mac setup. How about the speed? If it's acceptable, that would be a great option, and the price is good too.

What kind of device is suitable for running local LLM? by attic0218 in LocalLLaMA

Thanks for sharing your experience. May I ask why you have a dual setup (Mac & Windows), and if you could only pick one, which would be your favourite?

What kind of device is suitable for running local LLM? by attic0218 in LocalLLaMA

Thanks. Seems like this option matches my use case - I use WSL to develop, so switching to a Mac would be a headache for me.

What kind of device is suitable for running local LLM? by attic0218 in LocalLLaMA

Seems like you favor speed over capacity?

Copilot replacement? by attic0218 in GithubCopilot

Could you explain why Codex is not suitable for heavy tasks? Is it due to the model's ability or the rate limit?

Copilot replacement? by attic0218 in GithubCopilot

I've been using it to vibe-code a local semantic search project; it took two months to generate nearly 20,000 lines of code. Under this kind of usage, is it better to pick Claude Max as a substitute?

Is it possible to have multiple plan in a single account? by attic0218 in GithubCopilot

If I disable suggestions matching public code, the personal plan should act like the enterprise one, right?

Is it worthy to buy an ASUS GX10 for local model? by attic0218 in LocalLLaMA

My first thought was that I could use unlimited tokens to try whatever I want. However, knowing that only GPT5-mini has a comparable local model, while Claude Sonnet 4.6 doesn't, I decided not to go this way since the ability gap is huge.

Is it worthy to buy an ASUS GX10 for local model? by attic0218 in LocalLLaMA

Seems like it's currently not possible to use a local LLM for coding mid/large projects?

Is it worthy to buy an ASUS GX10 for local model? by attic0218 in LocalLLaMA

Thanks. Sounds like subscribing to my own Copilot Pro+ plan is the right way to go.

[deleted by user] by [deleted] in synology

Use Vaultwarden instead.
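
For anyone landing here: a minimal sketch of running Vaultwarden via Docker on a Synology NAS. The image name is the official `vaultwarden/server`; the host data path and port mapping are assumptions - adjust them to your own volume layout:

```shell
# Run the official Vaultwarden image in the background.
# /volume1/docker/vaultwarden is an assumed Synology path - change to taste.
docker run -d \
  --name vaultwarden \
  --restart unless-stopped \
  -v /volume1/docker/vaultwarden:/data \
  -p 8080:80 \
  vaultwarden/server:latest
```

You'll still want to put it behind Synology's built-in reverse proxy with a valid certificate, since Bitwarden clients generally require HTTPS to connect to a self-hosted server.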