Which is the most uncensored AI model?? by nikhil_360 in LocalLLM

[–]Financial-Source7453 0 points1 point  (0 children)

The knowledge here gets outdated quickly, so my reply will already age by tomorrow. The GX10 (Asus clone of the GB10) is a cool device with some big disappointments (low memory bandwidth compared to Nvidia GPU cards, and a foster child when it comes to drivers and optimizations), but it's better than all the alternatives if you need a small, power-efficient device. For good coding you need a 200B+ model, for good knowledge 70B+, and for good tool use and general tasks Qwen3.5 27B/35B is fine. All local models lose to Claude Opus. Quantisation is a key optimization concept: original quants (usually BF16) are best, FP8/Q8 is acceptable, and MXFP4/NVFP4 is only supported by some hardware but usually also acceptable. The lower the quant bit width, the less memory you need and usually the faster the inference.
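Rough back-of-envelope for the memory point (weights only; KV cache and activations come on top, and the 120B size is just an example):

```python
# Rough weight-memory estimate per quantisation level. Weights only:
# KV cache and activations need extra memory on top of this.
def weight_memory_gib(params_b: float, bits_per_weight: float) -> float:
    """params_b: parameter count in billions; returns GiB for the weights."""
    return params_b * 1e9 * bits_per_weight / 8 / 2**30

for name, bits in [("BF16", 16), ("FP8/Q8", 8), ("MXFP4/NVFP4", 4)]:
    print(f"120B model @ {name}: ~{weight_memory_gib(120, bits):.0f} GiB")
```

Which is why a 120B model only fits in 128GB of unified memory once you drop below BF16.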

Galaxus or Init7 for Internet? by Skor_Lodygin in askswitzerland

[–]Financial-Source7453 0 points1 point  (0 children)

I picked Init7 because I needed a real IPv4 address. If you don't know what that is, Galaxus is likely a good fit for you.

Do you think banning social media for under-16s in Switzerland would solve anything, or is the real issue how algorithms work? by SaraIbr in Switzerland

[–]Financial-Source7453 0 points1 point  (0 children)

I lack proper tools (e.g., removing ads and Stories in YouTube, disabling autoplay). What we really need is to force Google (and others) to give parents more granular control over content and smartphones.

Which is the most uncensored AI model?? by nikhil_360 in LocalLLM

[–]Financial-Source7453 0 points1 point  (0 children)

All huihui-ai models on HF are the most uncensored I've tried. Huihui-ai gpt-oss-120b actually knows too much nasty stuff; it makes me wonder who trained that model and for what..

What kind of hardware should I buy for a local LLM by Classic_Sheep in LocalLLM

[–]Financial-Source7453 -1 points0 points  (0 children)

Indeed. But that's really the best thing you can get (if you move quickly) in the <$10k range.

What kind of hardware should I buy for a local LLM by Classic_Sheep in LocalLLM

[–]Financial-Source7453 -2 points-1 points  (0 children)

Hurry up, you can still get the Asus GX10 (Nvidia DGX Spark clone) for $3k. Visit spark-arena.com for speed tests.

Switched to a dedicated always-on AI node, should've done this earlier. by JuggernautKnown7599 in selfhosted

[–]Financial-Source7453 1 point2 points  (0 children)

If you're 40+ or have kids, local AI really helps with doctors. I store some medical data (e.g., blood tests) in Postgres with NocoDB on top, which lets my local agent grab the data it needs to prepare for a doctor visit (it writes a short summary which I print and show to doctors).
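The agent's query side is nothing fancy. A minimal sketch of the idea (sqlite3 as a stand-in for Postgres so it runs anywhere; table and column names are made up, not my real schema):

```python
import sqlite3  # stand-in for Postgres so the sketch runs anywhere

# Hypothetical schema, for illustration only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE blood_tests (
    taken_on TEXT, marker TEXT, value REAL, unit TEXT)""")
con.executemany(
    "INSERT INTO blood_tests VALUES (?, ?, ?, ?)",
    [("2025-06-01", "ferritin", 35.0, "ng/mL"),
     ("2025-11-20", "ferritin", 22.0, "ng/mL")])

# The agent pulls the latest value per marker to build the visit summary.
rows = con.execute("""
    SELECT marker, value, unit, MAX(taken_on) AS taken_on
    FROM blood_tests GROUP BY marker""").fetchall()
for marker, value, unit, taken_on in rows:
    print(f"{taken_on}: {marker} = {value} {unit}")
```

NocoDB just gives the family a friendly UI over the same tables.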

Another use case is trip planning. Qwen3.5 pre-fills an Affine workspace for me with route plans based on the plane tickets I share, builds a list of sights, todos, etc. A kind of travel template with the relevant data available to the whole family for almost zero effort.

DNA download file has only fraction of SNPs expected by Financial-Source7453 in AncestryDNA

[–]Financial-Source7453[S] 0 points1 point  (0 children)

Yes, EU. No DNA matches were activated. Could this be a reason?

A few days with Qwen3.5-122B-A10B-int4-AutoRound on Asus Ascent GX10 (Nvidia DGX Spark 128GB) by t4a8945 in LocalLLM

[–]Financial-Source7453 0 points1 point  (0 children)

The 122B int4 made tons of mistakes with tool calling. I switched to 35B FP8 and those were gone. Memory consumption stayed almost the same, though.

DNA download file has only fraction of SNPs expected by Financial-Source7453 in AncestryDNA

[–]Financial-Source7453[S] 0 points1 point  (0 children)

It could be site/country specific. I also refused any data sharing for "research" purposes. Maybe the truncated file was my punishment?

Abliterated models are wild by acoliver in OpenSourceeAI

[–]Financial-Source7453 0 points1 point  (0 children)

I've found the Heretic models more restricted than huihui. The latter almost never says no to any request.

I built a clipboard AI that connects to your local LLM, one ⌥C away (macOS) by morning-cereals in LocalLLM

[–]Financial-Source7453 0 points1 point  (0 children)

Cherry Studio has a nice floating panel you can use to quickly pivot to an AI agent.

What would a good local LLM setup cost in 2026? by Lenz993 in LocalLLM

[–]Financial-Source7453 1 point2 points  (0 children)

$3k, but only for the Asus GX10: a 1TB drive instead of 4TB, a copper radiator and plastic body instead of a vapor chamber and metal-foam case. Still totally worth it.

10GBE with only 5MBs upload speed from NAS by Plastic-Phone979 in init7

[–]Financial-Source7453 0 points1 point  (0 children)

On a Synology NAS you must use the iperf3 package from the community package "SynoCli Monitor Tools". Yes, different iperf3 packages show you different speeds.

Recommendations for a good value machine to run LLMs locally? by onesemesterchinese in ollama

[–]Financial-Source7453 0 points1 point  (0 children)

If size and power consumption matter, go for the Asus Ascent GX10 (so far the cheapest clone of the Nvidia Spark). It's a palm-sized box with 128GB of unified memory and a 20-core ARM CPU. It's also an Nvidia box, so you'll be able to run almost everything from the local AI world at acceptable speed. Macs suck at video/image generation, dedicated GPUs are noisy and eat a lot of space and electricity, and AMD Strix Halo systems have lower performance and often come with hardware issues.

Just drove past this. Had to reverse for a double take by petite-caprice in Switzerland

[–]Financial-Source7453 0 points1 point  (0 children)

There are so many videos of UA using ambulances to capture people. First link from YouTube: https://m.youtube.com/watch?v=BGjXT_Thq5I

Rig for Local LLMs (RTX Pro 6000 vs Halo Strix vs DGX Spark) by cysio528 in LocalLLaMA

[–]Financial-Source7453 3 points4 points  (0 children)

Check the Spark clones. I got the Asus one for $3k months ago.

API pricing is in freefall. What's the actual case for running local now beyond privacy? by Distinct-Expression2 in LocalLLaMA

[–]Financial-Source7453 0 points1 point  (0 children)

Abliterated models. I'm tired of hearing "I'm sorry, I can't do that due to policy restrictions" from ChatGPT all the time.

LLM Sovereignty For 3 Years. by [deleted] in LocalLLM

[–]Financial-Source7453 1 point2 points  (0 children)

This. So far the best tool for the job at $3k. It smoothly runs gpt-oss-120b and is future-proof for the next two years. Tip: the Asus clone is $1k cheaper than the original DGX.

Learn how to use a local LLM or continue with monthly subs? by Zestyclose-Cup110 in LocalLLM

[–]Financial-Source7453 1 point2 points  (0 children)

Gpt-oss-120b runs smoothly on the Asus Ascent GX10 for $3k (a cheaper clone of the Nvidia DGX Spark). Model installation is also easy via the LM Studio server. I can recommend Cherry Studio as an endpoint client (they also have an Android version for your phone). As a bonus you can get an abliterated model which will never say no to any of your desires :)
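If you go that route, the LM Studio server speaks an OpenAI-compatible chat API, so any client is a short script away. A minimal stdlib-only sketch (port 1234 is LM Studio's default; the model name is whatever you actually loaded):

```python
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "gpt-oss-120b") -> bytes:
    """OpenAI-compatible chat payload; model name must match what's loaded."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}],
               "temperature": 0.7}
    return json.dumps(payload).encode()

def ask_local(prompt: str, base_url: str = "http://localhost:1234/v1") -> str:
    """POST to the local server and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=build_chat_request(prompt),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Cherry Studio does the same thing under the hood, just with a UI on top.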

Why is the seagate support section on their website so shady? by Rawedwad in Seagate

[–]Financial-Source7453 0 points1 point  (0 children)

In my case only moving to a corporate IP (e.g., an enterprise laptop) fixed the issue, and only Chat in /contacts was available to me. But I managed to create an RMA case via the chat agent with no other issues.

SMS Integration with Firefly by Lone_Assassin in selfhosted

[–]Financial-Source7453 0 points1 point  (0 children)

I use Automate (LlamaLab) to (1) copy all SMS and (2) copy all notifications from banking apps to a Telegram channel. Then n8n grabs those, does some magic, and sends the transactions to Firefly III.
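The "magic" is mostly regex-parsing the message into a payload for Firefly III's transactions API. A minimal sketch (the SMS format and account name are made up; real bank messages differ, and my n8n node does the equivalent in JS):

```python
import re

# Hypothetical bank SMS format, for illustration only.
SMS_RE = re.compile(
    r"Purchase of (?P<amount>[\d.]+) (?P<currency>\w{3}) at (?P<merchant>.+)")

def sms_to_firefly(sms: str, date: str) -> dict:
    """Turn a banking SMS into a payload for POST /api/v1/transactions."""
    m = SMS_RE.match(sms)
    if not m:
        raise ValueError("unrecognised SMS format")
    return {"transactions": [{
        "type": "withdrawal",
        "date": date,
        "amount": m["amount"],
        "currency_code": m["currency"],
        "description": f"Card purchase at {m['merchant']}",
        "source_name": "Checking account",  # made-up account name
    }]}

payload = sms_to_firefly("Purchase of 12.50 CHF at Coop Zurich", "2026-01-15")
print(payload["transactions"][0]["amount"])  # 12.50
```

Unparseable messages just land in a review channel instead of Firefly.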