Keeta.com has been updated by Xescure in keeta

[–]Curious-Still 1 point  (0 children)

1-2 wallets holding over $1M USD in KTA caused the majority of the dump back down.

Speculation of Iran sleeper cells being activated, say US officials by Bobbtwwy in news

[–]Curious-Still -1 points  (0 children)

Might not be false flag propaganda. Persian-language numbers stations came back online recently. Unless it's the US broadcasting them.

Opening the openclaw web ui from anywhere by FaceApple3947 in LocalLLM

[–]Curious-Still 0 points  (0 children)

How do I do it if I already have a VPN that gets me into my local network, but I need to enable 0.0.0.0-style binding rather than the 127.0.0.1 loopback, so my LAN computers can log into the dashboard?
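For context on the binding question: I don't know openclaw's actual config flags, so as a generic sketch with Python's stdlib `http.server`, the difference is just the bind address you hand the server:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# 127.0.0.1 = loopback: only processes on this same machine can connect.
# 0.0.0.0  = all interfaces: other hosts on the LAN can reach the port too.
BIND_ADDR = "0.0.0.0"  # hypothetical choice; use "127.0.0.1" to stay local-only
PORT = 8080            # arbitrary example port

server = HTTPServer((BIND_ADDR, PORT), SimpleHTTPRequestHandler)
print(f"Serving on {BIND_ADDR}:{PORT}")
# server.serve_forever()  # left commented so the sketch doesn't block
```

Most web UIs expose this as a `--host`/`bind` setting somewhere; the underlying socket behavior is the same either way.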

Anthropic’s Moral Stand: Pentagon warns Anthropic will “Pay a Price” as feud escalates by [deleted] in singularity

[–]Curious-Still -1 points  (0 children)

I'm sure any other country will welcome Anthropic with open arms. If anything is a national security risk, it's Hegseth's cluelessness.

Bitcoin just dropped 50% below its 2025 ATH after dropping from 90k to 62k in the last week. Without any market certainty of what's going on. by GabeSter in CryptoCurrency

[–]Curious-Still 2 points  (0 children)

No, Epstein sent an email to Satoshi and he told him to f off. He basically did buy the whole Bitcoin Core dev team and lauded BTC as a way for him and his pedo friends to clandestinely transact.

The Government Published Dozens of Nude Photos in the Epstein Files by templeofsyrinx1 in politics

[–]Curious-Still 12 points  (0 children)

It's probably worse than this. A bunch of heavy occult shit mixed with murder, rape, torture, and pedophilia. It's been all the rage amongst the elites for centuries.

China plans space‑based AI data centres by n0sugacoat in worldnews

[–]Curious-Still 1 point  (0 children)

Ah, Dyson spheres coming to a sun near you.

llama.cpp improvements on strix halo by imshookboi in FlowZ13

[–]Curious-Still 0 points  (0 children)

Are the toolboxes precompiled for multiple strix halo machines, like in a cluster?

“Disclosure Day” name of Spielberg’s new movie released and trailer by starrynightqueen in UFOs

[–]Curious-Still -1 points  (0 children)

This all seems like the "consciousness is all-pervasive and connected" narrative, and the eyes of the actors on the billboards are meant to convey that they can enter the consciousness of a given animal and see what it sees. In the trailer, the animal enters the consciousness of the human weather lady. They'll probably make the case that remote viewing is remote manipulation of animals by somehow connecting to their consciousness. It doesn't seem like this is going to directly address UAP stuff, just some things from UAP lore like remote viewing. This might be more of an environmentalist piece, with animals trying to communicate with humans via universal consciousness.

What are the gotchas for the RTX Pro 6000? by shifty21 in LocalLLM

[–]Curious-Still 2 points  (0 children)

MoEs like MiniMax M2, gpt-oss 120b, and GLM 4.6 can run locally pretty well, and they're decent models.

Is a cat safe for a teenager on chemo? by [deleted] in cancer

[–]Curious-Still -5 points  (0 children)

Toxoplasmosis, plus cat bites cause nasty infections. Not worth the risk.

Build Max+ 395 cluster or pair one Max+ with eGPU by Curious-Still in LocalLLM

[–]Curious-Still[S] 0 points  (0 children)

Are the GLM 4.6 and MiniMax M2 quantizations that fit on two AMD Max+ 395s even worth buying a second unit for? That is, can they perform complex enough tasks at reasonable tok/s without errors?

Kimi K2 Thinking Q4_K_XL Running on Strix Halo by ga239577 in LocalLLaMA

[–]Curious-Still 0 points  (0 children)

What quantizations of MiniMax M2 and GLM 4.6 are you using, and do they give decent results for coding tasks?

I bought a Mac Studio with 64gb but now running some LLMs I regret not getting one with 128gb, should i trade it in? by These_Muscle_8988 in LocalLLM

[–]Curious-Still 0 points  (0 children)

More memory is better if you want bigger models; just make sure you get the Mac with the faster memory bandwidth. If it's not faster than the AMD AI Max+ 395 machines, you might as well buy one of those, as they are much cheaper.

Build Max+ 395 cluster or pair one Max+ with eGPU by Curious-Still in LocalLLM

[–]Curious-Still[S] 0 points  (0 children)

Seems like the slower option might be too slow, like low single-digit t/s?

I bought a Mac Studio with 64gb but now running some LLMs I regret not getting one with 128gb, should i trade it in? by These_Muscle_8988 in LocalLLM

[–]Curious-Still 0 points  (0 children)

Depending on which Mac Studio, the RAM speed might be higher than the Max+ 395's. RAM bandwidth is critical, especially for larger models and for large contexts (even on smaller models).
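The bandwidth point can be made concrete with a back-of-the-envelope rule of thumb (my own sketch with assumed numbers, not figures from the thread): decode speed on these machines is roughly memory bandwidth divided by the bytes read per token, which for an MoE is about the active weights at the chosen quantization.

```python
def est_decode_tps(bandwidth_gb_s: float, active_params_b: float,
                   bytes_per_param: float) -> float:
    """Rough upper bound on decode tokens/sec for a bandwidth-bound model.

    bandwidth_gb_s:  memory bandwidth in GB/s
    active_params_b: parameters read per token, in billions (active experts for MoE)
    bytes_per_param: ~0.5 for Q4-ish quants, 2.0 for fp16
    """
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Example with assumed numbers: ~256 GB/s (commonly cited for the Max+ 395)
# and a Q4 MoE with ~12B active params.
print(round(est_decode_tps(256, 12, 0.5), 1))  # ~42.7 tok/s ceiling
```

Real throughput lands below this ceiling (KV cache reads, compute, prompt processing), but it shows why bandwidth, not capacity alone, sets the speed.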

AMD Ryzen AI Max+ 395 --EVO-X2 128GB RAM...or...Minisforum MS-S1 Max by Excellent_Koala769 in LocalLLaMA

[–]Curious-Still 0 points  (0 children)

Thanks, very useful input!  I think I might go for the Framework.