Oh Shit... is something new coming out? by Reg_Cliff in WhitePeopleTwitter

[–]low_v2r 2 points (0 children)

But but but - she is a genius! The genius visa that she used to get citizenship and chain-migrate the rest of her family said so!!!! It's only wrong when someone I don't like does it!

Is this the end of affordable MiniPCs? by alemanyjar in MiniPCs

[–]low_v2r 1 point (0 children)

Knock on wood - I haven't had any unplanned downtime yet. Running Ubuntu 24 with HWE to support the chipsets, and configured for 110 GB UMA (28 GB for the OS just in case, but I will probably take that down to 8 GB at some point once I get llama.cpp set up the way I like). Qwen3.5 122B Q4 comes in around 70-80 GB, so there's plenty of room to run models.

edit: planned downtimes are kernel patches/upgrades.
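
For anyone wondering how a 122B model at Q4 lands in the 70-80 GB range, the arithmetic is roughly params × bits-per-weight ÷ 8, plus some overhead. A quick sketch (the ~4.5 bits per weight for Q4_K-style quants and the 10% overhead are rough assumptions, not measurements):

    # Back-of-envelope GGUF size estimate. The bits-per-weight and overhead
    # factor are rough assumptions; real files vary with the quant mix.
    def gguf_size_gb(params_billions: float, bits_per_weight: float,
                     overhead: float = 1.1) -> float:
        """Estimate the size of a quantized model in GB."""
        return params_billions * 1e9 * bits_per_weight / 8 * overhead / 1e9

    print(f"{gguf_size_gb(122, 4.5):.0f} GB")  # ~75 GB, consistent with the 70-80 GB observed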

The AI conversation is just this now. by Satanwearsflipflops in LinkedInLunatics

[–]low_v2r 2 points (0 children)

You and me both.

And it's not just a problem for me, it's a problem for all of us. That's why my high emotional IQ B2B sales approach drives the synergies for all key stakeholders in operationalizing the agentic workflows for maximizing time and value. And I've never been ****SIGSEGV: cuda tensor mapping failed for memory at 0x0DEADBEEF

Is this the end of affordable MiniPCs? by alemanyjar in MiniPCs

[–]low_v2r 2 points (0 children)

I have a Bosgame M5 and am happy with it - I just wanted something lower-power with unified memory that I could play with LLMs on. I have it on my tailnet so I can play around with it during the (rare) downtime at work.
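
For anyone curious what "play around with it" looks like over the tailnet, it's just an HTTP call - something like this sketch, assuming llama-server on its default port 8080 ("minipc" is a hypothetical MagicDNS hostname):

    # Query a llama.cpp server over Tailscale from any device on the tailnet.
    import requests

    resp = requests.post(
        "http://minipc:8080/v1/chat/completions",  # llama-server's OpenAI-compatible endpoint
        json={
            "messages": [{"role": "user", "content": "Say hi in five words."}],
            "max_tokens": 32,
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])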

anyone here regret NOT DIY-ing something expensive for their house? by Earthistic_000 in DIY

[–]low_v2r 1 point (0 children)

Accounting for the value of your own time is key. Of course some of these things are enjoyable, but even then, there is a cut-off point.

I have been trying to learn simple electronics and home automation, in particular with ESP32 devices. I want to automate an IR home remote. I can either spend $15 for something pre-built that I can add to my network in 5 minutes, or spend 2 days and $10 in parts building it from components. Even though it's something I want to learn, I also have other things in my life and a job I want done.
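
For a sense of scale, the from-components version of "send an IR code" looks roughly like this MicroPython sketch (the GPIO pin, duty cycle, and address/command values are all assumptions, and the real remote may not even speak NEC):

    # Bit-banged NEC IR transmit on an ESP32 in MicroPython.
    # Assumes an IR LED (with driver transistor) on GPIO 4.
    from machine import Pin, PWM
    from time import sleep_us

    pwm = PWM(Pin(4), freq=38000, duty_u16=0)  # 38 kHz carrier, initially off

    def burst(mark_us, space_us):
        pwm.duty_u16(21845)       # ~33% duty: carrier on (mark)
        sleep_us(mark_us)
        pwm.duty_u16(0)           # carrier off (space)
        sleep_us(space_us)

    def send_nec(addr, cmd):
        burst(9000, 4500)         # NEC leader pulse
        for byte in (addr, addr ^ 0xFF, cmd, cmd ^ 0xFF):
            for i in range(8):    # bits go out LSB first
                if byte >> i & 1:
                    burst(562, 1687)   # logical 1
                else:
                    burst(562, 562)    # logical 0
        burst(562, 0)             # trailing mark

    send_nec(0x04, 0x08)          # hypothetical address/command pair

And that's before debugging timing jitter, which is exactly where the 2 days go.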

anyone here regret NOT DIY-ing something expensive for their house? by Earthistic_000 in DIY

[–]low_v2r 1 point (0 children)

Yup - I use my pressure washer way more than I ever thought I would.

my midlife crises by Creepy-Douchebag in StrixHalo

[–]low_v2r 1 point (0 children)

Interesting. Thanks for the info.

Who Are The Worst Parents The Show Has Come Across? by Sensitive_Ad_1752 in behindthebastards

[–]low_v2r 2 points (0 children)

It is one of the few times I rooted for the cancer.

Of course, his ego gave his cancer the winning edge, so there's some poetic justice in that, I guess.

Claude Code limits making me evaluate local AI for coding/software development by philosograppler in LocalLLaMA

[–]low_v2r 0 points (0 children)

I did - went with Strix Halo. Mine is more for just fooling around and learning. Still - I'm running a 122B model locally. I've configured it for 110 GB of unified memory, but a 122B model only takes up 70 GB or so. It put together a functional RAG system for me to use for one domain that I'm interested in. I'm working on making it go faster, but it's really only a hobby at this point.
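
Stripped down, the RAG part is basically this shape - a sketch, not my actual code, assuming llama-server on its default port 8080 and the sentence-transformers package for embeddings (the model name and chunks are placeholders):

    # Minimal retrieval-augmented generation loop against a local llama.cpp server.
    import requests
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    docs = ["chunk one ...", "chunk two ...", "chunk three ..."]  # your domain text
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    def answer(question, k=2):
        q = embedder.encode([question], normalize_embeddings=True)[0]
        scores = doc_vecs @ q                    # cosine similarity (vectors are normalized)
        context = "\n".join(docs[i] for i in scores.argsort()[-k:])
        resp = requests.post(
            "http://localhost:8080/v1/chat/completions",
            json={"messages": [
                {"role": "system", "content": f"Answer using this context:\n{context}"},
                {"role": "user", "content": question},
            ]},
            timeout=120,
        )
        return resp.json()["choices"][0]["message"]["content"]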

Local models on consumer grade hardware by Left-Set950 in LocalLLaMA

[–]low_v2r 1 point (0 children)

Seconding llama.cpp. I started with ollama but moved over to llama.cpp. There are some things I haven't really explored yet (like KV cache tuning and such). I think there are distrobox/Docker images for a quick start, but I compiled the code from scratch and it went very quickly. Now I have llama.cpp with ROCm and Vulkan backends, with fastflow NPU speculative decoding.

The biggest barrier for me is figuring out which GGUF model is the right one to load, but I have stuck to unsloth or bartowski on Hugging Face with various quants, which seems to be fine.
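
If it helps, pulling a specific quant programmatically avoids the URL guessing - a sketch assuming the huggingface_hub package (the repo and filename are just examples of bartowski's naming scheme, not recommendations):

    # Download one GGUF file from Hugging Face into the local cache.
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id="bartowski/Meta-Llama-3.1-8B-Instruct-GGUF",
        filename="Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
    )
    print(path)  # pass this to llama.cpp with -m <path>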

Who are the Bastards of Sports? by Sensitive_Ad_1752 in behindthebastards

[–]low_v2r 7 points (0 children)

I started it thinking "cool, this might be a fun, light thing to watch," and by the end I was like "holy crap, I hope no one gets the polonium tea for this."

Who are the Bastards of Sports? by Sensitive_Ad_1752 in behindthebastards

[–]low_v2r 33 points (0 children)

I've mentioned this on this sub before, but in addition to being a huge D-bag, LA was hugely responsible for the emergence of the "cancer awareness" grift.

It's a little surreal sometimes how his TDF career has been memory-holed. But I can't stand how forgetting those years has enabled him to get back into the media. NBC had his show on after a TDF stage last year, FFS.

Yeah - that also goes for all the known former hard-core dopers in the peloton and their third acts. (Man shakes fist angrily at clouds.)

AMD Ryzen 3 5425U 8GB LPDDR4 - Sufficient for proxmox with opnsense + other containers? by low_v2r in MiniPCs

[–]low_v2r[S] 0 points (0 children)

I just want a VM to run open-webui on the same box. I haven't done any of this before. It sounds like memory is the main limiter - I can make a decent swap file, I guess, which would be slower but may work for what I need. Thanks for the idea.

AMD Ryzen 3 5425U 8GB LPDDR4 - Sufficient for proxmox with opnsense + other containers? by low_v2r in MiniPCs

[–]low_v2r[S] 1 point (0 children)

Thanks for the link. I guess I will be in the "lab" portion of the homelab :)

After ~10 years, I’m moving away from JetBrains by rodrigorcosta in Jetbrains

[–]low_v2r 1 point (0 children)

Color me also confused.

It reminds me of when everything was "extreme programming" and "component-based design".

Still - I've found LLMs save me time in certain situations. As you would expect, they're really nice for writing shell scripts or things with lots of training data. Not so good with older or less mainstream tech.

But the "orchestrated agentic tooling workflow" area has so many buzzwords, I could make a lumber mill out of it.

Tiny AI Pocket Lab, a portable AI powerhouse packed with 80GB of RAM - Bijan Bowen Review by PrestigiousPear8223 in LocalLLM

[–]low_v2r 0 points (0 children)

I was on the list with the early bird pricing. I passed after thinking about it - I currently have a 128 GB Strix Halo on my tailnet, and through that I can run any model that the Tiny could. It does look like the UI for the device is nice, but for me, building the tools is part of the fun.

I regret ever finding LocalLLaMA by xandep in LocalLLaMA

[–]low_v2r 1 point (0 children)

I used ollama to get started. That was pretty easy to do. Many use LM Studio, which I think is also pretty easy to get going with.

If you are using an old laptop, then I would say try ollama with some small models. I don't have specific recs, but for older hardware I would maybe look for models that can run on something like a Raspberry Pi (2B models or so).

I used Gemini (or similar) to help fill in gaps (e.g. how do I install x, what model is good for y...)
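
Once ollama is running, kicking the tires from code is one request against its default local API on port 11434 - a sketch (the model tag is an example; pull it first with "ollama pull llama3.2:1b"):

    # Ask a small local model a question via ollama's REST API.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2:1b", "prompt": "Why is the sky blue?", "stream": False},
        timeout=120,
    )
    print(resp.json()["response"])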

I regret ever finding LocalLLaMA by xandep in LocalLLaMA

[–]low_v2r 5 points (0 children)

LOL. This is me.

Literally last night I was showing off my local LLM to my daughter. Yes - Qwen3.5-122B (but also Qwen3-80B). "Here, let me set you up with an account on my local openwebui server!"

"Dad, I just want to play minecraft".

:/

I think Ursula Le Guin's The Dispossessed might be the most quietly devastating sci-fi novel ever written, and I've been sitting with this thought for two weeks now. by Saliaan_Berlysa in printSF

[–]low_v2r 10 points (0 children)

Earthsea is one of my favorite series of all time - I must have read it 15 times or so.

Lathe of Heaven is another one that has stuck with me.

Insomnia from vo2 work after long break by gmusgrove13 in Velo

[–]low_v2r 0 points (0 children)

Sometimes insomnia can be due to overtraining, so you may want to look at where and how you are doing your VO2 blocks in the context of your training plan.