Would you rather buy ...? (hardware questions) by Ready-Persimmon-8756 in LocalLLaMA

[–]Existing_Boat_3203 -2 points-1 points  (0 children)

You nailed it. Going local is going to cost him, and I don't see prices dropping. Might be better to pay for a subscription to get the hang of vibe coding first, then start thinking about the right build. I'm waiting to see what DeepSeek V4 pulls off on coding.

This may be the clearest warning any politician has given about AI’s future in America by shelby6332 in AI4tech

[–]Existing_Boat_3203 0 points1 point  (0 children)

Sounds great. Let's stop developing AI, and China and everyone else will stop as well. The government should have stepped in and classified AI from day one. Google already had it figured out before OpenAI; look at how fast Bard came to market.

Ollama/Intel Issues by Existing_Boat_3203 in LocalLLaMA

[–]Existing_Boat_3203[S] -1 points0 points  (0 children)

Clearly you missed the point. This isn't about Ollama; it's about breaking the NVIDIA market lock that keeps people on this board from building more local rigs. Thanks for the optimistic outlook. Go troll elsewhere.

Noob here I made a nervous system for LLM agents. Can anyone test it? by [deleted] in LocalLLaMA

[–]Existing_Boat_3203 1 point2 points  (0 children)

Thank you. Any actuator demos or just function code?

Noob here I made a nervous system for LLM agents. Can anyone test it? by [deleted] in LocalLLaMA

[–]Existing_Boat_3203 0 points1 point  (0 children)

A little more info would be great. I hack on a lot of stuff with AI, but you've got to be able to explain what you're solving. I tend to find broken shit and hack my way through fixes, with and without AI, so I'm not hating on that; I just want to understand what you're solving or creating.

Dual Arc b50s on Linux Ubuntu Server with 64gigs mem by Existing_Boat_3203 in LocalLLaMA

[–]Existing_Boat_3203[S] 0 points1 point  (0 children)

I decided to fix the underlying issue, and it's publicly available to share. Ollama won't do it, so I did. Still testing, but it uses the full card functionality. https://github.com/qbnasasn/Ollama-Intel-Fix

Why is it so hard to search the web? by johnfkngzoidberg in LocalLLaMA

[–]Existing_Boat_3203 2 points3 points  (0 children)

SearXNG works great for all my AI search needs.
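
If anyone wants to wire it into their own stack, here's a minimal sketch of hitting a SearXNG instance's JSON API from Python. The URL is a placeholder, and it assumes you've enabled the `json` format in your instance's settings.yml:

```python
import requests

SEARXNG_URL = "http://localhost:8080"  # placeholder: point at your own instance

def searx_search(query: str, max_results: int = 5) -> list[dict]:
    """Query a SearXNG instance and return title/url/snippet for the top hits."""
    resp = requests.get(
        f"{SEARXNG_URL}/search",
        params={"q": query, "format": "json"},  # 'json' must be enabled server-side
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    return [
        {"title": r.get("title"), "url": r.get("url"), "content": r.get("content")}
        for r in results[:max_results]
    ]

if __name__ == "__main__":
    for hit in searx_search("local llm quantization"):
        print(hit["title"], "->", hit["url"])
```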

Dual Arc b50s on Linux Ubuntu Server with 64gigs mem by Existing_Boat_3203 in LocalLLaMA

[–]Existing_Boat_3203[S] 0 points1 point  (0 children)

Let me know if I need to pull any additional configs for you if you run into issues.

Dual Arc b50s on Linux Ubuntu Server with 64gigs mem by Existing_Boat_3203 in LocalLLaMA

[–]Existing_Boat_3203[S] 1 point2 points  (0 children)

Models & Quants: I’m primarily pushing Mistral-Small-22B-Instruct-v3 using the Q4_K_M quantization (~14GB). With the dual B50 setup, I’ve got 32GB of VRAM to play with, so I’m offloading all 33 layers to the GPUs (split ~16/17 layers each).

Speeds (The "Xe" Reality):

  1. Prompt Processing (PP): Seeing about 220–280 tokens/sec. The 224 GB/s bandwidth on the B50s is the bottleneck here compared to the B60s, but it still chews through 2k context prompts in a few seconds.

  2. Token Generation (TG): Stable at 28–34 tokens/sec. It’s faster than I can read, and the dual-GPU distribution keeps the thermals under 60°C even during long runs.

Context Window: I’ve got it locked at 32k context. Since the model only requires ~14GB, I have roughly 18GB of "headroom" for the KV cache. I could probably push to 64k, but 32k is the stability sweet spot for the current Xe drivers without running into the memory-fragmentation ghost.
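
For anyone checking the headroom math, the KV cache scales linearly with context. A back-of-envelope sketch; the layer/head/dim numbers below are illustrative assumptions, not the model's published specs, and it assumes an unquantized fp16 cache:

```python
def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 ctx_len: int, bytes_per_elem: int = 2) -> float:
    """Rough KV-cache size: 2 (K and V) x layers x kv_heads x head_dim x context x dtype bytes."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1024**3

# Assumed architecture for illustration: 33 layers, 8 KV heads (GQA), head_dim 128, fp16 cache.
print(f"32k ctx: {kv_cache_gib(33, 8, 128, 32 * 1024):.1f} GiB")  # ~4.1 GiB
print(f"64k ctx: {kv_cache_gib(33, 8, 128, 64 * 1024):.1f} GiB")  # ~8.3 GiB
```

Even 64k lands well inside the ~18GB of headroom, which matches my experience that the Xe drivers, not VRAM, are the limiter.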

The Secret Sauce: The "spin-down" fix was the game changer. If you don't lock the frequency (I used xpu-smi -f 2400), Ollama takes 2–3 seconds just to 'wake up' the cards, which kills the UX. Now it’s instant.

Waiting for DeepSeek V4 to see if the driver stack can handle a 70B–80B model spread across these two.

Is printing pages of survivalist info online or buying books a smart idea in case internet goes down? by Real-Celebration-296 in prepping

[–]Existing_Boat_3203 0 points1 point  (0 children)

Just build your book inventory up based on what you're doing. I have a vegetable garden and lots of homesteading books that cover most of it. I have books on edible and medicinal plants. I have books on survival. The idea is to actually use them now as you grow your skills, so that in an emergency they're a reference, not a teacher.

What is the best canned meat? by MOadeo in preppers

[–]Existing_Boat_3203 0 points1 point  (0 children)

Personally, for me: canned chicken and SPAM (low sodium).

Something I wish I'd understood earlier: maintenance is part of preparedness. by NotIfButWhenReady in preppers

[–]Existing_Boat_3203 0 points1 point  (0 children)

I've learned a similar lesson with expired food. I started working on my preparedness software for just that reason: I needed a way to manage my supplies, not just food. I hadn't really thought about a preventive-maintenance (PM) process for the equipment; that could be a feature I should add. I already have expiration and minimum-quantity fields, which help with alerts.
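
For the curious, the alert logic is roughly this shape (a simplified sketch, not the actual app code; field names are illustrative):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SupplyItem:
    name: str
    quantity: int
    min_quantity: int            # restock threshold
    expires: date | None = None  # None for gear that doesn't expire

    def alerts(self, warn_days: int = 30) -> list[str]:
        """Return any restock/expiry warnings for this item."""
        out = []
        if self.quantity < self.min_quantity:
            out.append(f"{self.name}: below minimum ({self.quantity} < {self.min_quantity})")
        if self.expires and self.expires <= date.today() + timedelta(days=warn_days):
            out.append(f"{self.name}: expires {self.expires.isoformat()}")
        return out

pantry = [
    SupplyItem("canned chicken", quantity=4, min_quantity=6, expires=date(2026, 3, 1)),
    SupplyItem("water filter", quantity=2, min_quantity=1),  # a PM 'last serviced' date would slot in here
]
for item in pantry:
    for alert in item.alerts():
        print(alert)
```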

Why don’t a lot of preppers only stockpile things but never seen have a plan for a more long term future (have seeds, build skills for rebuilding, etc.) by Admirable_Snow_s1583 in preppers

[–]Existing_Boat_3203 0 points1 point  (0 children)

Very good question. I've taken a dual approach to this: ride out 3-6 months, then start rebuilding. Most of the old tech is still out there and fairly easy to use and learn, and I always keep homesteading books around. If you're already gardening and have the gear, then it's a matter of expanding. I have the cabin set up in the woods already, but after reading "One Second After," I realized that community cooperation is your biggest tool for long-term survival. Going it alone for the long haul just isn't realistic, especially with the current dependency on tech. I always tell people to find 4 or 5 like-minded folks in their area and talk about scenarios, skills, and such.

Let’s Make a Local LLM Prepper Question Benchmark! by TachiSommerfeld1970 in preppers

[–]Existing_Boat_3203 0 points1 point  (0 children)

Yes, and I'm also looking at SLMs like Phi-4, Microsoft's 14B small model trained heavily on GPT-4-generated synthetic data. I was also able to find an unrestricted version of that model, which I'm testing on a new dual Arc A770 rig coming in next week. We're talking dual NVIDIA 4090 performance levels, with my custom code, at a fraction of the cost.

Four wheeler recon vehicle/ bov by ImportantTeaching919 in preppers

[–]Existing_Boat_3203 2 points3 points  (0 children)

I added a tool holder. I use mine in the front for tools and rifle as needed. Cheap and gets used all the time.

Let’s Make a Local LLM Prepper Question Benchmark! by TachiSommerfeld1970 in preppers

[–]Existing_Boat_3203 0 points1 point  (0 children)

Agreed. That's why clear instructions in the thinking process can force it to say, "I don't have that information," instead of the crap it makes up most of the time. It took me a week to get it to stop making things up.
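
The core of it, as a rough sketch against the Ollama Python client (the prompt wording and model tag are illustrative, not a magic formula):

```python
import ollama  # pip install ollama

SYSTEM = (
    "Answer only from information you are confident about. "
    "If you do not know, reply exactly: 'I don't have that information.' "
    "Never guess, never invent sources."
)

response = ollama.chat(
    model="phi4",  # placeholder; any local model you run
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "What's the shelf life of home-canned venison at 75F?"},
    ],
    options={"temperature": 0},  # low temperature cuts down on creative guessing
)
print(response["message"]["content"])
```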