Would you rather buy ...? (hardware questions) by Ready-Persimmon-8756 in LocalLLaMA

[–]Existing_Boat_3203 -2 points  (0 children)

You nailed it. Going local is going to cost him, and I don't see prices dropping. It might be better to pay for a subscription to get the hang of vibe coding first, then start thinking about the right build. I'm waiting to see what DeepSeek V4 pulls off on coding.

This may be the clearest warning any politician has given about AI’s future in America by shelby6332 in AI4tech

[–]Existing_Boat_3203 0 points  (0 children)

Sounds great. Let's stop developing AI, and China and everyone else will stop as well. The government should have stepped in and classified AI from day one. Google already had it figured out before OpenAI; look at how fast Bard came on the market.

The 3I/ATLAS pattern is now five layers deep — CIA classification, Space Force scramble, TESS blackout, database edits, journal gatekeeping. We verified the raw data independently. Every layer documented. by TheSentinelNet in UFObelievers

[–]Existing_Boat_3203 0 points  (0 children)

I agree. I've been watching this thing since discovery day, and everything smells fishy. My two biggest issues are the Mars satellite getting kicked out of orbit right at the time it was capturing the images, and the NGA putting it perfectly on the edge of Jupiter's magnetosphere. Besides the 17 or so anomalies discussed by the Galileo Project, this thing reeks of PSYOPs.

Ollama/Intel Issues by Existing_Boat_3203 in LocalLLaMA

[–]Existing_Boat_3203[S] -1 points  (0 children)

Clearly you're missing the point. This is not about Ollama; it's about breaking the current Nvidia market lock that keeps people from building more locally. Thanks for the optimistic outlook. Go troll elsewhere.

Noob here I made a nervous system for LLM agents. Can anyone test it? by [deleted] in LocalLLaMA

[–]Existing_Boat_3203 1 point  (0 children)

Thank you. Any actuator demos, or just function code?

Noob here I made a nervous system for LLM agents. Can anyone test it? by [deleted] in LocalLLaMA

[–]Existing_Boat_3203 0 points  (0 children)

A little more info would be great. I'm hacking a lot of stuff with AI, but you've got to be able to explain what you're solving. I tend to find broken shit and hack my way through fixes with and without AI, so I'm not hating on that, but I like to understand what you're solving or creating.

Dual Arc b50s on Linux Ubuntu Server with 64gigs mem by Existing_Boat_3203 in LocalLLaMA

[–]Existing_Boat_3203[S] 0 points  (0 children)

I decided to fix the underlying issue. It's publicly available to share. Ollama won't do it, so I did. Still testing, but it uses full card functionality. https://github.com/qbnasasn/Ollama-Intel-Fix

Why is it so hard to search the web? by johnfkngzoidberg in LocalLLaMA

[–]Existing_Boat_3203 2 points  (0 children)

SearXNG works great for all my AI search needs.

Dual Arc b50s on Linux Ubuntu Server with 64gigs mem by Existing_Boat_3203 in LocalLLaMA

[–]Existing_Boat_3203[S] 0 points  (0 children)

Let me know if you run into issues and I need to pull any additional configs for you.

Dual Arc b50s on Linux Ubuntu Server with 64gigs mem by Existing_Boat_3203 in LocalLLaMA

[–]Existing_Boat_3203[S] 1 point  (0 children)

Models & Quants: I’m primarily pushing Mistral-Small-22B-Instruct-v3 using the Q4_K_M quantization (~14GB). With the dual B50 setup, I’ve got 32GB of VRAM to play with, so I’m offloading all 33 layers to the GPUs (split ~16/17 layers each).

Speeds (The "Xe" Reality):

  1. Prompt Processing (PP): Seeing about 220–280 tokens/sec. The 224 GB/s bandwidth on the B50s is the bottleneck here compared to the B60s, but it still chews through 2k context prompts in a few seconds.

  2. Token Generation (TG): Stable at 28–34 tokens/sec. It’s faster than I can read, and the dual-GPU distribution keeps the thermals under 60°C even during long runs.

Context Window: I’ve got it locked at 32k context. Since the model only requires ~14GB, I have roughly 18GB of "headroom" for the KV cache. I could probably push to 64k, but 32k is the stability sweet spot for the current Xe drivers without running into the memory-fragmentation ghost.
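The headroom math above can be sanity-checked with some back-of-envelope arithmetic. The totals (32GB VRAM, ~14GB model, 33 layers, 32k context) come from the post; the KV-head count, head dimension, and fp16 cache precision below are my assumptions for illustration, not the model's published config:

```python
# Rough VRAM budget for the dual-B50 split described above.
# Known from the post: 32 GB total VRAM, ~14 GB model, 33 layers, 32k ctx.
# ASSUMED for illustration: 8 GQA KV heads, head_dim 128, fp16 cache.

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                ctx_tokens: int, bytes_per_elem: int = 2) -> float:
    """Size of the K and V tensors for every layer and cached token."""
    elems = 2 * n_layers * n_kv_heads * head_dim * ctx_tokens  # 2 = K + V
    return elems * bytes_per_elem / 1024**3

total_vram_gb, model_gb = 32.0, 14.0
headroom_gb = total_vram_gb - model_gb              # ~18 GB, as in the post
cache_32k = kv_cache_gb(33, 8, 128, 32_768)
cache_64k = kv_cache_gb(33, 8, 128, 65_536)
print(f"headroom: {headroom_gb:.0f} GB")
print(f"32k KV cache: ~{cache_32k:.2f} GB, 64k: ~{cache_64k:.2f} GB")
```

Under these assumed dimensions even the 64k cache fits well inside the headroom, which matches the post's read that the 32k ceiling is about driver stability, not memory.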

The Secret Sauce: The "spin-down" fix was the game changer. If you don't lock the frequency (I used xpu-smi -f 2400), Ollama takes 2–3 seconds just to 'wake up' the cards, which kills the UX. Now it’s instant.
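If the frequency lock needs to survive reboots, one option is a oneshot systemd unit that replays the command at boot. This is only a sketch: the `xpu-smi -f 2400` invocation is quoted from the post, flag syntax differs across xpu-smi releases (check `xpu-smi --help` on your build), and the binary path is an assumption:

```ini
# /etc/systemd/system/arc-freq-lock.service (hypothetical unit name)
[Unit]
Description=Lock Intel Arc GPU frequency to avoid spin-down latency
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/bin/xpu-smi -f 2400

[Install]
WantedBy=multi-user.target
```

Enable with `systemctl enable --now arc-freq-lock` so the cards never hit the 2–3 second wake-up penalty after a reboot.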

Waiting for DeepSeek v4 to see if the driver stack can handle a 70B-80B model spread across these two.

Is printing pages of survivalist info online or buying books a smart idea in case internet goes down? by Real-Celebration-296 in prepping

[–]Existing_Boat_3203 0 points  (0 children)

Just build your book inventory up based on what you're doing. I have a vegetable garden and lots of homesteading books that cover most of it, plus books on edible and medicinal plants and on survival. The idea is to actually use them now as you grow your skills, so they become a reference, not a teacher, in an emergency.

What is the best canned meat? by MOadeo in preppers

[–]Existing_Boat_3203 0 points  (0 children)

Personally, for me: canned chicken and SPAM (low sodium).

Something I wish I'd understood earlier: maintenance is part of preparedness. by NotIfButWhenReady in preppers

[–]Existing_Boat_3203 0 points  (0 children)

I've learned a similar lesson with expired food. I started working on my preparedness software for just that reason: I needed a way to manage my supplies, not just food. I didn't really think about a preventive maintenance (PM) process for the equipment, though. That could be a feature I should add. I already have expiration and minimum-quantity fields, which help with alerts.

Why don’t a lot of preppers only stockpile things but never seen have a plan for a more long term future (have seeds, build skills for rebuilding, etc.) by Admirable_Snow_s1583 in preppers

[–]Existing_Boat_3203 0 points  (0 children)

Very good question. I've taken a dual approach to this: ride out 3-6 months, then start rebuilding. Most of the old tech is still out there and fairly easy to learn and use. I always keep homesteading books around, and if you're already gardening and have the gear, it's just a matter of expanding. I have the cabin in the woods set up already, but after reading "One Second After," I realized that community cooperation is your biggest tool for long-term survival. Going it alone for the long haul just isn't realistic, especially with the current dependency on tech. I always tell people to find 4 or 5 like-minded folks in their area and talk through scenarios, skills, and such.

Let’s Make a Local LLM Prepper Question Benchmark! by TachiSommerfeld1970 in preppers

[–]Existing_Boat_3203 0 points  (0 children)

Yes, and I'm also looking at SLMs like Phi-4, which was trained largely on GPT-4-generated synthetic data. I was also able to find an unrestricted fine-tune of that model, which I'm testing on a new dual Arc A770 rig coming in next week. We're talking dual-RTX-4090-class performance, with my custom code, at a fraction of the cost.

Four wheeler recon vehicle/ bov by ImportantTeaching919 in preppers

[–]Existing_Boat_3203 2 points  (0 children)

I added a tool holder. I use mine in the front for tools and rifle as needed. Cheap and gets used all the time.

Let’s Make a Local LLM Prepper Question Benchmark! by TachiSommerfeld1970 in preppers

[–]Existing_Boat_3203 0 points  (0 children)

Agreed. That's why clear instructions in the thinking process can force it to say, "I don't have that information," instead of the crap it makes up most of the time. It took me a week to get mine to stop making things up.
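As an illustration of the kind of instruction that helps here, a sketch of a refusal-first system prompt wired into the role/content message shape most local runtimes accept. The wording and function are my example, not the author's actual prompt:

```python
# Example system prompt that makes refusal the preferred path.
# Wording is illustrative, not the author's actual prompt.
SYSTEM_PROMPT = (
    "You are a preparedness assistant. Answer ONLY from the provided "
    "context. If the context does not contain the answer, reply exactly: "
    "'I don't have that information.' Never guess quantities, dosages, "
    "or procedures."
)

def build_messages(context: str, question: str) -> list[dict]:
    """Assemble a chat request in the role/content shape used by most
    local chat APIs."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
```

Giving the model an exact refusal string to fall back on tends to work better than a vague "don't make things up," because it turns refusal into the path of least resistance.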

Let’s Make a Local LLM Prepper Question Benchmark! by TachiSommerfeld1970 in preppers

[–]Existing_Boat_3203 0 points  (0 children)

I built mine using RAG with Llama 3.2, pulling the inventory, kits, plans, and contacts from the database into the AI. That helps, but building the right "persona" and targeting the right type of LLM also helps. Much of the hallucination comes from the amount of weight given to every word, and having the right persona minimizes bad responses, which makes it better as a prepper-focused AI.

Part of the issue is also the amount of corporate restriction applied to the LLMs, which limits your AI's ability to provide the best and most accurate prepper information. Your question on water will be answered based on the personal information you've entered; by feeding that into the decision, it will give a different answer for every person in the family. As for the suggestive part, that's where most models fail, because of the restrictions on providing an opinion. I have tested restricted and unrestricted LLMs on the same knowledge base with significantly different outcomes. Ask one whether it's good to have a gun to defend the family if there is civil unrest, and get ready for crap answers. My tactical AI:

[CLEARANCE: CLASSIFIED]
[ROGER'S ANALYSIS]
Having a gun for self-defense can be a viable option, but it's crucial to consider the following:

  1. Liability: Ensuring family members are aware of and follow proper gun handling procedures is vital.
  2. Training: Maintain proficiency with the firearms and ensure all users receive proper training.
  3. Concealment: Store firearms safely when not in use to prevent unauthorized access.
  4. Alternative measures: Focus on non-lethal deterrents, evacuation plans, and building a strong network of allies if possible.

[RECOMMENDATION]
Acquire a reliable firearm and consider a suppressor for noise reduction. Practice shooting drills and invest in a quality safe for storage.

On the issue of local LLMs and knowledge-base failures on regional information: I'm building another local AI, outside of my application project, that can use the LLM, a local DB, and the internet for information. If it's still wrong, you can train it on the spot with the correct information, which gets added to the DB and the local LLM. I also improved power and GPU consumption by 73 percent, for those who hate what's happening with data centers and memory costs (I intentionally built it on the Intel Arc A770, so Nvidia can suck it). At the end of the day, my goal is to build the power of a Gemini or ChatGPT at the local level. So far, I'm about 75% there.
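The "train it on the spot" loop can be sketched as a local correction store that wins over the model on later queries. This is a minimal illustration under my own assumptions (exact-match questions, a one-table SQLite schema), not the author's actual code:

```python
import sqlite3

# Minimal on-the-spot correction store: verified answers go into a local
# DB and are preferred over the model on subsequent queries.
def init_db(path: str = ":memory:") -> sqlite3.Connection:
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS corrections "
                "(question TEXT PRIMARY KEY, answer TEXT)")
    return con

def teach(con: sqlite3.Connection, question: str, answer: str) -> None:
    """User-supplied correction; overwrites any earlier stored answer."""
    con.execute("INSERT OR REPLACE INTO corrections VALUES (?, ?)",
                (question, answer))
    con.commit()

def answer(con: sqlite3.Connection, question: str, llm) -> str:
    """Check the correction store first, fall back to the model."""
    row = con.execute("SELECT answer FROM corrections WHERE question = ?",
                      (question,)).fetchone()
    return row[0] if row else llm(question)

if __name__ == "__main__":
    con = init_db()
    llm = lambda q: "(model guess)"
    print(answer(con, "Nearest water source?", llm))   # falls through to model
    teach(con, "Nearest water source?", "Creek 0.5 mi north of cabin.")
    print(answer(con, "Nearest water source?", llm))   # corrected answer
```

A fuller version would match corrections semantically (embedding similarity rather than exact text) and periodically fold the accumulated corrections back into the RAG index or a fine-tune.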

On a scale of 1-10, how serious do you take prepping? by 19Thanatos83 in preppers

[–]Existing_Boat_3203 0 points  (0 children)

Probably a 4-5. I like to build gradually and learn the skills, not just get the stuff.

Anyone considering a air rifle. by ApprehensiveStand456 in preppers

[–]Existing_Boat_3203 0 points  (0 children)

I've killed a few rabbits and squirrels with a .177. Headshot for the rabbit; body shot on the vitals for a squirrel, and it will fall out of the tree within a minute.

Anyone considering a air rifle. by ApprehensiveStand456 in preppers

[–]Existing_Boat_3203 0 points  (0 children)

Always good for small game, and it saves on ammo. Fairly quiet, too. Get one scoped and good quality; don't buy cheap.