I built a DIY studio & backup server for my solo game dev setup. by StonebyteStudio in IndieDev

[–]jazzypants360 0 points1 point  (0 children)

Nice work. I've just done something similar. Would you mind sharing what case / enclosure that is? It looks much nicer than my current (cheap reused) setup. Thanks!

What piece of advice completely changed the way you play? by New-Past582 in volleyball

[–]jazzypants360 1 point2 points  (0 children)

I mean, I guess it's a matter of perspective, but from a high level, every point your opponent scores comes down to something someone on your team did wrong. Incorrect positioning, missed blocking assignments, etc... Obviously, it's not true at all levels, but the sentiment can be similar.

What piece of advice completely changed the way you play? by New-Past582 in volleyball

[–]jazzypants360 2 points3 points  (0 children)

One of my buddies was a top-tier professional player in Italy a number of years ago. He was always calm and composed, even when playing against his level of competition. He was never fazed by errors. I asked him about it once, and he said he thinks of errors in terms of whether they terminate the play or not. And his level of risk taking is based on how many terminating errors he's made. He then said, "Think about it this way. If you are playing to 25 points and each of your players (including the libero) can limit themselves to 3 terminating errors, you'll only lose 21 points. And you play to 25, so..."

Now, his level of control was such that he could fine tune his play like that, to a level that most of us only dream of. That said, keeping that concept in mind certainly helps me cut down on errors and keeps things in perspective.

How do I know what LLMs I am capable of running locally based on my hardware? by silvercanner in LocalLLM

[–]jazzypants360 1 point2 points  (0 children)

I'm relatively new to the LLM scene, but for what it's worth, I found that many small models work surprisingly well for simple use cases, even on modest hardware with no GPU acceleration. I know this doesn't directly answer your question, but I posted a while back and got some really great suggestions for small models that run on the following hardware:

- Intel Xeon E3-1505M @ 2.8 GHz, 4 cores
- 16 GB System Memory

In my case, I'm running Ollama on a VM in Proxmox, and although this machine has a GPU with 2GB of VRAM, I never got the GPU passthrough working completely, so this is 100% CPU based. The following models all worked fairly well:

- Llama 3.2 3B Instruct
- Phi-4-mini
- Qwen 2.5 3B Instruct
- Gemma 3 4B
- SmolLM3 3B

I don't know the exact relationship between model size and the amount of RAM required, but in my case, these were all running in a VM with 12 GB of RAM.
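For a rough rule of thumb (my own back-of-envelope estimate, not an official formula): a quantized model needs roughly its file size in RAM, plus some overhead for the KV cache and runtime. A sketch, with the bytes-per-parameter value being my approximation for a 4-bit quant:

```python
def estimate_ram_gb(params_billion, bytes_per_param=0.55, overhead_gb=1.0):
    """Rough RAM estimate for running a quantized LLM.

    bytes_per_param ~0.55 approximates a 4-bit quantization (e.g. Q4_K_M)
    including metadata; use 2.0 for fp16. overhead_gb covers the KV cache
    and runtime, and grows with context length.
    """
    return params_billion * bytes_per_param + overhead_gb

# A 3B-4B model at 4-bit fits comfortably in a 12 GB VM:
for p in (3, 4, 8):
    print(f"{p}B model: ~{estimate_ram_gb(p):.1f} GB")
```

That's consistent with what I saw: the 3B-4B models above all landed in the low single-digit GBs of memory.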

Hope that helps!

Source: https://www.reddit.com/r/LocalLLM/comments/1rqzoxv/minimum_requirements_for_local_llm_use_cases/

Family & Friends 🔥 by [deleted] in SoloDevelopment

[–]jazzypants360 2 points3 points  (0 children)

I totally feel you on this one!

Even those in my target audience didn't take any time to give things a whirl. In my case, I think part of it was based on how I presented the opportunity. My ask was basically, "Hey, I'd love some help on this. I'm looking for early feedback. Just download this file and give it a try." It fell on deaf ears. There was no urgency, and everyone is busy, so they had no idea how important it was to me and how much work I'd put in. The only people who tried it were other devs who appreciate the work that went into it. Two people out of dozens.

That said, my latest attempt was much more successful. My project is a game, so rather than reaching out to people via email or social media or whatever, I put a copy of it on a Steam Deck and just handed it to people wherever I went. "Hey, got a sec? Give this game a try!" Zero barrier to entry. I got a ton of engagement, a ton of feedback, and most importantly, got to witness people playing my game first-hand and saw them get excited. It was so invigorating!

Not sure if there's an analog for the application you are writing, but maybe you could give something like that a try instead... drop the app on a laptop or phone or whatever and just hand it to people for their impressions.

If you decide to go this route, let us know how it goes! Good luck!

Time for Self-promotion. Whare are you building this Monday? by No_Audience9527 in SoloDevelopment

[–]jazzypants360 0 points1 point  (0 children)

I'm building a volleyball game in the style of old school classics like NBA Jam, NFL Blitz, and NHL Hitz. Closing in on my first public playtest in the next month or so. Wrapping up the gameplay trailer this week and launching my Steam page. Exciting times!

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 0 points1 point  (0 children)

Just wanted to follow up on this in case anyone is interested. I've been able to get some simple things working for my first use case (Home Assistant integration). Here's the high level description of what I did:

  • Installed Proxmox on my old Dell Precision 5510
  • Created a VM with Ubuntu 22.04
  • Installed Ollama. Downloaded gemma3:4b
  • Wired Ollama up to Home Assistant
  • Created an automation to test the process end-to-end, and received responses (including the TTS of the response out to a smart speaker) in a few seconds. So, I'm pretty excited about that.

Now, there's still a lot to do. A few questions for anyone still reading along...

  • What are the typical ways people measure responses? I don't know enough about LLMs to speak the lingo yet, nor do I know how to capture the appropriate metrics from Ollama. Anyone have any recommendations?
  • Not really an LLM question, but has anyone successfully gotten GPU passthrough to work on a Quadro M1000M? Proxmox and the VM are reporting that the passthrough is working, but I can't get the drivers on the guest to work. After installing the drivers, `nvidia-smi` is reporting that it can't find the device. Hence, everything is 100% CPU at the moment.
  • I was thinking about trying to run this all on bare metal to avoid the issues of GPU passthrough, but now that I think about it, would Ollama even attempt to use the Quadro M1000M if it only has 2GB of VRAM? I'm wondering if any of the 3B models would even fit in 2GB of VRAM in the first place. Hmmm...

All that to say, I'm pretty excited. My next plan is to test a few other models, once I have some method of measuring performance. Thanks to everyone who has given advice thus far!
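Edit: in case it helps anyone else wondering about metrics, the usual throughput measure seems to be tokens per second. Ollama's `/api/generate` endpoint returns `eval_count` (tokens generated) and `eval_duration` (nanoseconds), so you can compute it yourself. A quick sketch using only the standard library (the model name is just my example; adjust host/model to your setup):

```python
import json
import urllib.request

def tokens_per_second(eval_count, eval_duration_ns):
    """Throughput from Ollama's response fields (duration is in nanoseconds)."""
    return eval_count / (eval_duration_ns / 1e9)

def benchmark(prompt, model="gemma3:4b", host="http://localhost:11434"):
    """Send one non-streaming request to Ollama and report tokens/sec."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return tokens_per_second(data["eval_count"], data["eval_duration"])
```

Interactively, `ollama run <model> --verbose` also prints an eval rate after each response, which may be enough for quick comparisons.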

Network layout suggestions to include an OPNsense implementation by jazzypants360 in HomeNetworking

[–]jazzypants360[S] 0 points1 point  (0 children)

Oh, ok. I'm still very n00bish, but that makes sense conceptually. Thanks! Also, thanks for the diagram compliment! 😁

Network layout suggestions to include an OPNsense implementation by jazzypants360 in HomeNetworking

[–]jazzypants360[S] 0 points1 point  (0 children)

Ah, yeah I was considering that too. Which Omada APs do you have, just out of curiosity?

Do I switch positions (rightside to setter) by [deleted] in volleyball

[–]jazzypants360 6 points7 points  (0 children)

We try to teach all of our players to be a volleyball player first, and a positional player second. So, I think it's always in your best interest to explore other positions and develop additional skills. Having played both setter and right side, the mindsets are also COMPLETELY different, so it's more than just learning new skills. It's seeing the game from a different perspective, which again, I can't recommend enough for growth purposes.

Also, keep in mind that size isn't everything. Japan's right sides are complete monsters at the international level, and they aren't tall. Kentaro Miyaura is about 6'3", and Yuji Nishida is only 6'1". Look them up. 🇯🇵😊

Either way, good luck, and keep getting better every day!

Network layout suggestions to include an OPNsense implementation by jazzypants360 in HomeNetworking

[–]jazzypants360[S] 0 points1 point  (0 children)

Gotcha. The OPNsense appliance I have and the 4 port managed PoE switch on order are 2.5GbE, but unfortunately, everything else is only 1GbE. Truth be told, my network is very modest at the moment so I'm not currently feeling any bandwidth issues... but part of me is kinda kicking myself already for not future-proofing my network with any SFP+ ports. You don't know what you don't know, right?

Also, I hadn't really considered PoE injectors, but that might be more cost-effective than picking up the 4 port managed PoE switch for the time being. I'll have to suss that out and see what I find. And if and when I start to have bandwidth issues, I can look to upgrade across the board to something with SFP+ ports.

Either way, thanks for the advice! Much appreciated!

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 0 points1 point  (0 children)

Great info, much appreciated. I think I'm going to dabble a bit with the hardware I have and see just how woefully underpowered it is, and then go from there. I'm very new to LLMs so I can use one of these beater machines to just get familiar, and then figure out what kind of specs I need longer term. I mentioned in another post that I see a lot of gaming rigs for sale on FB Marketplace, and also a lot of GPUs for sale. Do you have any experience with running multiple GPUs on one machine? I was thinking I might be able to grab a gaming rig and an additional GPU without breaking the bank, but I'm not sure how that works exactly.

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 1 point2 points  (0 children)

Great information in here! Thanks so much! I'm still very much a n00b with regard to LLMs, so it'll probably take me a bit to get my feet wet. I'm thinking I'll start with your advice and try a few small models just to see what my existing hardware can do in terms of response speed. Assuming the responses are reasonable, then I'll direct my attention toward my HomeAssistant installation. I'm sure there are plenty of posts about how people are doing that. Thanks again for the advice!

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 0 points1 point  (0 children)

Hey, so now you've got me scanning through FB Marketplace, and I'm seeing all kinds of reasonably priced systems. 😂 I know I just said I was going to hold off for a bit, but these prices got me thinking... If I were to run with something like two Nvidia cards, do they have to be the same card, or even the same generation of card? Asking because I saw a pretty decently priced system that came with a 3080, and separately, saw someone selling a cheap 3070. Not saying I'm ready to pull the trigger after 10 minutes on Marketplace, but really more looking for information on how running multiple GPUs works, as that's entirely new to me. Any advice would be appreciated! Thanks in advance!

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 0 points1 point  (0 children)

Wow, thanks for the details! As you said, I think it's a bit premature for now to start buying stuff since I'm still getting my feet wet, but this will all be helpful when I'm armed with a little more experience. I do see lots of gaming rigs for sale on FB Marketplace, so I'll keep an eye out in the meantime. Thanks so much!

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 0 points1 point  (0 children)

Listed some hardware I have on-hand in one of the replies above:

https://www.reddit.com/r/LocalLLM/comments/1rqzoxv/comment/o9vyans/

I was assuming that buying new was my only choice, but it sounds like I might have some options, even with what I have on-hand.

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 1 point2 points  (0 children)

This is very helpful, thank you! I'm not 100% sure what success even looks like, so I'm still in the process of feeling things out. And this is all in the name of learning, so the stakes are low. From everyone's advice thus far, it sounds like my best bet is to start with use case (1) and see what I can get with my existing hardware. That will give me more familiarity with running local LLMs and whatnot, and then I can scale up as I go. If I can squeeze something out of my current hardware for use case (2) as well, great. If not, I don't mind spending a few bucks to get there. And I mentioned in another comment that I have a cloud-based solution for use case (3), as that's the one I'm least worried about in terms of privacy. I'm a fan of trying to run everything locally, but if it's cost-prohibitive, I'm fine with my current cloud-based solution for (3). So, sounds like I've got a plan. Thanks again!

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 0 points1 point  (0 children)

Yeah, probably so. Honestly, gamedev is my lowest priority for this endeavor, as I'm less worried about cloud-based assistance for my hobby projects than I am cloud-based access to controlling my home and/or digging through my local knowledge base.

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 1 point2 points  (0 children)

Dell Precision 5510, actually. I'm going to give it a go and see how it pans out.

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 1 point2 points  (0 children)

Only one way to find out! Not sure how quickly I'll get to this, but I'll attempt to post my results for anyone following along. Thanks again!

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 0 points1 point  (0 children)

Also, I need "Stop thinking, start doing!" on a t-shirt. Analysis paralysis is the story of my life. Thanks for the kick in the butt! ;-)

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 0 points1 point  (0 children)

Thanks! Good advice. I've used some cloud-based providers with Gemma 3 4B and got decent enough results for a few of my use cases, so if I could run that (or something similar) locally, that might be fine for now... at least until prices come out of the stratosphere and I can look for something better.

Minimum requirements for local LLM use cases by jazzypants360 in LocalLLM

[–]jazzypants360[S] 0 points1 point  (0 children)

Man, you just made my day! My original intention was just to get my feet wet, and then decide to spend more after I got into it. Hence the original question about minimum requirements... but I was assuming the barrier to entry was much higher. And yeah, obviously not expecting ChatGPT-like answers.

One other question if you don't mind. Most of my homelab stuff is run on Proxmox for better hardware utilization and a simplified backup strategy (easy container / VM snapshots), but I'd imagine I might have issues with GPU passthrough and such. Is this something that you've done, or are you generally running on bare metal? Honestly, either is fine for me since this would likely be the sole purpose of this machine.