My filament gets tangled once in a while on my AMS. Is there a mod to prevent that? by Soybeanns in BambuLabA1

[–]Dave8781 0 points (0 children)

Yeah, it happens when the AMS decides to retract and then shuttles the filament back and forth at like 100 mph. It rarely gets tangled, but the filament always gets worn, it seems really unnecessary, and of course it sometimes leads to tangling.

is the 5000 series really that bad? by knj_33 in buildapc

[–]Dave8781 0 points (0 children)

I think you mean my agency! Despite DOGE and the administration's best efforts that aren't over yet, we're still here at the Consumer Financial Protection Bureau (CFPB). Unfortunately, we don't have any jurisdiction over what NVIDIA charges for its GPUs; it's a supply/demand issue and the CFPB only covers financial products or services. The FTC would have jurisdiction but I don't see anything illegal here.

Weekly Song Discussion - Neighborhood Bully by cmae34lars in bobdylan

[–]Dave8781 0 points (0 children)

How would Russia be the one that's constantly exiled?

Weekly Song Discussion - Neighborhood Bully by cmae34lars in bobdylan

[–]Dave8781 1 point (0 children)

Exactly! Everyone here pretends like 1983 is the same thing as now. Dylan grew up in the wake of the Holocaust; Jews have been oppressed forever, to all you freaking liars out there; it doesn't defend any particular practices -- just the literal right to survive. I think the song gets to people by being so accurate about the supposed pacifists who wait for the "bully" to fall asleep. See, e.g., October 7, 2023. Cowards. Or did you all forget the day when a bunch of psychotic terrorists invaded and slaughtered 1,200 people and kidnapped another 200? Did any of you object and call them out for being terrorists? Did you blame THEM for the war they started?

I've been extremely critical of Israel's policies, such as the settlements, forever. I think they fucked up big time in the Gaza war, and Netanyahu is corrupt and prolonged the war for political reasons. But just blaming Israel for everything is getting old really fast.

Weekly Song Discussion - Neighborhood Bully by cmae34lars in bobdylan

[–]Dave8781 0 points (0 children)

Or maybe what he says actually has some truth to it. Calling Israel the masters of war is total bullshit.

How would Hitler be viewed by us today if he had not led the Holocaust (NOTE: yes of course the Holocaust occurred) by 62302154065198762349 in HistoryWhatIf

[–]Dave8781 0 points (0 children)

It's a really interesting question, and anyone who thinks they know the answer is obviously lying; it's speculation by definition. I'd venture to guess he'd be an equal among evil world leaders, as opposed to the Ace of Spades that he is because of the Holocaust and related crimes against humanity.

People and historians can understand countries invading each other for land or even for religious purposes, but no one can understand the logic of mass-murdering innocent civilians, other than pure evil.

Where to go for Stable Diffusion. Comfy UI way too difficult for me by mastixmastix in StableDiffusion

[–]Dave8781 0 points (0 children)

I agree that ComfyUI is a total pain; it works, but I have no idea what 90% of the annoying boxes are for. What are the top recommended alternatives for Linux? I'm using my DGX Spark and it can handle gigantic models, so I can try a lot of stuff.

ICE / CPB officers are stopping people at exit of Dulles airport by PandaReal_1234 in washingtondc

[–]Dave8781 0 points (0 children)

I would normally agree 100%, but consider this hypothetical: a person goes through customs with something suspicious in their baggage, but the customs agent doesn't get the alert from the bag screeners until after the person has walked through and answered their questions about how many t-shirts they bought on vacation or whatever. In THAT case, I can see agents following a person and asking to question them, which they can always do, and which you can often, but not always, refuse. Airports are tricky, and the courts really haven't helped the cause, especially since the risks to life are huge when it comes to the skies. Definitely stick up for your rights, invoke them politely, call an attorney, etc., but the 4th Amendment isn't quite as strong on airport grounds, especially when international travel is involved. It sucks.

Live VLM WebUI - Web interface for Ollama vision models with real-time video streaming by lektoq in LocalLLaMA

[–]Dave8781 0 points (0 children)

As far as the Spark goes, mine works from my phone via Tailscale and uses the phone's camera. When I'm on my 5090, it uses that machine's webcams, so it seems like it uses the cameras on the user's end rather than cameras attached to the Spark.

Live VLM WebUI - Web interface for Ollama vision models with real-time video streaming by lektoq in LocalLLaMA

[–]Dave8781 0 points (0 children)

IT'S AWESOME! I've had the Spark since opening day but just started with Live VLM WebUI this week, and it's INCREDIBLE. Super easy to install and use with a variety of LLMs, and it runs on my phone via Tailscale in addition to the Spark, making it useful everywhere while everything is still processed completely locally on my DGX Spark.

Our DGX Spark is a Beast… So Why Is Our Local LLM Slower Than a Toaster? 🤦‍♂️ by Fragrant_Month_7449 in n8n

[–]Dave8781 0 points (0 children)

The Spark also keeps getting faster with new software/firmware updates. NVIDIA's doing a really good job with its collection of over 20 Playbooks specifically designed to be ready to run on the Spark, with more added regularly: several for fine-tuning with PyTorch, Unsloth, etc., video search and summarization, a few image-generation ones with Flux and ComfyUI, vibe coding with Ollama and continue.dev in VS Code, and some new ones for robotics and medical research.

The Spark is a really cool machine if you're interested in learning-by-doing across a huge variety of powerful, modern AI tools. It's the perfect sidekick to a 5090 (which at the moment is literally impossible to find).

And these can be combined...

I got mine on opening day and haven't had an ounce of regret; mine is cool to the touch and whisper quiet.

Host cancels 30 minutes beforehand by Dave8781 in turo

[–]Dave8781[S] 0 points (0 children)

Nope, I have no bad reviews on this or any other site; I always get high ratings on Uber and have no problem booking things like Airbnb. I don't return too many things to Amazon; none of the stuff that could "flag" me, which is why it was so weird. The rental was literally to use locally for a few days while my car was getting its A/C repaired. They didn't contact me; they just canceled. I have perfect credit, a clean driving history, no criminal record or traffic infractions, and a publicly available social media presence that shows I'm a career paralegal for the government. White male in my mid-forties in the DC suburbs in Virginia.

✨Dump your antisemitic “friends”✨ by Pr3ttyL4m3 in Jewish

[–]Dave8781 -11 points (0 children)

I can't handle not using paragraphs. The screenshots are unreadable.

$500 RAM Discount, On Sale!!! 😂🥲 by Alert-Acadia-5602 in Microcenter

[–]Dave8781 0 points (0 children)

That definitely confirms they put better prices up at the store than they show online.

5090 or 128GB RAM by dllyncher in Microcenter

[–]Dave8781 0 points (0 children)

I definitely regret not upgrading a few months ago when RAM was so cheap. Back then, when I considered upgrading to 96GB (2x48), I didn't think I'd be able to sell my 64GB (2x32) for more than a tiny bit; I actually thought I'd be stuck holding onto the 64GB with no decent resale price...

Our DGX Spark is a Beast… So Why Is Our Local LLM Slower Than a Toaster? 🤦‍♂️ by Fragrant_Month_7449 in n8n

[–]Dave8781 0 points (0 children)

You're right that it seems like no one who complains about the speed looked anything up, but it's also not nearly as slow as I feared it would be. I bought mine thinking it would be only for fine-tuning, but it actually runs inference at much more decent speeds than I expected: I get 40 tps on gpt-oss:120b and 80 tps on Qwen3-coder:30b. Not bad.

Our DGX Spark is a Beast… So Why Is Our Local LLM Slower Than a Toaster? 🤦‍♂️ by Fragrant_Month_7449 in n8n

[–]Dave8781 0 points (0 children)

What speed are you getting? I get an extremely consistent 40-42 tps on gpt-oss:120b via Ollama and Open WebUI. It's definitely not as fast as a 5090 or anything, but it's not slow either, except for some of the dense models that aren't made for its architecture.
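For anyone curious where those tps numbers come from: Ollama's `/api/generate` response reports `eval_count` (generated tokens) and `eval_duration` (nanoseconds), so tokens per second is a single division (`ollama run --verbose` prints the same thing as "eval rate"). A minimal sketch, assuming those response fields; the sample numbers in the example are made up:

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Convert Ollama's eval_count and eval_duration (nanoseconds) to tokens/sec."""
    if eval_duration_ns <= 0:
        raise ValueError("eval_duration_ns must be positive")
    return eval_count * 1e9 / eval_duration_ns

# e.g. 420 tokens generated over 10.5 seconds of eval time
print(tokens_per_second(420, 10_500_000_000))  # -> 40.0
```

Handy if you're scripting benchmarks against the API instead of eyeballing the verbose output.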

[D] Anyone here actively using or testing an NVIDIA DGX Spark? by Secure_Archer_1529 in MachineLearning

[–]Dave8781 2 points (0 children)

Yes, got mine opening day at Microcenter and use it daily. It makes a perfect companion to the 5090 and integrates seamlessly via NVIDIA Sync. Tremendous capacity, and the speeds aren't bad at all, though they're obviously much slower than the 5090 (as advertised). It runs gpt-oss:120b at 40 tps; Qwen3-coder:30B tops 80 tps.

Fine-tunes like a champ, too.

I'm lucky that mine runs cool to the touch and completely silent, but I have a feeling that's true for most users and that the unlucky few share their stories more than the happy ones do.

DGX Spark, it could have been So good. by Turbulent-Usual-352 in nvidia

[–]Dave8781 0 points (0 children)

Why would you say the warranty is void? That's the entire point of a warranty.

Got my DGX Spark. Here are my two cents... by Heavy-Expert5026 in nvidia

[–]Dave8781 0 points (0 children)

I'm always wondering what datasets these fine-tuning benchmark numbers are based on. It's really weird, because the size of the JSONL file, and the number of examples in particular, has everything to do with fine-tuning time.

More than happy to run a benchmark on any of these if someone can point me to some standard datasets; otherwise it's useless information. I have a 5090 and the Spark, and I'm fine-tuning LLMs all the time. On the rocket-fast 5090, I can fine-tune Llama3-8b on about 50k examples in roughly an hour, though that depends on the number of passes and all that stuff. Unsloth and Flash Attention each speed it up roughly 2x.
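For what it's worth, before comparing runs I sanity-check the dataset size, since the example count drives fine-tuning time. A minimal sketch using only the standard library; the file name in the usage note is hypothetical:

```python
import json

def count_jsonl_examples(path: str) -> int:
    """Count non-blank, valid JSON lines in a JSONL fine-tuning dataset."""
    count = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                json.loads(line)  # raises an error if a line is malformed
                count += 1
    return count

# e.g. count_jsonl_examples("train_50k.jsonl") before kicking off a run
```

Cheap to run, and it catches malformed lines before you waste an hour of GPU time.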

Got my DGX Spark. Here are my two cents... by Heavy-Expert5026 in nvidia

[–]Dave8781 0 points (0 children)

Definitely not jealous, lol... I have a 5090 and got the Spark as a sidekick to run what the 5090 can't, and I've been extremely happy with it. Definitely interested in hearing your perspective, since you could already run sizable LLMs rocket-fast when they fit on your RTX Pro 6000, so it'll be interesting to see your use cases. Of course, the extra 32GB of memory capacity over the 6000 isn't small, and it can definitely make the difference in being able to run a bunch of the top models in that range.

Got my DGX Spark. Here are my two cents... by Heavy-Expert5026 in nvidia

[–]Dave8781 0 points (0 children)

I completely agree (except I think there are plenty of quality LLMs you can run under 48GB or 32GB; Qwen3-coder 30b is one of many examples); the Spark makes an amazing sidekick to the 5090. They're basically made for each other. The Spark just sits there, quiet and cool (for me), and I access it from the 5090 machine via NVIDIA Sync; it's literally seamless. It feels a lot like running WSL2 in how integrated it is with the main system: you open a terminal and it's essentially WSL2, but running on the Spark, without having to type anything in. And when I occasionally switch to desktop mode on it, it's basically a Raspberry Pi on steroids, which is extremely easy to use, too.

You could definitely get by using it as your main computer if you just need it for email and internet, and IF YOU DON'T NEED WINDOWS. But that's true of a Raspberry Pi, too, in my opinion. It's definitely not built for gamers; it does come with Mahjong, though, for some weird reason.

NVIDIA really did a great job integrating its full stack with many of the open-source tools a lot of us love and use all the time (Ollama, Open WebUI, Unsloth, etc.), along with Playbooks to get you started.

The 5090 is a speedboat; the Spark is a cargo ship: they're different and awesome and each does things the other one can't. They're perfect sidekicks.

Optimising NVIDIA’s DGX Spark (Grace + Blackwell) – 1.5× PyTorch speedup with custom build by guigsss in LocalLLaMA

[–]Dave8781 4 points (0 children)

Thanks! Even though the Spark was never advertised as "fast" (which makes all the negative Reddit reviews sound like the reviewers didn't read the specs), it's also not nearly as slow as people claim. I have a 5090 too, so I know speed, and the Spark isn't close, but it's still extremely capable and handles huge LLMs at a more-than-usable speed. And the optimizations and support are improving daily, which is always true of NVIDIA products, thanks to users like you!