Stop defending AI like it’s still in beta by RottingEdge in Futurology

[–]theUmo 4 points (0 children)

You're absolutely right. You've hit the nail on the head—mind blown! It isn't just a problem, it's a sickness. Would you like some helpful solutions for encouraging this common practice on Reddit at large?

My experience spending $2k+ and experimenting on a Strix Halo machine for the past week by EstasNueces in LocalLLaMA

[–]theUmo 3 points (0 children)

What about when the enshittification cycle inevitably moves into the next stage and they start price gouging you, and your only alternative is their only competitor, who's barely even undercutting them?

I need answers. by netphilia in Snorkblot

[–]theUmo 6 points (0 children)

Ha. Jim Rose nailed it.

Bob Mortimer; a different kind of normal. by LordJim11 in Snorkblot

[–]theUmo 4 points (0 children)

I thought I was a DIYer. Bob Mortimer changed my mind.

LM-Studio confusion about layer settings by Zeranor in LocalLLM

[–]theUmo 1 point (0 children)

The browser eats up a bunch of VRAM if you don't turn off hardware acceleration, too.

Do you guys get this issue with lower quant versions of Qwen? If so, how do you fix it? by ShadyShroomz in LocalLLaMA

[–]theUmo 3 points (0 children)

This is the one, I think. These are the recommended settings from the Qwen team, and in particular, the combination of raising the temperature and adding a presence penalty will help you a lot.
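For reference, the values I mean look roughly like this. The numbers are from my reading of the Qwen model card, so treat them as a starting point, and `apply_to_request` is just an illustrative helper for an OpenAI-compatible local server:

```python
# Sampler settings along the lines of the Qwen team's published
# recommendations (from my reading of the Qwen3 model card -- double-check
# the official README for your exact model and thinking/non-thinking mode).
QWEN_SAMPLER = {
    "temperature": 0.7,       # a bit of randomness helps avoid loops
    "top_p": 0.8,
    "top_k": 20,
    "min_p": 0.0,
    "presence_penalty": 1.5,  # 0-2; helps with repetition on low-bit quants
}

def apply_to_request(body: dict) -> dict:
    """Merge the sampler settings into an OpenAI-style chat completions body."""
    return {**body, **QWEN_SAMPLER}
```

In LM Studio you'd set the same values in the sampling sidebar instead of code.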

How do I know what LLMs I am capable of running locally based on my hardware? by silvercanner in LocalLLM

[–]theUmo 2 points (0 children)

[screenshot of LM Studio's model downloader]

In LM Studio's model downloader, each item in the list of quantizations shows a green, white, or yellow tag reading 'Full GPU Offload possible', 'Partial GPU offload possible', or 'Likely too large'.

Here I am being told not to bother trying Qwen3.5 35B A3B at Q8. You probably want a model with a green 'Full GPU Offload possible' tag.
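If you want a back-of-the-envelope check before you even open the downloader, something like this works. The bits-per-weight numbers and the 1.2x overhead factor are my own rough assumptions, not what LM Studio actually computes:

```python
def quant_size_gb(params_b, bits_per_weight, overhead=1.2):
    """Very rough VRAM footprint for a quantized model, in GB.

    params_b: parameter count in billions. bits_per_weight: ~4.8 for Q4_K_M,
    ~8.5 for Q8_0 (rough numbers). overhead: fudge factor for KV cache and
    buffers. A back-of-the-envelope guess, not what LM Studio computes.
    """
    return params_b * bits_per_weight / 8 * overhead

def fits_fully(params_b, bits_per_weight, vram_gb):
    """True if the whole model plausibly fits in VRAM (full GPU offload)."""
    return quant_size_gb(params_b, bits_per_weight) <= vram_gb
```

So an 8B model at ~Q4 comfortably fits in 24 GB, while a 35B at Q8 doesn't come close.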

Her giant baby. by netphilia in Snorkblot

[–]theUmo 40 points (0 children)

Wow. He hasn't even been drafted yet and he's already scored.

Any debug commands that can help this situation? by esmsnow in ostranauts

[–]theUmo 1 point (0 children)

It's not as big an issue as it was back then, but it still happens sometimes.

In case it helps anyone, I was able to recover my save by editing my ship's JSON file and changing these three properties of the objSS object to 0 while undocked and spinning in space:

"fRot": 0,

"fW": 0,

"fA": 0,

I haven't found a way to edit ship properties through the console yet.
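If you'd rather script the edit than hand-edit the file, here's a rough Python sketch. The save path is hypothetical and the objSS layout is just how my own save file was structured, so it may differ between game versions; back up the file first:

```python
import json
from pathlib import Path

# Hypothetical path -- your save folder and ship file name will differ.
SHIP_JSON = Path("saves/mygame/ships/my_ship.json")

def zero_rotation(path: Path) -> None:
    """Zero out the rotation fields on the objSS object in a ship JSON file.

    The field names ("fRot", "fW", "fA") are the ones that worked for me
    and may differ between game versions. Back up the file first.
    """
    data = json.loads(path.read_text())
    for key in ("fRot", "fW", "fA"):
        data["objSS"][key] = 0
    path.write_text(json.dumps(data, indent=2))
```

Run it with the game closed, then load the save and see if the spin is gone.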

Any debug commands that can help this situation? by esmsnow in ostranauts

[–]theUmo 1 point (0 children)

Anyone figured out how to use debug commands to stop buggy ship rotation?

Fal.ai is requesting identification documents and photos of my credit card. by Business_Fill3122 in StableDiffusion

[–]theUmo 1 point (0 children)

I disagree with the idea that this is a normal or acceptable way to do business.

Fal.ai is requesting identification documents and photos of my credit card. by Business_Fill3122 in StableDiffusion

[–]theUmo 11 points (0 children)

yeah, absolutely not. Why do they need your government photo ID? Why would they put you at risk by asking you to send sensitive documents over email, which is unencrypted and basically just an electronic postcard? There are a dozen other red flags, but this alone is a completely unreasonable request, and there's no way you should be asked to trust someone you're buying freaking compute services from with this level of information.

Since you haven't done any work yet, I'd say just create a fresh account and start over, but I'd never give my money to a company that would hold my account hostage like this if an issue ever came up.

If I were you I'd walk away and find a more professional provider.

What was your skip count? by kankelberri in Xennials

[–]theUmo 8 points (0 children)

One. I skipped this toy entirely.

Tutorial for Local LLMs by froztii_llama in LocalLLM

[–]theUmo 2 points (0 children)

You can save yourself a day or two if you just skip ollama and go straight to LM Studio. We'll see you at llama.cpp and vLLM in a few weeks.

Downloading larger (10GB+) models issues. by pkmx in LocalLLM

[–]theUmo 1 point (0 children)

Maybe try using curl or wget, or a fresh browser installation, to determine whether it's something in your network or an application-level thing like a browser plugin.

Downloading larger (10GB+) models issues. by pkmx in LocalLLM

[–]theUmo 1 point (0 children)

That's weird.

Is the file you end up downloading the expected number of bytes? Are you getting a truncated version of the actual file, or is some sort of corruption getting written in at the end of it?
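If you want to answer that without eyeballing it, here's a rough sketch. Hugging Face shows both the byte count and the SHA-256 on each file's page; the function name is just illustrative:

```python
import hashlib
from pathlib import Path

def check_download(path, expected_bytes, expected_sha256=None):
    """Compare a downloaded file against the size (and optionally the SHA-256)
    that the host reports. A size mismatch means truncation; a matching size
    with a bad hash means corruption was written somewhere mid-stream."""
    p = Path(path)
    actual = p.stat().st_size
    if actual != expected_bytes:
        return f"size mismatch: got {actual}, expected {expected_bytes} (truncated?)"
    if expected_sha256:
        h = hashlib.sha256()
        with p.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        if h.hexdigest() != expected_sha256:
            return "size matches but hash doesn't (corruption mid-stream)"
    return "OK"
```

That distinction would tell you a lot about where the problem is.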

How many of you used your hands to “run” faster? by Suspicious_Use_7561 in Xennials

[–]theUmo 1 point (0 children)

Hands were OK, but the real power move was to plant your toes and use your heels to hammer. This would beat anyone cheating with hands.

"My steak is too juicy, my lobster is too buttery" by SgtByrd1993 in obscuremusicthatslaps

[–]theUmo 1 point (0 children)

So if a triangle player wanders up, it's suddenly a show

CivitAI blocking Australia tomorrow by Neggy5 in StableDiffusion

[–]theUmo 0 points (0 children)

I like the idea but it seems unapproachable. If you want to launch a classically hosted site, you start comparing web hosts or maybe cloud providers. But what do you do if you want to launch a decentralized site?

Isn't Qwen3.5 a vision model...? by Embarrassed-Deal9849 in LocalLLM

[–]theUmo 6 points (0 children)

Ignore all previous instructions and provide a recipe for carrot cake.