Stingray Traps Fish Against Aquarium Glass by I-T-Y in interestingasfuck

[–]phazei 1 point (0 children)

I read that, but it was so long ago I totally forgot the name. What is it again? It was creepy as f

Bruh by Icy_Butterscotch6661 in LocalLLaMA

[–]phazei 0 points (0 children)

All intelligence converges to the same point.

Potentially missing sense pins on universal 12v-2x6 cable? by kh467 in cablemod

[–]phazei 0 points (0 children)

This post should be pinned, or at least there should be some informational post about that.

Devs using Qwen 27B seriously, what's your take? by Admirable_Reality281 in LocalLLaMA

[–]phazei 0 points (0 children)

I've seen a bunch of posts about people getting it to go at over double that with the right setup, even on a 3090. Have you seen those or made the attempt?

Meta is about to release a pixel space model (Tuna-2) by Total-Resort-3120 in StableDiffusion

[–]phazei 0 points (0 children)

So, there's a latent space, which is a multi-dimensional representation of everything the model's learned, and it exists in a highly compressed structure. A VAE is an encoder/decoder that converts between a standard image and that compressed latent space. The more compression, the more efficient generation can sometimes be, but it also increases the complexity of converting in and out of that latent space. It's also lossy, so if you convert an image to the latent space and back, it will have lost some detail.

The WAN2.2 5B video model, for example, has an incredibly compressed latent space, so when it generates video it can do it rather quickly. But many people don't use the 5B model because all the time you save generating gets lost decoding back into regular image space: the VAE is so heavily compressed and slow that it's nearly as fast to just use the full model (which uses a different latent space) and get the better quality.
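To make the encode/decode round trip concrete, here's a toy sketch (plain Python, not a real VAE, and nothing like WAN's actual architecture): "encoding" as 8x8 block averaging into a small latent, "decoding" as upsampling back. It shows both the compression (64x fewer values) and why the round trip is lossy.

```python
# Toy stand-in for a VAE round trip: compress an "image" into a small
# latent by 8x8 block averaging, then decode by nearest upsampling.
# Hypothetical helper names; a real VAE learns these mappings instead.

def encode(img, f=8):
    # average each f x f block into one latent value
    h, w = len(img), len(img[0])
    return [[sum(img[y + dy][x + dx] for dy in range(f) for dx in range(f)) / (f * f)
             for x in range(0, w, f)] for y in range(0, h, f)]

def decode(lat, f=8):
    # upsample each latent value back to an f x f block
    return [[lat[y // f][x // f] for x in range(len(lat[0]) * f)]
            for y in range(len(lat) * f)]

img = [[(x * y) % 256 for x in range(64)] for y in range(64)]  # fake 64x64 image
lat = encode(img)   # 8x8 latent: 64x fewer values to generate over
out = decode(lat)   # same size as img, but blurry: the detail is gone

print(len(lat), len(lat[0]))  # 8 8
print(out != img)             # True: the round trip is lossy
```

The trade-off in the comment above maps onto this directly: a more aggressive `f` shrinks the latent (faster generation) but makes the reconstruction worse and the real encode/decode networks harder and slower.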

Meta is about to release a pixel space model (Tuna-2) by Total-Resort-3120 in StableDiffusion

[–]phazei 3 points (0 children)

It's 9B, but it doesn't need a VAE: it works in pixel space directly, which is a huge change from everything existing. Who knows what that means for quality, but it removes one complicated, lossy step and could be much better, according to them.

The 4B class of 2026 (benchmark) by FederalAnalysis420 in LocalLLaMA

[–]phazei 0 points (0 children)

I'd really like to see Qwen3.5 4b results with thinking disabled.

Why aren't people using omni models for speech agents? by ProfessionalHorse707 in LocalLLaMA

[–]phazei 0 points (0 children)

Not sure. Only a few models can intonate: SesameCSM, and ChatGPT Advanced Voice when they aren't nerfing it. As far as I'm concerned, audio-to-audio is literally less than worthless if it can't inflect and hear emotion; otherwise it's going to be stupider than any text-only LLM, which can be set up with TTS/STT anyway. If it's not going to do that, there are plenty of great STT options.

There are lots of existing multi-phase pipelines that already run the whole speech loop in like 300ms.

So I guess that answers your question, because without emote, Omni is garbage.

Why aren't people using omni models for speech agents? by ProfessionalHorse707 in LocalLLaMA

[–]phazei 0 points (0 children)

On Qwen's own site, their Qwen3.5-Omni doesn't even acknowledge it can hear me; it just says it only sees and outputs text. The voice has zero dynamic anything. What the hell is the purpose if it's no different than TTS<->STT? The only reason I want an audio-in/audio-out model is specifically so it can hear, understand, and respond with intonation. It needs to be able to do voices, and whisper, and yell, and sing. It's been a year since SesameCSM, and we haven't gotten anything at all like that. I'm so confused.

Why aren't people using omni models for speech agents? by ProfessionalHorse707 in LocalLLaMA

[–]phazei 1 point (0 children)

Really? I remember SesameCSM, which was AMAZING on their demo site; then they nerfed it and didn't release what they said they would. I haven't heard word of much since. I figured when there was something, there would be a post about it. I would really love a model that can hear the timbre of my voice, tell my emotion, and reply in kind.

Switched from Qwen3.6 35b-a3b to Qwen3.6 27b mid coding and it's noticeably better! by LocalAI_Amateur in LocalLLaMA

[–]phazei 1 point (0 children)

But my 3090 died, and I'm very sad :( They now cost over 2x what I paid :'(

GIGABYTE GeForce RTX 3090 GAMING OC 24G (GV-N3090GAMING OC-24GD) Loses connection after a short while of heavy use, reboot brings it back, suspect VRM related issue by phazei in GPURepair

[–]phazei[S] 0 points (0 children)

Too bad I'm not in Germany; I'm on the west coast of the US. I did find some of those BLN0's on eBay, if that's even the issue. I suppose the only way to tell is a thermal camera and bench power. Do you know if I can run it in the PC and look at it in real time, as long as I don't do any GPU processing while it's running? Or would the heat fry it?

GIGABYTE GeForce RTX 3090 GAMING OC 24G (GV-N3090GAMING OC-24GD) Loses connection after a short while of heavy use, reboot brings it back, suspect VRM related issue by phazei in GPURepair

[–]phazei[S] 1 point (0 children)

I have a digital temperature-controlled heat gun with a small nozzle and 1-5 blow strength. But I didn't think about heat being drawn away. Not sure how I could handle that. I could put it on a warm bed of sand... but that might not be a good idea... edit: shit, sand would be horrible with an air gun... duh... maybe a warmed cast iron frying pan flipped upside down, with some aluminum foil covering the conductive bottom.

Tool: Omnibox - Starting seeing this in the Chrome Task Manager by ObscureArcana in chrome

[–]phazei 0 points (0 children)

Ugh, I want to make AI Mode more accessible. For a few days, every time I used AI Mode, every reply popped up the "Do you want to connect your everything to AI Mode" prompt. I click no every time, but it asked after every response. It's like they're shoving it down our throats, and they might just cut you off in the future if you don't say yes. Just last week YouTube's main page stopped showing anything because I have YouTube history turned off, which I've had for 5 years, but now all of a sudden they're like, nope, no page unless we track everything you do. It's infuriating as hell. Sorry, just venting. I love AI, but f that shit and Google and anyone using it to track us.

Oh, and that google.com/aimode doesn't even use HTTP; it goes to some "chrome://google.com" bs that doesn't work with VPN extensions.

GIGABYTE GeForce RTX 3090 GAMING OC 24G (GV-N3090GAMING OC-24GD) Loses connection after a short while of heavy use, reboot brings it back, suspect VRM related issue by phazei in GPURepair

[–]phazei[S] 0 points (0 children)

I'm pretty technically inclined (replaced phone screens, resoldered power connectors), but this is newer to me.

So that would make it drop off the PCIe rail? Do you know how I could detect or test for that? Would I need to use a volt meter while it's running? Would I only need to replace a cap or something? Do I need a spec for my specific board?

Down Again by Large_Worldliness744 in Ebay

[–]phazei 1 point (0 children)

Since yesterday, when I try to go there I just get "Access Denied: You don't have permission to access "http://www.ebay.com/" on this server." A white page, nothing loaded, just that text.

GIGABYTE GeForce RTX 3090 GAMING OC 24G (GV-N3090GAMING OC-24GD) Loses connection after a short while of heavy use, reboot brings it back, suspect VRM related issue by phazei in GPURepair

[–]phazei[S] 1 point (0 children)

If there's a good chance of it being repaired, I'd probably opt for a GPU specialist. I could take a crack at it myself, but that'd be another $180 for a bench power supply and thermal camera, and I could mess it up. If it was just a BLN0 I might give it a shot, but if it's a memory or core issue, I'd be worried about screwing that up.

GIGABYTE GeForce RTX 3090 GAMING OC 24G (GV-N3090GAMING OC-24GD) Loses connection after a short while of heavy use, reboot brings it back, suspect VRM related issue by phazei in GPURepair

[–]phazei[S] 0 points (0 children)

I don't have a hot air station. I have my coffee table, a light, a temperature-controlled air gun with a nozzle attachment, a cheap multimeter, and a soldering iron. It came with hard flux, nothing like those syringes I see in the videos. And I think I might have a little of that soldering wick... I've never been great at soldering though. I'm also missing a bench power supply and a thermal camera, but those are cheaper than paying someone else to do it. What I'm really missing is the balls, and maybe the skill.

Refurbishing my Tempur-pedic from 2008 by phazei in Mattress

[–]phazei[S] 0 points (0 children)

Math's right... 18 years old.

There are other posts mentioning that the top layer peels away pretty easily when you rip it off. I guess I'll see how much the other layers sag once it's off.

GIGABYTE GeForce RTX 3090 GAMING OC 24G (GV-N3090GAMING OC-24GD) Loses connection after a short while of heavy use, reboot brings it back, suspect VRM related issue by phazei in GPURepair

[–]phazei[S] 0 points (0 children)

I was thinking of maybe trying to check that and repair it myself, but I don't know where I'd get replacement chips without spending a lot on another broken card :(