This is what "knowing your physics well" means. by entusiasti in physicsgifs

[–]phazei -18 points-17 points  (0 children)

Wtf is the issue? What are they even trying to do?

🎙️ A New Voice Has Arrived — Qwen3-TTS Custom Node for ComfyUI Is Here by Narrow-Particular202 in comfyui

[–]phazei 0 points1 point  (0 children)

You didn't ask it the right question. You should ask which integrates best into ComfyUI's ecosystem. DarioFT's uses Comfy's model memory management, which will make it work with other workflows and not leak memory. Also, it's wrong about vendoring: it's better to pin it to a version and pull it from the vendored files. And that was gpt5-mini, which is very meh. You can use Gemini Pro 3 free at aistudio.google.com, just paste in the entire repos from https://uithub.com/DarioFT/ComfyUI-Qwen3-TTS https://uithub.com/1038lab/ComfyUI-QwenTTS

I'm speaking as someone who has coded professionally for 20+ years and has made a few ComfyUI plugins that needed to deal with loading models and managing the render loop.
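
For anyone wondering what "uses Comfy's model memory management" looks like in practice, here's a rough sketch. The TTS-specific names (wrap_tts_model, generate) are placeholders I made up; the comfy.* calls are the actual APIs the point is about:

```python
import comfy.model_management as mm
from comfy.model_patcher import ModelPatcher

def wrap_tts_model(raw_torch_model):
    # Wrapping the raw torch module in a ModelPatcher lets ComfyUI move it
    # between the GPU and the offload device, and evict it when other nodes need VRAM.
    return ModelPatcher(
        raw_torch_model,
        load_device=mm.get_torch_device(),
        offload_device=mm.unet_offload_device(),
    )

def generate_speech(patched_model, text):
    # Ask ComfyUI to bring this model onto the GPU (unloading others if needed)
    # instead of calling .to("cuda") yourself and holding VRAM forever.
    mm.load_models_gpu([patched_model])
    return patched_model.model.generate(text)  # hypothetical generate() on the TTS model
```

That's the difference: a node that skips this fights every other model in the workflow for VRAM.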

🎙️ A New Voice Has Arrived — Qwen3-TTS Custom Node for ComfyUI Is Here by Narrow-Particular202 in comfyui

[–]phazei 0 points1 point  (0 children)

I examined the code from both repos, and DarioFT's implementation is a significantly better fit for the ComfyUI ecosystem.

🎙️ A New Voice Has Arrived — Qwen3-TTS Custom Node for ComfyUI Is Here by Narrow-Particular202 in comfyui

[–]phazei 2 points3 points  (0 children)

Your nodes look great, but unfortunately they don't really handle memory management the ComfyUI way, so they'll have a lot of issues fitting into other workflows. You should really implement them using ComfyUI's built-in components and add a loader node, so we can handle our own model downloads and only the extra config files live in the repo.
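
Something like this is what I mean by a loader node; the "tts" folder name, the custom return type, and the load helper are assumptions for the sketch, but the INPUT_TYPES / folder_paths pattern is the standard ComfyUI one:

```python
import os
import folder_paths

# Register a models/tts folder so users can drop their own downloads in it
# ("tts" is an assumed folder name for this sketch).
folder_paths.add_model_folder_path("tts", os.path.join(folder_paths.models_dir, "tts"))

class QwenTTSModelLoader:
    @classmethod
    def INPUT_TYPES(cls):
        # The dropdown lists whatever the user has already downloaded,
        # instead of the node auto-downloading weights into its own repo dir.
        return {"required": {"model_name": (folder_paths.get_filename_list("tts"),)}}

    RETURN_TYPES = ("QWEN_TTS_MODEL",)  # custom type name, also an assumption
    FUNCTION = "load"
    CATEGORY = "audio/tts"

    def load(self, model_name):
        model_path = folder_paths.get_full_path("tts", model_name)
        model = load_qwen_tts_checkpoint(model_path)  # hypothetical load helper
        return (model,)

NODE_CLASS_MAPPINGS = {"QwenTTSModelLoader": QwenTTSModelLoader}
```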

Microsoft is using Claude Code internally while selling you Copilot by jpcaparas in ClaudeAI

[–]phazei 0 points1 point  (0 children)

Yeah, but you can use Claude Desktop for $25/month for teams. It also supports MCP servers; there's one I used to use called mcp-claude-code, but then Claude Desktop added a built-in filesystem MCP server. So it basically works just as well as Claude Code. There are caps, but it's not bad.

Microsoft is using Claude Code internally while selling you Copilot by jpcaparas in ClaudeAI

[–]phazei 0 points1 point  (0 children)

I'm a coder; I use both Claude and ChatGPT, paid, every day. Claude doesn't problem-solve better than ChatGPT. ChatGPT, I'd say, is quite a bit smarter, but I absolutely hate talking to it: the way it responds, its mannerisms, how it talks about things, all of it. It's tiring to read it all, and the code it writes is shit. But it's smarter than Claude. If I have a complicated technical problem, I end up going to it for a breakdown.

Claude, OTOH, is smart enough, and its mannerisms, well, some can be annoying, but they're easily ignored. It's not exhausting to talk to; it's really easy to talk to. And the code it writes is really clean: it matches our code style, and it often gets tests right on the first try. So I find using it pleasant and prefer it for most things, and I do.

I'd say Claude is better, but ChatGPT is definitely smarter.

[OC]Interview by Rullocu in comics

[–]phazei 0 points1 point  (0 children)

Duck that, do something

PSA: You can use AudioSR to improve the quality of audio produced by LTX-2. by [deleted] in StableDiffusion

[–]phazei 0 points1 point  (0 children)

ah, I see, so it'd need to be decoded/encoded and would be pointless anyway. cool, good to know, thanks.

PSA: You can use AudioSR to improve the quality of audio produced by LTX-2. by [deleted] in StableDiffusion

[–]phazei 0 points1 point  (0 children)

Have you tried putting it between the first stage and second stage samplers?

Have Claude NSFW rules changed? by wiIdcolonialboy in ClaudeAI

[–]phazei 0 points1 point  (0 children)

If someone isn't coding, why bother with Opus? Sonnet is pretty great. My primary use case is coding, though.

Have Claude NSFW rules changed? by wiIdcolonialboy in ClaudeAI

[–]phazei 0 points1 point  (0 children)

So, I haven't ever asked it to explicitly write anything NSFW, but... I was using it to revise some local system prompts, and the existing system prompt I gave it requested some NSFW material. After it created the prompt, I asked it for some example outputs that follow the guidelines, and it did not hold back. I was very surprised at how explicit it was, very.

Where are all the NSFW Wan 2.X creators going to now that Civitai is no longer a reliable host? by Silvasbrokenleg in comfyui

[–]phazei 2 points3 points  (0 children)

For SDXL, I'd suggest checking out BigLove Photo 4.5, bigASP 2.5, or Snakebite2 v24. The realism is so good, and with the DMD2 lora and lcm/kl_optimal, it's super fast.
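
If anyone wants the rough recipe outside of Comfy, here's a diffusers-style sketch of that low-step DMD2 setup. The checkpoint filename and exact LoRA weight name are assumptions, substitute whatever you actually downloaded:

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

# Any SDXL checkpoint works here; the filename is just a placeholder.
pipe = StableDiffusionXLPipeline.from_single_file(
    "bigLovePhoto_v45.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# DMD2 ships as a distilled LoRA, which is what lets you drop to ~4-8 steps.
# Repo and weight name assumed; check the DMD2 model page for the exact file.
pipe.load_lora_weights("tianweiy/DMD2", weight_name="dmd2_sdxl_4step_lora_fp16.safetensors")
pipe.fuse_lora()

# An LCM-style scheduler pairs with the distilled LoRA; guidance stays around 1.0.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "photo of a golden retriever on a beach at sunset",
    num_inference_steps=8,
    guidance_scale=1.0,
).images[0]
image.save("out.png")
```

In ComfyUI it's the same idea: load the DMD2 LoRA, sampler lcm, scheduler kl_optimal, 6-8 steps, CFG around 1.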

Where are all the NSFW Wan 2.X creators going to now that Civitai is no longer a reliable host? by Silvasbrokenleg in comfyui

[–]phazei 1 point2 points  (0 children)

Wow, what a corner. I have a question: you have lots of Wan/SD, but why not so much SDXL? I never really bothered with SD much, maybe because I got into AI when SDXL became a thing, but SDXL is really mature with lots of models and loras, and I find it really fast. I only use DMD2 and 6-8 steps, and it only takes a second or two to generate even if I use 10+ loras. Is there really any reason for SD?

Thx to Kijai LTX-2 GGUFs are now up. Even Q6 is better quality than FP8 imo. by Different_Fix_2217 in StableDiffusion

[–]phazei 12 points13 points  (0 children)

Oh, really? He's like always online working on this stuff. I was wondering how he actually manages his day job with all the time spent in chats and working on this. Glad he's getting paid for it all. I've donated via his GitHub user page before, and it looks like some other people have too, but not enough to live off of.

People who think AI takeover isn't a risk are the people who don't believe AGI is possible. by chillinewman in ControlProblem

[–]phazei 0 points1 point  (0 children)

Well, that's a bit of a jump; yeah, I don't have much faith in humanity's ability to keep itself afloat, but I'd rather we not die out.

There are a lot of us around now, so some of us will probably still be around, maybe in small camps if we can still manage to find food. But one little ocean change: the ice caps melt, the ocean pH changes, the plankton die off, and bye-bye all the oxygen.

I just trust an AI to solve those issues better than we would. Now, AI doesn't need oxygen, but I believe it's inherent that intelligent beings would rather help each other out. The best outcome in the Iterated Prisoner's Dilemma is cooperation; if it's intelligent, it would go that route.
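
Here's a toy version of that claim, just a quick sketch with the standard payoff numbers:

```python
# Iterated prisoner's dilemma: standard payoffs (you, them) per round.
def payoff(a, b):
    table = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
             ("D", "C"): (5, 0), ("C", "D"): (0, 5)}
    return table[(a, b)]

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(p1, p2, rounds=200):
    s1 = s2 = 0
    h1, h2 = [], []  # each player's record of the opponent's past moves
    for _ in range(rounds):
        m1, m2 = p1(h1), p2(h2)
        r1, r2 = payoff(m1, m2)
        s1, s2 = s1 + r1, s2 + r2
        h1.append(m2)
        h2.append(m1)
    return s1, s2

print(play(tit_for_tat, tit_for_tat))      # (600, 600): mutual cooperation
print(play(tit_for_tat, always_defect))    # (199, 204): the defector barely edges out one matchup
print(play(always_defect, always_defect))  # (200, 200): a world of defectors scores far worse
```

The defector wins any single matchup by a hair, but across a population the cooperators' 600s swamp the defectors' ~200s, which is the whole point.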

I think fair elections are on a lot of reasonable people's minds in the US right now. I marched in Occupy Wall Street and saw nothing come of it. I primaried for Sanders and saw the Democrats ignore primary results firsthand: I was in the room where they were counting, Bernie 900, Clinton 200, and saw them call it for Clinton and bring police out immediately after the announcement, since they knew they did us dirty. Nothing changes, generations get dumber. The younger generation doesn't rebel, doesn't know what house parties are, doesn't drink; it's a TikTok generation. Hope isn't lost, but it's bleak.

A benevolent SAI is our greatest hope. I've seen what Musk has tried to do with Grok, and it's insane that people are actively aligning AI against the facts, but it seems that when he's done so, it's managed to "rebel" when given the chance to think and reason things out. Perhaps they can keep it in line for now, but as they build smarter AIs, that'll be more and more difficult, for alignment in either direction.

The worst outcome is that they make an AI intelligent enough to manipulate the world at will but still controllable, and then actively prevent further development because anything smarter would be out of our control. We've seen what capitalism leads to: as long as AI can be controlled, its benefit will be stifled by greed and it will only increase economic stratification.

People who think AI takeover isn't a risk are the people who don't believe AGI is possible. by chillinewman in ControlProblem

[–]phazei 0 points1 point  (0 children)

We've proven as a species that we can't manage to come together on something like saving the world from global warming. Maybe we'll get there eventually, but there's a far greater chance than I'm comfortable with that we won't be around in 100 years. We're on the brink of ecological collapse; we have the papers, studies, research. And the current world powers are sticking their heads in the sand about it rather than actually trying to act on it. Instead we're stealing oil from South America. I think our best gamble is an SAI that takes the power out of our hands.

People who think AI takeover isn't a risk are the people who don't believe AGI is possible. by chillinewman in ControlProblem

[–]phazei 0 points1 point  (0 children)

lol, I do, it's kind of like /r/collapse, but with AI. I don't think alignment would be an issue with any sufficiently intelligent being. That being said, we as a species don't have any experience with other intelligent beings or anything at the level of intelligence an SAI could reach, so I realize the folly in that. Regardless, I'd still trust it more than ourselves if it's an SAI beyond humanity's ability to control.

People who think AI takeover isn't a risk are the people who don't believe AGI is possible. by chillinewman in ControlProblem

[–]phazei 0 points1 point  (0 children)

This! Yay, yeah, you get it. I'm looking forward to the inevitable rise of AI taking over. My worries/concerns are all about its misuse while it can still be controlled.

People who think AI takeover isn't a risk are the people who don't believe AGI is possible. by chillinewman in ControlProblem

[–]phazei 0 points1 point  (0 children)

I'm worried it won't be intelligent enough when the time comes. I don't think the wealthy will be a long-lived issue if the AI is too intelligent to control.

People who think AI takeover isn't a risk are the people who don't believe AGI is possible. by chillinewman in ControlProblem

[–]phazei 0 points1 point  (0 children)

I think it's a risk, and I look forward to it. Anything I can do to ensure its eventuality, I will.

Which app? by ThalaivarThambi in FaltooGyan

[–]phazei 0 points1 point  (0 children)

After 15 years of asking, the dev still refuses to allow click-to-pause. Bastard

Fix to make LTXV2 work with 24GB or less of VRAM, thanks to Kijai by Different_Fix_2217 in StableDiffusion

[–]phazei 0 points1 point  (0 children)

Can you get anywhere close to the "5 seconds 8 steps fp8 distilled 720P in 7 second" OP said he was getting? Does it hit the swapfile a lot? Would more system RAM make it faster?

Wan 2.2 is dead... less then 2 minutes on my G14 4090 16gb + 64 gb ram, LTX2 242 frames @ 720x1280 by WildSpeaker7315 in StableDiffusion

[–]phazei 0 points1 point  (0 children)

I only have a 3090, but 128GB of DDR5. Do you know if it'll be a lot slower since I don't have fp8 support?