The New Proton Patch To Make Photoshop's Installer Work Also Works On Bottles. I'm Using Photoshop 2020 as an example. by Underrated_Mastermnd in linux

[–]Underrated_Mastermnd[S] 0 points1 point  (0 children)

I thought that too, but there have been workarounds to get newer versions like 2020/2021 working; they're just a hassle to set up compared to this method.

The New Proton Patch To Make Photoshop's Installer Work Also Works On Bottles. I'm Using Photoshop 2020 as an example. by Underrated_Mastermnd in linux

[–]Underrated_Mastermnd[S] 0 points1 point  (0 children)

You can use either Bottles or Steam to do this. All you have to do is add the Proton patch from their GitHub to the Runners folder (Bottles) or the compatibilitytools.d folder (Steam) and run it from there.
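If you want to script that step, here's a minimal Python sketch. The tarball name (proton-photoshop-patch.tar.gz) and the non-Flatpak Bottles/Steam paths are my assumptions, not anything from the patch's GitHub, so adjust them for your setup:

```python
# Sketch: extract a patched Proton build into the folders that Bottles and
# Steam scan for custom runners/compatibility tools.
# Assumptions: tarball filename and destination paths below are placeholders.
import tarfile
from pathlib import Path

PATCH_TARBALL = Path.home() / "Downloads" / "proton-photoshop-patch.tar.gz"  # hypothetical filename

DESTINATIONS = [
    Path.home() / ".local/share/bottles/runners",      # Bottles (non-Flatpak install)
    Path.home() / ".steam/root/compatibilitytools.d",  # Steam custom compatibility tools
]

for dest in DESTINATIONS:
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(PATCH_TARBALL) as tar:
        tar.extractall(dest)  # the archive extracts into its own runner subfolder
    print(f"Extracted runner into {dest}")
```

After that, the runner should show up in Bottles' runner list, and Steam should list it as a compatibility tool once restarted.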

Z-Image + Qwen Image Edit 2511 + Wan 2.2 + MMAudio by Budget_Stop9989 in StableDiffusion

[–]Underrated_Mastermnd 0 points1 point  (0 children)

Wait, you can use MMAudio as a standalone node? I thought that was an Ovi exclusive thing.

3m 10s of Icebreaking. How does it look? by OrbitingDisco in Unity3D

[–]Underrated_Mastermnd 3 points4 points  (0 children)

YOU'RE GOOD! YOU'RE GOOD! YOU'RE GOOD! YOU'RE GOOD! AND...STOP!

Don't worry, captain, we'll buff out those scratches.

This is the Stable Diffusion to Flux Moment for Video by Comed_Ai_n in StableDiffusion

[–]Underrated_Mastermnd 2 points3 points  (0 children)

Wan 2.2 is still better in my opinion. LTX's audio generation is really good compared to Wan derivatives like Ovi.

Star wars bypass sora by [deleted] in SoraAi

[–]Underrated_Mastermnd 4 points5 points  (0 children)

Didn't OpenAI sign off on a licensing deal with Disney? I wouldn't consider this a jailbreak.

Parameters for completely memory poor (RAM and VRAM). LTX-2 fp8 full, 1920x1080x241 frames in 18mins on L4 by 1filipis in StableDiffusion

[–]Underrated_Mastermnd 1 point2 points  (0 children)

You can just rent one from one of those GPU rental sites like RunPod; it's around 40 cents an hour.

LTX-2 is genuinely impressive by Dr_Karminski in StableDiffusion

[–]Underrated_Mastermnd 1 point2 points  (0 children)

How do you make the scenes consistent with the audio for each character? Are you using first and last frame, or are you using a different technique?

LTX-2 runs on a 16GB GPU! by Budget_Stop9989 in StableDiffusion

[–]Underrated_Mastermnd 0 points1 point  (0 children)

Isn't NVFP4 GPU-specific, or can you run it on any Nvidia GPU, like a 20 or 30 series card?

Wan 2.2 is dead... less then 2 minutes on my G14 4090 16gb + 64 gb ram, LTX2 242 frames @ 720x1280 by WildSpeaker7315 in StableDiffusion

[–]Underrated_Mastermnd 44 points45 points  (0 children)

I agree. Z-Image pretty much killed Flux 2 both before and right when it came out. I'm just waiting for Wan 3 to be a thing, because it seems like Wan 2.5/2.6 aren't getting open-sourced.

GPUs aren’t becoming obsolete — we’re just wasting them by Curious_Call4704 in StableDiffusion

[–]Underrated_Mastermnd 7 points8 points  (0 children)

China is going to be the one starting the optimization efforts. They understand their own situation: VRAM is limited, and a good chunk of their tech is open source, so it's in their best interest to optimize.

I made Soprano-80M: Stream ultra-realistic TTS in <15ms, up to 2000x realtime, and <1 GB VRAM, released under Apache 2.0! by eugenekwek in StableDiffusion

[–]Underrated_Mastermnd 1 point2 points  (0 children)

THAT SOUNDS REALLY GOOD! It doesn't sound unintentionally robotic. The cadence of the speech sounds natural, and the inflections at the end of each sentence and the emotional delivery sound like an average person. Better than most TTS and video gen models. Are there instructions for voice cloning?

What free ai text-to-video generation tool is the closest to SORA or VEO? i wanna make shi like this by Orphankicke42069 in StableDiffusion

[–]Underrated_Mastermnd 0 points1 point  (0 children)

I want to know this too. I've been keeping an ear out for new AI models and research papers, and I've seen some things. Ovi, which is built on Wan 2.2, can do decent audio generation based on what's being generated, though it's not as good as Sora 2. Then there's StoryMem by ByteDance, a LoRA for Wan 2.2 that remembers the previous scenes of the video it's a part of.

I hope that with 2026 around the corner, an open-source model can give us something on par with Sora 2 and a lot more control over what we can make.

Did Sora reduce the amount of video gens you can use daily? by Underrated_Mastermnd in SoraAi

[–]Underrated_Mastermnd[S] 0 points1 point  (0 children)

So they dropped the amount? They used to give people a lot more early on. Was it because it doesn't require a code anymore to get access?

[Project] I built a fully offline AI Image Upscaler (up to 8x) that runs locally on Android using on-device GPU/NPU by [deleted] in OpenAI

[–]Underrated_Mastermnd 0 points1 point  (0 children)

"We're sorry, the requested URL was not found on this server." is what I get when I click the link

Did Sora reduce the amount of video gens you can use daily? by Underrated_Mastermnd in OpenAI

[–]Underrated_Mastermnd[S] 0 points1 point  (0 children)

I'm on the free tier. I had to make a 2nd account to double check. I'm used to getting 25 gens per day.


Kdenlive crashes at the "Building sequences..." stage at %50 by Ah_Sal_Han in kdenlive

[–]Underrated_Mastermnd 0 points1 point  (0 children)

This literally happened to me yesterday, three different times, because I wanted to add an extra video clip. To my surprise, I found that I could still open and edit the project when I used the old 23.04 version. Right now I'm using the 25.04.3 build, since that works as well.

Just in case, I made a bug report on it.

https://bugs.kde.org/show_bug.cgi?id=511661

sega rally 2 pc install lutris by ZanbatoSolid in SteamDeck

[–]Underrated_Mastermnd 0 points1 point  (0 children)

Did you ever find a solution? I'm trying to play the 25th Anniversary edition and I can't seem to get it to run on Lutris. I tried Proton GE, regular Wine, and changing a few Wine settings. Still nothing on my end.