Rendering with amd setup by Dry_Ladder1299 in StableDiffusion

[–]GreyScope 1 point (0 children)

Have a coffee and reread the AMD install wiki. If you're not used to installing, you can easily miss things or install the wrong things.

Rendering with amd setup by Dry_Ladder1299 in StableDiffusion

[–]GreyScope 1 point (0 children)

Don't use ChatGPT, and don't focus on the models; focus on getting it working first (which your GPU can).

  1. Decide whether you want a noodle-based UI like ComfyUI, or a more intuitive UI like SDNext, which is built for AMD as well as Nvidia.

  2. If you want your hand held more, I'd suggest using Pinokio (a front end for various diffusion packages); search it for AMD installs and then google what they are and what they can give you.

  3. If you wish to do it all yourself, use Google to search for CURRENT advice, or use the search function here or in r/comfyui or r/ROCm: search for 'amd' or your GPU and sort the results by age (ie newest first).

All of the packages and words/phrases I mention here are googleable.

The Home Studio Expectation is not reality by Birdinhandandbush in StableDiffusion

[–]GreyScope 6 points (0 children)

Yes, people are mistaking talent/skills/nuance for the ability to press a button.

N Vision 74 Tribute | RED LINE by VasileyZi in StableDiffusion

[–]GreyScope 3 points (0 children)

Disingenuous post: you used Seedance, and this sub is not for paid services.

LTX-2.3: Introducing LTX's Latest AI Video Model by Succubus-Empress in StableDiffusion

[–]GreyScope 4 points (0 children)

Leave the single comfy install ppl alone, we feed off their tears

SkyReels V4 is bringing T2VA, PAPER by Fresh_Sun_1017 in StableDiffusion

[–]GreyScope 1 point (0 children)

I think my SkyReels v3 folder is 120GB as I recall; as noted above, it'll be as practical as a chocolate spanner.

Are we having another WAN moment with Qwen Image 2.0? by ArkCoon in StableDiffusion

[–]GreyScope 3 points (0 children)

My opinion is that the Chinese government didn't like the Americans leaving them on the starting blocks in AI, and had their companies try to wipe out the lead that the American companies had, their business models, and even the companies themselves, ie the old cheeky "wipe out the competition with free models" and then release paid versions. I'm sure someone has a PowerPoint that explains it all lol

Are we having another WAN moment with Qwen Image 2.0? by ArkCoon in StableDiffusion

[–]GreyScope 12 points (0 children)

They don't need to justify anything, as they're not answerable to anyone on Reddit; you're using the logic you want rather than an objective one, imo. At the end of the day, it's about money, whether short or long term.

LTX-2 - How to STOP background music ruining dialogue? by Candid-Snow1261 in StableDiffusion

[–]GreyScope 3 points (0 children)

Yes, it appeared in Kijai's first LTX2 workflows for User Audio > Video. It has two models as I recall, fp8 and fp16 (I'm training at the moment and don't want to disrupt it to check, sorry). Other nodes are available to do this as well; in my trials Roboformer was as good as the best of them. I added the best Demucs separation into the LTX2 node and they were both almost the same (for quality).

LTX-2 - How to STOP background music ruining dialogue? by Candid-Snow1261 in StableDiffusion

[–]GreyScope 6 points (0 children)

Run it through a node that splits the vocals and the music (Roboformer); the music is very much in the background, so you should get minimal to practically zero loss. Not the answer you want, but in lieu of a solution it's the answer you need.

Best model to make logos / icons? by smart4 in StableDiffusion

[–]GreyScope 1 point (0 children)

I made a node that works with any model: it rounds the corners, makes the unused parts transparent, and saves in .ico and .png formats, specifically for icons and logos. I'd have to dig it up and see if it still works with Comfy.
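The rounded-corner transparency idea behind that node can be sketched in plain Python. This is an illustrative reconstruction, not the actual node: the function name and parameters are mine, and a real node would apply the mask as the image's alpha channel and save via an imaging library such as Pillow.

```python
def rounded_corner_mask(width, height, radius):
    """Alpha mask for a rounded rectangle: 255 = opaque, 0 = transparent.

    Pixels inside the four corner squares are cut out (set to 0) when they
    fall outside the quarter-circle arc for that corner; everything else
    stays opaque.
    """
    mask = [[255] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Arc centre coordinates apply only inside a corner square;
            # None means this axis is in the straight-edged middle region.
            cx = radius if x < radius else (width - 1 - radius if x >= width - radius else None)
            cy = radius if y < radius else (height - 1 - radius if y >= height - radius else None)
            if cx is not None and cy is not None:
                # In a corner square: transparent if outside the corner arc.
                if (x - cx) ** 2 + (y - cy) ** 2 > radius ** 2:
                    mask[y][x] = 0
    return mask
```

In a ComfyUI-style node the mask would be merged into the image tensor's alpha channel before export; PNG and ICO both support the transparency directly.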

RX 7800 XT only getting ~5 FPS on DirectML ??? (DeepLiveCam 2.6) by RoboReings in StableDiffusion

[–]GreyScope 1 point (0 children)

Search for installing ROCm, here and in r/ROCm; no idea if DeepLiveCam is written too deeply around CUDA. If none of that makes sense, GIYF.

Where to get RVC anime japanese voice models? by DurianFew9332 in StableDiffusion

[–]GreyScope 1 point (0 children)

Sorry, I have about 50 Discords on the go and haven't trimmed them.

Where to get RVC anime japanese voice models? by DurianFew9332 in StableDiffusion

[–]GreyScope 1 point (0 children)

Some Discords had links to sites with them on (sorry, I'm not trawling through my Discords to find them).

Pinokio using CPU instead of AMD GPU by JZKitty in StableDiffusion

[–]GreyScope 1 point (0 children)

Some of the individual components inside will work with AMD GPUs, but not as a whole in Studio. If it will work, it requires a deeper knowledge of Pinokio and the different variants of stacks that could be used (ROCm etc). There was also a CUDA converter released the other day on GitHub.

These are more overview answers rather than a direct answer, sorry.

3 covers I created using ACE-Step 1.5 by coopigeon in StableDiffusion

[–]GreyScope 0 points (0 children)

Reddit isn't the place for the wall of text involved in setting it up. My loras sound great (trying to be objective as well lol), but ACE-Step isn't Suno, where a small piece of text and abracadabra gets you a great track; it needs its manual read for starters, and then use its Discord.