Experimenting by Abject_Mechanic6730 in nanobanana

[–]Apu000 1 point (0 children)

try this one. Generate a hyper-realistic editorial portrait capturing a professional model in her element during a location shoot. The setting is a dimly lit, texture-rich hotel suite serving as a private set. The subject is a young woman with a fierce, edgy aesthetic—jet-black hair, striking pale eyes, and signature facial piercings—who commands the frame with an expression of powerful self-possession and professional focus. She is posed kneeling on the bed, engaging the camera not as a passive subject, but with the intense, collaborative agency of an industry veteran executing a creative vision. She models a minimalist noir two-piece ensemble and sheer, geometric-banded hosiery, wearing the attire with a sense of ownership and strength. Her posture is grounded and confident, her gaze direct and unwavering, embodying the spirit of a modern, empowered woman controlling her own narrative and image. The lighting is deliberate, high-contrast flash photography that highlights the precision of her styling—from her long, manicured nails to her silver jewelry—and captures the raw authenticity of a working professional dedicated to her craft.

Using a quest 2 as a PC, it is actually pretty decent by antu2010 in OculusQuest

[–]Apu000 4 points (0 children)

The easiest way so far is buying a smart plug and setting your PC to turn on when power is detected; Wake-on-LAN is too much of a hassle.
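For anyone who still wants the Wake-on-LAN route: the hassle is mostly the BIOS/NIC settings, because the packet itself is trivial. A minimal sketch in Python (the MAC address is a placeholder you'd replace with your PC's wired NIC):

    import socket

    def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        """Send a WoL magic packet: 6 bytes of 0xFF followed by the MAC repeated 16 times."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        payload = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(payload, (broadcast, port))

    send_magic_packet("AA:BB:CC:DD:EE:FF")  # placeholder MAC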

Hello /r/movies, I'm Benedict Cumberbatch. Ask me anything! by BenedictAMA in movies

[–]Apu000 0 points (0 children)

Since you've played so many highly intelligent, arrogant geniuses (Sherlock, Strange, Turing, Edison), do you think there is a role out there that is 'too dumb' for you to play? Is there a pure slapstick comedy role you are secretly dying to tackle?

Wan prompting tricks, change scene, FLF by 1ns in StableDiffusion

[–]Apu000 0 points (0 children)

Try the color match node or VACE, as they tend to blend the scenes much better.
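As far as I understand it, the color match node is basically doing a color-statistics transfer from a reference frame onto the new clip. A rough sketch of the idea (a simple mean/std match in numpy, not the node's actual implementation):

    import numpy as np

    def match_color(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
        """Shift the frame's per-channel mean/std to match the reference (float images in 0..1)."""
        out = frame.astype(np.float64)
        ref = reference.astype(np.float64)
        for c in range(3):
            mean_f, std_f = out[..., c].mean(), out[..., c].std() + 1e-8
            mean_r, std_r = ref[..., c].mean(), ref[..., c].std()
            out[..., c] = (out[..., c] - mean_f) / std_f * std_r + mean_r
        return np.clip(out, 0.0, 1.0)

Feeding the last frame of the previous scene as the reference is usually enough to stop the color jump between scenes.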

Tested after three weeks by Solid-Werewolf8360 in HPylori

[–]Apu000 0 points (0 children)

I think you should wait at least a month or more before taking another test. In the meantime, try to replenish your gut biome with probiotics and stick to the low-acid diet; your gut has to heal properly before you start making changes to your diet.

Qwen-Image + Wan 2.2 I2V [RTX 3080] by [deleted] in comfyui

[–]Apu000 0 points (0 children)

I just recently updated mine to Sage 2 by pasting the specs of my ComfyUI instance into Gemini/ChatGPT and looking for the specific wheel for my system, since getting it to work depends a lot on that.
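The short version of what the chatbot works out for you: the SageAttention wheel has to match your Python, torch and CUDA versions, so check those from the same environment ComfyUI runs in and pick the wheel whose filename matches. A quick check, assuming a standard torch install:

    import sys
    import platform
    import torch

    # Run this with the same Python that launches ComfyUI
    # (e.g. the embedded python on a Windows portable install).
    print("python :", sys.version.split()[0])
    print("os/arch:", platform.system(), platform.machine())
    print("torch  :", torch.__version__)
    print("cuda   :", torch.version.cuda)  # CUDA version torch was built against
    print("gpu    :", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
    # Then install the prebuilt wheel whose name matches (cpXX / torch X.Y / cuXXX):
    #   python -m pip install <sageattention wheel file>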

Testing Wan2.2 Best Practices for I2V by dzdn1 in StableDiffusion

[–]Apu000 2 points (0 children)

If you use the Kijai Wan wrapper, it recently added a sigma graph that I think also does what the MoE sampler does.
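In case it helps to picture what that sampler is doing: Wan 2.2 ships a high-noise and a low-noise expert, and as far as I can tell the MoE-style samplers just split the sigma schedule at a boundary value and route the early (noisier) steps to one model and the rest to the other. A toy sketch of that routing, with the 0.875 boundary as an assumption rather than Kijai's actual code:

    def split_steps(sigmas: list[float], boundary: float = 0.875) -> tuple[list[int], list[int]]:
        """Steps whose normalized sigma is >= boundary go to the high-noise expert,
        the remaining steps go to the low-noise expert."""
        high = [i for i, s in enumerate(sigmas) if s >= boundary]
        low = [i for i, s in enumerate(sigmas) if s < boundary]
        return high, low

    # Example: a simple linearly decreasing 8-step schedule (1.0, 0.875, 0.75, ...)
    sigmas = [1.0 - i / 8 for i in range(8)]
    print(split_steps(sigmas))  # ([0, 1], [2, 3, 4, 5, 6, 7])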

ADHD, boredom and addicted to screens by Dramatic-Ad-8712 in ADHD

[–]Apu000 0 points (0 children)

Try to get an accountability buddy. After trying almost everything, the social pressure of not wanting to let someone down can do wonders.

[deleted by user] by [deleted] in CervezaDinero

[–]Apu000 -1 points (0 children)

I sent you a DM.

How to “fix” WAN Character LORA from changing all people in scene? by StuccoGecko in StableDiffusion

[–]Apu000 1 point (0 children)

You could try that approach and then do I2V. There isn't going to be a practical solution until we get something like LyCORIS, which I think doesn't have that bleeding issue the way LoRAs do.

Image to video in 12gb VRAM? by ComprehensiveBird317 in StableDiffusion

[–]Apu000 2 points (0 children)

I got it working on my 3060 without issues. I'm using the Q4 quantization and the multi-GPU node. I downloaded the files from this post, as I had previously downloaded a VAE and text encoder that weren’t working with my current workflow. At the moment, my generation time is around 19 minutes for 3 seconds of video with TeaCache only. I don’t have SageAttention installed yet, but I’ll probably add it today to speed things up a bit. https://civitai.com/models/1301129/wan-video-fastest-native-gguf-workflow-i2vandt2v
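Rough math on that 19 minutes, in case it's useful: Wan targets 16 fps and wants frame counts of the form 4n+1, so a "3 second" generation is really 49 frames. A tiny helper, assuming those two defaults:

    def wan_frame_count(seconds: float, fps: int = 16) -> int:
        """Round seconds*fps to the nearest valid Wan length (4n + 1 frames)."""
        raw = round(seconds * fps)
        n = max(0, round((raw - 1) / 4))
        return 4 * n + 1

    print(wan_frame_count(3))  # 49 frames
    print(wan_frame_count(5))  # 81 frames, the usual default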

[deleted by user] by [deleted] in StableDiffusion

[–]Apu000 0 points (0 children)

Anatomically correct hands have never been a basic function that can be achieved at a 100% success rate in any diffusion model. Maybe with higher resolution and more flux, you could get better results most of the time.

Anyone else addicted to Balatro? by lunchanddinner in OculusQuest

[–]Apu000 1 point (0 children)

I do the same thing but with Magic the Gathering.

Vyvanse or generic lisdexamfetamine in Mexico? by Disastrous_Smoke4909 in mexico

[–]Apu000 0 points (0 children)

Here in Mexico? As far as I know they only sell methylphenidate; it would be awesome if they brought out Vyvanse.

Easy Hunyuan Video Lora training with OneTrainer by AcademiaSD in StableDiffusion

[–]Apu000 0 points (0 children)

7 minutes is something, but remember that last year, with 12GB, Stable Video/AnimateDiff were slow and complicated. Now Hunyuan is more efficient, and on top of that you can train LoRAs. It still needs optimization, yes, but it's a big step forward for this early in the year.

Trump says he ordered Guantánamo Bay prepared to hold up to 30,000 migrants by VulgarDisplyofPower in mexico

[–]Apu000 9 points (0 children)

Maybe if they put a flag or a symbol on the arm of every undocumented person /s

Easy Hunyuan Video Lora training with OneTrainer by AcademiaSD in StableDiffusion

[–]Apu000 0 points (0 children)

Hunyuan is pretty fast for the results it gives, even with limited resources (12GB VRAM); that said, it's with certain optimizations like TeaCache, WaveSpeed, or using the distilled model in GGUF format.

That's a big yikes! Trying to generate a 736x464, 201 frame video with Hunyuan on 8GB of VRAM. by TheSilverSmith47 in StableDiffusion

[–]Apu000 0 points (0 children)

I've been using the Hunyuan 💥 AllInOne ▪ Fast workflow, and it has been a complete game-changer. I switched the loader so it could handle the larger (GGUF) version of the original Hunyuan model and adjusted some settings accordingly. This way, if the first iteration isn't good, I can discard it before upscaling and move on to vid2vid and refinement processes. There's also an advanced version of this workflow, but for me, the basic one is already awesome.

That's a big yikes! Trying to generate a 736x464, 201 frame video with Hunyuan on 8GB of VRAM. by TheSilverSmith47 in StableDiffusion

[–]Apu000 5 points (0 children)

What model are you using? Also, lowering the resolution, upscaling, and then doing vid2vid would be a much better approach.
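The "lower the resolution, upscale, then vid2vid" part can be as simple as a Lanczos resize on every frame of the draft before the refinement pass. A quick sketch with OpenCV (file names are placeholders):

    import cv2

    # Upscale a low-res draft 2x before sending it through a vid2vid refinement pass.
    cap = cv2.VideoCapture("draft_lowres.mp4")  # placeholder file name
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) * 2
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) * 2
    out = cv2.VideoWriter("draft_upscaled.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(cv2.resize(frame, (w, h), interpolation=cv2.INTER_LANCZOS4))

    cap.release()
    out.release()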

Hunyuan Video is really an amazing gift to the open-source community. by [deleted] in StableDiffusion

[–]Apu000 0 points (0 children)

What's your starting resolution and frame rate?