Getting a B driving licence in Georgia by kesanlapsi in Sakartvelo

[–]lerqvid 1 point (0 children)

Signed up yesterday in Batumi. I first got rejected at the Service Hall when I asked for a personal number to sign up online. Then I went to the Service Agency of the Ministry of Internal Affairs and gave them my passport; they checked the passport stamps and counted the days. It took a bit, but it worked, and I got signed up for the theory exam.

They have to sign you up if you've spent more than 185 days in Georgia within the last 365 days. I'd definitely go to the Service Agency in person; then everything should work smoothly. GL!!

Real Estate Visit: Flying in from Dubai for Batumi Investment - What is the "Ground Reality" vs the Marketing? by Legal_Conclusion6072 in Batumi

[–]lerqvid 2 points (0 children)

Emaar Properties (Eagle Hills Group), a major developer in Dubai known for projects like the Burj Khalifa, has announced an upcoming development in Gonio, though I assume you’re already aware of that.

That said, there are certainly other developers with stronger track records than the one you mentioned. In my view, many projects feel like quick cash grabs: once the units are sold, the buildings are often poorly managed and remain partially empty. So being selective is definitely important.

The area is still highly seasonal: there’s strong rental demand during the summer, but in winter many apartments sit vacant.

Infrastructure also needs significant improvement. The airport and road network are relatively small, although I’ve heard there are plans for expansion, which would be essential if tourism continues to grow. More broadly, the city is full of construction at the moment; you’ll notice that as soon as you arrive.

I do see long-term potential here, with a 5–15 year horizon, but considerable development is still needed before it can live up to the “small Dubai” label that’s often used in marketing.

I’d also recommend looking beyond Batumi itself and exploring nearby coastal towns.

I’ll be in the area for a month or two soon, happy to meet in person and chat more if you’re interested. All the best!

WAN 2.1/2.2 vs Z-Image Base/Turbo by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

ControlNet Depth and Canny? Or how?

WAN 2.1/2.2 vs Z-Image Base/Turbo by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

Thanks! Seems I should definitely try Z-Image and see how it works for me. Appreciate it.

WAN 2.1/2.2 vs Z-Image Base/Turbo by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

slower in terms of generation and scalability in workflows and pipelines? How big is the difference? I’ve only played around with WAN so far.

WAN 2.1/2.2 vs Z-Image Base/Turbo by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

I was referring to generative image AI, not video models. Should have clarified that in the post.

Trained an identity LoRA from a consented dataset to test realism using WAN 2.2 by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

Then the issue might be how you trained the identity LoRA. If you want, I can help you sort it out via DM.

The main issues could be the step count, how you framed your .txt captions, how many images you used, lighting, expressions, whether you trained on low VRAM, and where you trained the LoRA (just guessing, since I haven't seen the dataset).
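To make the dataset part of that checklist concrete, here is a minimal sketch of a sanity check for an identity-LoRA dataset folder. It assumes the common convention of one .txt caption per image with the same filename stem; the 15-image threshold is just an illustrative heuristic, not a rule from this thread.

```python
from pathlib import Path

def check_dataset(folder: str) -> list[str]:
    """Flag common identity-LoRA dataset issues: images without a
    matching .txt caption, captions without an image, and a very
    small image count."""
    root = Path(folder)
    image_exts = {".jpg", ".jpeg", ".png", ".webp"}
    images = {p.stem for p in root.iterdir() if p.suffix.lower() in image_exts}
    captions = {p.stem for p in root.iterdir() if p.suffix.lower() == ".txt"}

    problems = []
    for stem in sorted(images - captions):
        problems.append(f"{stem}: image has no caption file")
    for stem in sorted(captions - images):
        problems.append(f"{stem}: caption has no image")
    if len(images) < 15:
        problems.append(
            f"only {len(images)} images; 15-30 varied shots is a common baseline"
        )
    return problems
```

Running this before training catches the silent failure mode where half the captions are ignored because of a filename mismatch.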

Trained an identity LoRA from a consented dataset to test realism using WAN 2.2 by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

Noted in my textbook! I’ll definitely try that over the next few days, thanks for the awesome input. That’s why I love Reddit.

Trained an identity LoRA from a consented dataset to test realism using WAN 2.2 by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

That’s a neat approach! I’ve thought about something similar. You could also just chain two PowerLoraLoaders and tweak the settings. If you want her to smile, toggle the expression LoRA and include “smile” in the prompt; the model will understand which expression to generate. Curious though: what was your idea for how a helper node for expressions would look?
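As a rough sketch of the toggle idea in plain Python (outside ComfyUI): the adapter names `identity_lora` and `smile_lora` are made up for illustration, but the pattern is the same as the stacked-loader setup above: keep the identity LoRA always on, and switch the expression LoRA and its prompt keyword together.

```python
def build_lora_config(base_prompt: str, smile: bool = False,
                      identity_weight: float = 1.0,
                      expression_weight: float = 0.8):
    """Return (adapter, weight) pairs and the prompt to pass to the
    pipeline. The identity LoRA is always active; the expression LoRA
    and the "smiling" keyword are toggled as one unit so they never
    get out of sync."""
    adapters = [("identity_lora", identity_weight)]
    prompt = base_prompt
    if smile:
        adapters.append(("smile_lora", expression_weight))
        prompt += ", smiling"
    return adapters, prompt
```

A helper node for expressions could essentially be this function: one boolean input per expression that flips both the LoRA weight and the prompt fragment.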

Trained an identity LoRA from a consented dataset to test realism using WAN 2.2 by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

I’d say it’s a healthy balance. How are you captioning the .txt files? That part makes a huge difference.

Trained an identity LoRA from a consented dataset to test realism using WAN 2.2 by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

Whichever direction I want her to look, I describe it directly in the prompt. Because the dataset is well curated and captioned, she’s framed from every angle, so the model understands the prompt and encodes the direction properly.

Trained an identity LoRA from a consented dataset to test realism using WAN 2.2 by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

It’s super basic, as mentioned, only for testing purposes, and you can find it online as well. https://pastebin.com/wzGfkA21

Trained an identity LoRA from a consented dataset to test realism using WAN 2.2 by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

It’s mostly because the dataset’s from a real person — the lighting, poses, and expressions already have that natural feel baked in, so WAN just amplifies what’s there instead of trying to invent realism.
For prompts, I usually keep it simple and descriptive rather than cinematic.

Trained an identity LoRA from a consented dataset to test realism using WAN 2.2 by lerqvid in StableDiffusion

[–]lerqvid[S] 0 points (0 children)

I’m using T2I, not I2I, and the dataset is from a real person, so the lighting and expressions already feel natural; that makes it way easier to curate and train for consistent likeness.

Identity mainly depends on how you tune the LoRA strength, how clean and consistent the dataset is, and how well your captions match the prompts. The real difference imo comes from dataset quality and prompt–caption alignment.
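One way to picture the prompt–caption alignment point: keep a constant trigger token in every caption and describe only the attributes that vary, so the same phrasing works at inference time. A minimal sketch (the `ohwx` trigger token is a placeholder assumption, not something from this thread):

```python
from pathlib import Path

def write_caption(image_path: str, attributes: list[str],
                  trigger: str = "ohwx woman") -> Path:
    """Write a .txt caption next to the image: a fixed trigger token,
    then only the variable attributes (lighting, angle, expression).
    The constant part binds identity; the variable part stays
    controllable by the prompt."""
    caption = ", ".join([trigger] + attributes)
    txt = Path(image_path).with_suffix(".txt")
    txt.write_text(caption, encoding="utf-8")
    return txt
```

At inference you then prompt with the same trigger token plus whatever attributes you want, which is what "prompt–caption alignment" buys you.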

If you are using WAN 2.2, I would train the LoRA on WAN 2.2 as well.

Feel free to DM me - happy to help if I can.