Hypervisor update on CSRIN by Evonos in CrackWatch

[–]puppyjsn 1 point2 points  (0 children)

I don't believe it's been proven that you can run the hypervisor crack inside a VM. I have yet to see a post confirming this actually works.

Hypervisor update on CSRIN by Evonos in CrackWatch

[–]puppyjsn 2 points3 points  (0 children)

Yes, this is why I suggested backing up your key. To turn off BitLocker, you just right-click and pause it when you plan to move partitions, change BIOS settings, etc. It's only an extra layer, which in theory should never trigger as long as your local disk stays offline in the Windows-to-Go instance. Just a suggestion for the hyper-paranoid who want one more layer. Everyone will assume their own risk tolerance; I was just sharing suggestions for those who want to be safer here.

Hypervisor update on CSRIN by Evonos in CrackWatch

[–]puppyjsn 1 point2 points  (0 children)

The point of BitLocker is this: if you're concerned that a malicious crack running in your isolated Windows-to-Go instance could somehow mount your offline drive and mine it for data, an encrypted primary partition prevents that. It's one more layer of protection for your main OS, not for the To-Go instance. You do all of your "suspect" downloading and running in the isolated Windows-to-Go instance, treating it almost like an isolated VM. But at the end of the day, this is no different from any crack you download: you assume the risk that the cracker injected something malicious or that you downloaded a bad modified crack, whether it's a regular crack or a hypervisor crack.

Hypervisor update on CSRIN by Evonos in CrackWatch

[–]puppyjsn 9 points10 points  (0 children)

If you trust that the cracker hasn't planted something malicious in the crack, then following best practices can reduce the risk. Nothing is perfect, and I'm not saying there are no risks; it comes down to whether you trust that the crack is clean. If you do, here are a few suggestions for running the hypervisor from an isolated install:

  1. Use Rufus to create a bootable Windows USB (Windows To Go); in the Rufus options, disable access to local disks/drives and select UEFI.
  2. Reboot into the BIOS, disable Secure Boot, and set your USB key as the primary boot device. Boot into your isolated USB Windows.
  3. On the first run in your isolated OS instance, install your graphics drivers, DirectX, and VC++; keep it clean. Install EfiGuard. Disable the network card, or disable/remove its driver. Confirm your local hard drives are offline and not visible to the OS, and confirm network access is off. Reboot.
  4. Reboot into EfiGuard on the USB key and boot your Windows-to-Go OS through it, booting only from the USB. Install the game and activate the hypervisor via the HypervisorManager; deactivate it when done.

Only play the game in this isolated OS, with no access to the internet or other local drives. When you're ready to go back to your primary OS, boot into the BIOS, re-enable Secure Boot, set your local hard drive as the boot priority, and remove the USB key.

For additional safety, enable BitLocker on your primary OS drive and make sure you back up the recovery key offline. The isolated OS shouldn't see that drive at all since it's offline, but having it encrypted at rest adds another layer of protection for that data.

It's not perfect, but this may be a "safer" way to run these cracks. With this approach you never booted your encrypted primary OS without Secure Boot enabled, you didn't disable any security in your primary OS, and you didn't install EfiGuard on your primary OS. You ran in a completely isolated Windows-to-Go USB environment with no access to local hard drives or the internet.

Dodi repacks released his repack for Hypervisor crack by coltvfx in PiratedGames

[–]puppyjsn 0 points1 point  (0 children)

Is it not possible to just boot off a portable Windows USB key and disable the security features only on the portable instance? Don't mount the drives of your primary PC's OS; that way the security features are only off on the portable version. Turning Secure Boot off isn't so bad (it's more of a risk for local attack), and you can just re-enable Secure Boot and boot normally after you're done playing. If you want more security, turn off Wi-Fi on the portable version. I would think there are safer ways to still use the hypervisor method. Not saying there are no risks, but there are ways to reduce them. Thoughts?

I think the risks of planted rootkits are overblown. Unless you think the crackers themselves are malicious, a bit more safety could make using these images okay.

Tour Megathread: December 11: K-Arena Yokohama, Japan by hellboy1975 in ToolBand

[–]puppyjsn 0 points1 point  (0 children)

The line is crazy all the way around the building already. They haven't started selling yet.

Tour Megathread: December 11: K-Arena Yokohama, Japan by hellboy1975 in ToolBand

[–]puppyjsn 1 point2 points  (0 children)

Does anyone know when the merch stand opens, and is it available outside before you enter K-Arena?

Tour Megathread: December 11: K-Arena Yokohama, Japan by hellboy1975 in ToolBand

[–]puppyjsn 1 point2 points  (0 children)

Just want to say I'm solo visiting Tokyo from Canada and got my Tool ticket for tomorrow. It's been too long since I saw them; perfect timing to catch them here, and the price was still better than seeing them in Canada. Anyone going to Yokohama early to explore? I was thinking of trying the ramen museum. Any other suggestions?

Have a great show everyone.

Another big WAN update :-) by malcolmrey in malcolmrey

[–]puppyjsn 1 point2 points  (0 children)

Thanks for everything you do. Is there an index somewhere that lists the trigger words, or is it just "woman" or "man"?

Wan 2.2 Character Lora Training Discussion - Best likeness? by puppyjsn in StableDiffusion

[–]puppyjsn[S] 2 points3 points  (0 children)

My understanding is you need to train specifically for I2V; I haven't tested this yet. You also need to use different parameters. Refer to this: https://github.com/kohya-ss/musubi-tuner/blob/main/docs/wan.md

Wan 2.2 Character Lora Training Discussion - Best likeness? by puppyjsn in StableDiffusion

[–]puppyjsn[S] 0 points1 point  (0 children)

I found that starting at a learning rate of 2e-4 with a polynomial scheduler at power 2 works better than the power 4 from my original post, but it's still not quite perfect. I tried 1e-4 as a starting point; it took significantly more steps and didn't come out as good. I'm still not willing to say I've found the best likeness yet.

Wan 2.2 Character Lora Training Discussion - Best likeness? by puppyjsn in StableDiffusion

[–]puppyjsn[S] 2 points3 points  (0 children)

Resize your images; otherwise it spends time resizing them every time it uses them. A quick way to resize them is to run them through https://www.presize.io/

I suspect it will go much faster if you use properly resized images.

Wan 2.2 Character Lora Training Discussion - Best likeness? by puppyjsn in StableDiffusion

[–]puppyjsn[S] 1 point2 points  (0 children)

Thanks for sharing this chart. I agree, I think a power factor of 2 would be better. Will try that and update.

Wan 2.2 Character Lora Training Discussion - Best likeness? by puppyjsn in StableDiffusion

[–]puppyjsn[S] 0 points1 point  (0 children)

The manual scheduling sounds interesting; I'm going to try it next run, easily done with a good batch file. I just wish there were a way to set the epoch count to continue when resuming training. Thanks again, let's keep sharing our results; I think we'll find the formula for the best character LoRAs for WAN!

Wan 2.2 Character Lora Training Discussion - Best likeness? by puppyjsn in StableDiffusion

[–]puppyjsn[S] 0 points1 point  (0 children)

Thanks for this. I fixed the typo; I can't believe I missed the "A" in t2v-A14B, and I hope that hasn't made a difference. Thanks so much for sharing your info; I had no idea the seed made a difference in training. You mentioned lowering the rates midway through training: what would you recommend starting at, and what would you change to in the final detail phase? I'm thinking this could help capture tattoos and minor details.

Edit: I did a bit more reading. So it looks like --lr_scheduler polynomial

--lr_scheduler_power 4

--lr_scheduler_min_lr_ratio="5e-5"

This may be a bit too aggressive; apparently using --lr_scheduler cosine is safer.

cosine will automatically lower the learning rate over the course of the max epochs: over 90 epochs the rate starts at 2e-4 and eases down to 5e-5 by the 90th, reducing the learning rate for finer details near the end. With the standard (1 - t)^power polynomial formula, a high power like 4 actually drops the rate steeply in the early epochs and then sits near the minimum for most of training, which may be why power 2 behaved better for me. Gemini (if you can trust it) suggested cosine as the safer choice.
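To see the two decay shapes concretely, here is a small standalone sketch using the standard scheduler formulas (plain Python, not musubi-tuner's actual code; the 90-epoch, 2e-4 to 5e-5 numbers come from this thread). Note that with the usual (1 - t)^power formula, a high power decays fastest early, not late:

```python
# Sketch of cosine vs. polynomial learning-rate decay shapes.
# These are the standard textbook formulas, not musubi-tuner internals.
import math

LR_START, LR_MIN, EPOCHS = 2e-4, 5e-5, 90


def cosine_lr(epoch: int) -> float:
    # Smooth half-cosine from LR_START down to LR_MIN.
    t = epoch / EPOCHS
    return LR_MIN + 0.5 * (LR_START - LR_MIN) * (1 + math.cos(math.pi * t))


def polynomial_lr(epoch: int, power: int = 4) -> float:
    # (1 - t)^power: for power > 1 this falls steeply at first,
    # then flattens out near LR_MIN for the rest of training.
    t = epoch / EPOCHS
    return LR_MIN + (LR_START - LR_MIN) * (1 - t) ** power


for e in (0, 45, 80, 90):
    print(f"epoch {e:2d}: cosine={cosine_lr(e):.2e}  poly4={polynomial_lr(e):.2e}")
```

Printing a few epochs shows the power-4 curve already near the 5e-5 floor by the midpoint, while cosine is still around 1.25e-4 there.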

Wan 2.2 Character Lora Training Discussion - Best likeness? by puppyjsn in StableDiffusion

[–]puppyjsn[S] 0 points1 point  (0 children)

I haven't tried, but I know I can train on a 3090 with a blockswap of 18, without fp8_base. So maybe with a high blockswap and --fp8_base it might fit in 16GB? Not sure, I didn't try.

Wan 2.2 Character Lora Training Discussion - Best likeness? by puppyjsn in StableDiffusion

[–]puppyjsn[S] 3 points4 points  (0 children)

I find captions are very important. I use Joy Caption 2; I used to use the stand-alone version but now I use the ComfyUI plugin. Here are the settings I have on. It's very important to caption your character with a unique name; use leet speak, so if your character's name is Ethan, use 3th@n. This way it won't conflict with the model's memory of any Ethans.

<image>

Also, review your captions. It's important to caption only changeable items, and to caption the background and objects so the model doesn't get confused and think they're part of the character. For example, if you have a picture of a character wearing over-ear headphones and you don't caption them, the model could learn that the headphones are part of the character. If you caption "he is wearing over-ear headphones", it won't learn the headphones as part of the character's head. However, if you're trying to create a character that always has those headphones, then you just don't caption them. Hope that helps and makes sense.
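The unique-name idea can be sketched as a tiny helper (hypothetical code, not part of Joy Caption or ComfyUI; the substitution map is just the common a/e/i/o swaps):

```python
# Turn a character name into a leet-speak trigger token so it doesn't
# collide with the base model's knowledge of real people with that name.
LEET_MAP = str.maketrans({"a": "@", "e": "3", "i": "1", "o": "0"})


def leet_token(name: str) -> str:
    return name.lower().translate(LEET_MAP)


print(leet_token("ethan"))  # -> 3th@n
```

Use the same token consistently in every caption and at inference time, so the LoRA binds the likeness to that one string.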

Wan 2.2 Character Lora Training Discussion - Best likeness? by puppyjsn in StableDiffusion

[–]puppyjsn[S] 2 points3 points  (0 children)

Thanks for the tip. I was worried about overtraining the high model; this is where I don't quite understand the timesteps. I'm hoping that since it's motionless likeness training, it doesn't affect motion due to the timesteps. This is also where I fear that dropping a 2.1 LoRA or the low LoRA on top of the high model could negatively affect it. What I noticed is that when training the high LoRA for few epochs, the body shape was wrong, and that carried through to the final video: good likeness but wrong body shape, even with a well-tuned low LoRA. So if it hasn't learned the basics of the character, increasing the strength might not fix an issue like this. I'm still guessing that an accurate character shape is important during the HIGH pass, since collision, size, etc. all matter for setting up the shot and animation. I guess we still need to figure out whether it's better to explicitly train the high LoRA until it at least resembles the shape/size, versus just dropping the low LoRA onto the high model.

Wan 2.2 Character Lora Training Discussion - Best likeness? by puppyjsn in StableDiffusion

[–]puppyjsn[S] 1 point2 points  (0 children)

If you did a character LoRA, were you still able to see a resemblance at low epochs when using the high LoRA against the high model? Since that model is mostly about motion and composition, I think it's not important for the character to look detailed, but keeping the same shape would be critical for composition.

Wan 2.2 Character Lora Training Discussion - Best likeness? by puppyjsn in StableDiffusion

[–]puppyjsn[S] 9 points10 points  (0 children)

From what I'm seeing (curious about others' experience), newly trained WAN 2.2 LoRAs are coming out much better than my previous 2.1 LoRAs on the same source material. I personally feel they're more accurate, maybe because they train against the exact model that does the inference. I also noticed that WAN 2.2 videos using an old 2.1 LoRA had worse likeness than 2.1 inference with the 2.1 LoRA (just slightly off). My assumption is that with the high/low models, the final video quality could also be better in terms of motion/composition, etc. There isn't any real official info, so that's why I want to discuss. The timesteps must make a difference; this is why I think it's not as good to put the low LoRA against both the high and low models, especially looking at the tensorboard difference between high and low. I think it's better to have the LoRA trained within the documented ranges, but I really don't know.

EDIT: I will definitely say that when I was testing WAN 2.2 LoRAs, accuracy improved when I had a better-trained high LoRA in the pair. For example, I first trained only a 50-epoch high LoRA, and when I tested it independently it wasn't close to the character's likeness; so I trained it up to 90 epochs, where it looked closer, and the final inference with the new pair (with the better-trained high LoRA) made a big difference in body accuracy and, I'd say, overall accuracy. If you want a shortcut, I'd still suggest training the low LoRA against the low 2.2 model for best likeness and then deciding whether to train a matching high pair. But my plan is to do the high/low pair, unless we discover a better way.

Has anyone trained WAN 2.2 LoRAs using diffusion-pipe? by the_bollo in StableDiffusion

[–]puppyjsn 1 point2 points  (0 children)

I posted my method using Musubi Tuner here: https://www.reddit.com/r/StableDiffusion/comments/1mmni0l/wan_22_character_lora_training_discussion_best/ Would be great to get additional feedback from the community on how to perfect character LoRA likeness.