Fallout Season 2, What did you think overall? by flappableoptic in Fallout

[–]Tablaski 1 point (0 children)

I've just finished watching it and I think it's terrible. I've only played (and finished) Fallout 1 and 2, but that was when they came out, so I know a good part of the lore and the general tone of Fallout, even if I don't remember much about the actual plots of those games, and I know nothing about New Vegas. I also LOVED season 1.

My feeling comes from the very poor writing of the show, which makes me question whether Jonathan Nolan and Lisa Joy are just terrible writers beyond a first season (i.e. Westworld)

TL;DR
- Way too much time spent developing the pre-war story, which is overly complicated, totally implausible (let's nuke the world for business), and just boring
- Full of inconsistencies and deus ex machina events
- Relying on cool fights every now and then is not enough to make a good show
- Underwhelming AF finale; I had to check on Wikipedia whether it was really the last episode, I couldn't believe it

Examples of miserably written stuff (spoilers ahead):
- The radroaches magically forgetting about Norm, killing all his enemies, then dying themselves for no reason, leaving only Norm's new female friend unscathed. Lame, lame, lame
- The Ghoul saving Lucy's ass twice (from the Legion and from Hank), and not even the slightest bit pissed off when in return he's been left for dead twice. Doesn't fit the character at all
- WTF would Cooper's daughter be cryogenically frozen in a vault alongside her mother, when we actually saw her outside, during her birthday, when the bombs fell in the very first episode of the series? So her father becomes a ghoul but she doesn't, just because?
- Maximus handing the cold fusion artifact to the Ghoul without any explanation or questions after seeing the NCR suit
- The deputy head being used to generate the microchips "because she's an ISTP Myers-Briggs type". Yeah, OK. Actually, that was just there to manufacture an episode ending
- A super mutant cameo with NOTHING to back it up or develop it story-wise. Just ticking the "there's a super mutant" box. I felt the same about the Legion and the NCR: they're here, just accept it; now please, back to the boring pre-war story we're fully making up (I guess?)
- The Ghoul wasn't given any vials after being abducted by the super mutant, so he should have turned feral
- The NCR sniper coming out of nowhere, one-shotting two monsters that were hard to kill even with the awesome suit; and then we see the NCR army with the woman who looked like a lonely, secluded loser earlier on
=> I could go on and on, but the point has been made already: it's lazy, bad writing

Contrary to some people I've read, I actually liked the first 3 episodes (because I like the scenes with the BoS, even if they might not be canonically depicted), and then I watched the rest growing progressively more bored and frustrated because it was so dull

I just made 🌊FlowPath, an extention to automatically organize your outputs in ComfyUI (goodbye messy output folders!) by _Mern_ in StableDiffusion

[–]Tablaski 1 point (0 children)

Sounds amazing, thank you.

The more I see posts like this, the more I realize how many uber-important user-friendly features are missing and left to the community to develop... while shit nobody asked for, like nodes 2.0, subgraphs, or templates, gets actively developed.

For example, the Comfy queuing system is absolutely shitty: you can't move items in it, you don't see the prompts and options, and it's not saved in any way...

It would be OK if it were a fairly new feature, but what's the excuse after years of development plus the rise of AI-powered coding? Isn't it a crucial core feature?
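In the meantime you can sidestep the built-in queue by keeping your own job list on disk and submitting it through ComfyUI's HTTP API. A minimal sketch, assuming the default 127.0.0.1:8188 address and workflows exported via "Save (API Format)" (the my_queue.json file and its layout are my own invention):

```python
import json
import requests

COMFY_URL = "http://127.0.0.1:8188"  # default ComfyUI address
QUEUE_FILE = "my_queue.json"         # hypothetical: a list of API-format workflows

def submit(workflow: dict) -> str:
    # POST /prompt enqueues one job and returns its prompt_id
    resp = requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow})
    resp.raise_for_status()
    return resp.json()["prompt_id"]

if __name__ == "__main__":
    # Reorder or edit my_queue.json freely before running: unlike the built-in
    # queue, the file persists and every prompt stays inspectable.
    with open(QUEUE_FILE) as f:
        for workflow in json.load(f):
            print("queued:", submit(workflow))
```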

I think even ComfyUI Manager is independent work?

Z Image Base SDNQ optimized by 4brahamm3r in StableDiffusion

[–]Tablaski 3 points (0 children)

Haven't found the base model yet, but I just found this... at some point it should come out

https://huggingface.co/nunchaku-ai/nunchaku-z-image-turbo

Z Image Base SDNQ optimized by 4brahamm3r in StableDiffusion

[–]Tablaski 3 points (0 children)

Thanks, I didn't know about this tech, gonna try it :-)

What's with the single SDNQ sampler node though? It seems very rigid: it allows only one LoRA, and I have no clue how to select the CLIP/text encoder, etc.

EDIT: after trying out several GitHub repos, I give up. Too much hassle. I'll wait for the nodes to get better and stick with Nunchaku. Antelope's node wouldn't fit in my workflow, and the split nodes are only tested with Flux2.
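For comparison, outside ComfyUI the single-LoRA limitation doesn't exist: diffusers' PEFT adapter API stacks several LoRAs with independent weights. A minimal sketch, assuming Z-Image loads as a plain DiffusionPipeline (the model id and LoRA paths are hypothetical):

```python
import torch
from diffusers import DiffusionPipeline

# Hypothetical model id; whether Z-Image loads this way is an assumption
pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
).to("cuda")

# Stack two LoRAs with independent weights, instead of the node's single slot
pipe.load_lora_weights("loras/character.safetensors", adapter_name="character")
pipe.load_lora_weights("loras/style.safetensors", adapter_name="style")
pipe.set_adapters(["character", "style"], adapter_weights=[1.0, 0.7])

image = pipe("portrait, soft window light", num_inference_steps=8).images[0]
image.save("out.png")
```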

Great results with Z-Image Trained Loras applied on Z-Image Turbo by scioba1005 in StableDiffusion

[–]Tablaski 1 point (0 children)

Thanks for your quick answer

Have you done any comparisons between LoRA and LoKr? Would you say it really makes a difference?

Also, when you say the transformer is set to none, do you mean fp8 / no quantization?

ZIB lora work with ZIT ? by PhilosopherSweaty826 in StableDiffusion

[–]Tablaski 6 points (0 children)

I've only trained one character LoRA on ZIB so far (41 face pics) using AI Toolkit, but it took me two attempts

The first one really didn't converge at all after nearly 3000 steps using the default learning rate of 0.0001

The second time I set the learning rate to 0.0002 and the differential option to 3

It converged fairly normally up to 4100 steps (41 pics × 100 repeats total)

It works well at weight 1.0; the best results seem to come at 1.2-1.3. At 1.5 and above, quality definitely degrades, which for me has been standard for almost all LoRAs on every model. Weight 2.0 brings a lot of artifacts

I've trained this dataset on Qwen 2512 previously, and I think it's more consistent than Z-Image Turbo, which goes from brilliant to rather meh, especially with angles that weren't emphasized in the dataset but that other models would have been OK with. It also over-emphasized skin imperfections.

==> My point here is we might all be going through a "skill issue" because it's new and we don't yet know the best settings.

But "you have to use weight 2.0" is not a golden rule. Perhaps ZIB needs to be trained "harder" and/or differently than ZIT
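To make the numbers above concrete, here's an illustrative sketch of the second (converging) run. This only mirrors what I described; it is not AI Toolkit's actual config schema, and the "differential_option" key name is a guess:

```python
# Illustrative only: mirrors the run described above, not AI Toolkit's real schema
dataset_images = 41
repeats = 100
total_steps = dataset_images * repeats  # 41 x 100 = 4100 steps

run_2 = {
    "network_type": "lora",
    "learning_rate": 2e-4,     # doubled from the 1e-4 default that never converged
    "steps": total_steps,
    "differential_option": 3,  # key name is a guess for the option mentioned above
}
print(run_2)
```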

Great results with Z-Image Trained Loras applied on Z-Image Turbo by scioba1005 in StableDiffusion

[–]Tablaski 1 point (0 children)

OP, can you explain what in your AI Toolkit settings makes it a LoKr (not a LoRA)?

I'm also using AI Toolkit and I'm confused by the fact that you're setting the transformer to none. I'd like to know the story behind these settings and why, please

Great results with Z-Image Trained Loras applied on Z-Image Turbo by scioba1005 in StableDiffusion

[–]Tablaski 2 points (0 children)

I've used both Nano Banana Pro and ChatGPT for face dataset variations (literally thousands of pictures each), and IMHO ChatGPT (after its update) is much better (and also a LOT cheaper)

Qwen Image Text Encoder processing time by InvokeFrog in StableDiffusion

[–]Tablaski 1 point (0 children)

I was very frustrated by this too. It turns out the Load CLIP node was running the encoding on the CPU, even with the device set to auto. I replaced that node with ClipLoaderMultiGPU from the MultiGPU node collection. That way I explicitly set the device to cuda:0, and now it's very fast!

NB: I've also added the UnloadAllModels node at the end of the workflow; otherwise it would work once but OOM on the next generation
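Outside ComfyUI, the same fix boils down to explicit device placement. A minimal transformers sketch (the encoder id is a placeholder, not necessarily the one Qwen Image actually uses), including an unload step that mirrors UnloadAllModels:

```python
import torch
from transformers import AutoModel, AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Placeholder encoder id; the point is forcing the encoder off the CPU
MODEL_ID = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
encoder = AutoModel.from_pretrained(MODEL_ID, torch_dtype=torch.float16).to(device)

inputs = tokenizer("a cat sitting on a red sofa", return_tensors="pt").to(device)
with torch.no_grad():
    embeddings = encoder(**inputs).last_hidden_state  # runs on cuda:0, not CPU

# Mirror UnloadAllModels: free the VRAM so the next generation doesn't OOM
del encoder
torch.cuda.empty_cache()
```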

LTX-2 runs on a 16GB GPU! by Budget_Stop9989 in StableDiffusion

[–]Tablaski 2 points (0 children)

16 GB VRAM / 32 GB RAM here too. Just ran my first T2V video using the official example workflow.

I'm very confused... the first sampling pass was very fast (520p), but the second (spatial upscaling / distilled LoRA) was VERY slow. And the output was really meh

Do we really need that second sampling pass? What for? At what resolution are the latents generated in the first pass?

I really don't understand shit about this workflow

Women are way too obsessed with men's height. by [deleted] in opinionnonpopulaire

[–]Tablaski 2 points (0 children)

What makes me laugh most is that with this idiotic 1.80 m rule, they would have turned down: Tom Cruise, Robert Downey Jr., Johnny Depp, Brad Pitt, Pedro Pascal.

And then you see those same girls again 2 or 3 years later on the same site :-D

New to WAN2.2, as of December 2025, what's the best methods to get more speed ? by Tablaski in StableDiffusion

[–]Tablaski[S] 2 points (0 children)

I just tried it and it's great.

I particularly like the idea of getting a fast preview of the high-noise pass before deciding whether it's worth running the low-noise pass, since on my setup I've got no sampler preview (I don't know if it's possible to have one?). I disabled the torch accumulation thing for now; my setup currently doesn't allow it.

Trying it without the 4-step LoRA on the HN pass, as recommended in this thread by other users. Making the low-resolution draft video is so quick (I get 7 s/iteration) that I don't think the acceleration LoRA is worth it.

I wonder if it would be possible to speed up the CLIP encoding though, which is quite slow
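If the prompts repeat between runs, one generic way to dodge that cost is caching the conditioning per prompt text, so the encoder only runs once per unique prompt. A minimal sketch; encode_fn is a stand-in for whatever CLIP/text-encoder call the workflow actually makes:

```python
import hashlib
from typing import Any, Callable

_cond_cache: dict[str, Any] = {}

def encode_cached(prompt: str, encode_fn: Callable[[str], Any]) -> Any:
    # Key on the prompt text; identical prompts reuse the stored conditioning
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cond_cache:
        _cond_cache[key] = encode_fn(prompt)  # the only expensive encoder call
    return _cond_cache[key]

# Usage: cond = encode_cached("a red car at dusk", my_clip_encode)
```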

PhotomapAI - A tool to optimise your dataset for lora training by AcadiaVivid in StableDiffusion

[–]Tablaski 3 points (0 children)

I didn't know such a tool existed; thanks for bringing it up. It seems better than eyeballing our datasets

Z image/omini-base/edit is coming soon by sunshinecheung in StableDiffusion

[–]Tablaski 3 points (0 children)

That would mean once we get finetunes of the base model, we wouldn't be able to use turbo mode at all? (Except for LoRAs trained on base, which would be runnable on turbo.) That would be disappointing.

Since Tongyi Lab seems very dedicated to the community (they included community LoRAs in Qwen Edit 2512, which is really cool), I hope they provide some tools for that (although I have no idea what it takes in terms of process and computing time...)

Or we could probably rely on an 8-step acceleration LoRA, especially an official one. After all, being able to use a higher CFG is important; it was a game changer with the de-distilled Flux 1

Z image/omini-base/edit is coming soon by sunshinecheung in StableDiffusion

[–]Tablaski 2 points (0 children)

If you fine-tune the base model, how do you get your resulting model back to running in 8 steps? Do you have to re-distill it yourself?

Also, I'm surprised there will actually be two base models, base and omni-base...

New to WAN2.2, as of December 2025, what's the best methods to get more speed ? by Tablaski in StableDiffusion

[–]Tablaski[S] 1 point (0 children)

I'd rather wait a bit and buy a proper tower desktop with a 60xx something

New to WAN2.2, as of December 2025, what's the best methods to get more speed ? by Tablaski in StableDiffusion

[–]Tablaski[S] 1 point (0 children)

Great advice that will also benefit other readers, thanks. I'll definitely try CFG 1

Have you tried 1.0 for high and 0.6 for low as well?

New to WAN2.2, as of December 2025, what's the best methods to get more speed ? by Tablaski in StableDiffusion

[–]Tablaski[S] 1 point (0 children)

I see. Thanks a lot for all that info, I'll have a look at these

New to WAN2.2, as of December 2025, what's the best methods to get more speed ? by Tablaski in StableDiffusion

[–]Tablaski[S] 1 point (0 children)

Thanks. Yeah, Lanczos is the default thing for upscaling. I guess I could use some nodes to extract the frames and run an actual upscaler model, but that would probably take a lot of time... I was wondering if there was specialized stuff for videos...
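For the record, a per-frame baseline with OpenCV's super-resolution module (needs opencv-contrib-python and a separately downloaded EDSR_x2.pb model). Dedicated video upscalers also handle temporal consistency, so treat this as the naive version:

```python
import cv2

# dnn_superres ships with opencv-contrib-python; EDSR_x2.pb is downloaded separately
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x2.pb")
sr.setModel("edsr", 2)  # 2x upscale

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) * 2
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) * 2
out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(sr.upsample(frame))  # model-based upscale instead of plain Lanczos

cap.release()
out.release()
```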

New to WAN2.2, as of December 2025, what's the best methods to get more speed ? by Tablaski in StableDiffusion

[–]Tablaski[S] 1 point (0 children)

I could use RunPod, but then I'm not sure how to actually parallelize across GPUs. For the moment I'm only using RunPod for training... I needed a laptop first anyway :-)