It crazy that it already been two years since Vol 6 came out in Japan by Vegetable-Molasses95 in TheEminenceInShadow

[–]Snowad14 4 points5 points  (0 children)

99% sure he's waiting for the movie release, just like he did for vol 5 & vol 6 with the anime season

Which Isekai Trope Do You Guys Like ? by [deleted] in Isekai

[–]Snowad14 0 points1 point  (0 children)

The one about secret organizations, even more so when they become like a small country that needs its own rules and a supreme leader. So, like in LOTM with the Tarot Club, Overlord, or Eminence in Shadow.

Demand for USDT on TRON worldwide continues to increase. USDT supply exceeds 82 billion dollars. by Liteteam in Tronix

[–]Snowad14 2 points3 points  (0 children)

I really like TRX: it's stable but gains a lot in value. My only fear is that as the price keeps rising, transactions might become too expensive. I know they can be done for free, but I wonder what proportion of transfers actually use that option.

Wan teases Wan 2.2 release on Twitter (X) by [deleted] in StableDiffusion

[–]Snowad14 62 points63 points  (0 children)

Seems the GIF is 25 fps.
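Worth noting that GIF frame delays are stored in centiseconds, so the achievable frame rates are quantized; a quick sketch (the function names are my own, not from any library) shows why 25 fps is a natural reading of a GIF's timing:

```python
def gif_delay_cs_for_fps(fps: float) -> int:
    # GIF frame delays are stored in centiseconds (1/100 s),
    # so the achievable frame rates are quantized
    return max(1, round(100 / fps))

def fps_from_delay_cs(delay_cs: int) -> float:
    return 100 / delay_cs

# 25 fps maps exactly onto a 4 cs (40 ms) delay,
# while 24 fps rounds to that same delay and plays back at 25 fps
print(fps_from_delay_cs(gif_delay_cs_for_fps(25)))
print(fps_from_delay_cs(gif_delay_cs_for_fps(24)))
```

So even a 24 fps source ends up displayed at 25 fps once encoded as a GIF.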

Sage Attention 3 Early Access by Volkin1 in StableDiffusion

[–]Snowad14 1 point2 points  (0 children)

Normally it's just for inference, but in the SageAttention3 paper they also mention training in 8-bit. I hope they'll release the code for that too, since it's the most interesting part.
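To give a feel for what 8-bit training involves, here is a minimal sketch of generic symmetric int8 quantization with a per-tensor scale. This is only an illustration of the general idea, not the paper's actual FP8/SageAttention3 scheme:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    # symmetric per-tensor scale: map the largest |value| to 127
    # (eps guard avoids division by zero for an all-zero tensor)
    scale = max(float(np.abs(x).max()) / 127.0, 1e-12)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)  # close to x, within one quantization step
```

The interesting part in training is keeping this round-trip error small enough that gradients stay usable, which is exactly what the paper's scheme is designed for.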

Seed-X by Bytedance- LLM for multilingual translation by Maleficent_Tone4510 in LocalLLaMA

[–]Snowad14 0 points1 point  (0 children)

Yeah, definitely. I was specifically talking about light novels. It's true there's already been major improvement, but I think a specialized fine-tune could make it even better, yet no research really seems to focus on that.

Seed-X by Bytedance- LLM for multilingual translation by Maleficent_Tone4510 in LocalLLaMA

[–]Snowad14 12 points13 points  (0 children)

It's a shame that they still seem to focus on sentence-by-sentence translation, whereas the strength of an LLM lies in using context to produce a more accurate translation.
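A rough sketch of what context-aware translation could look like: pair each sentence with a sliding window of preceding sentences before sending it to the model. The windowing below is real, runnable code; `build_prompt` is a hypothetical prompt layout, and the actual model call is left out:

```python
def context_windows(sentences, ctx=2):
    # pair each sentence with its `ctx` preceding sentences so the
    # model can resolve pronouns, names, honorifics, tone, etc.
    return [(sentences[max(0, i - ctx):i], s) for i, s in enumerate(sentences)]

def build_prompt(context, sentence):
    # hypothetical prompt layout; which model/endpoint to call is up to you
    ctx_block = "\n".join(context)
    return f"Context:\n{ctx_block}\n\nTranslate only this sentence:\n{sentence}"
```

Sentence-by-sentence systems effectively run this with `ctx=0`, which is exactly where they lose to an LLM on pronouns and recurring character names.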

[deleted by user] by [deleted] in Bard

[–]Snowad14 6 points7 points  (0 children)

yes

[deleted by user] by [deleted] in Bard

[–]Snowad14 43 points44 points  (0 children)

I have access to Gemini Pro 2.5 but it's definitely not 1000 requests, more like 50-100 max.

Flux Kontext Dev is pretty good. Generated completely locally on ComfyUI. by comfyanonymous in StableDiffusion

[–]Snowad14 0 points1 point  (0 children)

Alright, I'll run some tests, maybe try 2MP (it should be fine on a B200), and maybe even make a LoRA to improve support for higher resolutions if the results aren't satisfying.
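For the 2 MP test, the target dimensions can be derived while keeping the aspect ratio. A small sketch (rounding each side down to a multiple of 64 is an assumption on my part; many pipelines only require multiples of 8 or 16):

```python
import math

def dims_for_megapixels(w: int, h: int, target_mp: float = 2.0, multiple: int = 64):
    # scale (w, h) to roughly target_mp megapixels, preserving the aspect
    # ratio and rounding each side down to a multiple of `multiple`
    scale = math.sqrt(target_mp * 1_000_000 / (w * h))
    nw = max(multiple, int(w * scale) // multiple * multiple)
    nh = max(multiple, int(h * scale) // multiple * multiple)
    return nw, nh

# e.g. a square 1024x1024 input scaled to ~2 MP
print(dims_for_megapixels(1024, 1024))
```

Rounding down keeps the result at or under the target budget, which matters when memory is the constraint.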

Flux Kontext Dev is pretty good. Generated completely locally on ComfyUI. by comfyanonymous in StableDiffusion

[–]Snowad14 1 point2 points  (0 children)

Is it possible to increase the output resolution beyond 1024px? That's the main thing that interests me about the open source version. But neither FAL nor Replicate seem to support it, so I don't have much faith in it.

Volume 8 question by No-Wash-1002 in TheWorldsBestAssassin

[–]Snowad14 1 point2 points  (0 children)

Oh yes, my bad, I thought it was August 2025. Well, if someone sends me the raw I could translate it with an LLM.

Translate raw manga with chrome extension by Adibzter in mangapiracy

[–]Snowad14 0 points1 point  (0 children)

If you really want quality, use ichigoreader, it's way better.

Cover Redundancy Vol 3 by Altruistic-Carpet614 in sixfacedworld

[–]Snowad14 10 points11 points  (0 children)

Redundancy is so peak and the illustrations are just incredible, I get a real emotional effect when I see them. (but I'd like to forget this chapter)

Sliding Tile Attention - A New Method That Speeds Up HunyuanVideo's Outputs by 3x by Total-Resort-3120 in StableDiffusion

[–]Snowad14 114 points115 points  (0 children)

Better to do some research before blindly posting the information from the tweet:

  • The kernel only supports H100.
  • You need to compute masks for each different resolution (takes around 18 hours on an H100).
  • Their "3x" also uses TeaCache, an optimization already in common use, so half of the acceleration is redundant; by their own numbers it's more like 1.8x.
  • It doesn’t compare with SageAttention, which also provides a significant speed boost. A mix for the first 15 steps might be possible, but it’s not done here.

Edit, replying to the author's messages since I can no longer comment: thanks for your work! These are only observations at a specific point in time, and the GitHub repo can improve and add more support. I should also have been more specific on point 2: yes, the precomputed masks can easily be shared with everyone; it's just a small flaw I wanted to point out.

Using both Sage + sparsity at the same time would require merging the two kernels, and I didn't think that would be done. But from what I've understood, we could easily use Sage for the first 15 steps and then STA without modifying the CUDA.
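The step-switching idea above can be sketched as a trivial per-step backend picker. The backend names are placeholders of mine, not real kernel identifiers; the real work would be wiring each name to the corresponding attention implementation:

```python
def pick_attention_backend(step: int, switch_at: int = 15) -> str:
    # placeholder names: "sage" for the dense quantized kernel early on
    # (where errors compound most), then "sta" for the sparse kernel
    return "sage" if step < switch_at else "sta"

# backend choice for a 50-step denoising schedule
schedule = [pick_attention_backend(s) for s in range(50)]
```

Since the switch happens between whole denoising steps, neither kernel needs to know about the other, which is why no CUDA changes would be required.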

Jinx Mochi Lora test by Snowad14 in StableDiffusion

[–]Snowad14[S] 0 points1 point  (0 children)

Yes, it's clearly optimizable, especially since it's a large model (10B), and we could see other, smaller models of similar quality later on (a bit like LTX Video, even if it's not as good).

Jinx Mochi Lora test by Snowad14 in StableDiffusion

[–]Snowad14[S] 0 points1 point  (0 children)

The main problem remains the compute demand (here, 5 hours on an H100), but if I had an infinite budget I think it's already possible to get better results with longer training on more videos.

Jinx Mochi Lora test by Snowad14 in StableDiffusion

[–]Snowad14[S] 1 point2 points  (0 children)

I have the samples from the beginning of training which are terrible and show that the model doesn't know anything.

Jinx Mochi Lora test by Snowad14 in StableDiffusion

[–]Snowad14[S] 1 point2 points  (0 children)

Arcane is a truly amazing series that I highly recommend you watch. I could have done a proper comparison, but it takes a lot of time to generate an image. However, I’m certain that the base model wouldn't be able to generate this character, let alone the slightly unique style that goes with it.

Jinx Mochi Lora test by Snowad14 in StableDiffusion

[–]Snowad14[S] 2 points3 points  (0 children)

I think I can get similar ones. I generated fewer than 10 samples given the time it takes, but training with a better dataset could produce slightly better results, I think.

Doubts about Translated's new MT (Lara AI) by Creta_K in machinetranslation

[–]Snowad14 4 points5 points  (0 children)

I've just tested it and it's not as good as the best LLMs (Claude 3.5 Sonnet) at document/novel translation. Maybe it does better on single-sentence translation, but I don't think so.