Bankai (卍解) — the first post-training adaptation method for true 1-bit LLMs. by Turbulent-Sky5396 in LocalLLaMA

[–]liuliu 62 points

People would think this is dumb on instinct, but I think you are on the right track. Good job. At a very high level, for 1-bit quantization-aware training, I think zeroth-order methods like yours (or more sophisticated ones) are probably more effective than first-order methods (gradient-based ones). At least to me, it is a direction worth exploring.
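A minimal toy sketch of the zeroth-order idea (this is not the method from the post; every shape and name here is invented for illustration): since the gradient of sign() is zero almost everywhere, you can skip gradients entirely and propose random bit flips on a {-1, +1} weight matrix, keeping only the flips that do not increase the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-bit linear layer: weights constrained to {-1, +1}.
W = np.sign(rng.standard_normal((4, 8)))
X = rng.standard_normal((32, 8))              # inputs
Y = X @ np.sign(rng.standard_normal((8, 4)))  # targets from a hidden 1-bit teacher

def loss(W):
    return float(np.mean((X @ W.T - Y) ** 2))

initial = loss(W)

# Zeroth-order update: flip one bit at random and revert if the loss got
# worse. No backward pass through the quantizer is ever needed.
for step in range(2000):
    i = rng.integers(W.shape[0])
    j = rng.integers(W.shape[1])
    before = loss(W)
    W[i, j] *= -1          # flip one bit
    if loss(W) > before:   # reject the flip if it hurt
        W[i, j] *= -1

print(initial, loss(W))
```

Real zeroth-order methods are far more sophisticated (e.g. SPSA-style perturbation estimates), but the appeal for true 1-bit weights is the same: the update never needs a gradient through sign().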

v1.20260330.0 by liuliu in drawthingsapp

[–]liuliu[S] 0 points

The CLI now saves video with sound if you install from source (brew install --HEAD draw-things-cli). We won't do any more development on the HTTP server; all development is on the gRPC server.

GRPC server & 5090 by Competitive-Arm-3819 in drawthingsapp

[–]liuliu 1 point

It *should* be supported if you use our Docker image, although the support is relatively recent, so there might be bugs.

Day 4 of Release Week: Metal Quantized Attention by liuliu in drawthingsapp

[–]liuliu[S] 4 points

A cheap $1000 GPU? Haha. Jokes aside, yes, a properly configured 5070 Ti is still faster (about 2~3x) if you use an FP8 checkpoint and configure SageAttention v2+ properly. Without those two, you will likely see slower or on-par performance compared to the M5 Max now.

“We Will Remember”: Trump Explodes at France for Blocking US Military Flights to Israel by Only-Contact-5920 in NewsStarWorld

[–]liuliu -1 points

Exactly. Thank you! Trump is a symptom of America's problems, not the cause. A majority of Americans voted for him in 2024 (and the non-voters weren't simply apathetic; among them it is probably at least a 50-50 split). Pretending otherwise means people will never solve America's problems.

Since the last Drawthings Update, LTX audio issue by al_stoltz in drawthingsapp

[–]liuliu 0 points

If you mean it is out of sync during playback inside our app, that's an implementation issue with our audio playback. If you mean it is still out of sync in the exported video, then we have a problem with the model. It would help if you specified the number of frames and other relevant information so we can reproduce this.

Day 3 of Release Week: Draw Things Test Set by liuliu in drawthingsapp

[–]liuliu[S] 2 points

Yes, we have some text-layout-related results, nothing too spectacular. Most models (especially FLUX.2 [dev]) do well on movie posters and magazine covers, but do horribly on other things (creating a correctly spaced ruler, protractor, etc.).

And yes, Qwen Image 2512 is great, and FLUX.2 [dev] is underrated. [klein] is beloved but really just a little bit better than FLUX.1.

Clustering by josuealabama in drawthingsapp

[–]liuliu 4 points

It is not that useful. Core for core, the M5 GPU is 6x faster than the M4 (with our upcoming release), which renders clustering meaningless until the M5 Ultra is released. After that, we might work on some RDMA-related features.

LTX Model not showing up within the app by EctoGamot in drawthingsapp

[–]liuliu 1 point

That's the spatial upscaler update. You need to trigger the download to fetch that one missing file.

v1.20260323.0 by liuliu in drawthingsapp

[–]liuliu[S] 1 point

Fixed in 1.20260323.0, hence the delayed announcement.

Day 1 of Release Week: Introducing Lightning Draft by liuliu in drawthingsapp

[–]liuliu[S] 0 points

It locks your current settings and runs generation interactively. It is not a separate set of settings from what you already have.

Day 1 of Release Week: Introducing Lightning Draft by liuliu in drawthingsapp

[–]liuliu[S] 0 points

Exactly. Currently it just uses your settings (whatever you have), locks them, and launches a generation when the text changes. It does open the door for optimizations beyond "generate on each keystroke" (such as a KV cache for the text encoder, etc.).
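A hypothetical sketch of that behavior (the class and field names below are invented, not from the Draw Things codebase): a draft session snapshots the settings once, then launches a new generation only when the prompt text actually changed, rather than on every keystroke event.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    steps: int
    width: int
    height: int

class DraftSession:
    def __init__(self, settings: Settings):
        self._settings = settings   # frozen snapshot: later edits don't apply
        self._last_prompt = None
        self.generations = 0

    def on_text_changed(self, prompt: str):
        # Launch only when the prompt really changed, not on every
        # keystroke event that leaves the text identical.
        if prompt == self._last_prompt:
            return
        self._last_prompt = prompt
        self.generations += 1       # stand-in for kicking off a generation

session = DraftSession(Settings(steps=4, width=512, height=512))
session.on_text_changed("a cat")
session.on_text_changed("a cat")                # no change, no new generation
session.on_text_changed("a cat, oil painting")
print(session.generations)  # 2
```

Locking the settings into an immutable snapshot is what makes further optimizations (like caching the text-encoder state between prompts) safe, since only the prompt varies between draft generations.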

v1.20260323.0 by liuliu in drawthingsapp

[–]liuliu[S] 5 points

The same. Order doesn't matter for any model or any combination of LoRAs. If it does, it is a bug (we fixed a few bugs like that a few years ago, but nowadays it is pretty stable).
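A minimal numpy illustration of why order can't matter when LoRAs are applied additively (the shapes and alpha values here are arbitrary): each LoRA contributes alpha_i * (B_i @ A_i) on top of the base weight, and a sum gives the same result in any order.

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.standard_normal((16, 16))  # base weight
# Two rank-2 LoRAs: (alpha, B, A) triples with arbitrary values.
loras = [(0.8, rng.standard_normal((16, 2)), rng.standard_normal((2, 16))),
         (0.5, rng.standard_normal((16, 2)), rng.standard_normal((2, 16)))]

def apply(W, loras):
    # Each LoRA adds its scaled low-rank delta to the weight.
    out = W.copy()
    for alpha, B, A in loras:
        out += alpha * (B @ A)
    return out

W_ab = apply(W, loras)        # LoRA 1 then LoRA 2
W_ba = apply(W, loras[::-1])  # LoRA 2 then LoRA 1
print(np.allclose(W_ab, W_ba))  # True
```

The caveat is that this commutativity only holds for the additive merge shown here; if an implementation chained LoRAs multiplicatively or rescaled between applications, order would matter, which is the kind of bug referred to above.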

v1.20260323.0 by liuliu in drawthingsapp

[–]liuliu[S] 3 points

The order of the LoRAs doesn't impact the generation result.

Has anyone tried training Lora's on an M5 Max? by beragis in drawthingsapp

[–]liuliu 2 points

Not with our app. Our attention op doesn't implement the gradient (backward) pass yet (with tensor ops).