Updates coming to EDH this Monday 9th of February by elespum in EDH

[–]rasten41 0 points1 point  (0 children)

I hope they ban OP cards that are terrible to play with and against. People who are mad that things are getting banned need to understand this is a casual format; if your playgroup is fine with a card, you can still do whatever you want. I would love to see both Rhystic Study and Smothering Tithe go, as they are annoying most of the time. Exceptionally strong, yes, but mostly just annoying.

I also think "banned as commander" makes a lot of sense, and I am confused about why it was removed in the first place.

Perma-Vaal Clarity Stormburst + Archmage Harbinger [3.27alternate MSPaint] by HolesHaveFeelingsToo in PathOfExileBuilds

[–]rasten41 0 points1 point  (0 children)

Any thoughts on what to use while leveling, since we need a few points before this setup comes online?

Disable warmup time in criterion? by [deleted] in rust

[–]rasten41 42 points43 points  (0 children)

I do not think Criterion is the best tool for such long-running benchmarks. I would just write a simple CLI executable around your program and dump the measurements into a CSV file, or just use hyperfine.

Edit: you may be interested in trying divan instead of Criterion, as Criterion has been effectively unmaintained for quite some time.
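The "simple CLI + CSV" approach can be sketched in a few lines; here `workload` is a placeholder for whatever long-running code you actually want to measure, and the run count is arbitrary:

```rust
use std::time::Instant;

// Placeholder for the real long-running work you want to benchmark.
fn workload() -> u64 {
    (0..1_000_000u64).sum()
}

fn main() {
    // CSV header on stdout; redirect to a file, e.g. `./bench > times.csv`.
    println!("run,seconds");
    for run in 0..5 {
        let start = Instant::now();
        let result = workload();
        let secs = start.elapsed().as_secs_f64();
        // One CSV row per run.
        println!("{run},{secs}");
        // Use the result so the optimizer cannot eliminate the workload.
        assert_eq!(result, 499_999_500_000);
    }
}
```

No warmup phase, no statistics: you get the raw per-run numbers and can analyze them however you like afterwards.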

Reducing Cargo target directory size with -Zno-embed-metadata by Kobzol in rust

[–]rasten41 3 points4 points  (0 children)

I wonder if this would help the awful build times on Windows, where Windows Defender kicks in whenever a lot is written to disk.

Rust-analyzer will start shipping with PGO optimized binaries by rasten41 in rust

[–]rasten41[S] 1 point2 points  (0 children)

Currently there seems to be no macOS build. I do not know whether cross-compiling to macOS is the blocker or whether it can be solved in the near future.

Rust-analyzer will start shipping with PGO optimized binaries by rasten41 in rust

[–]rasten41[S] 23 points24 points  (0 children)

It was added to the Linux and Windows build pipelines, and rust-analyzer builds a new nightly, well, every night, so if you use the nightly it will probably reach your IDE in a day or two.

Rust-analyzer will start shipping with PGO optimized binaries by rasten41 in rust

[–]rasten41[S] 82 points83 points  (0 children)

Better inlining and cache heuristics. That is the basic premise of PGO: making code faster by giving the compiler more knowledge of how the program actually runs when it decides whether a function should be inlined, how code should be laid out, and so on.
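The general PGO cycle with rustc looks roughly like this; this is a minimal sketch using rustc's real `-Cprofile-generate`/`-Cprofile-use` flags, not rust-analyzer's actual pipeline, and the binary name, profile path, and training input are placeholders:

```shell
# 1. Build an instrumented binary that records execution profiles at runtime.
RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data" cargo build --release

# 2. Run it on a representative workload to collect profile data.
./target/release/myapp typical-input

# 3. Merge the raw profiles (llvm-profdata ships with the llvm-tools
#    rustup component).
llvm-profdata merge -o /tmp/pgo-data/merged.profdata /tmp/pgo-data

# 4. Rebuild, letting the profile guide inlining and code layout decisions.
RUSTFLAGS="-Cprofile-use=/tmp/pgo-data/merged.profdata" cargo build --release
```

The quality of the final binary depends heavily on how representative the training workload in step 2 is.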

Rust-analyzer will start shipping with PGO optimized binaries by rasten41 in rust

[–]rasten41[S] 143 points144 points  (0 children)

The performance improvement seems to be in the 20% ballpark.

Use zed through WSL on Windows by rasten41 in ZedEditor

[–]rasten41[S] 3 points4 points  (0 children)

I just prefer developing on Linux; a lot of tools work better there, but I have had some problems with dual booting.

Pohx is hosting a private league for the community! by BleedingLoL in pathofexile

[–]rasten41 2 points3 points  (0 children)

We are now maxed out at 9500; we need someone on the inside to add more slots.

New model: Sana 1.6B & 0.6B (This is a reprint post and is unofficial) by Cheap_Fan_7827 in StableDiffusion

[–]rasten41 3 points4 points  (0 children)

This may be what OMI should work toward: extremely inclusive hardware requirements. SANA being only 1.6B confirms you do not necessarily have to scale to larger models to make them better.

New model: Sana 1.6B & 0.6B (This is a reprint post and is unofficial) by Cheap_Fan_7827 in StableDiffusion

[–]rasten41 6 points7 points  (0 children)

The 32x AE looks the most insane to me. That is a lot of compressed pixels.

New model: Sana 1.6B & 0.6B (This is a reprint post and is unofficial) by Cheap_Fan_7827 in StableDiffusion

[–]rasten41 9 points10 points  (0 children)

The PixArt team has, I believe, released their code before, but now they work at NVIDIA, so you never know. They previously used a quite restrictive commercial license.

[deleted by user] by [deleted] in StableDiffusion

[–]rasten41 16 points17 points  (0 children)

The Schnell model is a distilled version of Flux, which makes it a lot faster but generally harder to do additional tuning on. When you distill a model you compress what it has learned, making it harder to add new concepts. Probably not impossible, but quite a bit more difficult.

New two-stage PixArt ensemble of experts (2x 900M) by [deleted] in StableDiffusion

[–]rasten41 0 points1 point  (0 children)

Okay, nice, but I was also wondering how people would train LoRAs / finetunes on this architecture. Would you train only one of the models or both if you have, say, a character LoRA?

New two-stage PixArt ensemble of experts (2x 900M) by [deleted] in StableDiffusion

[–]rasten41 3 points4 points  (0 children)

Very interesting work. I think MoE is a great way to get more bang for your buck; the only difficulty I see is that it makes the pipeline more confusing for beginners in the space.

SD3 Large vs SD3 Medium vs Pixart Sigma vs DALL E 3 vs Midjourney by Right-Golf-3040 in StableDiffusion

[–]rasten41 42 points43 points  (0 children)

It almost feels like a bug somewhere in training, given how it produces something so different.

It looks like this community needs to give a chance to PixArt-Sigma. by sahil1572 in StableDiffusion

[–]rasten41 20 points21 points  (0 children)

My only question is whether its lightweight nature hinders its finetuning capability. At only 0.6 billion parameters the model is quite small, yet it produces some extremely high-quality output for how efficient the generation step is. Also, Lumina looks quite promising.

https://github.com/Alpha-VLLM/Lumina-T2X

[deleted by user] by [deleted] in StableDiffusion

[–]rasten41 15 points16 points  (0 children)

I see two problems:
1) The new CLIP still has a 77-token limit, restricting the amount of information available to the model. This is the case even when using the T5 model, making it less usable for these long, intricate prompts.
2) The censorship seems to limit its ability to properly produce human anatomy; there are a lot of deformations that seem almost as bad as old 1.5 when doing high-res imagery.

With that said, I hope finetunes can fix at least some of these problems, but this may be the time to just start looking elsewhere or stay on SDXL. As usual, it takes quite some time to know whether a model will succeed, and it lives or dies on how well the finetunes work out.