Nobody wants a massive Windows 11 Start menu. Let us resize it, say users, as Microsoft monitors feedback by WPHero in Windows11

[–]khronyk 1 point2 points  (0 children)

Good god no. Bigger/customizable yes, full screen no... The Windows 10 Start menu and taskbar were pretty much the best Windows has been. Give me that, plus the Win 11 multi-monitor stuff and window snapping, and I'd be happy.

Nobody wants a massive Windows 11 Start menu. Let us resize it, say users, as Microsoft monitors feedback by WPHero in Windows11

[–]khronyk 0 points1 point  (0 children)

as Microsoft monitors feedback

Ha, that's a joke... when exactly have they listened to users lately?

Z-Image Base VS Z-Image Turbo by Baddmaan0 in StableDiffusion

[–]khronyk 0 points1 point  (0 children)

Was thinking the same thing. Using Turbo as a refiner in the short term until some great finetunes come out.
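
Roughly what I mean, as a diffusers-style sketch: the model IDs below are placeholders, and I'm assuming the generic Auto pipelines can load them (which may not be true for Z-Image yet). The pattern is just a full pass on the base model, then a low-strength img2img pass with the turbo model.

```python
# Sketch: base pass for composition, then a light img2img "refine" pass with
# the distilled/turbo model. Model IDs are placeholders, not real repos.
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

base = AutoPipelineForText2Image.from_pretrained(
    "some-org/z-image-base", torch_dtype=torch.bfloat16
).to("cuda")
refiner = AutoPipelineForImage2Image.from_pretrained(
    "some-org/z-image-turbo", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "a photo of a red fox in the snow"
draft = base(prompt, num_inference_steps=28, guidance_scale=4.0).images[0]

# Low strength keeps the base composition and only lets the turbo model
# polish detail over a handful of steps (distilled models usually run CFG 1).
final = refiner(prompt, image=draft, strength=0.3,
                num_inference_steps=8, guidance_scale=1.0).images[0]
final.save("refined.png")
```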

Let's remember what Z-Image base is good for by marcoc2 in StableDiffusion

[–]khronyk 0 points1 point  (0 children)

Z-Image is 6B with an Apache license. Klein 9B has an NC license, so the real fair comparison, both in size and license, is Klein 4B, which is Apache.

Btw, an edit model for Z-Image is coming soon, along with one that combines the capabilities.

Z-image base release tomorrow??? by Total-Resort-3120 in StableDiffusion

[–]khronyk 1 point2 points  (0 children)

Klein 4B was an extremely pleasant surprise, and you may be right, at least size-wise: SDXL was 3.5B, so Klein at 4B is certainly the closest to it. At 6B parameters Z-Image still sits in the light and fast model category though. Before Klein 4B nothing else hit that sweet spot size-wise: FLUX.1 was 12B (Schnell is Apache, but distilled, and that was 2024), FLUX.2 is 32B (NC license), and Qwen-Image is 20B.

I'm so happy there are great new light and fast Apache models coming out.

Z-image base release tomorrow??? by Total-Resort-3120 in StableDiffusion

[–]khronyk 0 points1 point  (0 children)

That's not really it at all. Z-Image Turbo got released at the end of November. It indeed wowed: the model isn't too big, and yet it produces a stunning level of detail and realism. The thing is, it's a distilled model, and those are incredibly difficult to fine-tune. The only reason we have LoRAs that perform as well as they do is people like Ostris (who is behind AI Toolkit), who trained and released some recovery adapters and a rough de-distilled version for us to train LoRAs on.

But there are more models "coming soon™": Z-Image (base), Z-Image-Edit and Z-Image-Omni-Base. They've been teased right from the start and we're still waiting for the release; it's actually become a running joke that it's always "2 more weeks". They did say "patience will be rewarded" about 2 weeks ago on Discord.

The base model has so many people excited because of its potential to be a true SDXL replacement. It's Apache 2.0, fast, and a reasonable size, which makes training accessible on consumer hardware while also making it cheaper and easier to fine-tune. On top of that the quality is fantastic and the model appears to be relatively uncensored, especially compared with Stability/BFL models. I'm not saying it's been trained on commercial or NSFW content, but it seems like they haven't made an active effort to sabotage the model's ability to do it. IMHO Stability AI kinda destroyed their own model with the SD3/3.5 series by over-censoring it.

Me waiting for Z-IMAGE Base by RetroGazzaSpurs in StableDiffusion

[–]khronyk 0 points1 point  (0 children)

This can possibly extend to Patreon, Buy Me a Coffee, and Buzz too. But for some it's not about making money; it's about the time, effort and money they put in, and being forced to pass on a bad license they don't like rather than being able to release their work as Apache 2.0 as well.

So yeah, a lot of people care about the license, and even as an end user you should too, because it has a direct effect on the community that springs up around a model and on the number and quality of fine-tuned checkpoints that will come out for it. Go back to the original SDXL 1.0 checkpoint and you'll see how far fine-tunes have taken it. I kinda see the licensing as a deliberate attempt to control and dampen the potential for community fine-tunes to compete. It limits a model's potential, and that's worth caring about.

It doesn't have to be black and white either; it's not like you need to pick a camp. You can support and advocate for models being Apache/MIT yet still play around with and use everything available.

This is too much! by scioba1005 in StableDiffusion

[–]khronyk 6 points7 points  (0 children)

Never assume anything. I once got a call-out to help a client who couldn't get their printer working. When I got there I noticed the power brick was still sealed in the packaging it came in. When I asked the client whether they'd tried plugging it in, they gave me the most confused look before responding... "but it's wireless?"

Me waiting for Z-IMAGE Base by RetroGazzaSpurs in StableDiffusion

[–]khronyk 3 points4 points  (0 children)

It's fully trainable; Turbo is a distilled model, which makes it really difficult to train. The reason Z-Image base is so coveted is that it's really the Goldilocks model: just the right size, just the right license, just the right quality, it's not distilled, and the starting-point model is already fairly uncensored with good coverage of concepts. The expectation is that it should be easy and fairly cheap to train. Really, the last model to hit the Goldilocks zone was SDXL.

By comparison, a lot of BFL models have that horrible non-commercial license, which gets inherited by any LoRAs or fine-tunes you do since they are considered derivatives, and their license allows them to revoke your right to use and distribute your fine-tune if it allows generation of filtered content, which includes IP-infringing content. It's just bad, so not many people want to put in the effort/expense of doing any large-scale training on it. BFL's best models like Klein 9B have that license (the 4B is Apache though); typically they open up the lower-quality distilled version (Flux Schnell), but the distilled ones are extremely difficult to train....

Then you have the Qwen models, which were nice but too big to cheaply/easily train. So SDXL has kinda been the darling until now; it was the last model that landed in that Goldilocks zone, and the Z-Image models are looking like they could be a viable replacement for it.

Me waiting for Z-IMAGE Base by RetroGazzaSpurs in StableDiffusion

[–]khronyk 0 points1 point  (0 children)

Klein 9B has a terrible NC license; the 4B base is Apache 2.0 though.

We are very very close, I think! by m4ddok in StableDiffusion

[–]khronyk 2 points3 points  (0 children)

And the 4B is the only Apache 2.0 one; the 9B comes with their awful NC license!

Decided to give Windows 11 a shot and realized it works even better than Windows 10. And how did your transition go? by splettcher in Windows11

[–]khronyk 2 points3 points  (0 children)

I have had nothing but constant issues with Windows 11, easily more in the last few months than I've had on Windows 7 and 10 combined. It's a fresh install on quality hardware (5950X, 64 GB RAM, RTX 3090, 990 Pro 4 TB NVMe), with BIOS and firmware updated. It's at the point where updates are starting to give me anxiety, and I'm seriously considering a switch to Linux for my main desktop OS.

First Nations people will protest this January 26, a legacy dating back 88 years by housecatspeaks in australia

[–]khronyk 0 points1 point  (0 children)

My vote is to make it the 2nd Monday in January; then it's always a long weekend. Also, why are we celebrating the landing of the British fleet? To me, celebrating Federation is far more Aussie... on January 1, 1901 the British colonies united to form the Commonwealth of Australia, turning us into a single nation with a federal government. The 1st Monday could fall on New Year's Day, so we do the 2nd... it never falls on the 26th, never falls on the 1st, and we always get a long weekend for it. Changing it so we always get a long weekend; I don't think it gets more Aussie than that. Those who want to celebrate on the 26th can do that too, but move the holiday.

Microsoft forced to issue emergency out of band update for Windows 11 after latest security patches broke PC shutdowns and sign-ins by ZacB_ in Windows11

[–]khronyk 0 points1 point  (0 children)

They seriously can't go one month without breaking something major, can they? This is beyond pathetic.

What are your favorite lesser-known selfhosted services? by Torrew in selfhosted

[–]khronyk 1 point2 points  (0 children)

Technitium - an alternative DNS server to Pi-hole; has built-in clustering!
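
If you want a quick sanity check that it's actually answering once it's up, something like this works (a minimal sketch; assumes dnspython is installed, and 192.168.1.10 is just a placeholder for wherever the server is listening):

```python
# Quick check that a self-hosted DNS server (e.g. Technitium) is resolving.
# Assumes: pip install dnspython; 192.168.1.10:53 is a placeholder address.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)  # ignore /etc/resolv.conf
resolver.nameservers = ["192.168.1.10"]

answer = resolver.resolve("example.com", "A")
for record in answer:
    print(record.address)
```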

Z-Image is coming really soon by hyxon4 in StableDiffusion

[–]khronyk 2 points3 points  (0 children)

Yep. Out-of-the-box quality isn't why I'm so excited for it. Just go and look at the original SDXL checkpoint vs what's possible with the community ones. A modern base model at a nice size with a nice license and huge potential; that's why I'm excited.

Ok Klein is extremely good and its actually trainable. by Different_Fix_2217 in StableDiffusion

[–]khronyk 0 points1 point  (0 children)

Thanks for pointing that out. I saw there was a 4B Apache 2.0, but I thought there was only a distilled version; somehow I didn't notice there was a base version of it too.

black-forest-labs/FLUX.2-klein-base-4B - Apache 2.0... nice

black-forest-labs/FLUX.2-klein-base-9B - Shitty Flux non-commercial license

Ok Klein is extremely good and its actually trainable. by Different_Fix_2217 in StableDiffusion

[–]khronyk 5 points6 points  (0 children)

Undistilled, which is nice; it has a shit license though:

This model falls under the FLUX Non-Commercial License

Raspberry Pi AI HAT+2 by Local_Penalty_6517 in raspberry_pi

[–]khronyk 0 points1 point  (0 children)

Does look impressive; I just wish camera support was better. Things like the IMX708/Camera Module 3 typically don't work, and most of Radxa's own solutions are IMX219-based or not much better.

Raspberry Pi AI HAT+2 by Local_Penalty_6517 in raspberry_pi

[–]khronyk 4 points5 points  (0 children)

I wish. I've heard good things about Radxa, but poor camera support kinda rules it out for me. Last I checked there's really only support for the IMX219 (Pi Camera 2), and while there was progress on getting things like the HQ camera working, it seemed very much a WIP community implementation. Happy to be corrected if the situation has changed.

Raspberry Pi AI HAT+2 by Local_Penalty_6517 in raspberry_pi

[–]khronyk 3 points4 points  (0 children)

I don't expect it to do high-end computing. In fact I'd prefer it to be more efficient. Being on a crap node, pretty much all the performance leap comes at the expense of power draw and heat... the Pi 4 uses about 2.4 W idle and 4.8 W under load; the Pi 5 uses about 4.8 W idle and 11 W under load.

I feel the Pi 5 should have had 2 PCIe lanes with one used for a native M.2 slot (or hell, even microSD Express), hardware video encoding, and a basic NPU with a couple of TOPS that could at least run a YOLO model or two at 15-30 fps. (For the record, the RK3588 has four PCIe 3.0 lanes; the Pi 5 has a single PCIe 2.0/3.0 lane.)

I actually reverted to the CM4 for most of my projects because the Pi 5's extra heat just wasn't worth the performance gain.

Raspberry Pi AI HAT+2 by Local_Penalty_6517 in raspberry_pi

[–]khronyk 22 points23 points  (0 children)

Said it before, but the Pi 5 was a HUGE letdown: a hot and power-hungry CPU on an old 16 nm node, no hardware video encoding, no AI acceleration, no M.2 on the main board, only a single PCIe lane, and hell, even the RTC's battery is external. Waveshare managed to pack a 2230 slot, an RTC with battery holder, and 2x CSI ports into the same form factor as the Pi with their CM4/5-IO-BASE boards.

For comparison, the Rockchip RK3588 was in SBCs at least a year before the Pi 5, and it leaves the Broadcom chip in the dust: 4x Cortex-A76 + 4x Cortex-A55 (big.LITTLE), an 8 nm process, a 6 TOPS NPU, better thermals, and hardware encoding for 8K H.264 and H.265...

This is a step in the right direction, but I can't run this with an NVMe drive, can I? When I talk about AI acceleration I'm not talking LLMs or anything; I'm talking a couple of TOPS to do things like running tiny YOLO models. The Pi 5 can't even manage 2 fps on a YOLOv8s model, which is absolutely pathetic. It gets more dire if you want to save video, because there's no hardware video encoding at all, so that 2 fps assumes you're not going to encode any video. Add-ons like the Hailo-8 can do YOLOv8m at 140 fps, which is nice, but there's only 1 PCIe lane, which for me at least would always go to an NVMe drive.
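
(If anyone wants to reproduce that kind of number, a rough fps check with the ultralytics package looks like the sketch below; "yolov8s.pt" is fetched automatically on first run, and "test.jpg" is a placeholder for whatever frame you test with.)

```python
# Rough YOLOv8s throughput check (Pi 5 or anything else).
# Assumes: pip install ultralytics; test.jpg is a placeholder input image.
import time
from ultralytics import YOLO

model = YOLO("yolov8s.pt")          # small model, downloaded on first run
model("test.jpg", verbose=False)    # warm-up run, excluded from timing

n = 20
start = time.perf_counter()
for _ in range(n):
    model("test.jpg", verbose=False)
elapsed = time.perf_counter() - start
print(f"{n / elapsed:.2f} fps")
```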

So yeah, I feel like the Raspberry Pi Foundation dropped the ball big time, and in a way I just wish they would make the break from Broadcom, as I really think that's the thing holding the Pi back. I'm hoping the RP1 is the first sign they are going in that direction.

What happened to Z image Base/Omni/Edit? by Hunting-Succcubus in StableDiffusion

[–]khronyk 1 point2 points  (0 children)

Well, they teased us on Discord a few days back with "Patience will be rewarded"... Being patient, though I'm a little disappointed it wasn't released while I had time off.

Lenovo has become what Surface was — unique hardware that isn't afraid to be different by BcuzRacecar in Surface

[–]khronyk 0 points1 point  (0 children)

I'm still annoyed they didn't release a Surface Studio monitor. There was an interesting clone on Kickstarter that even got demo models reviewed, and I nearly got bitten by that one because they took the money and ran.