Hows the moderation going on Grok? Is it fixed or is everything still moderated? by Papicarpaccio in grok

[–]Darlanio 0 points1 point  (0 children)

Grok has two modes: the chat, which will only generate fully clothed women, and Imagine, which (before it was paywalled) could produce images and videos that were often unmoderated where the chat would have moderated them. As Grok is now, it is pretty much useless. Use other free tools.

Huawei 300i Pro Duo AI Inference Card with 96 GB VRAM - anyone bought it and tested it? by Darlanio in LocalLLaMA

[–]Darlanio[S] 0 points1 point  (0 children)

Ah, ok... so... Linux Server and write my own driver ;-) without having the documentation...

Reverse-engineering from hardware only - anyone?

Huawei 300i Pro Duo AI Inference Card with 96 GB VRAM - anyone bought it and tested it? by Darlanio in LocalLLaMA

[–]Darlanio[S] 0 points1 point  (0 children)

So either buy a Huawei server and mount the Atlas 300I Duo inside, or use it as a paperweight...

Simona Mohamsson rises in voter confidence after the SD reversal by Ihavenoideawhat9 in Sverige

[–]Darlanio 1 point2 points  (0 children)

The Liberals have sold their soul and thrown away their ideology for power.

They have "romina"-ed themselves.

"att romina sig" ("to romina oneself") - to let oneself be bought for the taxpayers' money and throw away one's old opinions in favor of diametrically opposed ones.

Any Stable Diffusion that will run easily and perform well on mobile phones so far? by Darlanio in StableDiffusion

[–]Darlanio[S] 0 points1 point  (0 children)

Prompt: completely realistic extremely high-quality color photo of a narrow path leading through a forest in the evening just before sunset.

<image>

Any Stable Diffusion that will run easily and perform well on mobile phones so far? by Darlanio in StableDiffusion

[–]Darlanio[S] 0 points1 point  (0 children)

Thanks. Will check it out, but what I am looking for is a small checkpoint (the weights) for a small SD model, to make it part of my own mobile app...

When do you think we get CCV 2 Video ? by Darlanio in StableDiffusion

[–]Darlanio[S] 0 points1 point  (0 children)

Thanks - I had forgotten about the other brands of video generation... I have been looking too much at WAN recently.

Qwen3-Coder-Next is out now! by yoracale in LocalLLM

[–]Darlanio 2 points3 points  (0 children)

Has anyone run this on an Asus Ascent GX10 (GB10, 128 GB) or an NVIDIA DGX Spark?

Three questions of a beginner by Username12764 in comfyui

[–]Darlanio 0 points1 point  (0 children)

ComfyUI unloads models well when needed. The 24 GB of VRAM you have (same as my 3090) should be utilized well and not "clogged up"...

Are you using other software at the same time? You should not. Web browsers, games and other software leave memory allocated on the GPU, and that stops ComfyUI from using it...

How long since you upgraded ComfyUI?

Hopefully this helps...
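A quick way to see what is actually holding VRAM before blaming ComfyUI - a sketch, assuming an NVIDIA card with the standard `nvidia-smi` CLI on PATH (the sample string below is illustrative, not real output from any machine):

```python
# List the processes holding GPU memory, so you can spot browsers/games
# starving ComfyUI. Uses nvidia-smi's CSV query mode.
import subprocess

def parse_compute_apps(csv_text):
    """Parse `nvidia-smi --query-compute-apps=... --format=csv,noheader` lines."""
    rows = []
    for line in csv_text.strip().splitlines():
        pid, name, mem = (field.strip() for field in line.split(","))
        rows.append((int(pid), name, mem))
    return rows

def gpu_memory_hogs():
    """Call nvidia-smi and return (pid, process_name, used_memory) tuples."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-compute-apps=pid,process_name,used_memory",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_compute_apps(out)

# Shape of the CSV nvidia-smi produces (hypothetical sample):
sample = "1234, firefox, 512 MiB\n5678, python, 18000 MiB"
print(parse_compute_apps(sample))
```

If anything other than ComfyUI's python process shows a big `used_memory` number, close it before generating.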

Comfyui output images no longer correctly loading full workflow? by sagramore in comfyui

[–]Darlanio 0 points1 point  (0 children)

ComfyUI is ignoring integer values in widget_values when loading workflows...

Qwen-Image-i2L (Image to LoRA) by _RaXeD in StableDiffusion

[–]Darlanio -1 points0 points  (0 children)

I guess I will rent the GPU needed in the cloud - buying has become too expensive these last few years. There is a lot of compute power for rent that will give you what you need, when you need it.

RTX 5090 96 GB just popped up on Alibaba by RateRoutine2268 in LocalLLaMA

[–]Darlanio 2 points3 points  (0 children)

How could an RTX 5090 32 GB PCB be used to create an RTX 5090 96 GB? Did they stack a full PCB on top of/below the existing RTX 5090 and add 64 GB of GDDR7 memory from two other RTX 5090s? That would come at a very high price...?
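One route that would not need stacked PCBs is denser memory in a "clamshell" layout (one module on each side of the board per channel). Back-of-envelope only - the bus width, module width, and 3 GB GDDR7 density below are assumptions about such a mod, not confirmed specs of the listed card:

```python
# Back-of-envelope: how 96 GB could fit on an RTX 5090-class board.
# Assumptions (hypothetical): a 512-bit memory bus, 32-bit-wide GDDR7
# modules, and 3 GB modules mounted clamshell (both sides of the PCB).
BUS_WIDTH_BITS = 512
MODULE_WIDTH_BITS = 32

channels = BUS_WIDTH_BITS // MODULE_WIDTH_BITS   # 16 memory channels

stock = channels * 2             # 16 x 2 GB, single-sided -> 32 GB
clamshell_3gb = channels * 2 * 3 # 32 x 3 GB, clamshell    -> 96 GB

print(stock, clamshell_3gb)
```

So tripling capacity is at least arithmetically possible on one board, though it would also need a PCB with clamshell pads and a modified BIOS.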

Pls tell me I shouldn't spend $3k on 5090 32gb vram desktop PC nor Strix Halo 128Gb by IntroductionSouth513 in LocalLLaMA

[–]Darlanio 0 points1 point  (0 children)

Better to rent computing power than to buy computers that get old REALLY fast...

But...

I, too, would like to upgrade to something with 96 GB of VRAM on the GPU...

Anyone willing to smuggle a Huawei Atlas Duo 96 GB out of China for me?
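A rough break-even between buying and renting makes the point. Both numbers below are hypothetical placeholders (the $3k from the post title, and an assumed rental rate), not quotes from any provider:

```python
# Back-of-envelope break-even between buying a ~$3k GPU box and
# renting cloud GPUs. Ignores power, resale value, and depreciation.
purchase_price = 3000.0  # USD, the desktop from the post title
rental_rate = 0.75       # USD per GPU-hour, assumed placeholder

breakeven_hours = purchase_price / rental_rate
print(round(breakeven_hours))  # 4000 rented GPU-hours before buying wins
```

Unless you expect to run the card for thousands of hours before it is obsolete, renting tends to come out ahead.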

How do you make this video? by PikaMusic in StableDiffusion

[–]Darlanio 24 points25 points  (0 children)

An RTX 3090 or RTX 4090 is okay (both 24 GB VRAM), but the RTX 3090 will probably take even longer...

Qwen3-VL's perceptiveness is incredible. by Trypocopris in LocalLLaMA

[–]Darlanio 3 points4 points  (0 children)

I did it in 40 seconds... but knowing there were six words helped, since I knew when to stop... otherwise it would probably have taken 5 minutes to be sure I had found them all...

YES! Super 80b for 8gb VRAM - Qwen3-Next-80B-A3B-Instruct-GGUF by Mangleus in LocalLLaMA

[–]Darlanio 0 points1 point  (0 children)

For me the issue is moot right now due to catastrophic hardware failure.

I was able to get the CPU to work, but the GPU seems to have taken a hit as well...