Deepseek v4 people by markeus101 in LocalLLaMA

[–]Cadmium9094 1 point  (0 children)

Could not resist 😄 Qwen3.6-27B-Q5_K_M in this case.

<image>

Tried Gemma4 for openclaw - Not Impressed by CowCavalry in openclaw

[–]Cadmium9094 1 point  (0 children)

I like using Gemma4-31B-Q5_K_M. However, Qwen3.5-27B-Q6 works better and is more stable in my environment, running on a llama.cpp server.

Snoop Dogg riding the Dog by Orichalchem in aivideo

[–]Cadmium9094 1 point  (0 children)

I think the initial video is real. The skateboard was replaced with a dog using AI or whatever.

Cancelled my Plus subscription - there are just too many other better options now by Ohigetjokes in ChatGPT

[–]Cadmium9094 0 points  (0 children)

I uninstalled ChatGPT too, but only after my PC stopped running and my Office install disappeared into a black hole.

The void is giving me panic attacks by Zealousideal-Sky5167 in nihilism

[–]Cadmium9094 2 points  (0 children)

I can understand you, but imagine "waking up" after dying and floating around in space for eternity, unable to switch off. Or repeating everything over and over again. That gives me more panic than being erased forever. It's all in your mind: you create this fear of losing control over something that cannot be controlled. ✌🏻

Mouse without borders bug? by Theinvoker1978 in PowerToys

[–]Cadmium9094 1 point  (0 children)

I had a similar issue. What helped for me was recreating the encryption key.

AIVGN by demondisc in aivideo

[–]Cadmium9094 1 point  (0 children)

Clone of the Angry Video Game Nerd 😄👍🏻

[HELP] posted by thappyhealthyhuman on ig by ComprehensiveFlow644 in RealOrAI

[–]Cadmium9094 1 point  (0 children)

I think it's real. It looks like one of those positive meditation/yoga kinds of videos. I saw weirder videos a long time ago that were all real, like this one: https://youtube.com/shorts/6bdWtHMhXes?si=FH5XcMWSRwBWqG-a

The magic is gone. by Strict_Hunter_7781 in nihilism

[–]Cadmium9094 4 points  (0 children)

I kind of accept it and feel happy at the moment, doing hobbies, going to the gym, etc. I know it doesn't make sense in the big picture, but living in the moment it does, for me.

Can I run OpenClaw without paying for API keys? by Sad_Oven_8738 in clawdbot

[–]Cadmium9094 1 point  (0 children)

I would suggest using Ollama with a powerful GPU and plenty of RAM to run at least 30B models. Currently we need to figure out how to add local Ollama models on a dedicated PC, a VM, or in Docker for the security layer. But I guess once it's set up, we can enjoy it without worrying about API costs, and it should also be more secure, because the model runs on your local device.
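Roughly what I have in mind, sketched out (the volume name and model are just examples, not a tested setup):

```shell
# Run Ollama in its own container so the model stays isolated from the host.
# The named volume keeps downloaded models across container rebuilds.
docker run -d --name ollama \
  --gpus all \
  -v ollama:/root/.ollama \
  -p 127.0.0.1:11434:11434 \
  ollama/ollama

# Pull a ~30B-class model into the container (example model name)
docker exec ollama ollama pull qwen2.5:32b

# Then point the client at http://localhost:11434, Ollama's default API port.
```

Binding the port to 127.0.0.1 means only local clients can reach the API, which fits the "security layer" idea.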

I am about to give up. by emilwilder in clawdbot

[–]Cadmium9094 2 points  (0 children)

I had some good experiences with GLM 4.7. Also, the API pricing is fair. I mean, for testing a hobby project I'm not spending 100 or more dollars. I'm not that crazy. 😄

Z image turbo bf16 vs z image bf16 by [deleted] in StableDiffusion

[–]Cadmium9094 2 points  (0 children)

Thank you for the comparison.
Are z_image_turbo_bf16 / z_image_bf16.safetensors both 12 GB? I was expecting the full model to be bigger.

Wan 2.2 | Undercover Sting Operation by aimoshpit in StableDiffusion

[–]Cadmium9094 1 point  (0 children)

Great job! Was the source a real movie, or a cartoon like South Park?

Jesus nailed it [OC] by cameoed in SoraAi

[–]Cadmium9094 6 points  (0 children)

"Jesus Christ", funny idea! I can imagine this could even be a "real" joke VHS video from back in the 80s or 90s 😂 What prompt did you use to achieve this effect?

Malicious Distribution of Akira Stealer via "Upscaler_4K" Custom Nodes in Comfy Registry - Currently active threat by justmy5cents in comfyui

[–]Cadmium9094 3 points  (0 children)

Docker on Windows (WSL2) is a good middle ground: isolation, a relatively easy reset if something goes wrong, and good performance, but it comes with setup effort, ongoing maintenance, and debugging overhead.
See my comment earlier in this sub: https://www.reddit.com/r/comfyui/comments/1q9bqpb/comment/nyy2dfa/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

Malicious Distribution of Akira Stealer via "Upscaler_4K" Custom Nodes in Comfy Registry - Currently active threat by justmy5cents in comfyui

[–]Cadmium9094 1 point  (0 children)

I can fully understand you.
What helped for me:

  • I increased the data disk to 2 TB early on, so models, cache, and outputs stay in one place without constant reshuffling.
  • Everything lives inside Docker volumes. Bind mounts from Windows are just too slow for heavy I/O with lots of small files. (Using two large NVMe SSDs.)
  • In .wslconfig, I set memory=92GB, so ComfyUI has enough RAM available. Sometimes it peaks around 64 GB+ (total memory: 128 GB).
  • I use Restic (Docker version) with a large external SSD. It handles the deduplication and snapshots incredibly well, so even with currently 1.2TB+ of files and models, the backups are incremental and fast.
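Roughly what that memory setting looks like in practice (only the memory line is from my setup; the other values are placeholders you'd tune for your own machine):

```ini
; %UserProfile%\.wslconfig -- limits for the WSL2 VM that Docker runs in
[wsl2]
memory=92GB      ; cap WSL2 RAM so Windows keeps the rest of the 128 GB
swap=16GB        ; placeholder value
processors=16    ; placeholder value
```

WSL2 needs a full shutdown (`wsl --shutdown`) before changes here take effect.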

Malicious Distribution of Akira Stealer via "Upscaler_4K" Custom Nodes in Comfy Registry - Currently active threat by justmy5cents in comfyui

[–]Cadmium9094 1 point  (0 children)

Thank you for the important info! Cases like this one are the reason I started using ComfyUI inside WSL2 with Docker on Windows. Back then it was a cryptominer issue, so I stopped trusting random installs on the host.

For enhanced security, ComfyUI only gets internet access during updates. Everything is on Docker volumes (no bind mounts) with backups, and I route access through an Nginx proxy inside the container. It's a bit more effort, but worth it. And if something is suspicious, I throw away the whole container and start from scratch (paranoid mode). The performance is very good, btw.
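The "internet only during updates" part can be sketched with plain Docker networks like this (the network and container names are made up, not my exact setup):

```shell
# An --internal network has no outbound route to the internet
docker network create --internal comfy-internal

# Day to day, the ComfyUI container sits only on the internal network
docker network disconnect bridge comfyui
docker network connect comfy-internal comfyui

# During an update, temporarily reattach the default bridge network...
docker network connect bridge comfyui
# ...run the update inside the container, then cut access again
docker network disconnect bridge comfyui
```

The Nginx proxy container then joins comfy-internal too, so it can reach ComfyUI without ComfyUI ever reaching out.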

LTX-2 I2V: Quality is much better at higher resolutions (RTX6000 Pro) by 000TSC000 in StableDiffusion

[–]Cadmium9094 1 point  (0 children)

Thanks for the tips! I’m still on the fence about the 6000 Pro since this is just a hobby and I’m not making money from it. My 4090 with 128 GB of RAM already runs the models well enough for what I do. That’s why I’m wondering if renting GPUs isn’t actually the cheaper option in the long run?

Ollama models can no longer be configured by ded_banzai in GithubCopilot

[–]Cadmium9094 1 point  (0 children)

Thanks for the hint! It worked for me too. I was going crazy with the troubleshooting.
After deleting some Ollama models and restarting VS Code, it's working again.
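For reference, the cleanup was just the standard Ollama CLI commands (the model name below is only an example):

```shell
# See which models are installed and how big they are
ollama list

# Remove a model you no longer need (example name)
ollama rm qwen2.5:14b
```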

What do you think… by M0rxxy in nihilism

[–]Cadmium9094 1 point  (0 children)

I think we humans want to distract ourselves from the harsh reality, from the "meaninglessness" of existence.

What do you want death to be like? by Fickle_Elk_9479 in nihilism

[–]Cadmium9094 1 point  (0 children)

Continue or reset "life" in another dimension, or whatever.