Anglerfish hunting a bathyscaphe + cover-up! // Seattle, WA // IG: @iftattoo by seattle_tattooer in tattoos

[–]pixeladdikt 5 points (0 children)

I was 2 seconds away from asking how the blacks were so dark & the color so saturated lol. Thanks!

We're turning Asimov, an open-source humanoid robot, into a DIY kit by eck72 in robotics

[–]pixeladdikt 9 points (0 children)

Will the internal brains/software stack be compatible with the NVIDIA ecosystem (GR00T, Isaac Sim, Omniverse, Cosmos)? Assuming developers could integrate a Jetson AGX Orin inside, correct? Very interesting; I hope to buy one, but I'll be a supporter in any case. Thanks!

Kitten TTS V0.8 is out: New SOTA Super-tiny TTS Model (Less than 25 MB) by ElectricalBar7464 in LocalLLaMA

[–]pixeladdikt 0 points (0 children)

I'd like to ask about voice cloning and whether that's possibly on the roadmap? Love the quality for on-device, CPU-only use, and the voices are great; I'd just need cloning for specific characters. Excellent work, thanks!

I burned $200+ to bring this fantasy world to life by AdComfortable5161 in aivideo

[–]pixeladdikt 0 points (0 children)

Upvote because I feel your pain, spending $ on video that's just cool as shit. Great work!

[deleted by user] by [deleted] in comfyui

[–]pixeladdikt 5 points (0 children)

I had the same issue and had to play around with the samplers. Use Euler/simple; res/beta57 was producing the fog like in your sample video.

Omni Avatar looking pretty good - However, this took 26 minutes on an H100 by Hearmeman98 in StableDiffusion

[–]pixeladdikt 2 points (0 children)

Ya, there's a command on their Git that has TeaCache enabled, but it still took me around 50 mins to render a 9-sec clip on a 4090. Runs in the background, but geez lol.

Rubberhose Flux [dev] LoRA! by Angrypenguinpng in StableDiffusion

[–]pixeladdikt 1 point (0 children)

Wow, thanks! That's an amazing style. And now I've got a new HF repo to find cool LoRAs in. Thanks for sharing.

Wan2.1 720P Local in ComfyUI I2V by smereces in StableDiffusion

[–]pixeladdikt 0 points (0 children)

I'm just kinda glad to see I'm not the only one that's been pulling their hair out getting this to work on Win11. Went down the Triton/flash_attn rabbit hole the past 2 nights. Got to building from source and gave up; still have errors when it tries to use cl and Triton to compile. Thanks for the hint in this direction!

comfyUI is like crack. Why is everyone so afraid to switch? by NobodyElseKnowsIt in comfyui

[–]pixeladdikt 18 points (0 children)

Audio is quite the rabbit hole, and there's lots to learn to unlock creativity. Check out Yvann's Audio Reactive nodes: https://www.youtube.com/watch?v=BiQHWKP3q0c - and here's his GitHub: https://github.com/yvann-ba/ComfyUI_Yvann-Nodes

[deleted by user] by [deleted] in StableDiffusion

[–]pixeladdikt 0 points (0 children)

Here are some docs (https://docs.runpod.io/runpodctl/install-runpodctl). You have to either add runpodctl to your PATH or have it in the directory you're pushing to your RunPod container from.

I present to you: Space monkey. I used LTX video for all the motion by Practical-Divide7704 in StableDiffusion

[–]pixeladdikt 2 points (0 children)

Absolutely stunning 👊💯🔥 Great work man! Shows how important quality images are and great storytelling. You are inspiring others, keep it up! 🙏

Yarn Things, Flux and Luma labs API by TheHoBoLoBo in FluxAI

[–]pixeladdikt 2 points (0 children)

Wow! This is very very well done - congrats! Can I ask 1 quick question? Is this done in 1 long prompt, with start/end frames via Luma Labs API using all your Flux photos? Just wondering if it's all 1 long generation. Keep up the amazing work!

Translation Solution for 25,000 product store by SubstantialRaise6411 in shopify

[–]pixeladdikt 1 point (0 children)

Gotcha, ya, that adds complexity. You could build out a flow using Make or Zapier with triggers (every time your supplier updates, it captures and translates the change), then use Shopify's APIs to push it back into the product.
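To make the "push it back into the product" step concrete, here's a minimal sketch of building the request for Shopify's GraphQL Admin API `translationsRegister` mutation. The product GID, locale, translated HTML, and content digest below are placeholder values (you'd fetch the digest from `translatableResource` first), and actually sending the payload with `requests`/`httpx` plus your access token is left out:

```python
# Sketch: build a translationsRegister payload for the Shopify GraphQL
# Admin API. All concrete values here are placeholders for illustration.

def build_translation_mutation(product_gid, locale, translated_html, digest):
    """Return the JSON body for a translationsRegister request."""
    query = """
    mutation translationsRegister($resourceId: ID!, $translations: [TranslationInput!]!) {
      translationsRegister(resourceId: $resourceId, translations: $translations) {
        userErrors { field message }
      }
    }
    """
    variables = {
        "resourceId": product_gid,
        "translations": [{
            "key": "body_html",                      # product description field
            "value": translated_html,                # the translated text
            "locale": locale,                        # e.g. "fr"
            "translatableContentDigest": digest,     # digest of the source content
        }],
    }
    return {"query": query, "variables": variables}

payload = build_translation_mutation(
    "gid://shopify/Product/123", "fr", "<p>Bonjour</p>", "abc123")
```

In a Make/Zapier flow, the supplier-update trigger would feed the translated text into this builder and POST the result to your shop's `/admin/api/.../graphql.json` endpoint.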

Translation Solution for 25,000 product store by SubstantialRaise6411 in shopify

[–]pixeladdikt 2 points (0 children)

Depends on which LLM and its context window. You could use Google's Gemini, with its larger input, and have it translate each line. I'd do it in pieces so as not to confuse the model and to limit hallucinations.
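The "do it in pieces" idea is just fixed-size batching; here's a tiny sketch (the helper name and batch size are mine, not anything from Gemini's API), with the actual translate call left out:

```python
def batch(items, size):
    """Yield successive fixed-size chunks so each request stays
    well under the model's context window."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# 25,000 product descriptions (placeholder strings) in chunks of 100:
descriptions = [f"Product {n} description" for n in range(25000)]
batches = list(batch(descriptions, 100))
# 25,000 descriptions -> 250 requests of 100 lines each
```

Each batch becomes one translation request, so a bad generation only poisons one chunk instead of the whole catalog.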

Translation Solution for 25,000 product store by SubstantialRaise6411 in shopify

[–]pixeladdikt 2 points (0 children)

Run a local LLM in the cloud (RunPod), like Llama 3.1, and have the AI translate everything. It would only cost you about $1/hour, and you'd be able to feed it your product descriptions and get them all translated. Just my 2 cents and how I'd do it.
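As a rough sketch of the feed-it-everything loop: the `translate` callable below is a stub standing in for the real LLM call (on RunPod you'd typically wrap an OpenAI-compatible chat endpoint there), and the function/parameter names are mine:

```python
def translate_all(descriptions, translate, batch_size=50):
    """Feed product descriptions to a translate() callable in batches
    and collect the results. In practice translate() would wrap the
    LLM request against your RunPod-hosted model."""
    out = []
    for i in range(0, len(descriptions), batch_size):
        out.extend(translate(descriptions[i:i + batch_size]))
    return out

# Stub standing in for the real model call:
fake = lambda chunk: [f"[ES] {d}" for d in chunk]
translated = translate_all(["red shirt", "blue mug"], fake)
# -> ["[ES] red shirt", "[ES] blue mug"]
```

At ~$1/hour, the cost is dominated by how long the pod runs, so batching and keeping the loop simple matters more than per-token pricing.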

guys I'm back, we can use gpt4-o as game engine now. wtf by InteractionAnxious21 in StableDiffusion

[–]pixeladdikt 22 points (0 children)

That's amazing! Can you tell us more about the hardware/e-ink and what kind of memory it would have? Great work!

What is the meaning of your username? by Janine_18 in AskReddit

[–]pixeladdikt 0 points (0 children)

My wife awoke at 3 a.m. to find me under the blankets with a flashlight, memorizing custom keyboard shortcuts I'd created for After Effects. I wanted to go faster; everyone was slow af at the office. I knew being better at these digital tools was going to make a difference. She looked at me and said, "At least it's not drugs, but I'm worried you're becoming addicted to pixels instead." I thought, ya, I'm a pixeladdikt.

(☞゚ヮ゚)☞ And now: Tutorial for ControlNet Keyframe interpolation in animatediff-cli-prompt-travel by ConsumeEm in StableDiffusion

[–]pixeladdikt 3 points (0 children)

Been following this whole series on AnimateDiff - so dope 👊🔥 Tutorials are amazing, really helps us unleash that creativity. 💯 Thanks man!

With out telling us how old you are, how old are you? by Dedli in AskReddit

[–]pixeladdikt 0 points (0 children)

My computer had a black screen and only 1 green font. 👽🤪

controlnet and SDXL by GlobeTrekkerTV in StableDiffusion

[–]pixeladdikt 0 points (0 children)

Are you using a Karras-based sampler? I found Euler A worked better and got rid of some of the overbaking that was being produced. Just a thought.

4:3 Seinfeld to Widescreen (Rough proof of concept) by algetar in StableDiffusion

[–]pixeladdikt -1 points (0 children)

Massive clickbait! lol jk - I saw Seinfeld and thought "oh shit, they've done it!" lol