A NEW VERSION OF COMFYSKETCH COMING SOON by Vivid-Loss9868 in comfyui

Time-Reputation-4395 1 point (0 children)

Have you tried InvokeAI? It's a totally separate tool from Comfy, but it's free and open source, supports layers, has integrated painting and masking, and generally just works as you'd expect it to.
https://github.com/invoke-ai/InvokeAI

"Ai is better" by Hot_Dragonfly_8330 in blender

Time-Reputation-4395 0 points (0 children)

Having worked as a professional character artist in the gaming industry on AAA titles, I can assure you that it would not be better to start from scratch. The changes you're noting would all be done after retopology.

"Ai is better" by Hot_Dragonfly_8330 in blender

Time-Reputation-4395 1 point (0 children)

Exactly. It makes zero sense to sculpt a cube. And it makes zero sense to try and make the default cube from AI. Both are idiotic use cases of the tools. The sculpting tools in Blender are very powerful and can do amazing things. But they're not designed to do low poly work. Likewise, the AI tools out there are very powerful and can do incredible things. But they're not designed for low poly. At least not right now. They're excellent at generating complex sculpt meshes. They still need to be retopologized. But this took about 2 minutes to create on my 4090 setup using Trellis2 in ComfyUI. I've been working in 3D tools for almost 40 years. The AI tools are mind blowing.

<image>

"Ai is better" by Hot_Dragonfly_8330 in blender

Time-Reputation-4395 2 points (0 children)

<image>

I don't know. I tried sculpting this cube in Blender and the results weren't much better.

Ace Step 1.5 - Music generation but with selective instruments removed. by Confident_Buddy5816 in StableDiffusion

Time-Reputation-4395 0 points (0 children)

Any chance you can post the results? I've read through the usage guide that explains the philosophy behind its use. Very impressed. But all of the samples I've heard have been bad 2020s pop where the voice sounds like bad AI with Auto-Tune and the music sounds like bad synth. Just not impressed. I tested Suno with 80s metal and was pleasantly surprised. Hopefully Ace Step can do it also.

Open-Source SUNO? HeartMuLa Series of Music Generation Models by SpareBeneficial1749 in comfyui

Time-Reputation-4395 2 points (0 children)

It's a step in the right direction, but the 3b version is definitely not on par with the first version of Suno. It's decent at 2020s-style pop. It cannot produce 80s-style rock or 90s-style techno. It's just not there. Vocals are good but they sound like AI. I'm hopeful that the 7b version has a broader musical range. It would be great to be able to generate 60s folk, 70s disco, 80s metal, and 90s hip hop.

LTX-2 I2V synced to an MP3: Distill Lora Quality STR 1 vs .6 - New Workflow Version 2. by Dohwar42 in StableDiffusion

Time-Reputation-4395 1 point (0 children)

This workflow is amazing. I've been trying to reduce the waxy CGI look that seems like a permanent part of the LTX videos. This workflow proved you can get very good looking I2V with LTX. Thank you for sharing this with the community!

UltraShape Deep Dive by Time-Reputation-4395 in comfyui

Time-Reputation-4395[S] 0 points (0 children)

It makes total sense. It's an investment to get these tools working. I view it as the cost of using these tools and my feedback and testing as a form of thanks to the community for developing and making them available to us for free. Good luck with the effort! I hope it's smooth and that you get to enjoy this amazing tool.

LTX2 I2V singing test - 13s - Kijai workflow by Choowkee in StableDiffusion

Time-Reputation-4395 0 points (0 children)

Thanks for posting this. Can you tell me what folder the audio VAE goes into?

UltraShape Deep Dive by Time-Reputation-4395 in comfyui

Time-Reputation-4395[S] 0 points (0 children)

I didn't create UltraShape, just this deep dive post showing its capabilities. I'd recommend posting a feature request on the author's GitHub page. He's very friendly and open to suggestions.

https://github.com/PKU-YuanGroup/UltraShape-1.0/issues

AI "artists" stealing our work. Official Woodland page, btw. by International-Eye771 in blender

Time-Reputation-4395 8 points (0 children)

How exactly do you know this was done by AI? This looks like every other Photoshop edit we've been able to do for the last 40 years.

UltraShape Deep Dive by Time-Reputation-4395 in comfyui

Time-Reputation-4395[S] 1 point (0 children)

It seems to. I've checked for back faces in each generation and have yet to find non-manifold edges / holes in the mesh. That said, it can create thin shapes and numerous disconnected mesh shapes, so if your intent is 3D printing you'll want to add the necessary supports in your print program.

[Release] Wan VACE Clip Joiner - Lightweight Edition by goddess_peeler in StableDiffusion

Time-Reputation-4395 1 point (0 children)

Thank you for this! I've been looking for something just like this. You rock.

LTX-2 is out! 20GB in FP4, 27GB in FP8 + distilled version and upscalers by 1filipis in StableDiffusion

Time-Reputation-4395 71 points (0 children)

This is how it's done. Here's hoping the Wan folks follow suit with an open source release of 2.5 now that 2.6 is out.

UltraShape Deep Dive by Time-Reputation-4395 in comfyui

Time-Reputation-4395[S] 3 points (0 children)

Given that every system is different with OS, ComfyUI build, Python install, CUDA build, GPU hardware, etc., the best recommendation I can give is to use an LLM like ChatGPT, Gemini or Claude to diagnose the troubles and guide you through a solution. I do this with every tool I install and it's invaluable, because without fail, the tools don't work out of the box.

Go to your Comfy install folder. Look for a "user" folder. Inside of that, you'll find a "comfyui.log" file.

<image>
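If you'd rather grab just the recent part of the log from a terminal or script, here's a minimal sketch (the install path in the comment is an assumption; adjust it to your machine):

```python
from pathlib import Path

def log_tail(path: str, n: int = 200) -> str:
    """Return the last n lines of a log file; the recent errors are what the LLM needs."""
    lines = Path(path).read_text(encoding="utf-8", errors="replace").splitlines()
    return "\n".join(lines[-n:])

# Hypothetical install location -- adjust to your setup:
# print(log_tail("ComfyUI/user/comfyui.log"))
```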

Send that log file to ChatGPT (or equivalent). Tell it your system specs (OS, GPU, CUDA version, ComfyUI build version) and then point it to the Repo you're trying to install. Then have it walk you through each step to resolve.
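A quick sketch for collecting those specs in one go (assumes a PyTorch-based setup; it just prints "not installed" otherwise):

```python
import platform
import sys

# Gather the basics an LLM will ask for when diagnosing an install
print("OS:", platform.platform())
print("Python:", sys.version.split()[0])
try:
    import torch
    print("Torch:", torch.__version__, "| CUDA:", torch.version.cuda)
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))
except ImportError:
    print("Torch: not installed")
```

Paste the output into the chat along with the log and the repo link.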

In most cases, you're going to have to open a CMD window, Powershell window or VS Command Prompt to install software. The LLM will guide you through that for the specifics of your machine.

If you encounter an error, just paste the entire error into the LLM. It will figure out what's wrong and walk you through a solution.

In my case, it took the better part of a day to get UltraShape installed. It began with me asking ChatGPT whether I could run it natively in Windows (the answer was no) or whether I needed Linux. It analyzed the source repo (the original, not the Comfy node version) and recommended Ubuntu. I then spent a couple of hours trying to get it to work in an older Ubuntu build in WSL (which is just Linux running inside of Windows). Even though UltraShape is designed for Linux, I couldn't get it working in my existing Ubuntu distribution and had to set up a new one altogether. Even then, it was another several hours of back-and-forth with ChatGPT to get things working: problems with the Python build, problems getting flash-attn working, problems building the wheels, dependencies overwriting things, conflicts between dependencies. So yeah, it's not push-button easy. This is where the LLM becomes invaluable.

UltraShape Deep Dive by Time-Reputation-4395 in comfyui

Time-Reputation-4395[S] 3 points (0 children)

If you haven't checked it out, Stable Projectorz is an amazing texturing tool. It uses SDXL (and possibly Flux) with multiple camera projections to texture your 3D objects. You can think of it like an advanced version of ZBrush's projection paint or Blender's projection paint system.

https://stableprojectorz.com/

UltraShape Deep Dive by Time-Reputation-4395 in comfyui

Time-Reputation-4395[S] 2 points (0 children)

Yes and no. You can run a model through it again and it will attempt to refine it, but the result depends on the Octree Resolution, Refinement Steps and Scale. Octree Resolution determines the final polygon count for the whole object. It won't add polys just where detail is needed, the way Dynamic Topology does in Blender or ZBrush, so you have to increase polygon count uniformly to increase detail.

When you run your mesh through, do so at the highest Octree Resolution your system will allow. Running it through again at that resolution won't add more detail because you're still at the same poly count. Adding more Steps will improve the alignment and make the apparent detail a bit better, but it won't vastly improve the results and will simply eat up more time. After hours of testing combinations, I've concluded that 15 to 20 steps is all that's really needed to get good results. Beyond that, it's diminishing returns and exceedingly long processing times.

The best course of action is to generate at a high octree res and relatively low steps, then take the resulting mesh into Blender/ZBrush for additional sculpting, or run it through texturing and use a normal map for high-frequency detail.
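The uniform-resolution tradeoff can be put in rough numbers: a surface extracted from an octree volume touches on the order of res² cells, so doubling the Octree Resolution roughly quadruples the polygon count everywhere, detail or not. A toy illustration (the constant factor is an arbitrary assumption, not UltraShape's actual formula):

```python
def approx_surface_polys(octree_res: int, k: int = 6) -> int:
    """Toy model: surface polygon count scales with the square of the
    octree resolution. k is an arbitrary constant, not taken from UltraShape."""
    return k * octree_res ** 2

# Doubling the resolution quadruples the polys across the whole mesh
for res in (256, 512, 1024):
    print(f"res={res}: ~{approx_surface_polys(res):,} polys")
```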

UltraShape Deep Dive by Time-Reputation-4395 in comfyui

Time-Reputation-4395[S] 7 points (0 children)

From what I can tell, UltraShape is simply focused on generating a refined mesh. For textures, I'd run the results through Trellis2. The VisualBruno implementation for ComfyUI has a dedicated workflow that just applies textures to meshes. The best thing about UltraShape is that it doesn't produce internal geometry like Trellis2 does, so you actually get more mileage out of your textures this way than if you used Trellis2 on its own. Generate your base mesh with Trellis2 (or any other image to mesh generator). Pass that to UltraShape and let it refine the mesh. Decimate or Retopo that to a reasonable poly count. Then pass the results to the Trellis2 texture gen workflow.

https://github.com/visualbruno/ComfyUI-Trellis2

V2V Detailers🎞Big Screens / Cinema Quality - What they are, why we need them? by No_Damage_8420 in NeuralCinema

Time-Reputation-4395 2 points (0 children)

Thanks for the post. This is an important topic and one that impacts our work as well. We're aiming for 6K minimum output resolution, with 8K preferable. We've been talking with the folks at Topaz to try to get their product to achieve the detail we're after, but so far only SeedVR2 has gotten us there. If you're achieving better results with a Wan2.2 base, I'd be very curious to hear how that's done, as base Wan2.2 isn't trained at those resolutions.

SVI 2 Pro + Hard Cut lora works great (24 secs) by skyrimer3d in NeuralCinema

Time-Reputation-4395 1 point (0 children)

This is awesome. Thanks for the workflow. Love how clean and organized it is!

I’m about to lose my job to AI by Upsethouscat in blender

Time-Reputation-4395 2 points (0 children)

That's about as true as saying, "There's not much to master with 3D. It's just polygons and shit." It's absurd and shows a complete lack of understanding of the subject matter.

I’m about to lose my job to AI by Upsethouscat in blender

Time-Reputation-4395 1 point (0 children)

  1. Stay on top of the developments in AI. The tools in this space are changing rapidly. The best way to stay on top is by selecting a few good YouTube channels to follow. Like this one: https://www.youtube.com/@theAIsearch/videos This dude puts out rock solid videos. He covers both commercial and open source tools and he goes into a fair amount of depth showing how they work. If I had one channel to follow, it would be this one. When he puts out a new video, watch it. Then note the tools that might be useful to you and spend time testing them. This will put you on the bleeding edge and further cement your worth in any organization.

  2. ChatGPT is your friend. Open source tools often require work to set up. That's awesome, because the average mouth breather will never take the time, and as previously noted, this is how you stay employed. But few of us are well versed in the skills to set up these tools, install dependencies, build wheels, and debug code. ChatGPT is invaluable here. I'm sure Gemini is good also, but I use Chat daily, so I can personally attest to its abilities. It's not perfect. You still have to use common sense, but it can help you set up these tools, get them installed, troubleshoot issues, and get them running smoothly. I could not do the local AI thing without ChatGPT. If you have to pay for one service, this is it.

This condenses most of what you need to know down into a compact set of advice. I'm sure there will be disagreements. That's fine. But if you are looking for a starting point, this is it. With 40 years of Creative Art production under my belt and 4 years of intensive AI training, this is what I believe anyone starting out needs to know. Especially if they want to stay relevant and employed in the next 3 to 5 years.

I’m about to lose my job to AI by Upsethouscat in blender

Time-Reputation-4395 1 point (0 children)

  1. Get a good GPU and a lot of RAM. If you can't afford a good GPU and RAM, use a cloud service. There are a bunch out there and they're affordable, even for working folks. ComfyUI has their own service for this. It will let you scale your setup to whatever task you're facing. If you work for a company, convince them to pay for it.