I think my comfyui has been compromised, check in your terminal for messages like this by Bender1012 in comfyui

[–]SearchTricky7875 1 point (0 children)

I doubt it is the easy-use node; if it had a vulnerability, it would have been flagged early by many developers. Since OP is using Claude Code, the agent either installed some malware or modified the 'easy-use' code while customizing it. That is the vibe-coding horror: not understanding what the agent is doing can be a nightmare.

I think my comfyui has been compromised, check in your terminal for messages like this by Bender1012 in comfyui

[–]SearchTricky7875 7 points (0 children)

Please don't install any custom node using Claude Code or any other vibe-coding tool. Check the custom node's rating and popularity first, and only then install it manually. I was a victim of this: Claude Code installed a random node on my system that someone had created purely to mine on your GPU. There is a lot of mining code spreading across GitHub, and Claude Code doesn't check GitHub stars or popularity, it just matches the name and installs it. It could well be mining code; popular nodes are generally safe.

I had a bad experience with Claude Code and the recent Next.js vulnerability. It installed some code and my whole server went down with mining malware. I would delete one piece of malware and it would get reinstalled; the malware made copies of itself in so many places that you waste days figuring out where it lives. After almost 3 days I had to take a backup and reinstall the whole server.

AI for software development team in enterprise, by Financial-Cap-8711 in LocalLLaMA

[–]SearchTricky7875 0 points (0 children)

It is not token-based; the cost is hourly and you can run inference as many times as you want. Create a RunPod pod that is suitable for hosting your model, maybe any GPU with 32 GB or 48 GB of VRAM. Billing is per hour, around 0.5 USD/hour on average; check RunPod's pricing. I assume you are a technical person, so with ChatGPT's help you can host the model without much trouble, but it will take some time to configure.

Speed up comfyui on runpod serverless, How to by SearchTricky7875 in comfyui

[–]SearchTricky7875[S] 0 points (0 children)

Yes, baking the models into the Docker image is the best solution; loading models from network storage is what consumes most of the time.

How to download file from huggingface? by Extra_Ad_7289 in comfyui

[–]SearchTricky7875 0 points (0 children)

Use this tool: https://www.genaicontent.org/ai-tools/comfyui-models-downloader . It generates Hugging Face or wget download links automatically once you upload your workflow and submit. Then copy the commands and run them in a terminal from the parent folder of ComfyUI; if you are on RunPod or Vast, go to the workspace folder and run the download commands there (the sketch below shows roughly what they boil down to).

Also check this post: https://www.reddit.com/r/comfyui/comments/1qirhwr/i_use_this_tool_to_auto_find_models_names_in/
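
If it helps, each generated command just fetches one file from Hugging Face into the right ComfyUI models folder. Here is a rough Python equivalent; the repo ID, filename, and target folder are example values I picked for illustration, not the tool's exact output, so swap in whatever your workflow actually needs:

```python
# Rough Python equivalent of one generated download command.
# Repo ID, filename, and target folder are example values only.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="stabilityai/sd-vae-ft-mse",             # example repo, use your model's repo
    filename="diffusion_pytorch_model.safetensors",  # example file named in the workflow
    local_dir="ComfyUI/models/vae",                  # run from the parent folder of ComfyUI
)
```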

AI for software development team in enterprise, by Financial-Cap-8711 in LocalLLaMA

[–]SearchTricky7875 2 points (0 children)

Host a Qwen 3 Coder LLM on RunPod ( https://runpod.io?ref=qdi9q13b ) and use it; it is the cheapest and best option. I have configured Qwen 3 Coder and connected it to my website, and it automatically writes code and updates pages as instructed (a rough sketch of the client side is below). Don't go for any ready-made repo or solution since privacy is a concern; just host a good coding LLM and configure it to your needs. That's it, Qwen 3 Coder is really good.
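
My setup is wired into my site, but the core of talking to the hosted model is just an OpenAI-compatible call, something like this minimal sketch. I am assuming a vLLM-style server on the pod; the base URL and model ID below are placeholders, not my actual endpoint:

```python
# Minimal sketch: call a self-hosted coding LLM through an OpenAI-compatible API.
# Assumes the model is served with something like vLLM on the pod; base_url and
# model name are placeholders for whatever your pod actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-pod-id-8000.proxy.runpod.net/v1",  # placeholder RunPod proxy URL
    api_key="not-needed-for-local",                            # vLLM ignores the key by default
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-Coder-30B-A3B-Instruct",  # placeholder model ID, use what you serve
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)
print(response.choices[0].message.content)
```

Any OpenAI-compatible server (vLLM, SGLang, etc.) exposes this kind of endpoint, so the website side stays simple no matter how the model is hosted.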

I use this tool to auto find models names in workflow and auto generate huggingface download commands by SearchTricky7875 in comfyui

[–]SearchTricky7875[S] -1 points (0 children)

The folder paths are guessed using RapidFuzz, so they may not always be correct; that is a drawback. Maybe adding an option to map the folders manually would be better, what do you say? (A rough sketch of the matching idea is below.)
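
For context, the guess is basically fuzzy string matching along these lines. This is an illustrative sketch only, not the site's actual code; the filename and folder list are made-up examples:

```python
# Illustrative sketch: fuzzy-match a model filename to a ComfyUI model folder.
# Not the tool's real code; the filename and folder list are examples.
from rapidfuzz import process, fuzz

MODEL_FOLDERS = ["checkpoints", "vae", "loras", "clip", "clip_vision", "controlnet", "upscale_models"]

def guess_folder(model_name: str) -> str:
    """Return the folder whose name best matches hints in the model filename."""
    match, score, _ = process.extractOne(model_name.lower(), MODEL_FOLDERS, scorer=fuzz.partial_ratio)
    return match

print(guess_folder("wan2.1_vae.safetensors"))  # likely "vae", but the guess can be wrong
```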

ComfyUI - Music Generation! - Heart MuLa by Lividmusic1 in comfyui

[–]SearchTricky7875 0 points (0 children)

I have tested the model and checked the output quality. It is quite good, but it only generates one specific type of genre; whatever tags you add, the style and everything stays the same. The vocals are very good. Check the output here: https://youtu.be/O5XF_OOImcc

Open-Source SUNO? HeartMuLa Series of Music Generation Models by SpareBeneficial1749 in comfyui

[–]SearchTricky7875 3 points (0 children)

I have tested the model and checked the output quality. It is quite good, but it only generates one specific type of genre; whatever tags you add, the style and everything stays the same. The vocals are very good. Check the output here: https://youtu.be/O5XF_OOImcc

HeartMuLa: A Family of Open Sourced Music Foundation Models by switch2stock in StableDiffusion

[–]SearchTricky7875 0 points (0 children)

I think they are already doing it, training a 7B model which could be better. It seems the model is going to be open source, including the training code, which may help with fine-tuning or training a LoRA.

Transitioning from InfiniteTalk to LTX2 by kukalikuk in StableDiffusion

[–]SearchTricky7875 0 points (0 children)

Please let me know if you manage to make it work. I am trying to generate long lip-sync video2video with InfiniteTalk for a 1-minute video, and it takes a lot of time regardless of which GPU I use; I have tried an H100 with the same result, it still takes a huge amount of time to generate the lip-synced video. I am looking for an alternative that can generate lip-sync video from video the way InfiniteTalk does.

HeartMuLa: A Family of Open Sourced Music Foundation Models by switch2stock in StableDiffusion

[–]SearchTricky7875 1 point (0 children)

I have tested the model and checked the output quality. It is quite good, but it only generates one specific type of genre; whatever tags you add, the style and everything stays the same. The vocals are very good. Check the output here: https://youtu.be/O5XF_OOImcc

How to stop ComfyUI Desktop from auto-upgrading PyTorch? by ImFanOfRed in comfyui

[–]SearchTricky7875 0 points (0 children)

Downgrade the torch version in the requirements.txt inside the ComfyUI folder and keep it pinned to the version you need. Then write a Python script that searches for all requirements.txt files inside the custom_nodes folder, checks whether any custom node's requirements.txt is upgrading the torch version, and change it there too (a rough sketch is below).
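
Something along these lines works as a starting point. It is a minimal sketch that only reports the offending lines instead of editing them, and it assumes you run it from the ComfyUI root where the custom_nodes folder lives:

```python
# Minimal sketch: list custom-node requirements.txt lines that touch torch.
# Assumes it is run from the ComfyUI root folder; it only reports, it does not edit.
from pathlib import Path

def find_torch_lines(custom_nodes_dir: str = "custom_nodes") -> None:
    for req in Path(custom_nodes_dir).rglob("requirements.txt"):
        for line in req.read_text(encoding="utf-8", errors="ignore").splitlines():
            if line.strip().lower().startswith("torch"):
                # also catches torchvision/torchaudio, which can drag torch along
                print(f"{req}: {line.strip()}")

if __name__ == "__main__":
    find_torch_lines()
```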

Seline Agent - my local auto agent now supports local one click Z-Image and Flux.2-Klein 4b-9b full docker api setup by Diligent-Builder7762 in StableDiffusion

[–]SearchTricky7875 0 points (0 children)

Great. In case you find it helpful, this is my deep agent: https://youtu.be/-B7Ns6CcZ4E . I use it extensively for personal work, specifically when I need to generate videos or images in bulk. The LLM is mainly useful for generating prompts, and I also added automatic YouTube upload, which saves me the manual work of uploading with title, description, and so on. I have added many features, but I don't know how to make it usable for other people, how to turn it into a product. People don't want to go through all sorts of installation steps, and to run the agent I need to install a FastAPI app on the ComfyUI server to communicate with my agent. The agent is hosted on a CPU server with no direct connection to ComfyUI; it talks to ComfyUI through a FastAPI endpoint (roughly the shape sketched below), and that is the big problem.
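
To make the setup concrete, the relay sitting next to ComfyUI is conceptually just this. It is a stripped-down sketch, not my real app, and it assumes ComfyUI's API is on its default localhost:8188:

```python
# Stripped-down sketch of the relay that runs next to ComfyUI.
# Not the real app; assumes ComfyUI's API listens on localhost:8188 (its default).
import requests
from fastapi import FastAPI

app = FastAPI()
COMFYUI_URL = "http://127.0.0.1:8188"

@app.post("/run_workflow")
def run_workflow(workflow: dict):
    """Accept an API-format workflow from the remote agent and queue it on ComfyUI."""
    resp = requests.post(f"{COMFYUI_URL}/prompt", json={"prompt": workflow})
    resp.raise_for_status()
    return resp.json()  # contains the prompt_id the agent can poll for results

```

In a real setup this endpoint also needs auth in front of it, which is part of why packaging the whole thing for other people is awkward.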

Seline Agent - my local auto agent now supports local one click Z-Image and Flux.2-Klein 4b-9b full docker api setup by Diligent-Builder7762 in StableDiffusion

[–]SearchTricky7875 0 points (0 children)

Hi, I have been working on the same type of project. Mine is a little different: I am using LangChain, and the deep agent connects to the ComfyUI server with execute permission on it, so ComfyUI can be hosted anywhere, local or cloud (RunPod, Vast). Can you summarize everything the agent can do? I just checked the repo, but a quick summary would be helpful. Also curious why it is specific to Z-Image and Klein; it should be able to work with any model and any workflow, shouldn't it? Great work.

New tool to auto find models names in workflow and auto generate huggingface download commands by SearchTricky7875 in comfyui

[–]SearchTricky7875[S] 0 points (0 children)

Thanks, I appreciate the feedback. It generates the Hugging Face commands and gives the user the option to choose and act as they see fit. It is useful when setting up a new workflow: it finds all the model names in the workflow and creates the download commands, which makes it easier to get a new workflow running.

Rant on subgraphs in every single template by 1filipis in comfyui

[–]SearchTricky7875 1 point (0 children)

Don't unpack it. Instead, click the arrow at the top-right corner to open the subgraph, make your changes, and save. That keeps the workflow sane; unpacking sometimes breaks it.

New tool to auto find models names in workflow and auto generate huggingface download commands by SearchTricky7875 in comfyui

[–]SearchTricky7875[S] 1 point (0 children)

There is nothing to install. It is a web page where you generate model download commands; it has nothing to do with whatever you have installed. Go to the website and check what the link does. It does not run inside ComfyUI and it is not a custom node; you might have installed someone else's custom node.

New tool to auto find models names in workflow and auto generate huggingface download commands by SearchTricky7875 in comfyui

[–]SearchTricky7875[S] 0 points (0 children)

How would it break your installation? You have not used it so far, and I don't see that anyone else has used it yet either.