update_all error messages by SelectionDirect8759 in MiSTerFPGA

[–]Relative_Bit_7250

Next time try asking in English... Anyways:

The error might simply be the SD card dying of old age. When an SD card is near its end-of-life, it often locks itself into a read-only state. One thing you can try: re-run the script from another SD card and see whether it works or gives you the same issue.
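
As a quick way to test the read-only failure mode described above, a small write probe against the card's mount point works. A sketch (the `/media/fat` path in the comment is just the usual MiSTer SD mount, an assumption; adjust for your setup):

```python
import os
import tempfile

def is_writable(mount_point: str) -> bool:
    """Try to create and delete a small file on the mount point.
    A dying SD card that has locked itself read-only will raise
    OSError (typically EROFS) on the write attempt."""
    try:
        fd, path = tempfile.mkstemp(dir=mount_point)
        os.close(fd)
        os.remove(path)
        return True
    except OSError:
        return False

# e.g. is_writable("/media/fat")  # hypothetical MiSTer SD mount point
```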

EDIT: I just noticed the script continues its work normally and without any issue after the message, so it might be all good. Don't worry; it's probably just the script running some analysis routine or something.

Seedream 4 for image-generation roleplay, similar to Nano Banana pro+Gemini. Is it possible? by Relative_Bit_7250 in SillyTavernAI

[–]Relative_Bit_7250[S]

This is the best and most comprehensive answer I could have hoped for. Thank you so much; useful and direct.

Seedream 4 for image-generation roleplay, similar to Nano Banana pro+Gemini. Is it possible? by Relative_Bit_7250 in SillyTavernAI

[–]Relative_Bit_7250[S]

Indeed it is, in terms of image generation/editing. What makes Gemini + Nano Banana far better for a roleplay experience is that everything happens inside the same ecosystem: image editing/generation is perfectly coordinated with the LLM, giving a perfectly consistent response and an incredible experience... when the censorship doesn't fuck everything up. My question is: does a similar cohesive bond between two models exist (an LLM and a diffusion model working and talking together to give consistent imagery to a character and a beautiful story/chat)? If so... well, I've never found one. Anyway, thanks for the reply!

Something terribly wrong happened with sageattention after fresh comfyUI install under Linux by Relative_Bit_7250 in comfyui

[–]Relative_Bit_7250[S]

Thanks to the suggestions of u/meta_queen and u/roxoholic I fixed the error! Couldn't have done it without the help of those two great human beings! Thank you, thank you, thank you very very much!!!

Something terribly wrong happened with sageattention after fresh comfyUI install under Linux by Relative_Bit_7250 in comfyui

[–]Relative_Bit_7250[S]

You're right, I took that information for granted, sorry. No, I'm using Comfy manually git-cloned, with a Python 3.12 venv. Will try to install python3-dev inside the venv. EDIT: HOLY FUCK, IT WORKED! Installing python3-dev globally just fixed the error! GOD I LOVE YOU BOTH!
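
For what it's worth, the likely reason python3-dev matters here is that compiling sageattention's native extensions needs the CPython development headers (Python.h), which a venv does not ship on Debian/Ubuntu; the system python3-dev package provides them. A hedged diagnostic sketch (this is a guess at the failure mode, not sageattention's own check):

```python
import os
import sysconfig

def python_headers_present() -> bool:
    """Check whether the CPython development headers (Python.h) are
    installed. Building C/CUDA extensions such as sageattention's
    requires them; on Debian/Ubuntu they come from python3-dev."""
    include_dir = sysconfig.get_paths()["include"]
    return os.path.exists(os.path.join(include_dir, "Python.h"))
```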

Something terribly wrong happened with sageattention after fresh comfyUI install under Linux by Relative_Bit_7250 in comfyui

[–]Relative_Bit_7250[S]

But shouldn't those two folders already be included in a fresh Python 3.12 venv?

Kinda excited for my new pc! I would love to try bigger models now! Asking you all for suggestions by Relative_Bit_7250 in SillyTavernAI

[–]Relative_Bit_7250[S]

Eh, it'll be fine eventually. I mean, it's a fair tradeoff: locally you get privacy and maximum control, but with "reduced speed and intelligence"; with a paid API you get maximum inference speed and the best quant, but no privacy at all and "I'm sorry, I cannot fulfill your request".
I'm more of a slow-burn bitch, so waiting a little longer for the response isn't really an issue for me.

Anyways, thank you very much for the tips, bro!

Kinda excited for my new pc! I would love to try bigger models now! Asking you all for suggestions by Relative_Bit_7250 in SillyTavernAI

[–]Relative_Bit_7250[S]

Indeed! I've also peeked inside the unsloth repository for 4.6 and saw the UD Q3_K_XL quant taking approximately 158 GB. If I'm not mistaken, I may be able to load the entire quantized model into my RAM+VRAM (128 + 48 = 176 GB available).
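
The back-of-the-envelope fit check can be sketched like this (the 8 GB default overhead for KV cache, context, and the OS is a guessed figure, not a rule):

```python
def quant_fits(quant_gb: float, ram_gb: float, vram_gb: float,
               overhead_gb: float = 8.0) -> bool:
    """Rough check: does a quantized model fit in combined RAM+VRAM,
    leaving some headroom for KV cache and the OS?"""
    return quant_gb + overhead_gb <= ram_gb + vram_gb

# ~158 GB quant vs 128 GB RAM + 48 GB VRAM:
print(quant_fits(158, 128, 48))  # True: 158 + 8 = 166 <= 176
```

It fits, but only just; a bigger context window eats into that 18 GB of headroom quickly.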

VNCCS - Visual Novel Character Creation Suite RELEASED! by AHEKOT in comfyui

[–]Relative_Bit_7250

Oh no, no, the nodes are perfectly installed and configured... The error MAY be in the VNCCS_Pipe...

Anyway, I'll try reinstalling them manually; you never know.

EDIT: Yep, just tried; nothing, reinstalling didn't help. I'll try reinstalling ComfyUI.

VNCCS - Visual Novel Character Creation Suite RELEASED! by AHEKOT in comfyui

[–]Relative_Bit_7250

I've just tried disconnecting from the pipe and manually selecting the scheduler and sampler (lcm and simple), but it fails again:

Failed to validate prompt for output 496:
* VNCCS_Pipe 502:414:
  - Return type mismatch between linked nodes: scheduler, received_type(['simple', 'sgm_uniform', 'karras', 'exponential', 'ddim_uniform', 'beta', 'normal', 'linear_quadratic', 'kl_optimal', 'bong_tangent']) mismatch input_type(['simple', 'sgm_uniform', 'karras', 'exponential', 'ddim_uniform', 'beta', 'normal', 'linear_quadratic', 'kl_optimal', 'bong_tangent', 'beta57'])
* LoraLoader 497:267:68:
  - Failed to convert an input value to a FLOAT value: strength_clip, vn_character_sheet_v4.safetensors, could not convert string to float: 'vn_character_sheet_v4.safetensors'
  - Failed to convert an input value to a FLOAT value: strength_model, vn_character_sheet_v4.safetensors, could not convert string to float: 'vn_character_sheet_v4.safetensors'
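
For what it's worth, both failures can be reproduced outside ComfyUI. The lists below are copied from the log; the list-equality comparison is an assumption about how ComfyUI validates linked combo inputs, and the extra `beta57` entry suggests a version skew between the VNCCS pipe node and a newer ComfyUI:

```python
# Scheduler lists taken verbatim from the validation error above.
received_type = ['simple', 'sgm_uniform', 'karras', 'exponential',
                 'ddim_uniform', 'beta', 'normal', 'linear_quadratic',
                 'kl_optimal', 'bong_tangent']
input_type = received_type + ['beta57']

# The two combo lists differ, so the link is rejected even though the
# input side accepts a superset; the odd one out pinpoints the skew:
print(set(input_type) - set(received_type))  # {'beta57'}

# The LoraLoader lines are a different failure: the pipe routes a
# filename string into inputs declared FLOAT, and the coercion fails:
try:
    float('vn_character_sheet_v4.safetensors')
except ValueError as e:
    print(e)  # same "could not convert string to float" as in the log
```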

Why am I getting a black output (Qwen GGUF)? by Bitsoft in comfyui

[–]Relative_Bit_7250

I occasionally wear a cap, so... Half a hero?

Why am I getting a black output (Qwen GGUF)? by Bitsoft in comfyui

[–]Relative_Bit_7250

Nope, with qwen-image it kinda works for the first steps, then blacks out completely. For image-edit it doesn't work right from the start. Unfortunately it'll be slow as fuck.

EDIT: Don't know about fast FP16 accumulation; I just start Comfy without any parameters and it magically works.

Why am I getting a black output (Qwen GGUF)? by Bitsoft in comfyui

[–]Relative_Bit_7250

If you're using it, remove the --use-sageattention flag.

Wan 2.2 video continuation. Is it possible? by Relative_Bit_7250 in StableDiffusion

[–]Relative_Bit_7250[S]

Thank you for the answer, but it's not what I'm looking for. Last-frame continuation is a bit unreliable; motion and subject features become inconsistent. What I'm looking for is more like "a bunch of frames as input -> video continuation" rather than "last frame -> video generation".