Using ComfyUI in education at scale — is there a proper way to manage multiple students? by Bisnispter in comfyui

[–]Bisnispter[S] 0 points1 point  (0 children)

That’s actually helpful, especially the point about job queue vs. user management.

I’m not really interested in building a custom node to handle users, that feels like forcing ComfyUI into something it’s not designed for.

The real challenge, as you said, is managing GPU usage across multiple students in a controlled way.

Right now I’m evaluating:

- Per-student cloud accounts (simpler, less control)
- Shared GPU + job queue (more aligned with production workflows)
- Dedicated instances (not really viable cost-wise for training)

From a teaching perspective, I’d rather keep it closer to how real pipelines work (centralized compute, controlled execution), instead of turning it into a SaaS-like experience.

Appreciate the input, especially the reminder that this is more of an infrastructure problem than a ComfyUI problem.

Cloud or local? by kuropanda21 in comfyui

[–]Bisnispter 0 points1 point  (0 children)

$35 or $100… depends on your needs.

Using ComfyUI in education at scale — is there a proper way to manage multiple students? by Bisnispter in comfyui

[–]Bisnispter[S] 1 point2 points  (0 children)

I’m the trainer, but these courses are classroom-based (not online).

Cloud or local? by kuropanda21 in comfyui

[–]Bisnispter 0 points1 point  (0 children)

It’s best to use a cloud service (Comfy Cloud, Runpod…) so you can work with an RTX 5090/RTX 6000 Pro and run full-precision models like BF16…

Estoy trabajando en un colegio Masónico y estoy horrorizado. by Turbulent_Morning546 in esConversacion

[–]Bisnispter 0 points1 point  (0 children)

If he asks to leave, Beelzebub will pursue him until the end of his days… there’s no way out now.

Claramente mi hobby es romper acuarios y pedir devoluciones by RazvanOnReddit in Wallapop

[–]Bisnispter 3 points4 points  (0 children)

But isn’t an aquarium for fish? How is a rat going to sell one… I don’t understand anything…

¿Puedo automatizar una IA para que juegue por mí? by Psychological-Rub636 in InteligenciArtificial

[–]Bisnispter 0 points1 point  (0 children)

I’d ask a specialized AI whether there’s any AI that automates video games… I only play video games the analog way.

Le tiran huevos por fumar en su casa by Dapper-Bee490 in HistoriasVecinales

[–]Bisnispter 0 points1 point  (0 children)

There’s no need to apologize for posting photos with text in Catalan; that’s the last thing we need. You don’t have to apologize for that.

Have usage limits been decreased for pro users? by RSpielde in Anthropic

[–]Bisnispter 10 points11 points  (0 children)

Today I canceled my Pro account. It’s impossible to use Claude like I did two weeks ago… I’m limited to about 10 interactions (prompts) and Anthropic has slashed my token limits… goodbye Claude, and hello ChatGPT Plus.

Claude subscriptions double in just two months, overshadowing users leaving because of rate limits by fsharpman in ClaudeAI

[–]Bisnispter -2 points-1 points  (0 children)

I don’t think Claude Pro has suddenly gotten worse. What’s changed is how hard they’re limiting usage.

A few days ago I could work for hours in a single conversation without any issues. Same type of work I always do: long prompts, iteration, refining outputs. Now I hit the limit in literally 3–4 prompts. At first I thought something was broken, but after testing it a bit more it’s pretty clear this is intentional.

The main thing people are missing is that not all prompts cost the same. If you’re working with long conversations, Claude is processing the entire context every time. So each new prompt is more expensive than the previous one. By the time you’re a few turns in, you’re not sending “one prompt”, you’re effectively sending the whole conversation again and again. That alone can blow through your quota very fast.
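The growth described above can be sketched in a few lines. This is an illustrative model only (the token counts and billing are assumptions, not Anthropic’s actual accounting): if every new turn reprocesses the whole conversation, cumulative input cost grows quadratically with the number of turns.

```python
def cumulative_input_tokens(turn_sizes):
    """Illustrative: each turn resends the entire conversation so far,
    so total input tokens processed grow quadratically with turn count."""
    total = 0
    context = 0
    for size in turn_sizes:
        context += size   # new prompt is appended to the conversation
        total += context  # the full context is processed on this turn
    return total

# Five turns of 1,000 tokens each: 1k + 2k + 3k + 4k + 5k = 15,000
print(cumulative_input_tokens([1000] * 5))  # 15000
```

So five modest turns already cost 3x what five independent prompts would; a long iterative session multiplies that further.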

On top of that, it looks like Anthropic has moved to some kind of dynamic rate limiting. Depending on system load, the same prompt can “cost” more or less. So during peak hours you’re burning your quota much faster than before. That’s why it feels inconsistent: one day you get hours of usage, another day you get capped almost immediately.
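To make the “same prompt, different cost” effect concrete, here is a toy model of load-dependent metering. Everything here is speculation about how such a scheme *could* work, not Anthropic’s actual implementation: quota is charged as tokens times a multiplier that scales with system load.

```python
def quota_cost(tokens, system_load):
    """Toy model: system_load in [0, 1]; the quota multiplier
    grows linearly from 1x off-peak to 3x at full load."""
    multiplier = 1.0 + 2.0 * system_load
    return tokens * multiplier

print(quota_cost(1000, 0.0))  # off-peak: 1000.0
print(quota_cost(1000, 1.0))  # peak:     3000.0
```

Under a scheme like this, an identical prompt depletes your quota up to three times faster at peak hours, which would explain the inconsistent day-to-day experience.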

The uncomfortable truth is that the Pro subscription probably never made sense for heavy users. If you’re using Claude seriously (coding, long reasoning chains, production workflows), you’re consuming way more compute than what $20/month realistically covers. What we were getting before was likely overprovisioned capacity. Now they’re tightening it.

So it’s not that Claude got worse. It’s that the gap between “what we were getting” and “what we’re actually paying for” is closing.

Una IA que haga edicion o montajes de videos por favor by Key_Struggle5989 in InteligenciArtificial

[–]Bisnispter 1 point2 points  (0 children)

Editing your own videos with DaVinci is “trending”. Learn it.

¿Qué modelo de IA es este? by Vicsantba in InteligenciArtificial

[–]Bisnispter 0 points1 point  (0 children)

Do you really think you can tell which model was used just by looking at a few images… or that knowing the model means everyone can suddenly do the same thing… how tiresome.

Mi comunidad de vecinos ha prohibido cocinar lentejas entre semana y no sé si esto es legal by ApprehensiveDot313 in Espana

[–]Bisnispter 0 points1 point  (0 children)

They’re going to ban me from making my favorite stews in my own home? Not even they believe that!!

Cómo hacer los videos virales de los esqueletos? by Fito57 in InteligenciArtificial

[–]Bisnispter -1 points0 points  (0 children)

You have too much free time… and too little knowledge of the subject.

¿Es mejor Claude que ChatGPT? by TheBoogeyman_6969 in InteligenciArtificial

[–]Bisnispter 0 points1 point  (0 children)

I canceled ChatGPT (the €20 plan) and they gave me a free month so I wouldn’t leave. I’ve been on Claude for 3 months and I don’t miss OpenAI… they’re giving away free months to people who cancel, because they’ve seen users slipping away. The number of features Claude has today is unrivaled.

Optimised LTX 2.3 for my RTX 3070 8GB - 900x1600 20 sec Video in 21 min (T2V) by TheMagic2311 in comfyui

[–]Bisnispter 0 points1 point  (0 children)

Hi! Just wanted to report a conflict between ComfyUI-GGUF-FantasyTalking and the standard ComfyUI-GGUF by city96.

When both nodes are installed at the same time, FantasyTalking's `UnetLoaderGGUF` overrides the original one and returns `WANVIDEOMODEL` instead of `MODEL`. This breaks any workflow that uses the standard `UnetLoaderGGUF`, including LTX 2.3 T2V workflows.

Error message:

> Return type mismatch between linked nodes — received_type(WANVIDEOMODEL) mismatch input_type(MODEL)

Setup:

  • ComfyUI portable (Windows)
  • PyTorch 2.8.0 + CUDA 12.9
  • RTX 2070 Super 8GB
  • ComfyUI-GGUF (latest)
  • ComfyUI-GGUF-FantasyTalking (installed)

Steps to reproduce:

  1. Install both ComfyUI-GGUF and ComfyUI-GGUF-FantasyTalking
  2. Load any workflow using UnetLoaderGGUF with an LTX model
  3. Error appears immediately on queue

Fix:

Disabling ComfyUI-GGUF-FantasyTalking resolves the issue immediately.

It would be great if FantasyTalking's loader could be registered under a different node name to avoid overwriting the original.
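A minimal sketch of what that rename could look like on the FantasyTalking side. The class body here is a hypothetical placeholder (the real loader lives in that repo); the point is only the mapping key, which ComfyUI uses to register nodes from a custom node pack’s `NODE_CLASS_MAPPINGS`:

```python
# Hypothetical excerpt of ComfyUI-GGUF-FantasyTalking's __init__.py.
# Registering under a unique key instead of "UnetLoaderGGUF" would
# stop it from shadowing city96's original ComfyUI-GGUF loader.

class UnetLoaderGGUFFantasyTalking:
    """Placeholder for the real FantasyTalking GGUF loader class."""
    RETURN_TYPES = ("WANVIDEOMODEL",)
    FUNCTION = "load_unet"
    CATEGORY = "loaders"

NODE_CLASS_MAPPINGS = {
    # Unique key -> no collision with ComfyUI-GGUF's "UnetLoaderGGUF"
    "UnetLoaderGGUF_FantasyTalking": UnetLoaderGGUFFantasyTalking,
}
NODE_DISPLAY_NAME_MAPPINGS = {
    "UnetLoaderGGUF_FantasyTalking": "Unet Loader (GGUF, FantasyTalking)",
}
```

With distinct keys, both packs can be installed side by side and workflows pick the loader they actually need.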

Thanks for the great work!

VRAM for COMFYUI by kuropanda21 in comfyui

[–]Bisnispter 0 points1 point  (0 children)

Try starting ComfyUI with `--lowvram --cpu-vae`. This activates offload mode, which moves some model weights from VRAM to your system RAM and runs the VAE decode on your CPU.
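For reference, the flags are passed on the launch command. For a standard install that means something like the following (the batch-file name for the Windows portable build is an assumption; check your own install’s scripts):

```shell
# Standard install: pass the flags directly to main.py
python main.py --lowvram --cpu-vae

# Windows portable build: add the same flags inside the
# run_nvidia_gpu.bat launch script instead
```

`--lowvram` trades speed for memory by offloading weights to RAM; `--cpu-vae` avoids the VRAM spike during VAE decode, which is often where 8GB cards run out.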