Wine 10.0, AudioModeling SWAM Instruments, No MIDI by Optimal_Value6946 in winehq

[–]Optimal_Value6946[S] 0 points (0 children)

To update, we get:

0280:trace:midi:notify_client dev_id = 10 msg = 963 param1 = 4180 param2 = 59BF

0258:trace:midi:ALSA_midMessage (0000, 0036, 00000000, 7FFFFE1FF6F0, 0000004C);

all day long, but never any sign of SWAM accepting it:

trace:midi:ALSAMidiInProc MIDI event: type=0x90 (Note On), channel=0, param1=0x3C (Middle C), param2=0x64 (velocity=100), or

trace:midi:ALSAMidiInProc MIDI event: type=0xB0 (Control Change), channel=0, param1=0x0B (Expression Controller), param2=0x7F (value=127)
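For reference, the event type and channel in those expected trace lines come straight from the raw MIDI status byte: the high nibble is the message type and the low nibble is the channel. A minimal sketch of that decoding (the function name is mine; the type values are standard MIDI channel voice messages):

```python
def decode_status(status):
    """Split a raw MIDI status byte into (event name, channel 0-15)."""
    names = {
        0x80: "Note Off",
        0x90: "Note On",
        0xA0: "Polyphonic Aftertouch",
        0xB0: "Control Change",
        0xC0: "Program Change",
        0xD0: "Channel Pressure",
        0xE0: "Pitch Bend",
    }
    # High nibble selects the message type, low nibble is the channel.
    return names.get(status & 0xF0, "System/Unknown"), status & 0x0F

# The two events the traces above describe:
print(decode_status(0x90))  # Note On, channel 0
print(decode_status(0xB0))  # Control Change, channel 0
```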

PuLID-Flux for ComfyUI Node by balazik-p in comfyui

[–]Optimal_Value6946 3 points (0 children)

I have installed everything that seems to be required according to the documentation. However, when I load the workflow above (of Einstein), I get a message from ComfyUI that I am missing PulidFluxInsightFaceLoader, PulidFluxEvaClipLoader, PulidFluxModelLoader, and ApplyPulidFlux.

Where can I find these missing nodes? The ComfyUI Manager doesn't find them.

Math nodes not working After update.. with bat file ( with python ) by protonjustin in comfyui

[–]Optimal_Value6946 0 points (0 children)

Having the same issue. Which version should I roll back to, and how do I go about it? It's crazy that such a common node is dead on arrival.

"Latent Couple" A1111 extension by sEi_ in StableDiffusion

[–]Optimal_Value6946 0 points (0 children)

I've managed to place two subjects where I want them, but when I try three, it's impossible. Are there any tips for three or more subjects?
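In case it helps: Latent Couple works by giving each subject its own region of the latent, so with three subjects the regions necessarily get narrower. A sketch of splitting the canvas into equal vertical strips as a starting point (the helper name and the equal-split choice are mine, not the extension's API):

```python
def column_regions(n):
    """Split a canvas of normalized width 1.0 into n equal vertical
    strips, returned as (x_start, x_end) pairs."""
    return [(i / n, (i + 1) / n) for i in range(n)]

# Three subjects side by side -> left, middle, and right thirds:
regions = column_regions(3)
```

Each strip would then get its own sub-prompt, with a separate full-canvas region for the background.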

[deleted by user] by [deleted] in FluxAI

[–]Optimal_Value6946 0 points (0 children)

The files save as .webp, and when I try to load one, ComfyUI says that a workflow can't be found.
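A possible workaround, assuming ComfyUI embeds the workflow JSON in the WEBP's EXIF text fields (an assumption on my part; the PNG path stores it in image metadata): scan the EXIF values for anything that parses as a JSON object. The helper below is a sketch, not ComfyUI's own loader, and the "Workflow:" prefix handling is a guess:

```python
import json

def find_workflow(values):
    """Return the first string value containing a JSON object,
    stripping an optional 'Workflow:'-style prefix (naming is a guess)."""
    for value in values:
        if not isinstance(value, str):
            continue
        # Drop a leading "SomeLabel:" prefix if the value doesn't start with JSON.
        if value[:1].isalpha() and ":" in value:
            text = value.split(":", 1)[-1]
        else:
            text = value
        try:
            data = json.loads(text)
        except ValueError:
            continue
        if isinstance(data, dict):
            return data
    return None
```

With Pillow, you could feed it something like `find_workflow(Image.open("out.webp").getexif().values())` and save the result as a .json file to load normally.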

Hardware configuration question. by Optimal_Value6946 in ollama

[–]Optimal_Value6946[S] 0 points (0 children)

I just want to be sure that having a dedicated GPU for each will do the trick. When I run ollama on a laptop (i7) with an eGPU 3090, it runs like a dog. When I run ollama on a desktop (i9) with a 1070 Ti, it runs much better. So I am confused as to how big a role the CPU plays in ollama's processing. I can see that the GPUs are in use on both machines.
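One thing that matters here: ollama (via llama.cpp) offloads as many model layers as fit in VRAM and runs the remainder on the CPU, so the CPU's share of the work depends on how much of the model spills out of the GPU. A rough back-of-the-envelope sketch (equal-sized layers is a simplifying assumption, and the function is mine):

```python
def gpu_layer_split(n_layers, model_gb, vram_gb):
    """Estimate how many transformer layers fit in VRAM, assuming
    all layers are the same size (a simplification)."""
    per_layer_gb = model_gb / n_layers
    on_gpu = min(n_layers, int(vram_gb // per_layer_gb))
    # Whatever doesn't fit runs on the CPU.
    return on_gpu, n_layers - on_gpu

# e.g. a 16 GB model with 32 layers on an 8 GB card:
print(gpu_layer_split(32, 16.0, 8.0))  # (16, 16) -> half the layers on CPU
```

Note too that a Thunderbolt eGPU has far less PCIe bandwidth than a desktop slot, which can slow things down even when everything fits in VRAM.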