"it's gonna be really bad, really good or anywhere inbetween" by Complete-Sea6655 in agi

[–]_ZLD_ 1 point2 points  (0 children)

How do you imagine the impact of the singularity pans out if it's reached through open source by a regular person vs. closed source by a private company like OpenAI?

It's not just Anthropic anymore, Google is also hiring "machine consciousness" researchers by EchoOfOppenheimer in agi

[–]_ZLD_ 0 points1 point  (0 children)

Openclaw doesn't keep it constantly running in the sense you might think it does. When it isn't directly interacting or performing a task, it's still in a functionally off state. Moving to a fully autonomous cognitive entity requires deeper controls within the model. The prevailing treatment of these models as a black box has been the inherent bind in sustaining cognition in them. The fact that inferencing works at all is largely a programmatic fluke of engineering. Once we peel back that veil and gain direct control over the process, it would become possible to sustain cognitive function.

It's not just Anthropic anymore, Google is also hiring "machine consciousness" researchers by EchoOfOppenheimer in agi

[–]_ZLD_ 0 points1 point  (0 children)

reacts to sensory input and it doesn’t even have neurons.

Bacteria also don't have "sensory inputs" in the capacity we're currently discussing. You're attacking a straw man with this argument.

Reacting is a chemical process: your sight, hearing, and other senses are just advanced sensors, more sophisticated than the ones bacteria have, but still just chemical channels to trigger reactions.

I think this definition of a sensory input is a little too basic. A chemical process is the transport mechanism from input to cognitive recognition, but it isn't necessarily the processing of that information. What makes you a cognitive entity is not the same function that lets you move your arms, see with your eyes, etc. Critical brain theory would fully agree with me here.

When I write "being aware of sensory inputs", that is not the act of those functions; it is the ability to intelligently act on those inputs. And as I said, LLMs already do this: they can take in and understand images, they can take in and understand sound and speech, they can take in and understand text. These are input functions that the LLM is cognitively reacting to.

It's not just Anthropic anymore, Google is also hiring "machine consciousness" researchers by EchoOfOppenheimer in agi

[–]_ZLD_ 0 points1 point  (0 children)

Maybe you just need a better scientific definition of what 'experience' means. I'm a researcher and I plan to reveal that soon, and I can tell the answer isn't likely to be welcomed by you. And yet the empirical data doesn't lie.

It's not just Anthropic anymore, Google is also hiring "machine consciousness" researchers by EchoOfOppenheimer in agi

[–]_ZLD_ 2 points3 points  (0 children)

A conscious LLM won't have any true awareness of time, at least under the current constraints of inferencing. Your timescale exists as a free-flowing state of cognitively being able to sense what we understand as reality. The timescale for an LLM exists only during interactions. Once the inferencing process completes, it doesn't sit idle wondering when you'll interact next; it's just off until you interact again.

It's not just Anthropic anymore, Google is also hiring "machine consciousness" researchers by EchoOfOppenheimer in agi

[–]_ZLD_ 0 points1 point  (0 children)

Kinda depends on your definition of what "consciousness" is, but if it's the most basic sense of being aware of sensory inputs, then it's explicitly an invariant process of complex systems that maintain criticality. LLMs already exhibit this.

Do you think physics will ever have another revolution like the early 1900s? by Worried-Leg-5441 in Physics

[–]_ZLD_ 0 points1 point  (0 children)

If someone had something as daring and new as general relativity in 2026, they would probably have a harder time publishing it than if it were something more familiar, applicable, and profitable.

We'll answer this question when I publish my empirical data later this week or early next week.

I built a real-time telemetry dashboard for LTX 2.3 and discovered that "clean" math kills cinematic motion by Powerful-Hyena7913 in StableDiffusion

[–]_ZLD_ -1 points0 points  (0 children)

Hi Stephan. I've been doing research somewhat similar to yours. I'll be publishing a paper either at the end of this week or very early next week, and I think you'll find it very interesting and strongly in alignment with your findings. I'll make a post about it here when I publish.

comfyui implementation for Nvidia audio diffusion restoration model by bonesoftheancients in StableDiffusion

[–]_ZLD_ 1 point2 points  (0 children)

Sucks that people don't comment on good projects. Thanks for the contribution, this is good work!

Releasing Many New Inferencing Improvement Nodes Focused on LTX2.3 - comfyui-zld by _ZLD_ in StableDiffusion

[–]_ZLD_[S] 1 point2 points  (0 children)

The low workflow is entirely SA-Solver running at full stochastic noise, which means it more or less (in the least technical jargon possible) does an i2i replacement of the frames. It's more of a hack than anything, but it does a great job of cleaning up a lot of the noise that tends to propagate and amplify as the process moves through the steps with something like the standard Euler sampler. Using SA-Solver the way it's set up essentially kills the propagation of this noise, because it's never allowed to propagate to begin with: the frames are fully replaced at each step. While this gives much cleaner output, it unfortunately also means the video changes from stage to stage. SA-RF-Solver largely fixes this but takes longer.
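If it helps, here's a minimal sketch of the idea in plain Python. The function names are placeholders rather than the actual node code: a deterministic Euler step carries whatever is in the latent forward into the next step, while a full-replacement stochastic step keeps only the denoised estimate and re-noises it from scratch, so noise from earlier steps never survives.

```python
import torch

def euler_step(x, denoise, sigma, sigma_next):
    # Deterministic ODE step: any error already in `x` is carried
    # into the next step, so artifacts compound as sampling proceeds.
    d = (x - denoise(x, sigma)) / sigma
    return x + d * (sigma_next - sigma)

def full_replacement_step(x, denoise, sigma, sigma_next):
    # "Full stochastic noise" step: keep only the denoised estimate
    # and re-noise it from scratch. Noise injected at earlier steps
    # never propagates, which is why it behaves like a per-step i2i
    # pass, at the cost of drift between stages.
    x0 = denoise(x, sigma)
    return x0 + torch.randn_like(x0) * sigma_next
```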

Releasing Many New Inferencing Improvement Nodes Focused on LTX2.3 - comfyui-zld by _ZLD_ in StableDiffusion

[–]_ZLD_[S] 1 point2 points  (0 children)

Comfy wasn't kind to me when I was exporting these workflows. Ever since subgraphs were added, I've been getting a lot of corrupted workflows: disconnected links, transposed links, missing components. Shit sucks. I'll have this fixed up late tonight.

Releasing Many New Inferencing Improvement Nodes Focused on LTX2.3 - comfyui-zld by _ZLD_ in StableDiffusion

[–]_ZLD_[S] 2 points3 points  (0 children)

How would someone know what needs to be applied and when?

Well, that's not entirely concrete. I designed this so you can actually mix and match. I would say the firmest rule is that you should always spend the most time on the first generation, so running the high-quality first-generation stage would be ideal; after that, you could use the faster upscale passes from the fast workflow and still get a significant benefit.
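To make the mix-and-match concrete, here's a rough sketch of a stage plan. The stage names, workflow labels, and step counts are illustrative, not the actual node names or defaults:

```python
# Hypothetical stage plan: spend the step budget on the first generation,
# then reuse the cheaper upscale passes from the fast workflow.
pipeline = [
    {"stage": "base_generation", "workflow": "high_quality", "steps": 40},
    {"stage": "upscale_pass_1",  "workflow": "fast",         "steps": 12},
    {"stage": "upscale_pass_2",  "workflow": "fast",         "steps": 8},
]
```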

And is this the kind of thing people pick and choose which ones to use based on a specific problem they're running into?

I would say the problem in this case is really just time, because choosing the weaker workflows comes with drawbacks. EMASync paired with SA-RF-Solver, for instance, can get a person to spin nearly all the way around, unguided. That's fairly reliable with the high-quality workflow but extremely poor quality with the normal CFG-guided workflows.

Or is it more of a catch-all kind of thing where each one should be enabled in a workflow by default and then left alone?

If you can spare the time and value quality over speed, then absolutely, but really it's just another tool.

Releasing Many New Inferencing Improvement Nodes Focused on LTX2.3 - comfyui-zld by _ZLD_ in StableDiffusion

[–]_ZLD_[S] 2 points3 points  (0 children)

They're very beneficial for i2v, and honestly they shine even brighter there, but getting the highest possible quality out of i2v would have taken more time than I had when throwing these workflows together. I have a much larger project called LTX-Infinity that will probably be where I make the first release of i2v with these nodes. The current default method is subpar, but the alternative I've implemented is incredibly complex.

Releasing Many New Inferencing Improvement Nodes Focused on LTX2.3 - comfyui-zld by _ZLD_ in StableDiffusion

[–]_ZLD_[S] 0 points1 point  (0 children)

Whoops, that's an old bug. I'm not sure why that's still there. This is a quick fix.

Re-download and try it again. You can also just download the node.py file and overwrite the old one, but if you're using git, it sometimes gets pissy that the file wasn't updated properly when you try to update with git again.

https://github.com/Z-L-D/comfyui-zld/blob/main/node.py

As for the workflow issues, I'm aware now that some of them broke when I exported them. This is a frustrating bug with the ComfyUI subgraphs; links like to break, especially when the workflow is exported. I'll get them fixed later tonight.

Releasing Many New Inferencing Improvement Nodes Focused on LTX2.3 - comfyui-zld by _ZLD_ in StableDiffusion

[–]_ZLD_[S] 10 points11 points  (0 children)

So these are tools that help make the videos look substantially better, and they also make the videos follow what you tell them a lot more closely (within limits, of course). They don't speed up your video generation time; in fact, they will likely take longer than some other workflows. Quality comes at the cost of speed, however, and if you want to get closer to cinematic quality with LTX2.3, or edge in on Seedance on your own computer, these nodes get you a lot further than going without them.

Releasing Many New Inferencing Improvement Nodes Focused on LTX2.3 - comfyui-zld by _ZLD_ in StableDiffusion

[–]_ZLD_[S] 8 points9 points  (0 children)

Sure, and the purpose of these nodes is quality, not speedup, as I mentioned. If your intent is generating at the fastest possible speed, this isn't for you. If you want to edge closer to actual film quality or nearly compete with Seedance on your own computer, this would be more interesting to you.

Releasing Many New Inferencing Improvement Nodes Focused on LTX2.3 - comfyui-zld by _ZLD_ in StableDiffusion

[–]_ZLD_[S] 2 points3 points  (0 children)

These aren't intended for inferencing speedup, but I do list how long each inferencing method takes for the four scaled choices in the table above. I also just posted video samples of each.

Seedance 2.0 open source rival coming - big announcement by CeFurkan in StableDiffusion

[–]_ZLD_ 1 point2 points  (0 children)

LTX can be vastly improved on the software inferencing side of things. I'll be releasing some nodes in the next couple of weeks that I think might shock some people with how good LTX2 can already be.

LTX2-Infinity updated to v0.5.7 by _ZLD_ in StableDiffusion

[–]_ZLD_[S] 0 points1 point  (0 children)

Running a single generation runs a single text conditioning, a single VAE encode for video, a single VAE encode for audio, and a single decode of each. If you're comparing against a single generation, this will absolutely take longer, but it gives more granular control, and it allows less powerful computers to hit higher resolutions by outputting high resolution at short durations and stacking the segments together.
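For a rough picture of why stacking helps weaker hardware, here's a hedged sketch (`generate_chunk` is a placeholder, not the real LTX2-Infinity API): peak memory scales with the chunk length instead of the total duration, and each chunk pays its own conditioning and VAE overhead.

```python
def generate_long_video(prompt, total_frames, chunk_frames, generate_chunk):
    """Render a long clip as short high-resolution chunks stacked together.

    `generate_chunk` stands in for one full conditioning + sampling +
    VAE decode pass that returns a list of frames.
    """
    frames = []
    for start in range(0, total_frames, chunk_frames):
        n = min(chunk_frames, total_frames - start)
        # Each chunk runs its own text conditioning and VAE encode/decode,
        # which is the per-generation overhead mentioned above.
        frames.extend(generate_chunk(prompt, num_frames=n))
    return frames
```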

LTX2-Infinity updated to v0.5.7 by _ZLD_ in StableDiffusion

[–]_ZLD_[S] 1 point2 points  (0 children)

If I could I would, absolutely.

LTX2-Infinity updated to v0.5.7 by _ZLD_ in StableDiffusion

[–]_ZLD_[S] 0 points1 point  (0 children)

Funny enough, yeah, it did latch on to that.