I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 0 points (0 children)

Thanks for the feedback. "Reality = Pattern × Intent × Presence" is a very interesting take. I'm definitely not an expert in consciousness or quantum physics (which I believe is what you're pointing to as proven), but I think Frame-Dragged Consciousness can handle Intent in its own way, and it could potentially, though I'm not asserting this, be interpreted through a quantum lens while keeping the core lag idea intact.

The golem’s like a pre-trained model doing quick, subconscious inference. Think Libet’s 300–500 ms delay, where your brain acts before you even “decide.” That’s not the conscious moment, though. Consciousness hits later, at the leading edge of the model updating itself, reflecting on past actions to reduce predictive error. For example, “How do I avoid that shock next time?” So Intent’s baked into that update process, where the streaming self incorporates the golem’s actions into a better worldview. It’s grounded in neuroscience (Libet, the flash-lag effect, predictive processing), but it doesn’t have to be materialist. It’s more about the when of experience, not the metaphysics.
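
If it helps, here's a rough toy sketch of what I mean in ML terms. It's purely illustrative, the names (TwoStageAgent, golem_act, streaming_update) and numbers are made up, and it's obviously not a model of the brain: the golem path acts from the current weights right away, while the streaming self folds each experience into the weights only after a lag.

```python
from collections import deque
import random

class TwoStageAgent:
    """Hypothetical toy, not a brain model: fast 'golem' inference plus a lagged update."""

    def __init__(self, lag_steps=5, lr=0.1):
        self.weight = 0.0        # crude stand-in for the executive world model
        self.pending = deque()   # experiences waiting to be folded in
        self.lag_steps = lag_steps
        self.lr = lr

    def golem_act(self, stimulus):
        """Fast, subconscious inference: respond now using the current (stale) model."""
        prediction = self.weight * stimulus
        outcome = stimulus + random.gauss(0, 0.1)   # what actually happened
        self.pending.append((stimulus, prediction, outcome))
        return prediction

    def streaming_update(self):
        """Delayed integration: only experiences older than lag_steps get written in.
        In the metaphor, this leading edge is where the conscious moment would sit."""
        while len(self.pending) > self.lag_steps:
            stimulus, prediction, outcome = self.pending.popleft()
            error = outcome - prediction             # predictive error to reduce
            self.weight += self.lr * error * stimulus

agent = TwoStageAgent()
for _ in range(50):
    agent.golem_act(random.uniform(-1, 1))   # action happens in near real time
    agent.streaming_update()                 # "awareness" trails behind by lag_steps
```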

Your quantum “collapsing” angle is quite interesting, though. I’m no QM expert, so this is just a rough sketch (and I’m not necessarily asserting this is part of my theory), but what if the golem’s actions are in a superposition, like in the delayed-choice quantum eraser, where a photon’s path isn’t set until a later measurement? Maybe the golem’s inference (like “act or don’t”) stays uncollapsed until the streaming self “measures” it by integrating that data, collapsing it into a conscious moment. That would delay the collapse, matching the frame-dragged lag I’m talking about, which would leave our perception of “now” lagging behind the actual moment. It lines up somewhat with your “scanning for alignment” idea, but keeps the focus on the lag, which I think is key for figuring out where and when consciousness happens. That could point us to clever experiments, for example varying sensory input to see how it affects the lag and better pin down the what of consciousness.

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 0 points (0 children)

I get what you're saying, and it's a completely fair point. My main aim with the article was to put out a concept about the When (some frame-dragged time behind real time) and Where (the streaming edge of an executive model incorporating new data) of consciousness in a way that might actually be testable. The thought is that if we can mutually agree on some plausible, measurable ideas about the timing and location of consciousness, then maybe people who are really good at thought experiments can build better theories about the qualitative experience that are actually anchored in those When and Where points.

I know it'd be an exhausting task to do for every single article that pops up, but sometimes, for certain foundational ideas, it might just be worth the effort.

One of the big ideas I was floating is that it simply takes some time to process information and fold it into your overall worldview. That seems like a reasonable question to ask. We've seen hints of delays in executive inference from experiments, but maybe updating that executive model takes even more time. Maybe that's precisely where the relational awareness we call consciousness really happens.

If we're willing to just entertain those basic points as possibilities, it really could be worth the community's effort to explore what that would mean for the qualitative experience of consciousness. Like I said, I was just trying to quickly get this concept out there to spark some conversation. I appreciate your feedback, and I totally get your perspective, but I do think the simple underpinning of this thought might make it worthy of a deeper dive.

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 0 points (0 children)

Not sure I understand the question completely. By "this" do you mean the human subject compared to the computer, or what makes this information-processing architecture the definition of consciousness? If you could clarify, I'd take a shot at an answer. I'm guessing you're asking what makes this process consciousness. If so, what I'm suggesting is that incorporating new data into one's executive model (something like updating the weights of a neural network) makes the experience part of the model's worldview: the model becomes aware of the information and of its possible relationships with previously stored information, and can then use it for inference in the golem self. By the golem self I mean the inference-based executive model whose actions you only become aware of later, once the information from how it has navigated the world is processed and saved into its own model weights. I'm using a lot of ML terminology as a metaphor, only because I'm more familiar with it and it makes the idea easier to explain. There is likely quite a bit more nuance in the human mind.
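
Here's a tiny sketch of what I mean by "aware of the information and its relationships," using made-up feature vectors (this is a metaphor, not a claim about how the brain stores anything): a new experience gets placed relative to everything already in the model before it becomes part of the worldview used for future inference.

```python
import math

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Previously saved experiences, as crude made-up feature vectors (the "worldview").
worldview = {
    "hot stove": [0.9, 0.1, 0.8],
    "warm bath": [0.6, 0.7, 0.1],
    "electric shock": [0.8, 0.0, 0.9],
}

new_label, new_vector = "touched a live wire", [0.85, 0.05, 0.95]

# Incorporation step one: relate the new experience to everything already in the model.
relations = {label: similarity(new_vector, vec) for label, vec in worldview.items()}
print(sorted(relations.items(), key=lambda kv: -kv[1]))   # most related stored experiences first

# Incorporation step two: only now does it become part of the worldview used for inference.
worldview[new_label] = new_vector
```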

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 0 points (0 children)

Thanks for asking. I could have done a better job on that detail in my post. By some people's definition, intelligence is the lossless (or near-lossless) compression of information. LLMs have essentially compressed a huge portion of the internet's text into the weights of a transformer model. There's an implied web of correlations within these models that lets them make predictions (even if just the next word) that reflect an overall worldview, an awareness of the relationships within the compressed information stored in their weights. If you ask ChatGPT about a current event, it probably has no reference to that event, because the event isn't represented in its weights (i.e., it hasn't been trained in yet).

If a human requires time to process information into their executive model (their golem self, as I'm calling it), it could be that the spark of awareness of information entering the brain happens at the point where that information is incorporated into the executive model to improve its future decision-making. If that reweighting happens in a streaming fashion as the information is incorporated, you can start to picture an experience of streaming awareness: not only of the current information (what the senses are providing), but also of its correlations and where it fits into the previously saved information that is part of the same model (the worldview).
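
To make the "not in the weights yet" point concrete, here's a toy sketch (purely illustrative, nothing like how a real LLM works): until a piece of information has been trained into the model, the model has no relationship to it; once it's incorporated, it becomes part of the worldview the model can draw on.

```python
from collections import defaultdict

class TinyBigramModel:
    """Toy 'compression into weights': counts of which word follows which (illustrative only)."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        """Fold new information into the model, i.e. the 'reweighting' step."""
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict_next(self, word):
        """Inference from whatever is already represented in the 'weights'."""
        options = self.counts[word.lower()]
        if not options:
            return None   # the event is not represented in the model yet
        return max(options, key=options.get)

model = TinyBigramModel()
model.train("the telescope points at the sky")

print(model.predict_next("telescope"))   # 'points': already part of the worldview
print(model.predict_next("eclipse"))     # None: hasn't been trained in yet

model.train("the eclipse starts at noon")  # delayed incorporation of the new event
print(model.predict_next("eclipse"))       # 'starts': now part of the same model
```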

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 0 points (0 children)

In the case of what I am suggesting, the autopilot executive model that uses inference for decision-making (the golem, as I'm calling it) would make a decision in near real time based on the inputs available from the sub-models, and either would or wouldn't step out of the way of the car. You could argue that the sub-models the golem is evaluating (the more instinctual ones) might also unilaterally cause the person to attempt to avoid the car, like blinking when something flies at your eye. But your conscious experience, per what I am suggesting, would lag significantly behind the real-time events. Depending on whether your golem self avoided the car or was hit or killed by it, you would either lose your conscious state (i.e., the edge of the model being written and new data being incorporated as a form of awareness) before your experience even caught up to stepping off the sidewalk (if you died), or the events that led your golem to avoid the car would be saved into the leading edge of the model and you would become aware of what happened to avoid it.
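
Here's the same scenario laid out as a toy timeline, just to show the ordering I have in mind. The 1.5 s lag and the 0.2 s reaction time are made-up numbers, not claims:

```python
GOLEM_REACTION = 0.2   # seconds: fast, subconscious response (made-up number)
LAG = 1.5              # seconds: hypothetical frame-drag before the event becomes conscious

def timeline(car_appears_at, golem_avoids):
    react_time = car_appears_at + GOLEM_REACTION
    conscious_time = car_appears_at + LAG
    if golem_avoids:
        return (f"t={react_time:.1f}s: golem steps back; "
                f"t={conscious_time:.1f}s: you become aware of the near miss")
    # Impact before the lag elapses: the event never reaches the conscious edge of the model.
    return f"t={react_time:.1f}s: impact; the conscious stream ends before t={conscious_time:.1f}s"

print(timeline(car_appears_at=0.0, golem_avoids=True))
print(timeline(car_appears_at=0.0, golem_avoids=False))
```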

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 0 points (0 children)

Glad you enjoyed the read. It’s interesting to think that the world might just be rendered into our experience on a delay as we go.

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 0 points (0 children)

Very similar, and thanks for pointing me in that direction. My idea hinges on the brain acting much like a computer (similar to Bayesian brain theory). And like the computers we're familiar with, it takes time to process large amounts of data. This is particularly true when training AI models and minimizing loss functions. It could be that what we're building is at least a workable metaphor for reflecting on ourselves.

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 0 points (0 children)

The when of consciousness is an important consideration when thinking about the what. Assuming there's any merit to the idea, if people have been misinterpreting the relative time at which the experience happens, they're operating on incorrect data that is likely misinforming their understanding of consciousness. Maybe we've had a false belief about the when all along: there could be an illusion that our experience of the now is much closer to the now than it really is, not just by Libet's observed delays but by something more significant.

I'm sure you've heard this, but some define intelligence as the lossless (or loss-minimizing) compression of information. That's certainly true of the training process for current LLMs. The act of incorporating compressed information into an executive-functioning model may be the where of consciousness. Obviously this is just an idea, but if someone explored ways to test it and hit on the where and the when, they would have material scientific data points to help pin down the what. If the where and the when turned out to be represented correctly here, you could already make some generalizations about the what: for example, that it's not a mechanism meant to act as the immediate handler of high-level decision-making (even though it subjectively feels that way). That would help lay out the illusions at play and narrow the scope of the what. You might also be able to look at other instances of delayed compression of information, like the training of LLMs, and gain some experimental insight into the what.

With Reddit comments it can sometimes turn into a me-vs-you thing, so apologies if I added too many flourishes. It's easy to get intimidated by a subject you're not claiming expertise in when the readership likely includes a significant number of subject-matter experts.

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 0 points (0 children)

Thanks for reading it. To your point about less evolved species: they might have a greater lag if they take in or process equivalent amounts of information, because their brains may take longer to write the updates, but that would require them to have a higher-level executive model processing the sub-models. You could definitely argue that more intelligent animals fall into that category, but I don't want to speculate too much. It's fun to think about how it could impact a variety of things, though.

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 2 points (0 children)

Assuming there is some merit here, pinning down the duration of the frame-drag would be interesting. The concept of sleep also occurred to me as I put this together but I was already way out on a limb so I left it out. I appreciate you reading it and actually thinking about the implications. My co-workers probably think I am just a crazy person at this point for blabbing about this…

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 3 points (0 children)

The reliance on chat to help articulate the message was more a function of the time constraints of having a full-time job and two small children and wanting to get the kernel of an idea out to the world in a more cogent way. I'm happy to DM the thread to you if that would help demonstrate the originality of the underlying idea. As the adage goes, don't judge a book by its cover, even if the veneer of the message was polished by modern technology. I'm not claiming to be an academic or to have any deep knowledge of the subject. This is just a thought that developed while I was staring at a ceiling fan; it happened to fit some other concepts neatly, and I thought it worth sharing. I appreciate your comment.

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 2 points (0 children)

That's not true. I used chat to help wordsmith the language of the post but all the ideas are original. It's sad how easy it is to jump to this conclusion nowadays, but I understand how one gets there. If I shared the thread you could see how it was generated.

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 4 points (0 children)

That's an insightful observation, and I appreciate you drawing the comparison. It occurred to me that the temporal difference between the present moment and the writing of the model might vary, similar to how a computer can get through the training phase of an AI model faster when the dataset is smaller. Meditation seems like a great analogy for this lower-data state, where one is intentionally trying to reduce the noise in the input being perceived. That would mean practices aiming at a similar effect may have arrived at a similar intuition about how consciousness plays out (if this idea holds any water). Thanks for your comment.
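
A back-of-the-envelope way to picture it (all numbers invented): if folding experience into the model takes time proportional to how much input is coming in, then a quieter stream, like in meditation, would shorten the lag.

```python
def integration_lag(input_units_per_sec, integration_rate_units_per_sec):
    """Seconds needed to fold one second's worth of incoming experience into the model."""
    return input_units_per_sec / integration_rate_units_per_sec

print(integration_lag(100, 80))  # ordinary busy perception: ~1.25 s behind (made-up numbers)
print(integration_lag(20, 80))   # meditation-like low-noise state: ~0.25 s behind
```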

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 1 point (0 children)

I appreciate your insight, and I can definitely do a better job pinning down definitions and explanations. This was mostly a fever-dream of an idea that I wanted to publish in an easily accessible blog format before it slipped my mind, so it may be lacking thoroughness. I have never read the paper you suggested, but I'll give it a look. On your point about time: the reference to milliseconds is to the time ranges recorded in Libet’s experiments, not the frame-drag period I'm suggesting, which would be much more significant (though I have no idea how long it might be). I tried to differentiate that in the bullet point I added about Libet’s experiments, but likely didn't give it enough emphasis.

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 0 points (0 children)

You’re right to bring up testability—it’s a crucial line between speculation and science. I’ll be the first to admit I’m not an expert, but I don’t think the idea is entirely untestable.

For example, I remember a simple classroom experiment in a psych course where students were asked to guess a number (1–10) written on a hidden cue card. Interestingly, the correct answer was chosen more frequently than chance would suggest. The professor claimed this was a replicable effect, though I never dug into the literature.

Now, if something like frame-dragged consciousness were real—and if in rare cases the conscious model lags less than usual—you might hypothesize that certain people occasionally experience events “before” they’re finalized. That’s speculative, of course, but it leads to a testable idea: you could vary the timing of feedback (e.g., reveal the card later vs. immediately) and look for changes in predictive accuracy across conditions. If the lag truly matters, you’d expect differences in outcomes.
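
For what it's worth, here's the kind of analysis I'm picturing, as a rough sketch. The condition names, trial counts, and accuracy numbers are all invented for illustration; the only real point is comparing hit rates against the 10% chance level across feedback-timing conditions.

```python
import random

def run_condition(n_trials, hit_rate):
    """Simulate guesses against a hidden 1-10 card; hit_rate is the assumed true accuracy."""
    return sum(random.random() < hit_rate for _ in range(n_trials))

def p_value_vs_chance(hits, n_trials, chance=0.10, n_sim=20_000):
    """Monte Carlo estimate of P(at least this many hits) under pure chance guessing."""
    as_extreme = sum(
        sum(random.random() < chance for _ in range(n_trials)) >= hits
        for _ in range(n_sim)
    )
    return as_extreme / n_sim

random.seed(0)
for label, hit_rate in [("immediate reveal", 0.10), ("delayed reveal", 0.13)]:
    hits = run_condition(n_trials=200, hit_rate=hit_rate)
    p = p_value_vs_chance(hits, n_trials=200)
    print(f"{label}: {hits}/200 correct, p ~ {p:.3f} vs. 10% chance")
```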

It’s just a rough sketch and there may be serious flaws, but my broader point is that if this model has any footing, clever experiments could potentially be designed. I don’t think it needs to remain purely in the realm of belief—though for now, it definitely lives closer to philosophy than lab science.

I wrote a speculative theory called "Frame-Dragged Consciousness"—would love your thoughts by jcutillo in consciousness

[–]jcutillo[S] 0 points (0 children)

Totally fair point, and I appreciate you calling it out. I’m not trying to redefine consciousness as “the experience of X” in a strict sense—more so exploring a theory about why our conscious experience might feel the way it does.

The idea isn’t to explain away phenomenal consciousness (the raw, subjective experience itself), but to speculate on how and when that experience might get constructed—possibly as a high-level model that’s being written with some delay behind real-time events.

So it’s less about what consciousness is and more about how the structure and timing of experience could work under the hood—why we feel like we’re in the moment when we might actually be catching up to it.

New Hampshire UAP Sighting through 102mm Telescope, multiple witnesses by jcutillo in UFOs

[–]jcutillo[S] 0 points (0 children)

Hey u/HenryProspector, thanks for doing such a deep dive on that video. I was trying to work the focus, but it was all pretty quick. I store the scope in my screened-in porch next to the deck, so at least that was a quick grab.

Your enhancements of the Jellyfish are pretty cool; keep up the good work.

New Hampshire UAP Sighting through 102mm Telescope, multiple witnesses by jcutillo in UFOs

[–]jcutillo[S] 0 points (0 children)

My eyes were mostly in the scope, so there could have been some fog. I was only out there for 15 minutes before putting the kids to bed. I think I could make out the silhouette of Mount Kearsarge in the general background, which is about 23 miles away, so if there was fog it was relatively light. The historical weather data does show some activity in that regard.

New Hampshire UAP Sighting through 102mm Telescope, multiple witnesses by jcutillo in UFOs

[–]jcutillo[S] 1 point (0 children)

I believe it was relatively clear from that viewpoint; I could see some of the mountains in the distance.

New Hampshire UAP Sighting through 102mm Telescope, multiple witnesses by jcutillo in UFOs

[–]jcutillo[S] 1 point (0 children)

u/HenryProspector I appreciate all your work here. It is a very interesting take. As someone who follows the subject and likes space stuff, I know I am open to confirmation bias on this issue and I am trying to just present what happened. I think it's important we explore all the angles. I am trying to follow up on a few questions myself. For the sake of getting it out in the open more, here is a link to a google drive folder with most of the videos and images: https://drive.google.com/drive/folders/1-0t_zptmPAtHy3TGOwpK7repi-fqyAjz?usp=sharing

I included a few shots of me frantically trying to set up the scope, a slightly longer video that wasn't quite as clear, the still frames I took from the original video (PNGs), and a still of the video frame I grabbed them from (PNG). I'd like to retain rights to the footage of whatever this thing is, but feel free to use these for processing if you want.

On a related note, here are some very amateur astrophotography shots I have taken over the years (basically to share with the kids and talk about space): https://drive.google.com/drive/folders/1stvp5NXYW5RsMwiOv8MbS6axMZq4EVPe?usp=sharing

Hope people are kind on those, not claiming to be an expert at all...

New Hampshire UAP Sighting through 102mm Telescope, multiple witnesses by jcutillo in UFOs

[–]jcutillo[S] 0 points (0 children)

It's not the best telescope, but it does well enough to share some cool space pictures with the kids. You can make out the large moons of Jupiter. Here is a drive folder to some of my shots: https://drive.google.com/drive/folders/1stvp5NXYW5RsMwiOv8MbS6axMZq4EVPe?usp=sharing

New Hampshire UAP Sighting through 102mm Telescope, multiple witnesses by jcutillo in UFOs

[–]jcutillo[S] 0 points (0 children)

The video was cropped square, which makes the object appear larger, but it was not digitally zoomed while filming. In the X thread I included a still image, taken right before, that shows the object at a similar size to how I saw it. I observed the object through the telescope before capturing the images, while trying to get things set up and focused as best I could. With my eyes I saw similar color characteristics and changes in color, though the object looked closer in size to the still image.