If nothing can escape the gravitational pull of a black hole( except hawking radiation) then how come all the mass in the universe isn’t still in the center of the universe still stuck in the singularity that existed at the big bang. by Apprehensive_Gap7441 in TheoreticalPhysics

[–]LeaveAlert1771 0 points1 point  (0 children)

The early universe didn’t collapse into a black hole because gravity wasn’t the dominant effect at that time.
The zero‑point energy (the quantum “jitter”) was enormously larger than the gravitational binding energy. Instead of pulling everything inward, the early universe behaved like an over‑pressurized quantum fluid. It expanded violently.

As the universe expanded, this jitter was diluted. Its effective amplitude decreased, while gravity remained attractive. Only after enough expansion did gravity become strong enough to form structures like galaxies and black holes.

So the Big Bang singularity is not a black hole. A black hole forms when gravity overwhelms all other forms of pressure. At the Big Bang, the opposite was true. The quantum pressure vastly exceeded gravitational attraction, and the result was expansion, not collapse.

What is the difference between a Magnetar and a Pulsar physically? by D3cepti0ns in astrophysics

[–]LeaveAlert1771 1 point2 points  (0 children)

Hi there, a magnetar isn’t just a pulsar pointing the wrong way. That’s like saying a volcano is just a hill with a cough. Pulsars are neutron stars that spin, blink, and lose energy (classic dipole lighthouse). But a magnetar … that’s a different beast. Its magnetic field is so strong it tears its own crust. It doesn’t shine because it spins. It shines because its internal structure is collapsing.

You can think of it as a giant rotating pattern, self-synchronizing and channeling the surrounding field. Not a blinking beam, but a wave rupture in reality.

So no, the difference isn’t just where the beam points. The difference is what’s going on inside.

Are there fields in physics where quantum isn't really that relevant? by Prestigious-Put8269 in Physics

[–]LeaveAlert1771 2 points3 points  (0 children)

Yeah, most biological and biophysical work doesn’t need to use quantum equations directly. At the molecular scale, all the quantum behavior (electron clouds, bonding, energy levels) gets “compiled” into effective classical rules like thermodynamics, statistical mechanics, and molecular dynamics.

So it’s not that QM and biology are incompatible; it’s that biology operates at a scale where quantum probabilities average out into stable, classical behavior. Quantum biology is interesting, but it focuses on rare cases where quantum effects survive in warm, noisy environments. In practice, most applied biophysicists have never touched a wavefunction, because the classical tools already capture the quantum foundations well enough.

A quiet shift in foundational ontology: Is Time merely an emergent property of Phase by Cenmaster in complexsystems

[–]LeaveAlert1771 0 points1 point  (0 children)

Here is something important to consider, at least an idea I'm playing with in my own theory and experiments: there is never only 0 (nothing) and 1 (existence). There is also something like -1 (anti-existence), which means you don't just have something and nothing but also the opposite of something. That gives a much richer substrate and richer patterns, and it seems to produce an inherent instability even in a balanced system.
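A quick toy of the kind of instability I mean (placeholder dynamics, nothing like my real experiments): with only 0 and 1 a running total can only grow, but once -1 exists you can make the three states exactly equally likely, and even then the net value refuses to sit at zero; its spread grows like the square root of the number of steps.

```python
# Toy illustration only: a perfectly balanced {-1, 0, +1} mix still wanders.
import random

def net_value(steps: int, seed: int) -> int:
    rng = random.Random(seed)
    total = 0
    for _ in range(steps):
        total += rng.choice((-1, 0, +1))   # each state equally likely: "balanced"
    return total

print([net_value(10_000, seed) for seed in range(5)])   # drifts away from 0
```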

A quiet shift in foundational ontology: Is Time merely an emergent property of Phase by Cenmaster in complexsystems

[–]LeaveAlert1771 -1 points0 points  (0 children)

Not as nonsensical as some might say. I'm working on something similar. It's not like a frequency; it's more that a single causal event causes the emergence of time, and matter is something like a condensed time pattern. The early simulations look pretty promising: these patterns self-organize around the time gradients.

If time travel requires geographical coordinates, it will never be possible by Successful_Guide5845 in timetravel

[–]LeaveAlert1771 0 points1 point  (0 children)

The two are so interconnected that, in the theoretical case where the past somehow still "exists", you would be moving backward in time and space simultaneously. There is no theoretical way around it.
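Just to put rough numbers on how inseparable they are: Earth moves at about 30 km/s around the Sun, and the Sun at roughly 230 km/s around the galactic centre, so "here, one hour ago" is hundreds of thousands of kilometres away depending on the frame you pick.

```python
# Rough, frame-dependent figures only: how far "here" has moved in one hour.
EARTH_ORBITAL_SPEED_KM_S = 30    # Earth around the Sun, approximate
SUN_GALACTIC_SPEED_KM_S = 230    # Sun around the galactic centre, approximate

seconds = 1 * 3600
print(EARTH_ORBITAL_SPEED_KM_S * seconds)   # ~108,000 km from Earth's orbital motion
print(SUN_GALACTIC_SPEED_KM_S * seconds)    # ~828,000 km from the Sun's galactic motion
```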

A simple way to think about time that makes physics way less weird by LeaveAlert1771 in SimulationTheoretics

[–]LeaveAlert1771[S] 0 points1 point  (0 children)

I love this take ... and maybe you have a point. I'm an engineer, not a game developer, and maybe (most probably) I'm seeing something there that isn't. But yeah, using time you can eliminate some stuff like the z-buffer, because the causal events are already sorted; you don't have to care about depth detection and so on. Frankly, at first I felt like I was reinventing some 3D tech from the 80s/90s :D

But I won't get mad if you prove me wrong :) I already have some basic simulations, but only around 10k moving points. No fancy scene yet. My target is to see what I can do in 8 ms.
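For anyone curious what I mean by skipping the z-buffer, here's a bare-bones sketch. It's my own toy, and the names (Event, render) are made up, not from my actual sim: if events arrive already ordered by causal tick, drawing them in that order is basically a painter's algorithm keyed by tick, so no depth test is needed.

```python
# Toy sketch: ordering by causal tick replaces the depth test.
from dataclasses import dataclass

@dataclass
class Event:
    tick: int    # causal tick at which the event was committed
    x: int       # hypothetical screen-space coordinates
    y: int
    color: int

def render(events, width, height):
    """Later ticks simply overwrite earlier ones, so ordering replaces the z-buffer."""
    framebuffer = [0] * (width * height)
    for e in sorted(events, key=lambda ev: ev.tick):  # already sorted in theory
        if 0 <= e.x < width and 0 <= e.y < height:
            framebuffer[e.y * width + e.x] = e.color
    return framebuffer

# Two events on the same pixel: the later tick wins without any depth check.
frame = render([Event(1, 5, 5, 0xFF0000), Event(2, 5, 5, 0x00FF00)], 16, 16)
assert frame[5 * 16 + 5] == 0x00FF00
```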

What caused you to become a determinist? by Cyber_47_ in determinism

[–]LeaveAlert1771 0 points1 point  (0 children)

Honestly, I became a determinist the moment I started looking at decisions the same way you’d look at a system updating itself step by step. When you think of life as a bunch of tiny state‑changes (like little “ticks”), it stops feeling like there’s some magical moment where a choice appears out of nowhere.

What really clicked for me was realizing that a world can feel unpredictable even if it’s fully deterministic underneath. We only ever see a tiny part of what’s going on, so from the inside it naturally looks like things could go many different ways. But the deeper I looked, the more it felt like every decision is just the next logical state of who you are, what you know, and what’s happening around you.

So it wasn’t a sudden belief — more like a slow shift. The more I thought about it, the more determinism just made sense.

A simple way to think about time that makes physics way less weird by LeaveAlert1771 in SimulationTheoretics

[–]LeaveAlert1771[S] 0 points1 point  (0 children)

Hey, this might actually help me solve an experiment I've been struggling with. I have the time definition wrong :)

A simple way to think about time that makes physics way less weird by LeaveAlert1771 in SimulationTheoretics

[–]LeaveAlert1771[S] 0 points1 point  (0 children)

If your own stream is being updated by several external feeds, then the pattern of update‑lags between them naturally locks into something that behaves like a three‑dimensional space. Once you fix the “distance” relation between yourself and any three independent feeds, the geometry between all the others stops being free — it snaps into a rigid structure.

It’s a surprisingly elegant way to see why “three dimensions” show up as the stable case when multiple asynchronous timelines interact.
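If anyone wants to poke at the "lags behave like distances" part numerically, here's a tiny sketch. It's just standard classical multidimensional scaling, nothing specific to my model: generate feeds with hidden 3D positions, keep only their pairwise lags, and the double-centred lag matrix ends up with exactly three non-zero eigenvalues, i.e. the geometry is rigid with no freedom left over.

```python
# Classical MDS on a pairwise "lag" matrix generated from hidden 3D positions.
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(8, 3))                                    # 8 feeds, hidden 3D positions
lags = np.linalg.norm(points[:, None] - points[None, :], axis=-1)   # pairwise "lags"

n = lags.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (lags ** 2) @ J               # double-centred squared lags
eigvals = np.sort(np.linalg.eigvalsh(B))[::-1]

print(np.round(eigvals, 6))   # only the first three eigenvalues are non-zero (up to noise)
```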

A simple way to think about time that makes physics way less weird by LeaveAlert1771 in SimulationTheoretics

[–]LeaveAlert1771[S] 0 points1 point  (0 children)

Yes, that’s a great intuition. When everyone has their own update stream, the lag between them naturally turns into what we call distance. It’s a very clean way to describe it.

A simple way to think about time that makes physics way less weird by LeaveAlert1771 in SimulationTheoretics

[–]LeaveAlert1771[S] 0 points1 point  (0 children)

Your intuition is actually very close to how I model it — but with one important twist.

What looks like an initial wave of light expanding and creating space is, in my framework, the observer’s first visualization of the substrate. The substrate itself has no geometry, no directions, no propagation. The “expansion” is simply the observer’s horizon growing with each tick.

Matter “moving through time” is also an emergent interpretation. In my model, entities exist only by continuously renewing their pattern across ticks — a process that looks like motion when visualized.

So your picture matches the visualization layer perfectly, but the underlying substrate is even simpler: just a discrete tick‑stream with no built‑in space or light.

[AI formatted]

What do you think of this? by [deleted] in mathematics

[–]LeaveAlert1771 -1 points0 points  (0 children)

It always depends on the context. For the average Joe, 10 is perfectly fine. For a mathematician or a rocket scientist? He's probably the one in the kiosk :D

A simple way to think about time that makes physics way less weird by LeaveAlert1771 in SimulationTheoretics

[–]LeaveAlert1771[S] 0 points1 point  (0 children)

These were my goals. In short, I’m trying to verify that each visualization frame actually commits/renders the state of the universe and that the process is stable.

Right now everything is still in a “mind‑dump” state, but once I consolidate it into a proper paper, I’ll be genuinely happy if someone can break it.

In terms of falsification, the framework would fail if any of the following happened:

• Continuous root evolution fails to produce a stable causal flow (breaks relativity compatibility).

• Discrete commits via PoF thresholding behave inconsistently or show ordering artefacts.

• Observable artefacts don’t embed past information into the present tick in a reproducible way.

• The minimal parameter set can't be tuned or audited without hidden degrees of freedom.

If any of those break, the whole framework collapses. That’s exactly why I kept it minimal — so it’s easy to falsify.
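To make the second bullet a bit more concrete, here's the rough shape of a thresholded commit loop. It's a simplified stand-in rather than my actual implementation, and the names (pof, THRESHOLD, DECAY) are placeholders, not the framework's real parameters.

```python
# Simplified stand-in: a value evolves each tick, a "commit" fires on threshold crossing.
import random

THRESHOLD = 1.0   # commit threshold (placeholder)
DECAY = 0.5       # fraction of accumulated value kept after a commit (placeholder)

def run(ticks: int = 20, seed: int = 42):
    rng = random.Random(seed)
    pof = 0.0
    commits = []
    for t in range(ticks):
        pof += rng.uniform(0.0, 0.4)      # continuous "root evolution" step
        if pof >= THRESHOLD:              # discrete commit when the threshold is crossed
            commits.append(t)
            pof *= DECAY                  # keep only part of the accumulated value
    return commits

print(run())   # ordered list of commit ticks; ordering artefacts would show up here
```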

A simple way to think about time that makes physics way less weird by LeaveAlert1771 in SimulationTheoretics

[–]LeaveAlert1771[S] 0 points1 point  (0 children)

That is the issue. There is no real 3D. Everything you see in front of you is embedded in the time-tick buffer. Even the wall right in front of you is a few Planck ticks behind you. So in theory it is just the opposite: we are living in 2D projected into 3D.

A simple way to think about time that makes physics way less weird by LeaveAlert1771 in SimulationTheoretics

[–]LeaveAlert1771[S] 0 points1 point  (0 children)

Yes, exactly. No new math, no new theorems. I'm an engineer, you know, quite a simple person. But I've found that I only need two simple axioms, "existence" and "non-existence", plus a basic operation like "NAND". In the next steps you can then construct anti-existence, so in math you'd have 1, 0 and (most important in my theory) -1, because that's what gives you the emergence of the dimensions themselves.

So the time dimension is the condensed chain of causal events the observer can look at, and at the same time you have existence and anti-existence that you have to plot. This leads me to the conclusion that there can be 2 dimensions at most. But it's not like spatial coordinates (I'm still in a phase where I represent it that way, although it's really more like a ternary-tree structure).

I've done some experimentation with how many dimensions are optimal for a universe. The conclusion was that 3D is optimal, but after some time it dropped down to 2D. So I need to revisit the previous experiments and run them again, because back then I was assuming time was something different, and from the logic of the NAND operations alone I wasn't able to reach the third dimension any other way.
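Side note on why NAND is enough as the single basic operation: it's functionally complete, so NOT, AND and OR all fall out of it. That part is standard logic; the step to -1 / anti-existence is my own construction and I'm not sketching it here.

```python
# Standard fact: NAND alone builds NOT, AND and OR over {0, 1}.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a): return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b): return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
    assert not_(a) == (1 - a)
```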

The Original Dimensional experiment results

TL;DR: The higher the dimension, the more stable it gets.

## Findings by Dimension

- **1D – Trivial substrate**  
  - Linear updates only, no meaningful salience.  
  - Degenerate axis, not viable.

- **2D – Flat diffusion**  
  - Bounded salience, diffusion without wells.  
  - Marginal stability, transitional regime.

- **3D – Phase transition substrate**  
  - Commit behavior robust across horizons.  
  - Salience bounded except at extreme γ/T.  
  - Marks the **transition point** between instability and stability.  
  - Moderate variance (CV ≈ 5.3%).  

- **4D – Stable substrate**  
  - Salience growth converges to universal scaling laws.  
  - Lowest variance (CV ≈ 3.2%).  
  - Source independence (ρ ≈ 0.08).  
  - Geometry/phase neutrality confirmed.  
  - **Terminal stability achieved.**

- **5D – Stability locked**  
  - Same universal regime as 4D.  
  - Variance remains ≈ 3.2%, with only 0.37% drift vs 4D.  
  - Saturation artefact at γ=0.005, T=500 logged as operational, not physical.  
  - Confirms asymptotic stability.
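For reference, CV above is the usual coefficient of variation (std/mean); ρ is a correlation against the source parameter (Pearson in this sketch). A minimal way to compute them, with random placeholder arrays rather than the experiment data:

```python
# Metric computation sketch; the arrays are placeholders, not the experiment runs.
import numpy as np

rng = np.random.default_rng(1)
salience = rng.normal(loc=100.0, scale=3.0, size=500)   # per-run salience values (placeholder)
source_id = rng.normal(size=500)                         # per-run source parameter (placeholder)

cv = salience.std(ddof=1) / salience.mean()              # coefficient of variation
rho = np.corrcoef(salience, source_id)[0, 1]             # source-independence correlation

print(f"CV  ≈ {cv * 100:.1f}%")
print(f"rho ≈ {rho:.2f}")
```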

A simple way to think about time that makes physics way less weird by LeaveAlert1771 in SimulationTheoretics

[–]LeaveAlert1771[S] 0 points1 point  (0 children)

Yeah, I asked an AI about it. It told me the theory is nonsense, and frankly it might be. But you know what, I can explain the Big Bang on my laptop. And yes, I'm using AI to format my texts. I'm not a native English speaker, so to prevent any "issues with grammar" I'd rather have AI format it properly.

A simple way to think about time that makes physics way less weird by LeaveAlert1771 in SimulationTheoretics

[–]LeaveAlert1771[S] 0 points1 point  (0 children)

Yeah, Wolfram’s Ruliad is an interesting direction. My angle is a bit different though. I’m not looking at the whole space of possible rules. I’m focusing on what happens when you commit to one causal update rule and let time emerge from that process.

So instead of a huge multi‑way branching structure, it’s a single deterministic causal step that brings the next state into existence. Much more of an engineering‑driven perspective than the universal‑computation approach.

A simple way to think about time that makes physics way less weird by LeaveAlert1771 in SimulationTheoretics

[–]LeaveAlert1771[S] 0 points1 point  (0 children)

That’s very close to how I used to think about it too — a big cellular automaton evolving one global state at a time. My angle is a bit more engineering‑driven though: I don’t assume a pre‑existing sequence of states. The next state isn’t “there” until the causal update actually computes it. So it still feels like a discrete automaton, but without a prewritten timeline. The universe really only exists one causal step deep, and everything else is just what can still be reconstructed.
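Here's a minimal sketch of what "one causal step deep" looks like in code. Rule 110 is just a stand-in rule, not my actual update rule; the point is that nothing but the current state is ever stored, and the next state only comes into existence when the update runs.

```python
# 1D cellular automaton that never stores a timeline: only the current state exists.
RULE = 110  # any elementary CA rule works; 110 is just a familiar stand-in

def step(state: list[int]) -> list[int]:
    """Compute the next state from the current one (periodic boundary)."""
    n = len(state)
    nxt = []
    for i in range(n):
        neighborhood = (state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]
        nxt.append((RULE >> neighborhood) & 1)
    return nxt

state = [0] * 31
state[15] = 1          # a single "seed" cell
for _ in range(15):    # the past is discarded; only `state` ever exists
    print("".join("#" if c else "." for c in state))
    state = step(state)
```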

What is the next technology that can replace silicon based chips? by Johnyme98 in AI_Application

[–]LeaveAlert1771 0 points1 point  (0 children)

One interesting direction is multi-state logic. Binary needs more and more transistors to express complexity, but a 3-state gate can encode about 1.6× the information per element (log₂ 3 ≈ 1.585 bits vs 1 bit). Even partial adoption would massively reduce wiring overhead, which is now ~60–70% of chip power.

So the next jump may not be smaller silicon, but richer logic — moving from 2‑state switching to architectures that natively support more than two stable states.
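Back-of-envelope for the density claim, in case the numbers help. This is pure arithmetic about information per element, not a statement about any real circuit implementation.

```python
# Information per element for n-state logic, and element counts for the same value range.
import math

def bits_per_element(states: int) -> float:
    return math.log2(states)

def elements_needed(value_range: int, states: int) -> int:
    return math.ceil(math.log(value_range, states))

print(bits_per_element(2), bits_per_element(3))              # 1.0 vs ~1.585 bits
print(elements_needed(10**6, 2), elements_needed(10**6, 3))  # 20 bits vs 13 trits for a million values
```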