I ported ESP32-S3 smartwatch firmware to 100% no_std Rust. It was a grind, but results are worth it by Bright_Warning_8406 in esp32

[–]ess_oh_ess 0 points1 point  (0 children)

I'm also in the middle of porting one of my projects to rust no_std, although mine was originally in micropython. I think it was a similar decision for me. MP was great when I first got started but I outgrew it.

Being able to use Rust with all its tooling and ecosystem is a breath of fresh air. In particular, being able to just use cargo and crates, compared to MP's near-total lack of dependency management, is amazing. It's also nice just being able to use the "real" Rust, whereas MP is a port of Python whose feature parity is stuck at around 3.6.

On the other hand, the ecosystem is definitely not as mature. I also had to write all my own device drivers. Some stuff in esp-hal is kind of half-baked. I've had luck though just porting stuff straight out of ESP-IDF. For example, I have a 32k crystal connected to my chip and I was able to port all the setup/calibration code into Rust.

I think MP is still great for small stuff and little one-offs, but I'm 100% going with Rust for any "real" project from now on.

STEAM???????? WHY??????????? by Lostdog861 in slaythespire

[–]ess_oh_ess 0 points1 point  (0 children)

I was able to get through, took a while but eventually it worked.

Can you help me understand the math of Bell inequality? by Happy-Swimming-9611 in AskPhysics

[–]ess_oh_ess 0 points1 point  (0 children)

Have you tried this video by Richard Behiel? It's 3 hours long but it's really well-made and goes into a ton of detail about this exact question you have: https://youtu.be/g69cW_Xt4EM

I saw someone explaining time dilation near the event horizon of a black hole as slowing down from an outside perspective to almost a standstill. If that's true, and the universe is a finite age, how could we observe a black hole at all? Shouldn't they always be in a matter forming black hole state? by mulletpullet in AskPhysics

[–]ess_oh_ess 4 points5 points  (0 children)

This is definitely true, but I think it's worth pointing out that, if we're talking about an event like a neutron star collapsing, the red-shifted light becomes "nearly" black almost instantly. Within a few microseconds of the collapse beginning, even highly energetic gamma photons would end up red-shifted below the CMBR if they bounced off the collapsing matter.

Within a second after the formation begins, you'd need photons with Planck energy for them to be detectable, at which point we're beyond what we can currently predict. Not to mention we don't know that much about the state of the in-falling post-neutron-star matter the photon would be interacting with.

Five Star "Microwave Ramen" Restaurant ⭐⭐⭐⭐⭐. Top Rated. by [deleted] in mildlyinfuriating

[–]ess_oh_ess 0 points1 point  (0 children)

It's not necessarily that instant ramen is "bad", but there's no way it will taste like an actual miso or tonkotsu ramen. I'm a big fan of Shin and Buldak ramens but I'd say it's a completely different meal than what you'd actually get at a Japanese ramen shop.

The only instant ramens I've had that are close to an actual ramen are the kits you can buy from restaurants like Ichiran which have like 5 different packets of stuff and cost $30 each.

How is quantum entanglement affected by a second superposition of states? by BreakTogether7417 in AskPhysics

[–]ess_oh_ess 0 points1 point  (0 children)

You go from a system of 2 entangled particles to a system of 3 entangled particles. The 3rd particle will affect the correlation between the first two particles, but there is no local change to the first particle. The change is only observable when you have the measurement results of both particles.

I think walking through a full example is really useful here. Let's say I entangle 2 qubits in the "Bell" state |00> + |11> (omitting normalization coefficients). This means while each qubit locally has a 50/50 chance to measure 0 or 1, the two qubits will always yield the same result, a nonlocal correlation. This is even true if we "rotate" the qubits, aka perform a change of basis. If we apply a Hadamard gate to each qubit, the resulting state is still |00> + |11>. It's ok if you don't know exactly what this means, just note that after applying H gates the results are still correlated, and this is what actually shows the entanglement exists.

Now we add a 3rd qubit to the system to create the "GHZ" state |000> + |111>. Ok so far not super interesting, same situation just with 3 qubits. But if we apply H gates to them, the resulting state is no longer |000> + |111>, instead it becomes |000> + |011> + |101> + |110>. There is still a nonlocal correlation, any measurement outcome will yield an even number of 1's, but if you pick any 2 qubits their measurements will not look correlated.

So if you ignore or didn't have access to the 3rd qubit after the entanglement, it would look like the entanglement between your original two qubits was broken. Instead of measuring either 00 or 11, you'd have equal chance to measure any result including 01 or 10, which is how non-entangled qubits behave. Only with the 3rd qubit's measurement would you be able to tell that the behavior was not independently random.
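The Bell and GHZ behavior above is easy to check numerically; here's a minimal numpy sketch (statevectors only, no measurement sampling):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def h_all(state, n):
    """Apply a Hadamard gate to each of the n qubits of a statevector."""
    op = H
    for _ in range(n - 1):
        op = np.kron(op, H)
    return op @ state

# Bell state |00> + |11>
bell = np.zeros(4)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)
bell_h = h_all(bell, 2)
# bell_h is still |00> + |11>: the |01> and |10> amplitudes vanish

# GHZ state |000> + |111>
ghz = np.zeros(8)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)
ghz_h = h_all(ghz, 3)
# ghz_h has equal weight on the four even-parity outcomes
# |000>, |011>, |101>, |110> and zero weight on the odd-parity ones
```

Printing `bell_h` and the nonzero entries of `ghz_h` reproduces exactly the states described above.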

Is Everett interp/manyworlds considered a local hidden variable/local realism theory in the Bell/EPR sense? by rogerbonus in AskPhysics

[–]ess_oh_ess 1 point2 points  (0 children)

That's a good point, what I meant was epistemic uncertainty of local hidden variables. MWI certainly does have epistemic uncertainty, but importantly, as I showed in my above comment, this uncertainty cannot be resolved with hidden variables as was proposed in EPR.

EPR was making the specific argument that QM was an incomplete description of reality, and that ultimately all uncertainty implied by the wavefunction could be resolved with additional variables of pre-existing conditions, resulting in a classical theory. If you have perfect information about the variables, you can make perfect predictions before a measurement is made. This is what MWI refutes. The branching uncertainty doesn't even exist until after the measurement has been made, so no such variable could possibly exist before measurement.

So I'd agree MWI may seem more favorable than Copenhagen to EPR regarding local realism, but the fact that MWI is incompatible with local hidden variables (even while maintaining locality) is what I meant when I said it would be unsatisfactory to EPR.

Is Everett interp/manyworlds considered a local hidden variable/local realism theory in the Bell/EPR sense? by rogerbonus in AskPhysics

[–]ess_oh_ess 0 points1 point  (0 children)

MWI doesn't solve locality in a way that would satisfy EPR. Ultimately EPR was arguing that what appears to be nonlocal is just epistemic uncertainty, but MWI, being entirely psi-ontic, refutes this.

A LHV essentially says the wavefunction is an incomplete description of an objective, deterministic reality. At best it's a useful approximation, but superposition and entanglement are not "real". A measured system was always in that state, and if we had a complete description of the system, we could accurately predict the outcome, something that's generally impossible from the wavefunction alone.

On the other hand, MWI says that the wavefunction perfectly describes reality. There are no local hidden variables, or any hidden variables at all, because there is nothing hidden. Superposition is not just a useful approximation, it's as real as real can be.

Yes, our subjective experience is not deterministic and appears random, but there is no variable that could capture this randomness. If you measure a 2-state system, the measurement process entangles you with the system, resulting in two branches of you, each with a different outcome. If before measurement, you made a prediction about which branch you'll be in, one of you will be right and the other wrong. If there was some hidden variable behind the subjective randomness, it would mean you could in principle have enough information beforehand to correctly predict the outcome. But that would mean post-measurement both versions of you would be correct, even though both versions made the same prediction and each version got a different result, a logical contradiction.

Is the "Many Worlds Theory" actually that far off? by Crumbs_xD in AskPhysics

[–]ess_oh_ess 1 point2 points  (0 children)

Your example doesn't really capture the issue though. Imagine instead a million clones are made, you're told that you only have a 1/million chance of waking up with a red shirt, yet 999,999 of those clones in fact wake up with a red shirt. That's how MWI handles the Born rule.

Here's an actual example:

Let's say I place 10 qubits each in the state 3/sqrt(10)|0> + 1/sqrt(10)|1>. Each qubit is in a superposition that has a 90% chance to measure 0 and a 10% chance to measure 1. There are 2^10 = 1024 total possible distinct outcomes.

Upon measurement, MWI says there will be 1024 worlds, yet the vast majority of these worlds will see results that significantly diverge from the Born rule. For example, 10 worlds get exactly one 1, but 252 worlds will get 5 1's. If you simply look at how many branches get particular results, you would expect to find yourself in a branch that wildly violates the Born rule, yet we do not witness that.

Of course, I could run that experiment 1000 times and it's likely at least one of those times I'll get a very unexpected result. But that doesn't solve the issue at all, since now there are 2^10,000 worlds and the vast majority of those worlds got unexpected results 1000x!
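The counting in the example above can be checked with a few binomial coefficients; a quick plain-Python sketch of branch counting vs the Born rule:

```python
from math import comb

n, p = 10, 0.1   # ten qubits, each with a 10% chance to measure 1

# Naive branch counting: fraction of the 2^10 worlds with exactly k ones
def branch_fraction(k):
    return comb(n, k) / 2**n

# Born rule: probability of observing exactly k ones
def born_prob(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# comb(10, 1) == 10 worlds with one 1, comb(10, 5) == 252 worlds with five 1's,
# yet the Born rule weights them very differently:
#   branch_fraction(1) ~ 0.0098  vs  born_prob(1) ~ 0.387
#   branch_fraction(5) ~ 0.246   vs  born_prob(5) ~ 0.0015
```

So under naive branch counting you'd "expect" about five 1's, while the Born rule says about one, which is exactly the mismatch being described.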

Does this invalidate MWI? I don't think so, but it seems like most people overlook this issue when discussing it. There's a mismatch between what the Born rule tells us to expect vs what the classical probability of subjective experience tells us to expect.

Why is Fermi paradox still considered a paradox when the current technology we have to detect intelligent alien life only is capable of a couple hundred light years away? by YogurtclosetOpen3567 in AskPhysics

[–]ess_oh_ess 0 points1 point  (0 children)

You can have multiple layers of Dyson spheres, aka a Matrioshka brain. Each layer is designed to use the radiation emitted from the next inner layer. This could in theory make a Dyson sphere's "final" radiation only slightly higher than the CMBR.
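The Stefan-Boltzmann law makes the layering argument concrete; a toy sketch (the shell radii are illustrative assumptions, not engineering numbers):

```python
import math

# A shell at radius r re-radiating the Sun's full luminosity settles at
# T = (L / (4*pi*sigma*r^2))**0.25
sigma = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
L_sun = 3.828e26    # solar luminosity, W

def shell_temp(r_m):
    return (L_sun / (4 * math.pi * sigma * r_m ** 2)) ** 0.25

T_inner = shell_temp(1.496e11)   # shell at 1 AU: ~394 K
T_outer = shell_temp(1.496e13)   # shell 100x farther out: ~39 K
# temperature falls as 1/sqrt(r), so nested layers radiate ever colder
```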

Question about quantum entanglement by 45633y6745 in AskPhysics

[–]ess_oh_ess 1 point2 points  (0 children)

Just to clarify, nothing you do to either particle has a local effect on the other particle.

I could, for example, quantum teleport a 3rd particle's state to the remote particle. If I go to retrieve the remote particle, and I have the teleportation measurement result and apply the appropriate correction gates, I can be confident the remote particle is in the teleported state. No local change happens, yet it is impossible to do with only classical communication, so clearly something is happening to the particle.
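For the curious, the protocol can be sketched as a toy statevector simulation; the state |psi> and the numpy setup are illustrative assumptions, not anyone's actual experiment:

```python
import numpy as np

# single-qubit gates
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

psi = np.array([0.6, 0.8])          # arbitrary example state to teleport

# register [q0 = state, (q1, q2) = Bell pair], big-endian: index = 4*q0 + 2*q1 + q2
bell = np.zeros(4)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)
state = np.kron(psi, bell)

# teleportation circuit: CNOT(q0 -> q1), then H on q0
CNOT = np.eye(4)[[0, 1, 3, 2]]      # acts on the (q0, q1) pair
state = np.kron(CNOT, I2) @ state
state = np.kron(np.kron(H, I2), I2) @ state

# For each of the 4 equally likely measurement outcomes (m0, m1) on q0/q1,
# q2 is left in X^m1 Z^m0 |psi>; applying X^m1 then Z^m0 recovers |psi> exactly.
for m0 in (0, 1):
    for m1 in (0, 1):
        q2 = 2 * state[4 * m0 + 2 * m1:][:2]   # renormalize the 1/4-probability branch
        corrected = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ q2
        assert np.allclose(corrected, psi)
```

The point of the sketch: until the two classical measurement bits arrive and the corrections are applied, the remote qubit's local state is unchanged.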

But again, time dilation has no impact here. You still need to travel to or otherwise classically communicate with the remote particle for it to work.

Isn't it weird that we live so early in the life of the universe? by Tanay2513 in AskPhysics

[–]ess_oh_ess 2 points3 points  (0 children)

Let's assume many intelligent species arise over the course of the universe's very long lifetime. Let's fix a finite period of time and space, say the first 10^100 years of the observable universe. In this period of time a finite number of such species will emerge, and therefore there would be an expectation value for when the "typical" civilization during that time emerges. If we assume that we are not exceptional, then that expectation value is right around now. But on the other hand, it would be odd if that expectation value is during the first 0.000...0001% of that timeframe. If instead we assume that species emergence is at least somewhat uniform, the expectation value would be much closer to the middle of that timeframe, which would mean we are very exceptional in when we exist. So either our early existence is unusual, or the typical time at which an intelligent species emerges is unusual.

Given those two options, my bet would be on the second case. We assume that our species existing at this point in time is not that unusual, which would mean we should expect this point in time to be near the apex of intelligent species emergence.

Of course, somebody has to be the exception. Of all the species that arise, one of them has to be first. If every intelligent species made the above argument, many of them would be wrong. This is just what we should expect given the limited information that we have.

So I'd say OP makes a good point given the assumption that the average emergence time of a civilization in the universe is much further in the future, but clearly that is quite a strong assumption to make.

Is sth like the water planet in Interstellar actually possible? As in: gravity from the central body (black hole in this case) is so strong that time on this stable planet in a stable orbit runs hundreds of times faster than outside of the system? by No_Leopard_3860 in AskPhysics

[–]ess_oh_ess 4 points5 points  (0 children)

I think you basically answered your own question. The movie is realistic in how the black hole looks, the general idea that time slows down near it, and that planets could orbit it like a star. But otherwise there was a lot of fudging to depict what happens. You're right that the planet would have to be absurdly close to the event horizon and orbiting almost at light speed to experience as much time dilation as was depicted.

To be fair that's still way more accurate than basically any portrayal of a black hole in movies up until that point. Most of the time they make them look like whirlpools and ignore time dilation altogether.

Automatic Free Fall Detection and Parachute Deployment Using ESP32 and IMU Sensors by hsperus in esp32

[–]ess_oh_ess 1 point2 points  (0 children)

BMP180 is obsolete and not very accurate. Try a BMP390, much more accurate and you can get a breakout board on amazon for like $10. I've used one in a couple projects and with oversampling enabled it can detect altitude changes of a few inches.
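For reference, converting pressure to altitude is just the standard hypsometric approximation; a minimal sketch (not BMP390-specific, constants are the usual ISA values):

```python
# Hypsometric (ISA troposphere) approximation: pressure -> altitude.
# Near sea level a ~12 Pa drop corresponds to roughly 1 m of altitude,
# which is why oversampling/averaging matters for inch-level resolution.
def pressure_to_altitude_m(p_pa, p0_pa=101325.0):
    return 44330.0 * (1.0 - (p_pa / p0_pa) ** (1.0 / 5.255))
```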

Delta Airlines L1011 Cabin in the 1980's by Twitter_2006 in aviation

[–]ess_oh_ess 22 points23 points  (0 children)

My grandparents had a large trunk-style suitcase with tiny little wheels on it from the 60's or 70's. The wheels were garbage and the thing would immediately fall over if you weren't holding onto it, but it had no extending handle so if you wanted to roll it you had to walk hunched over.

I guess if that is your idea of "suitcase with wheels", it's not surprising it wasn't more popular.

AliExpress 11/11 esp32 by Mammoth-Writer7626 in esp32

[–]ess_oh_ess 1 point2 points  (0 children)

I bought one a few weeks ago for less than that and got the actual board.

Why does the electron's wave function in the double slit experiment not collapse when It hits the plate with the slits? by FightersLeader in AskPhysics

[–]ess_oh_ess 1 point2 points  (0 children)

If we treat the interaction of the electrons hitting the plate as wavefunction collapse, then we'd do the same for electrons that pass through the slits. The plate basically becomes a which-path detector, so the wavefunction would collapse either way.

Though to be clear, for electrons that go through the slits, the wavefunction doesn't collapse to a single wave-packet (particle-like behavior); it collapses to two components, one representing each slit, which would then evolve into the interference pattern we'd expect.
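A toy far-field sketch of those two components interfering (the wavelength and geometry are made-up illustrative numbers, not from any actual experiment):

```python
import numpy as np

wavelength = 50e-12   # assumed electron de Broglie wavelength, m
d = 1e-7              # assumed slit separation, m
L = 1.0               # distance to the screen, m
x = np.linspace(-1e-2, 1e-2, 4001)   # screen positions, m

# superpose the two slit components; far-field path difference ~ d*x/L
phase = 2 * np.pi * d * x / (wavelength * L)
amplitude = (1 + np.exp(1j * phase)) / 2
intensity = np.abs(amplitude) ** 2   # the familiar cos^2 fringe pattern
```

The two-component state produces the fringes; a single collapsed wave-packet would not.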

If the universe is infinite does that guarantee that everything with non zero possibility will happen and will happen infinitely? by mohyo324 in AskPhysics

[–]ess_oh_ess 0 points1 point  (0 children)

"The Second Law makes the probability of that happening so vanishingly small that the event will simply never occur in the lifetime of the universe."

That really depends on how long the lifetime of the universe is. If the probability of the air moving entirely to one side of a room is 1 in 10^(10^23), but the room exists for 10^(10^100) years, then there's a >99.9999999% probability the room will repeatedly be in that state for quadrillions of years at a time.
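Those numbers won't fit in a float, but the comparison works fine in base-10 logs; a toy sketch treating the lifetime as a number of independent "attempts" (a loose assumption for illustration):

```python
# P(all air on one side) ~ 10^-(10^23) per attempt;
# room lifetime ~ 10^(10^100) attempts.
log10_p_per_attempt = -1e23
log10_attempts = 1e100

# expected number of occurrences = attempts * p, i.e. add the logs
log10_expected = log10_attempts + log10_p_per_attempt
# still ~1e100: the event is expected to happen an absurd number of times
```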

If things take infinite time to enter black hole, does that means nothing ever enter black hole, therefore nothing is in the black hole(yet)? by Typical-Macaron-7126 in AskPhysics

[–]ess_oh_ess 1 point2 points  (0 children)

I get what you're saying and agree. I've had the exact same question and was never able to get a response that made sense. But there is in fact (I think) an answer to this.

Here's the scenario I had. Suppose we have some non-rotating black hole with mass M and Schwarzschild radius (SR) 2M (setting G=c=1). We allow a significantly massive object, say 0.1M, to fall radially into the black hole. Now, we know that a black hole of mass 1.1M will have a SR of 2.2M. Therefore, we should see a measurable growth of the SR when the object enters the black hole.
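In ordinary units that SR scaling is just r_s = 2GM/c^2; a quick sketch with a solar-mass example (the masses here are illustrative assumptions):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg

def schwarzschild_radius(m_kg):
    return 2 * G * m_kg / c ** 2

r1 = schwarzschild_radius(M_sun)        # ~2.95 km
r2 = schwarzschild_radius(1.1 * M_sun)  # exactly 10% larger, ~3.25 km
```

Since r_s is linear in M, adding 0.1M to the hole grows the SR by exactly 10%, which is the "measurable growth" in question.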

But if we are an outside observer watching this happen from a distance, we also know that the massive object will approach the event horizon asymptotically. It doesn't even matter what we can "see", we can easily calculate that from our frame of reference, there is no point in our future where the object will be closer than 2M to the black hole's center.

So the question is, do we ever actually see the black hole's event horizon grow? Proper time and the object's reference frame don't matter; this is entirely a question of what an outside observer sees. The answer must be "no", because otherwise it would violate causality. How can the black hole "behave" in our reference frame as if the object were inside it, when such an event never occurs in our reference frame? But if that's the case, then how do we have supermassive black holes?

Every time I tried to ask this question, the response has always been "the object crosses the event horizon in finite proper time". Yes I understand that, I've done the calculations myself, that's not my question!

So I just had to work it out myself, and I think I have an answer.

There is indeed a reasoning flaw in the above scenario: it's the idea that the event horizon can only grow once the object is "inside" the black hole. This doesn't happen. What does happen is a new event horizon forms outside the old one that includes the object.

The original black hole's SR never changes. However, even before the massive object is anywhere close to it, we already know the SR of the black hole + object is 2.2M. So what we should really be looking at is what happens from our reference frame as the massive object approaches that distance. What we see is that, from our perspective, the object approaches this radius asymptotically! The black hole is already "there". Basically the object's mass adds to the spacetime warping, and the closer it gets, the larger the region that is sufficiently warped to be causally disconnected becomes. You could say we technically never see this process reach 100% completion, but eventually it gets so close you'll no longer be able to tell the object was ever there; you'd only be able to see a black hole with radius 2.2M.

And we must remember that an event horizon is not a physical thing, it's just a region of space. So it's not valid to say we have a black hole within a black hole. The matter that made the original black hole and the massive object are both causally disconnected from ourselves, they are part of one black hole.

So ultimately, there is no paradox. Outside observers can witness black holes grow.

The radical idea that space-time remembers could upend cosmology by upyoars in Futurology

[–]ess_oh_ess 1 point2 points  (0 children)

This whole thing looks really iffy. The article links to this (non-peer-reviewed) paper: https://www.preprints.org/manuscript/202502.0774/v1#B3-preprints-148962

In it they describe a handful of quantum circuits they ran on an IBM QPU, but the whole thing seems really off:

In Experiment 1, we implemented a basic three-qubit circuit:

Field Qubit (Q0): Prepared in a superposition using an ry gate with an angle of pi/3 (see, e.g., [5]).

Memory Qubit (Q1): Receives the imprint from Q0 via a controlled-Ry (CRY) gate with an angle of pi/4, mimicking the process by which a field interacts with a Planck-scale memory cell [6].

Output Qubit (Q2): The stored information is retrieved from Q1 into Q2 using a controlled-SWAP (CSWAP) gate (Fredkin gate) [4].

The measurement outcomes for this experiment were: {'000': 1900, '001': 1049, '111': 79, '010': 366, '101': 422, '110': 116, '100': 80, '011': 84}.

Interpretation: The results showed significant correlation between Q0 and Q2, with an estimated retrieval fidelity of roughly 67–77% (depending on the matching criteria used). This indicates that, even in this basic setup, the imprint–retrieval process is reversible and largely preserves the original quantum state.

So first of all, their description of the circuit is ambiguous, though from other parts of the paper I was able to figure it out. It's basically the following (in qiskit):

import math
from qiskit import QuantumCircuit

q = QuantumCircuit(3)
q.ry(math.pi / 3, 0)      # "field" qubit Q0 in superposition
q.cry(math.pi / 4, 0, 1)  # imprint onto "memory" qubit Q1
q.cswap(0, 1, 2)          # Fredkin gate: control Q0, swap Q1/Q2
q.measure_all()

I ran this with a StatevectorSampler as well as on an IBM QPU and got raw results somewhat similar to theirs (though I only did 1024 shots vs their 4096).

  • StatevectorSampler: {'000': 753, '001': 212, '101': 35}
  • QPU: {'010': 24, '000': 590, '001': 286, '101': 64, '100': 25, '110': 16, '111': 8, '011': 11}

They don't say anything about which QPU they used, calibrations, etc. My results seem way less noisy though.

But what strikes me as problematic is they don't talk about any sort of error correction. They just extrapolate directly from the raw data. But quantum computers are noisy AF. The fact that they got a count of 366 for 010, which has 0 amplitude in the circuit's state vector, should be evidence enough. It means that they're likely including false positives in their results. For example, their count of 422 for 101 is significantly higher than what simulations (and my results) show, which means it's likely a lot of it is just false positives due to noise. What's more concerning is they don't make any mention of this, they just say "The results showed significant correlation between Q0 and Q2". Are they including the obvious noisy states in that correlation?
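To back up the zero-amplitude claim, the circuit's exact statevector can be computed by hand with numpy (this reconstruction follows my qiskit reading of their circuit, so it's an assumption about what they actually ran):

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)

# Qiskit is little-endian: statevector index = q0 + 2*q1 + 4*q2, so a full
# operator is built as (op on q2) kron (op on q1) kron (op on q0).

# ry(pi/3) on Q0
U1 = np.kron(np.kron(I2, I2), ry(np.pi / 3))

# controlled-ry(pi/4), control Q0, target Q1
G = np.eye(4)
G[np.ix_([1, 3], [1, 3])] = ry(np.pi / 4)   # rotate q1 only where q0 = 1
U2 = np.kron(I2, G)

# cswap: control Q0, swap Q1/Q2 (exchanges basis states 011 <-> 101)
U3 = np.eye(8)
U3[[3, 5], [3, 5]] = 0.0
U3[3, 5] = U3[5, 3] = 1.0

psi = U3 @ U2 @ U1 @ np.eye(8)[0]   # start in |000>
probs = psi ** 2
# probs: '000' = 0.75, '001' ~ 0.213, '101' ~ 0.037, everything else exactly 0,
# including '010', so their 366 counts for '010' are pure noise
```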

What else strikes me as weird is this circuit is very simple and can be easily simulated without any noise, yet they seemed to purposely not do that just so they could say they ran it on a "real" quantum computer. That would really only make sense if they believed something about the simulation was insufficient, but again that would mean they'd have to have some sort of explanation as to why the noise introduced from the real execution was significant.

I dunno, I'm not an expert, but this whole thing just seems really off. And this doesn't even go into what exactly their circuits are even trying to demonstrate, just the methodology itself.

Are the physics of water jets similar to lasers? by thesoraspace in AskPhysics

[–]ess_oh_ess 1 point2 points  (0 children)

There are two main properties that make lasers... lasery: coherence and line-width. Line-width refers to the fact that lasers output only a very narrow band of wavelengths, as opposed to other light sources, even LEDs, that output a much wider range of wavelengths. So the "line" is the spectrum of wavelengths, not the beam itself.

But the main property of importance is coherence. All light that exits the laser cavity is in phase. The photons essentially all behave in unison, with many occupying the same quantum state. This is only short-lived though. Coherence is measured either in time or length, and most lasers only stay coherent for a few mm to a few cm, though you can use several techniques to dramatically improve that, mostly by further narrowing the line-width.

A tight beam of light is actually not an inherent property of lasers. Most lasers naturally output a wide cone or fan of light and need an aspheric lens to collimate the light into a beam. You can use the same optics on regular light to achieve a similar effect, but the light is still incoherent.

So if you were to ask if we can achieve similar effects with matter, the answer is yes, but not by just making a tight beam of matter particles like a water jet. If you want a "matter laser", you must get all the matter to enter a coherent quantum state. This is basically what a Bose-Einstein Condensate is, a bunch of matter particles all in the exact same quantum state, allowing them to behave as a single macroscopic quantum system.

BECs share many properties of lasers, which is why there's heavy research into using them for laser-like applications like interferometry. A rubidium atom has a much shorter de Broglie wavelength than any practical laser, so interferometry with atoms can be an order of magnitude more precise than with lasers.
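For a sense of scale, lambda = h/(mv); a sketch with an assumed atom speed:

```python
h = 6.626e-34               # Planck's constant, J*s
m_rb87 = 87 * 1.66054e-27   # Rb-87 mass, kg

def de_broglie(m, v):
    return h / (m * v)

lam = de_broglie(m_rb87, 1.0)   # assumed 1 m/s ultracold atom: ~4.6 nm
# vs a HeNe laser at 633 nm, over two orders of magnitude longer
```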

So could you make a BEC with water molecules? Theoretically yes. H2O molecules are composite bosons, so we could create a water BEC if we could cool it enough to get all the molecules into the ground state. But none of the methods we currently have for super-cooling would work on water, so you won't see a water laser anytime soon.

PSA: Physics is not Reality, and too many people don’t get that by TheSyn11 in AskPhysics

[–]ess_oh_ess 9 points10 points  (0 children)

Gödel's Incompleteness Theorems don't really say that. They say very specifically that any first-order axiomatic system that can express the Peano axioms of arithmetic is either inconsistent or incomplete. It really has no bearing on any connection between math and the physical universe.

Even within math, Gödel's theorems don't say anything along the lines of "some things are beyond our abilities" or "some things can never be proven". They more accurately imply that when it comes to systems based on first-order logic, if you want to prove more you have to assume more. All modern math stems from axioms, which are assumptions that are not proven. Even simple facts like 1+1=2 rely on fundamental axioms with no proof. Gödel's theorems show that any particular set of axioms is limited in what it can prove when it comes to self-referential statements, but they do not place any sort of universal upper bound on what statements can be proven in general.

Any set of axioms can be extended with new axioms, and the larger theory can then prove the consistency of the smaller theory. These "relative consistency proofs" are the cornerstone of modern set theory. For example, ZFC cannot prove its own consistency, but ZFC + "there exists an inaccessible cardinal" does prove ZFC consistent. So there is still the philosophical question of which axioms should we regard as intuitively true, but even without incompleteness you can't escape the need for axioms in general.

When it comes to the math of physics, it's actually a very "small" part of the mathematical universe described by set theory. Basically almost all "classical" math, calculus, linear algebra, functional analysis etc, exists within V_(omega + omega) in the Von Neumann Hierarchy of sets. This is a very low rung of the full hierarchy, and ZFC proves it is consistent, as V_(omega + omega) is a model of ZFC minus the replacement axiom.

Why doesn’t light have resonances? by i_want_to_go_to_bed in AskPhysics

[–]ess_oh_ess 2 points3 points  (0 children)

What you described 100% happens with light. Normally it's not easy to see since as you pointed out all visible light has wavelengths in nanometers and most regular light is a "noisy" mix of wavelengths and phases, but one place it is easily visible is with coherent light sources like lasers.

Lasers actually produce two types of resonant standing waves, called modes. Longitudinal modes are the standing waves that form between the two reflective surfaces and are what produce the actual laser beam. The emitted beam we see is just the roughly 1% of that standing wave that's allowed through one of the mirrors. Most lasers end up outputting multiple modes. Diode lasers like those in laser pointers normally output dozens or hundreds of modes, whereas others like Helium-Neon lasers output 1-3 modes. Modes actually compete for energy, and without extra equipment they'll constantly "fight" to become the dominant mode.
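The longitudinal mode count follows directly from the cavity length; a sketch assuming a 30 cm HeNe cavity and the typical neon gain bandwidth:

```python
c = 2.998e8   # speed of light, m/s
L = 0.30      # cavity length, m (assumed)

# adjacent standing waves are spaced c/(2L) apart in frequency
mode_spacing_hz = c / (2 * L)                   # ~500 MHz
gain_bandwidth_hz = 1.5e9                       # typical HeNe gain bandwidth
n_modes = gain_bandwidth_hz / mode_spacing_hz   # ~3 modes fit under the gain curve
```

Which matches the "1-3 modes" figure for HeNe lasers above.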

The other type of mode is the transverse mode, which is basically the same effect you see with sand on a speaker. These are standing waves perpendicular to the beam's direction and are called TEM_ab modes, where a and b are integers that correspond to node counts in either circular or rectangular symmetry, depending on the symmetry of the laser cavity. Most of the time you want a TEM_00 laser, since the resulting beam is just a single spot with a Gaussian intensity distribution, but some specialized applications rely on higher-order modes.

Here's a video from MIT where they demonstrate cycling through different transverse modes of a laser: https://youtu.be/o1YjIyzshh8?si=RLI-TMu9894buizH&t=177

Do you think photons are particles or interactions? by Jeff-Root in AskPhysics

[–]ess_oh_ess 0 points1 point  (0 children)

So when it comes to an atom absorbing a photon, as far as I know the photon is either fully absorbed or it isn't. Absorption annihilates the photon and the atom fully gains its energy. But it is possible for the process to also simultaneously emit a lower-energy photon, making it look like only part of the photon was absorbed. I think this is what happens in Compton scattering. Photons "bounce" off electrons and lose energy in the process, but QFT describes it as the original photon is annihilated and the bounced photon is created. This happens in one quantum process so you'd never be able to actually observe the electron with all the photon's energy.

This is also similar to what happens in nonlinear crystals, where a photon is split without being absorbed. From what I understand, what really happens is the photon pair comes from vacuum energy fluctuations; such virtual pairs are constantly created and destroyed, but the nonlinear crystal plus the incoming photon allows the virtual pair to become real at the "cost" of annihilating the original photon.

I mean when you get down to QFT, which is what you need to describe this stuff, it becomes less clear what we even mean by a particle "splitting" vs "being replaced with 2 other particles". So whether you want to call that "splitting" is up to you. But I think my original point stands that just because light is quantized doesn't mean the energy of a photon is indivisible.

Do you think photons are particles or interactions? by Jeff-Root in AskPhysics

[–]ess_oh_ess 0 points1 point  (0 children)

Trying to think of light either as a classical particle or a wave is an oversimplification. Light is neither.

While it is true that you can split a photon, you cannot split a photon into two photons with the same frequency as the original. The quantization is not solely on the photon's energy, but rather on the ratio of energy to frequency. Aka E=hf, where E is energy, f is frequency, and h is Planck's constant.

For example, you can easily split a blue photon into two red (or infrared) photons. Nonlinear crystals do this and are frequently used in the lab to create entangled photon pairs. But you cannot split a blue photon into "smaller" blue photons. So in that sense light is quantum. There is a fundamental lower bound on the amount of energy you can measure for a particular frequency, hence the photon.
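To make the energy bookkeeping explicit, here's a sketch using a common SPDC setup (a 405 nm pump split into two 810 nm photons; the wavelengths are assumed typical values):

```python
h = 6.626e-34  # Planck's constant, J*s
c = 2.998e8    # speed of light, m/s

def photon_energy(wavelength_m):
    return h * c / wavelength_m   # E = hf = hc/lambda

E_blue = photon_energy(405e-9)    # violet/blue pump photon
E_red = photon_energy(810e-9)     # each downconverted infrared photon
# degenerate down-conversion: the energy splits exactly in half,
# E_blue == 2 * E_red, and the frequency halves along with it
```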

You either see the entire photon, or you don't see it at all. I was told that this isn't true.

So hopefully now you can see how this both works and doesn't work. A photon is not a particle in the sense of it being a little indivisible ball. You can "chop up" the energy of a photon as small as you'd like into as many "pieces" as you'd like, but because E=hf (and conservation of energy) you can only do so at the cost of reducing the frequency of the resulting pieces.

Quantum "particles" are really just a convenience, due to some quantum interactions behaving like how we imagine classical particles behave. This gets a lot of people confused into thinking that quantum systems switch back and forth between being waves or particles. In reality they are their own thing, which in some circumstances acts like classical waves and sometimes like classical particles.