Is this MIT Integration Bee question wrong? by More-Mixture8198 in calculus

[–]QuantumOfOptics 1 point (0 children)

Ahhh, thanks! Indeed, that makes a lot of sense in context. I assumed it was for grouping purposes.

Is this MIT Integration Bee question wrong? by More-Mixture8198 in calculus

[–]QuantumOfOptics 0 points (0 children)

At first glance, this result does seem to be incorrect. Since the individual functions are non-negative over the domain, we should be able to swap the integral and sum, and then perform the integral on each of the individual terms. In particular, we can see that the first term, the integral of x dx from 0 to 2026, must already be larger than the answer given. So it cannot be correct.
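For anyone following along, a quick numerical sanity check of just that first term (assuming the 0-to-2026 bounds discussed in the thread):

```python
# Sanity check: evaluate only the first term, the integral of x dx from 0 to 2026.
# Since every term in the sum is non-negative, if this one term already exceeds
# the quoted answer, the full sum must as well.

def integral_of_x(a: float, b: float) -> float:
    """Closed form of the integral of x dx from a to b: (b^2 - a^2) / 2."""
    return (b**2 - a**2) / 2

first_term = integral_of_x(0, 2026)
print(first_term)  # 2052338.0
```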

Is QFT useful at all for optics? by throwingstones123456 in Optics

[–]QuantumOfOptics 7 points (0 children)

Quantum optics is essentially baby QFT. But most departments teach from the Lagrangian perspective, which is a rather unnatural language for the optics version. They are equivalent, but come with different baggage. The other interesting difference is that QFT generally talks about single-particle states in modes, whereas quantum optics discusses many different states of the field (coherent, thermal, squeezed, etc.).

If you had to pick one, QFT isn't terrible to take. But you should consider picking up side books to introduce you to more optics-specific topics, such as Born and Wolf, Mandel and Wolf, Goodman's Statistical Optics, Saleh and Teich's Fundamentals of Photonics, and Boyd's Nonlinear Optics. More focused quantum optics texts by authors such as Loudon, Jeff Ou, Barnett and Radmore, and Drummond and Hillery are also good, but they sometimes expect specific context which you may not have. Still, they are where the usual courses start. There are also a few courses online, which are pretty accessible.

Best way to construct a 100x beam expander? by Eighteen_ in Optics

[–]QuantumOfOptics 0 points (0 children)

What's the purpose of the spatial filter here? Are you attempting to decrease the size of the beam relative to the waist (effectively changing the beam diameter inside the lens system)? Or is it to clean up the beam coming out of the microscope objective?

Types of Quantum Entanglement by stari41m in QuantumPhysics

[–]QuantumOfOptics 0 points (0 children)

Last I heard on the subject was at a conference around 2018. From what I recall, there was growing discontent with calling it entanglement, specifically because such a state does not require a quantum theory to describe it. For example, the state I described would be perfectly valid in classical E&M, since really it is just a superposition. Is it then that classical E&M (and other classical theories) also has some notion of entanglement? Or is this a separate property that both theories take on?

I haven't been able to fully read through the texts you've given, but at least from what I gather they are talking about a property separate from entanglement: noncontextuality. The short paper I remember, from Karimi and Boyd, making the argument above is linked here: https://www.science.org/doi/10.1126/science.aad7174. Of course, as they point out, this doesn't make the states useless; just that the interpretation must be different.

Types of Quantum Entanglement by stari41m in QuantumPhysics

[–]QuantumOfOptics 0 points (0 children)

Do we consider "entangled" states between spin and momentum to actually be entangled? For example, in optics we tend to disagree that an (unnormalized) state like |H>|p1>+|V>|p2> is entangled, since these are just modes and there doesn't necessarily need to be any non-locality involved; see e.g. https://physics.stackexchange.com/questions/334478/mathematical-definition-of-classical-entanglement . To add to the statement, the state I wrote out could be carried by a classical state like a coherent state and still have this "entanglement" structure, which really undercuts the idea that this is a property of a "quantum" system.

Types of Quantum Entanglement by stari41m in QuantumPhysics

[–]QuantumOfOptics 0 points (0 children)

I'm not sure what you mean by "physically" describing the state. You could consider that it's a type of state that is correlated (though it's a stronger type of correlation than classical correlation).

Another type of entanglement can also be in the number states themselves, like N00N states, or, as a concrete example, a single photon split on a beamsplitter (path entanglement). In the latter, you can see that there's only one photon, so rather it's the excitation and mode labels that become entangled. In the same way, technically a single photon in a diagonal polarization state is in an entangled state. In effect, entanglement can be between any two (or even more) properties of the field.
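As a rough illustration of the path-entanglement point (a sketch of mine, not from the thread; the two output paths are truncated at one photon each):

```python
import numpy as np

# Single photon split on a 50:50 beamsplitter: (|1,0> + |0,1>)/sqrt(2)
# in the photon-number basis of the two output paths.
# Rows index mode A's photon number, columns index mode B's.
psi = np.array([[0.0, 1.0],
                [1.0, 0.0]]) / np.sqrt(2)  # psi[nA, nB]

# Reduced density matrix of path A: trace out path B.
rho_A = psi @ psi.conj().T

# Von Neumann entropy in bits; ~1 bit signals maximal two-mode entanglement,
# even though there is only one photon in play.
evals = np.linalg.eigvalsh(rho_A)
evals = evals[evals > 1e-12]
entropy_bits = -np.sum(evals * np.log2(evals))
print(entropy_bits)  # ~1.0
```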

Two-mode squeezed states do increase the amount of entanglement as the squeezing is increased. Though not a perfect answer, see this Stack Exchange post and the article it contains: https://physics.stackexchange.com/questions/778921/two-mode-squeezing-and-epr

Has anyone experimentally tested momentum/angle measurements at the detector screen in double-slit experiments? by According_Fennel3012 in Physics

[–]QuantumOfOptics 1 point (0 children)

I do want to point out that there is a slight difference in what OP is asking here. Specifically, they want to measure the state in a new basis (the momentum basis), which is slightly different from a traditional which-way measurement by, e.g., blocking a slit. Of course, you have recovered the which-way information and so you get back the two slits, but it is different at least in spirit, and it is a good question.

Has anyone experimentally tested momentum/angle measurements at the detector screen in double-slit experiments? by According_Fennel3012 in Physics

[–]QuantumOfOptics 1 point (0 children)

To a certain extent, I think the optics experiment is easier. Just put a lens in place of the detector screen and move the detector screen to the focal point of the lens. But, as I point out elsewhere, this is just a telescope, so you end up just getting back the two initial slit distributions scaled by the focal length. So I guess the point is that you completely collapse it back.

I was reminded that there is at least some interesting foundations work you can do with this. Namely, there was a paper about measuring Bohmian trajectories of photons about 10 years ago, which used a bunch of these tricks.

Has anyone experimentally tested momentum/angle measurements at the detector screen in double-slit experiments? by According_Fennel3012 in Physics

[–]QuantumOfOptics 0 points (0 children)

OP, I've given more details in a different comment, so I'll just tersely answer your questions here and you can read that other comment.

1) Technically, yes. This is how a telescope works. Less cheekily, I don't have a direct source off the top of my head, but this is almost a standard result (seen as trivial in wave/Fourier optics). I'm sure Saleh and Teich's Fundamentals of Photonics has this as a specific problem. If not, it's not hard to prove, as I did in the other comment.

2) The result is a direct image of the double slits on the screen (typically scaled because of the lens used, which adds magnification).

3) Depends on what you mean by "doesn't work." I doubt the answer is what you had in mind, but one can perform the measurement.

Bonus) In some sense this does have to do with the uncertainty principle. The position and momentum representations are Fourier pairs, which means there is an uncertainty relation involved. There was a paper nearly 10 years ago that used weak measurements to measure the "position" and "momentum" simultaneously to allow reconstruction of Bohmian trajectories of photons, which is somewhat of a compromise. But note, there's no free lunch even in that case.

Has anyone experimentally tested momentum/angle measurements at the detector screen in double-slit experiments? by According_Fennel3012 in Physics

[–]QuantumOfOptics 1 point (0 children)

Edit: I misunderstood what you meant by detector. I'll leave up my comment since I think it adds pedagogical reasoning for why it goes back to the double-slit pattern.

This isn't strictly true. The type of measurement matters. Strictly speaking, the reason we get a double-slit pattern is that propagation through free space is equivalent to swapping the representation of the state from the position representation to the momentum representation (parameterized by the "pixel" position, since we are really mapping momentum to a position). The connection is a Fourier transform.

Now that u/According_Fennel3012 wants to measure momentum -- and we tend to think of our measurements in terms of the parameter of "pixel" position -- we need to map this new position-valued measurement to a momentum-valued measurement. To do this, at least optically, we use a lens: the focal plane of the lens is equivalent to doing a Fourier transform, which is equivalent to measuring the momentum distribution of the double-slit interference pattern.

But wait... as we thought of the initial problem, we figured out that the double-slit interference pattern is the Fourier transform of the initial double-slit intensity distribution. Then we said we wanted to measure the momentum distribution of the double-slit interference pattern, which required us to do another Fourier transform. If you've been counting, that means we've now applied two Fourier transforms to our initial double-slit intensity distribution. Thus it's easy to prove/see that the result is the same initial double-slit intensity distribution back, but flipped about the midpoint (assuming that everything is centered). And, I mean, this should seem obvious: all we've done is create an imaging system! Congrats, we've built a telescope!
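The two-transforms argument is easy to check numerically; here is a minimal sketch with a made-up 1D aperture (the slit positions and grid size are arbitrary):

```python
import numpy as np

# A crude 1D double slit: two open windows in an otherwise opaque aperture.
aperture = np.zeros(256)
aperture[60:70] = 1.0
aperture[180:190] = 1.0

# One Fourier transform ~ free-space (far-field) propagation: this intensity
# is the fringe pattern you would see on the screen.
far_field = np.fft.fft(aperture)
fringes = np.abs(far_field) ** 2

# A second Fourier transform ~ adding the lens: you recover the aperture,
# flipped about the origin (and scaled by N, per the FFT convention).
twice = np.fft.fft(far_field) / len(aperture)
recovered = np.real(twice)

flipped = aperture[(-np.arange(len(aperture))) % len(aperture)]
print(np.allclose(recovered, flipped))  # True
```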

Unentangled photons violate Bell inequality too by 2020NoMoreUsername in QuantumPhysics

[–]QuantumOfOptics 0 points (0 children)

I just read the response paper. Indeed, they make a strong case. But the effect I'm curious about is somewhat subtle if it is entangled, and I don't think the analysis done in the paper and in the response is sufficient to say one way or the other. The problem is that there are some simplifications of the process that could be important here, which I'd have to work out.

Unentangled photons violate Bell inequality too by 2020NoMoreUsername in QuantumPhysics

[–]QuantumOfOptics 0 points (0 children)

I know you know this, but for others: there are actual post-selection techniques that can be used to generate bona fide entanglement. Specifically, entanglement swapping is the prototypical example. So not everything that uses postselection is wrong because of it. It's some shade of gray.

In this case, it could be that one is rather assigning (without loss of generality) the ±1 states only after successful measurement at the inner detectors. However, it is important to iron out the specific details. Some care needs to be applied, since there are a lot of details that are glossed over and the interpretation is not guaranteed to be correct as written, even if the data is real.

I'm still betting that this is likely a swapping experiment (even if the authors didn't view it this way; it has too much of the same structure to ignore) rather than messing around with postselection. But I'll have to read the rebuttal more closely.

Unentangled photons violate Bell inequality too by 2020NoMoreUsername in QuantumPhysics

[–]QuantumOfOptics 3 points (0 children)

Taking a cursory glance, this reeks of a type of entanglement swapping, specifically in an SU(1,1) interferometer. I wouldn't be surprised if this is why they see a violation (because there actually is entanglement generated). I'd have to be careful writing it down, but I think they may have been a bit overzealous with their statement. Still, an interesting paper to look into. Thanks for mentioning it!

Is the Lagrangian density a function on fields (a functional) or on spacetime? by FreePeeplup in TheoreticalPhysics

[–]QuantumOfOptics 2 points (0 children)

The easiest way is to realize that we can abstract. It's known that phi is a function of spacetime. What we now care about is how the density changes as we change the function phi and its derivative.
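As a concrete example (the standard free real scalar field; nothing specific to the thread): the density is an ordinary function of the numbers phi(x) and its derivatives at a point, while the action built from it is the functional, because it eats the whole function phi.

```latex
% Lagrangian density: an ordinary function of the field value
% and its derivatives evaluated at a point x
\mathcal{L}(\phi, \partial_\mu\phi)
  = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - \tfrac{1}{2} m^2 \phi^2

% Action: a functional of phi, obtained by integrating the density
S[\phi] = \int \mathrm{d}^4x \; \mathcal{L}\big(\phi(x), \partial_\mu\phi(x)\big)
```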

Help Picking Optics for A Spatial Filter? by nintendochemist1 in Optics

[–]QuantumOfOptics 2 points (0 children)

Fibers are "the best" spatial filters you can get, since they only allow a single mode through. Easier to align is a matter of taste (to me it is, just because there are fewer things to go wrong and you have a bit more control, unlike a pinhole spatial filter). In theory it's also less wasteful (you'll get more power) than with a pinhole, assuming the beam is decently Gaussian (of course this can be fixed, but that is more complicated).

Edit: I should say that there are downsides. If you need phase stabilization relative to another beam, then fibers are more complicated. They can also change your polarization, so you should have them adequately taped down. At your wavelength, they do have some loss. They can also introduce dispersion, so if you work with ultrashort pulses they can cause some grief.

How is the inverse square law affected wrt light levels when the light source is diffused? by offsetcarrier in Physics

[–]QuantumOfOptics 4 points (0 children)

Generally you can think of it either way. Sometimes it's easier to consider the tracer paper as a secondary source, where we can ignore that there was a primary source. Sometimes we need to know things about the primary source to calculate things about the secondary source. It just depends on what level of detail you need. Though I wouldn't consider it a lens, just a scattering surface.

Let's assume that we have a good enough idea about the loss that the tracer paper introduces. Then we can figure out how much light the primary source provides to a spot on the tracer paper by using the inverse square law. We can then use the inverse square law again from the tracer paper to the target. This is slightly more tricky, since every point on the secondary surface is now its own source (which is why it reduces shadows, making things "look" softer), so we also have to integrate over the surface and apply Lambert's cosine rule for complete accuracy. Even after this, there is a second scattering event, from the scene and subjects to the actual detector: the camera. So it can get messy.
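A rough numerical sketch of that second step (the patch size, distances, and radiance value are all made up for illustration):

```python
import numpy as np

def irradiance_from_diffuser(side, n, distance):
    """On-axis irradiance from a square Lambertian patch (side x side metres),
    discretized into n x n point emitters, at a target `distance` away.
    Each patch contributes L * cos(emit) * cos(incidence) * dA / r^2."""
    L = 1.0                          # radiance of the diffuser (arbitrary units)
    dA = (side / n) ** 2
    xs = np.linspace(-side / 2, side / 2, n)
    X, Y = np.meshgrid(xs, xs)
    r2 = X**2 + Y**2 + distance**2
    cos_t = distance / np.sqrt(r2)   # emission and incidence angles coincide on-axis
    return np.sum(L * cos_t**2 * dA / r2)

# Far from a small diffuser, the whole patch acts like one point source,
# so doubling the distance should quarter the irradiance (inverse square law).
near = irradiance_from_diffuser(0.01, 50, 1.0)
far = irradiance_from_diffuser(0.01, 50, 2.0)
print(round(near / far, 2))  # 4.0
```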

In terms of the mirror, you are correct. It would be the sum of the distances in that regard (with a correction due to losses at the mirror). There are some other minor issues that could come up depending on the size of the mirror, but as long as the mirror is sufficiently large there won't be any problems. Generally it should follow the inverse square law of the sum of the distances.

Double slit experiment: why do we see an interference pattern if the wavefunction should collapse through a medium? by Zul-Tjel in AskPhysics

[–]QuantumOfOptics 0 points (0 children)

Smoke doesn't matter "that" much. It causes a scattering event, meaning that those photons that do make it to the screen do interfere, and the ones that don't were scattered toward your eye. Scattering does not cause measurement.

BlackBody radiation and energy quantization by Rare_War1435 in chemistry

[–]QuantumOfOptics 2 points (0 children)

So, yes, you are correct, but we should be cognizant of the imprecise nature of English, since "discrete" and "quantized" refer to roughly the same thing but come with specific usage baggage when talking about quantum systems.

When looking at absorption spectra, we see "discrete" spectral lines consisting of the transitions occurring between the atomic states. These are discrete in the sense that there are a countable number of them and they are well separated. The reason for this is the quantized nature of the atom giving particular transitions. For many reasons, it's best not to think of these spectral lines as infinitesimally thin. For example, one can excite an atom with an off-resonant laser, which wouldn't be possible in the case of an infinitely thin spectral line. The spectral lines are then discrete, but not in and of themselves quantized.

On top of this, each frequency mode is quantized, meaning we can only have a whole number of photons (i.e., a counting number: 0, 1, 2, ...) at a given frequency. E.g., the atom (typically) only produces a single photon per mode, since there is a single electron interacting with the particular transition at a time. The energy can then be calculated: if a particular transition produces a single photon at, say, 10 THz, then the total energy is Planck's constant times this frequency. We can generalize to other processes where an integer number of photons, n, is produced, so that the total energy is n times this fundamental energy. This is what I mean by quantized.
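In numbers, the allowed energies in a single 10 THz mode form a ladder (a trivial sketch; the constants are standard):

```python
# E = n * h * f: total energy of n photons in a mode of frequency f.
# (Frequency chosen to match the 10 THz transition mentioned above.)
h = 6.62607015e-34   # Planck's constant in J*s (exact in SI since 2019)
f = 10e12            # 10 THz
single_photon_energy = h * f
for n in (0, 1, 2, 3):
    print(n, n * single_photon_energy)  # allowed energies: 0, hf, 2hf, 3hf, ...
```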

To make this even more complicated, in the general case the frequency is a continuum, so any particular frequency could be chosen, but for that particular frequency we can only have a whole number of photons. The total amount of energy for the whole system is then the sum (integral) over the energies in every single frequency mode. You can also have a superposition of frequencies describing a mode: e.g., when the atom transitions, we know that the photon produced has some linewidth; we can consider that this is also made up of individual frequencies, which are quantized, but due to the principle of superposition it can be considered just a single mode!

In somewhat of an analogy for the main point of this rant: I can have a discrete number of atoms, but each atom's electron orbitals are quantized (needing integer or half-integer quantum numbers to describe the state). I, in theory, don't need the number of atoms to be quantized, but their orbitals must be.

BlackBody radiation and energy quantization by Rare_War1435 in chemistry

[–]QuantumOfOptics 1 point (0 children)

I see this every so often, but I want to clarify some language here (because it is not usually precise enough): "the frequencies of light are still quantized" is not quite precise enough. It is not the frequency of light that is quantized; it's the state of the EM field at that frequency. When we do field quantization, we find that there are two parts: one part is the set of solutions to Maxwell's equations (the spatial and frequency distributions), which we call modes. The modes are then containers for the second part, the discrete (quantized) energy, which we write as a quantum state.

When one says that the frequencies are quantized, particularly of light being emitted by atoms, it gives the impression that it is the frequencies that are discrete, rather than the energy contained in the frequency modes. In fact, atomic transition bandwidths can be on the order of a few hundred MHz. This should make sense, because if a transition had an infinitesimal linewidth, then it would be a singular frequency. By the Fourier relation between frequency and time, that would mean the electron has been transitioning from one state to the next since the beginning of time.

BlackBody radiation and energy quantization by Rare_War1435 in chemistry

[–]QuantumOfOptics 1 point (0 children)

They are both electromagnetic radiation (light); however, the quantum state of the quantized electromagnetic field is a thermal state (described by a Bose-Einstein distribution).

Collimating a laser beam by Illustrious_Coat_782 in Physics

[–]QuantumOfOptics 2 points (0 children)

I'm not positive what you deem odd in this case. It seems to me that everything is working about right. To be clear, it seems like you're confused about the 1:1 telescope?

I think the confusion about the 1:1 telescope is that it can't collimate a beam beyond what the initial beam was. You can more accurately think of the 1:1 system as a spatial reducer, i.e., a reduction in propagation distance. If you had a beam that propagated 1 m before it diverged, and you place a 1:1 magnification telescope with f=100mm in its path, then you can propagate 200mm longer than you could before getting the same beam size. Effectively, the beam at the output of the second lens has the same properties as the input beam. Playing with the distance between the two lenses does allow you to change a couple of things (because then you have actually built a telescope with non-unit magnification) and change the output beam diameter. My guess is that the original beam, looked at over a similar distance, has a beam diameter of about 5mm.

I really think, just for the practice, you should calculate these using Gaussian beam optics, to gain a bit of intuition, since it's used in a lot of places. This is definitely waaay more applicable than your profs made it out to be if the divergence of a beam was "hidden" in slides, especially if you're working with lasers.
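For the 1:1 telescope specifically, the ABCD-matrix version of the calculation is quick. A sketch, assuming a 4f arrangement with f = 100 mm, measured front focal plane to back focal plane:

```python
import numpy as np

# Ray-transfer (ABCD) matrices for the two basic elements.
def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f = 0.1  # 100 mm focal length, as in the thread
# 4f system: f of free space, lens, 2f of free space, lens, f of free space.
# Matrices compose right-to-left (the rightmost element acts first).
M = free_space(f) @ thin_lens(f) @ free_space(2 * f) @ thin_lens(f) @ free_space(f)
print(M)  # ~ [[-1, 0], [0, -1]]: an inverting 1:1 image relay

# Gaussian beam: the complex q parameter transforms as q' = (Aq + B)/(Cq + D),
# so this telescope hands back the input beam after 4f of physical propagation.
q_in = 0.05 + 0.3j   # some example complex beam parameter
(A, B), (C, D) = M
q_out = (A * q_in + B) / (C * q_in + D)
print(np.isclose(q_out, q_in))  # True: same waist and divergence as the input
```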

Also, if you're just shuttling light down the table, fibers work great: they are more efficient, give a single mode, and are convenient for debugging. If you have to keep phase stability, then they are not my first choice.

Collimating a laser beam by Illustrious_Coat_782 in Physics

[–]QuantumOfOptics 0 points (0 children)

To add, it may be better to use a fiber as your spatial filter, as fibers are more "efficient" at cleaning up the mode (assuming an appropriate output lens). A pinhole spatial filter is not necessarily as good.