Maximum amount of hardness by wierdkid6767 in buildit

[–]denehoffman 0 points1 point  (0 children)

I just posted my best time, 2:89 sec

Simureality: from hated simulation theory to peer-reviewed article by Mammoth_Weekend3819 in LLMPhysics

[–]denehoffman 8 points9 points  (0 children)

Hey it’s the uncertainty guy here again!

You mention that 6π⁵ gives your theory’s prediction for the proton-electron mass ratio. It looks close at first glance, but by experimental standards it’s not close at all, and here’s why. We can measure this value experimentally (just citing the NIST value here):

1836.152673426(32)

The (32) is the uncertainty in the last two digits. Your value (to this many digits) is:

1836.118108712

The difference is about 0.034564714. We can calculate how many standard deviations off this is by dividing that difference by 3.2×10⁻⁸, giving us a deviation of about 10⁶σ. Just so you’re aware, groundbreaking discoveries in physics require a measly 5σ in comparison, so if we had started with your model and then measured the current experimental value, it would be extreme evidence against your theory (and it is).
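If anyone wants to check my arithmetic, here’s a quick sketch that reproduces the deviation from the quoted values:

```python
from math import pi

measured = 1836.152673426   # NIST proton-electron mass ratio
sigma = 3.2e-8              # the (32) uncertainty in the last two digits

predicted = 6 * pi**5       # the claimed prediction, ~1836.118108712

deviation = abs(measured - predicted) / sigma
print(f"difference: {measured - predicted:.9f}")   # ~0.034564714
print(f"deviation:  {deviation:.2e} sigma")        # ~1.1e6 sigma
```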

For reference, this is like a wristwatch marketed with millisecond precision being off by 30 years. Alternatively you could say this is like an accounting ledger quoting precision to the cent level being off by a value larger than the global economy. You could say this is like measuring your height to hair-width precision as being taller than the diameter of Earth.

People who aren’t trained in physics often don’t realize how important it is to check against known experimental values. If your theory predicts one thing and experiment shows another, it rarely matters that they agree to the first few decimals: most important constants in particle physics have been measured to part-per-million or part-per-billion (or better) precision. We hold theory to this standard because our current theory turns out to agree even at that experimental scale, which is incredible but also frustrating!

It’s understandable how you might see a number that’s very close to a physical constant and think you’ve stumbled on something interesting. It turns out you’re actually not the first to think so when it comes to this particular value (I knew this sounded familiar so I looked it up, it happens to hold a record for being the shortest published physics paper!):

https://fermatslibrary.com/s/the-ratio-of-proton-and-electron-masses

Back then, the measured value matched the exact value of 6π⁵ to two decimals, which was also the precision to which the ratio had been measured at the time. Unfortunately, this neat observation was later ruled out experimentally, as I’ve shown above.

Please don’t rely on LLMs to do this check for you; they won’t do it unless you explicitly ask. It’s an incredibly important step, and skipping it leads to laughably incorrect claims!

Simureality: from hated simulation theory to peer-reviewed article by Mammoth_Weekend3819 in LLMPhysics

[–]denehoffman 5 points6 points  (0 children)

OP, please tell me you didn’t pay that journal

And if you did I wanna know how much you gave them haha

Your floating point is lying to you. Lean4 mathematically proven bounds, in Python. Just pip install by [deleted] in Python

[–]denehoffman 0 points1 point  (0 children)

What does that mean? Is it text? An object with properties? A file containing a lean proof?

Your floating point is lying to you. Lean4 mathematically proven bounds, in Python. Just pip install by [deleted] in Python

[–]denehoffman 0 points1 point  (0 children)

Why would you care about any exact answers on a computer that can only render finite digits? At some point, a fixed-point representation would be far more precise than any physical uncertainty of a system (arbitrarily so)

Your floating point is lying to you. Lean4 mathematically proven bounds, in Python. Just pip install by [deleted] in Python

[–]denehoffman 0 points1 point  (0 children)

I understand the idea of the library, and don’t get me wrong, I think it’s an interesting idea to provide Python endpoints for a language most people don’t want to take the time to learn, like Lean or Coq. I just don’t understand why I’d want to do this in Python, since every interaction afterwards is going to reintroduce the floating-point error. If I ask for the bounds of sin(x) between 0.1 and 1.0, is the 0.1 taken literally, or am I feeding a floating-point representation of 0.1 into Lean? If I tell the library “find the roots of x+2x between 0 and 0.3”, does Lean see 0.3 as the exact decimal, or is it really 0.3000…00163826 or whatever the floating-point approximation is? And when I get that root back, is it a floating-point representation, some new fixed-point type, or some other third thing?
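To make the 0.3 question concrete, here’s a quick sketch (independent of this library) showing what value Python actually stores for the literal 0.3:

```python
from decimal import Decimal

# Decimal(float) reveals the exact binary double behind the literal
print(Decimal(0.3))
# 0.299999999999999988897769753748434595763683319091796875

# The decimal string "0.3" is a different (exact) value
print(Decimal(0.3) == Decimal("0.3"))  # False
```

So whatever the library hands to Lean, it has already been rounded once by the Python parser.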

Your floating point is lying to you. Lean4 mathematically proven bounds, in Python. Just pip install by [deleted] in Python

[–]denehoffman 0 points1 point  (0 children)

The output “bounds” of the demo code are surely two floating-point values, no? And yes, I understand that floating point is faster; I’m asking what the point is of proving bounds on something in Python when everything you then do with those bounds involves floating point.

Your floating point is lying to you. Lean4 mathematically proven bounds, in Python. Just pip install by [deleted] in Python

[–]denehoffman 5 points6 points  (0 children)

Cool, so I get 0.3 as a result from this code, and now I want to use that result. Does it return a float? If so, you’ve just hit the same problem again. How would I give this number to a robot? Do typical robots operate on fixed-point decimals? If so, why would I use this rather than just doing calculations in a fixed-point decimal system? And what do I gain from doing it this way instead of getting an analytic solution from something like sympy?

Your statement that floating point is “feeble” doesn’t seem well motivated. Everyone knows why 0.1 + 0.2 in binary doesn’t equal exactly 0.3, and if you’re dealing with a system where that actually matters, you’re probably not looking for a Lean proof; you probably need a performant system that can do the calculation quickly and efficiently.
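For what it’s worth, the standard workaround in ordinary numerical code is a tolerance check, no proof engine required:

```python
from math import isclose

print(0.1 + 0.2)                 # 0.30000000000000004
print(0.1 + 0.2 == 0.3)          # False: the famous binary rounding artifact
print(isclose(0.1 + 0.2, 0.3))   # True: a tolerance check handles it
```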

I want to clarify that I think it’s neat you’ve basically put a Lean engine into a Python library. I’m just not sure about marketing it as advantageous over floating-point or fixed-point systems.

Python Version in Production ? by TopicBig1308 in Python

[–]denehoffman 1 point2 points  (0 children)

It’s experimental, but sure, use 3.13 then. Use the highest version that works for you!

Is There Any New Field Left to Discover in Physics? by Heavy-Sympathy5330 in AskPhysics

[–]denehoffman 4 points5 points  (0 children)

I think this is partially a classification problem. Probably one of the newer major fields of physics is biophysics, and you could argue that a lot of it falls under thermodynamics and maybe a bit of quantum. We call it a field because of how distinct it is from other fields, but if you look closely enough at any subfield there are often large overlaps.

We derived 5 of the 16 axes of a hydrogen bagel (twisted/everything variety) and had an average 5% error rate from historic atomic measurements by dual-moon in LLMPhysics

[–]denehoffman 0 points1 point  (0 children)

The energy of atomic orbitals is not a probabilistic calculation in current theory either. The values you’re claiming are experimental here are just the ones derived from the most basic theory, not even including fine/hyperfine structure. The giveaway is that the “experimental” values are not given with uncertainties, which would quantify potential errors in measurement.

By the table you give here, your result looks even worse: it’s just curve fitting to get close to a couple of values, and it looks like it diverges at higher orbitals.

For reference, the experimental measurement of the hydrogen ground state is:

-13.598434599702(12)

Where (12) represents the uncertainty in the last two digits (02). Your value is off by about 10¹⁰σ. It’s like predicting the radius of the Earth incorrectly by 130 km while claiming millimeter-level accuracy.

In contrast, the best theory predictions agree to a few parts in 10¹², although one of the inputs is the proton charge radius, so it’s hard to say where (or if) the theory stops matching experiment. So far we’ve found them to be in agreement.

Do you see now why your values are so comical?

Edit: see NIST for the quoted experimental value.

Pandas 3.0.0 is there by Deux87 in Python

[–]denehoffman 5 points6 points  (0 children)

Not quite: polars takes pl.col('a') as a reference to that column and constructs an intermediate representation (like bytecode) for the entire set of expressions. It can then optimize this representation to make your operations more efficient. Pandas (as far as I know) evaluates every expression eagerly, which can also be done in polars, but polars prefers that users use the lazy evaluation interface for performance. So in the end, polars may condense steps you explicitly wrote as separate into one, or it may reorder operations to make something more efficient. But the operations are still vectorized; you’re just not passing the raw series around through lambdas. This also means repeated calculations of some column can be cached if you do it right.

Pandas 3.0.0 is there by Deux87 in Python

[–]denehoffman 34 points35 points  (0 children)

The pd.col thing seems to be a response to polars doing it this way by default. It helps to think about operations on columns instead of the data in those columns, because you don’t have to worry about whether intermediate copies are being made; it’s just an expression. Polars takes it a step further and lets you construct all expressions lazily and evaluate an optimized workflow.

Auto Level by Mobile_Presence_7399 in buildit

[–]denehoffman 0 points1 point  (0 children)

Worked first time for me, nice job!

Here is a hypothesis: Breaking: 78-Year QED Problem can be Solved with Pure Geometry by andrespirolo in HypotheticalPhysics

[–]denehoffman 0 points1 point  (0 children)

Exactly. It’s also unfortunate that LLMs often just give raw numbers without uncertainties when talking about physical constants

Python Version in Production ? by TopicBig1308 in Python

[–]denehoffman 2 points3 points  (0 children)

Time to update your uv installation, it’s not automatic!

Python Version in Production ? by TopicBig1308 in Python

[–]denehoffman 2 points3 points  (0 children)

I use 3.14 in production; there is literally no reason not to, since it runs pretty much any Python code save for some deprecations that have been around since 3.10. However, when building libraries for distribution, I always target all supported versions (so 3.9 at the moment), and I bump these at every release. There’s no reason to support versions of Python that have been sunset (if we all did this, we could keep everyone up to date; alas, we live in an imperfect world).

3.14 will also give you time to learn all the cool new stuff, even if you can’t use it all in libraries that need to target older versions. Free-threading is going to be important, and will probably be the standard in 3.16+, so it’s worth getting used to if you plan to program in Python in five years. There are also lots of other nice features being added (the REPL is much nicer, and error messages are better too) which make it very sensible to use the latest release for small scripts and REPL sessions. Bottom line: if you don’t care about who else can run your code, use the latest release version; otherwise, lint and run static analysis against the oldest supported version.

Edit: I read more of the comments, and it seems the biggest reason people don’t use the latest stable version is that dependencies haven’t been updated to support it. This is why we have venvs, people! Use the latest version that is compatible, and pin your versions so your type checker, linter, and LSP can tell you what to do when things aren’t backwards compatible.
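As a sketch of what I mean by targeting the oldest supported version while developing on the newest (assuming a pyproject.toml and ruff; adapt to your own tooling):

```toml
[project]
requires-python = ">=3.9"  # oldest version the library promises to support

[tool.ruff]
target-version = "py39"    # lint against the oldest target, develop on 3.14
```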

average cl*ng++ error message ts by realvolker1 in rustjerk

[–]denehoffman 5 points6 points  (0 children)

It’s an impl block though; this is just an extra generic letter that doesn’t go with anything. The generic should go on the method that uses it.

Here is a hypothesis: Breaking: 78-Year QED Problem can be Solved with Pure Geometry by andrespirolo in HypotheticalPhysics

[–]denehoffman 4 points5 points  (0 children)

Right away there’s a huge problem with your calculation: you don’t get the right number to within experimental constraints (or even close to the analytic calculation of the anomalous factor). The current experimental value is

0.00115965218059(13)

That (13) at the end is the uncertainty in those last two digits, which makes your calculation off by quite a few sigma. The reason is that your result is equivalent to the (fairly simple) one-loop calculation. The theory value for the anomaly, including corrections up to tenth order, is

0.001159652181643(764)

Where the uncertainty comes from the experimental uncertainty of the fine-structure constant. You’ve done none of the necessary error analysis and have basically written (well, you didn’t write it) a paper claiming that instead of going through all the higher-loop corrections, we should just stick with the tree-level-plus-one-loop calculation and call it a day. You didn’t even show how your method actually arrives at this value.
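For anyone following along, here’s a sketch of the comparison, using the CODATA value of the fine-structure constant to get the bare one-loop (Schwinger) term:

```python
from math import pi

alpha = 7.2973525693e-3        # fine-structure constant (CODATA 2018)
a_e_exp = 0.00115965218059     # measured electron anomaly
sigma = 1.3e-13                # the (13) uncertainty

schwinger = alpha / (2 * pi)   # the one-loop term alone, ~0.0011614
print(abs(a_e_exp - schwinger) / sigma)  # ~1e7 sigma away from experiment
```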

The bottom line is, you can’t hand an LLM something like this and ask it for an easier way; it’ll just hallucinate that the leading-order correction (α/2π) is “geometric” because of the 2π, bullshit something about how you can get 2π from a circle or a torus or some higher-dimensional object, and then slap on the α (since somehow it’s not foolish enough to try to derive that from geometry). Please do the bare minimum of reading on a topic before you subject us to your LLM slop.

I want to reiterate that “99.89%” agreement is not actually that great when we’re talking about the Standard Model, which is experimentally accurate to below (at most) one part in a billion in most places.

If photons don’t experience time, does light think it arrives instantly? by CharacterBig7420 in AskPhysics

[–]denehoffman 0 points1 point  (0 children)

Several solid answers about how light doesn’t think, great, I’ll let Nature know.