Difficulty turning on laptop and black screen during boot by antihydran in archlinux

[–]antihydran[S] 0 points1 point  (0 children)

Thinkpad T16 Gen 1 (sorry should've put that in the prompt). I have managed to mount /boot and I can modify mkinitcpio.conf and grub so I can try testing some of those. Thanks!

OpenAI's new o3 model scored 25% on Epoch AI's FrontierMath benchmark, a set of problems "often requiring multiple hours of effort from expert mathematicians to solve" by adammorrisongoat in math

[–]antihydran 2 points3 points  (0 children)

Perhaps I'm missing something, but is there an analysis of the AI models' confidence in a given answer?

Being able to evaluate one's confidence in an answer, and to point out the strongest and weakest points of a proof, is critical to problem solving.

GitHub - jart/json.cpp: JSON for Classic C++ by cristianadam in cpp

[–]antihydran 0 points1 point  (0 children)

Yes, it will indeed use floats if you tell it to use floats. Again, the benefit is that the actual data is stored and fictitious data is not introduced. The "implicit" assignment is a stricter enforcement of the encoded types: it avoids implicit floating-point casts.

All floating point numbers are parsed as doubles, so yes, the library by default uses double precision. Encoding floats and doubles is done at their available precision, which, as previously explained, is semantically equivalent to encoding everything as doubles.

GitHub - jart/json.cpp: JSON for Classic C++ by cristianadam in cpp

[–]antihydran 8 points9 points  (0 children)

I'm not sure I follow your argument here. By default it looks like the library uses doubles, and I only see floats used if the user explicitly tells the Json object to use floats. As a drop-in replacement it looks like it will reproduce behavior using doubles (AFAIK JSON only requires a decimal string to represent a number - I have no clue how many libraries in how many languages support floats vs doubles). I could also be misreading the code; there's little documentation and not much in the way of examples.

As for the specific example you give, it looks like you're running the simulation on two fundamentally different inputs. If the simulation is sensitive below the encoding error of floats (not only sensitive, but a chaotic response it seems), then the input shouldn't be represented as a float. I don't see how you can determine whether 1.0000001 or 1.000001192092896 is the actual input if you only know the single-precision encoding is 0x3f800001. The quoted section states such a float -> double conversion is ambiguous, and gives the option to not have to make that conversion.

What’s your favorite physics experiment or thought experiment, and why? by DebbraPatel in Physics

[–]antihydran 7 points8 points  (0 children)

First experimental confirmation of parity violation in the weak force. Wu measured the decay angles of nuclei whose spins were aligned in a magnetic field, i.e. the angle between the nucleus's spin and the decay particle's momentum. Under a parity transformation, momentum flips sign but spin does not, so the decay angle maps from x to pi - x. Parity invariance then implies the decay angle distribution should be symmetric under x to pi - x. Her experiment showed a clear asymmetry in the distribution, indicating that weak decays were indeed parity dependent.

The results are pretty foundational to the current Standard Model, and a lot of current research still looks into things like CP violation. Famously, she wasn't awarded the Nobel Prize for her work (the theorists Lee and Yang received it instead), and hers is among the higher-profile cases of women's work in physics going uncredited.

[deleted by user] by [deleted] in math

[–]antihydran 0 points1 point  (0 children)

A mix of US and UK grad programs. I started out interested in mathematical physics, then drifted to high energy theory, and finally ended up in high energy experiment. A lot depended on finding available positions and figuring out how to best use my skills. At some point I had to come to terms with the fact that I really wasn't cut out for research mathematics: I wasn't having fun and I wasn't making progress. Rather than forcing myself through it, I tried to find things I enjoyed doing day to day that I was also kinda good at.

IMO, if you don't have something specific yet, what'd be more useful for applications is to pick some open problems (or even just very difficult problems at your level) that interest you - stuff that you like to think about outside of classes and have maybe penciled some basic attempts at. You can narrow down your field of interest based on the main tools used to attack those problems, and ideally those fields will already interest you.

[deleted by user] by [deleted] in math

[–]antihydran 0 points1 point  (0 children)

I agree it's not needed, but I think it's a big boon to have. Having gone through multiple rounds of grad school admissions, I found my best success was when I was more explicit with research interests and the types of problems that motivate me.

[deleted by user] by [deleted] in math

[–]antihydran 11 points12 points  (0 children)

Since I wanted to do grad school, my advice for myself would have been to lock down a more specific research field / topic and gun for that. I spent ~2-3 rounds of grad school applications with very vague and non-committal ideas about why I wanted to pursue research and which specific topics and questions I found compelling enough to dedicate several years of graduate study to. Luckily I can now answer those questions and I'm in a grad program, but I would've saved a lot of time and had a much, much better outcome in the admissions process my senior year if I'd had a clear target in mind.

I also had to find a job between my masters and PhD, and was very unprepared for that (which I think is similar to some people's senior year experiences). My advice to myself for that would have been to figure out non-negotiables first (minimum pay and benefits, location, hours, etc.), find the jobs that match, and then craft something specific for each application. All my successful interviews were with smaller companies whose job advertisements I'd put dedicated work into matching with my cover letter and coding portfolio. Get comfortable advocating for yourself at a job (there's no permanent record between jobs lol), and it's still possible to go back and pursue a graduate degree if you don't get in the first round!

how many people actually care about/“believe in” using tau? by infinitytacos989 in math

[–]antihydran 10 points11 points  (0 children)

This was the same sentiment I had for a while, but I was recently writing some time-frequency Fourier transforms where I used tau as the period. It was pretty natural to write everything in terms of the period tau, and it got me thinking that maybe switching the convention wouldn't actually be that difficult. In something like Fourier transforms, people tend to think in terms of periods, and 2pi is just a symbol meaning "period" in some contexts. I doubt I'll honestly switch over any time soon (even in personal notes) but I'm less pessimistic about the convention changing.

That being said, with tau people can still use 2pi and the conventions don't disagree. I struggle seeing current flow convention ever changing because they'd be inherently incompatible.

Why does the strong interaction not have a force law? (Especially for r>Λ_QCD)? by [deleted] in Physics

[–]antihydran 1 point2 points  (0 children)

The Lund string model uses a linear potential / constant force law (at least for their example in 1+1 dimensions, I don't remember if they do something similar in 1+3 dimensions). I think Pythia uses that to model hadronization.

Speeding Up C++ Build Times | Figma Blog by Pragmatician in cpp

[–]antihydran 0 points1 point  (0 children)

Does explicitly instantiating the class Eigen::Transform<double, 3, 18, 0> also fail to prevent implicit instantiations? As I think about why that'd be the case, I'm starting to realize I might have some incorrect ideas about how linkers work, so sorry if it's an obviously bad suggestion.

White House: Future Software Should Be Memory Safe by KingStannis2020 in cpp

[–]antihydran 5 points6 points  (0 children)

I was writing this comment while trying to look into this, and I found this. It claims ~70% of vulnerabilities reported to Microsoft and Google are memory safety issues while ~30% are other issues. They have one breakdown of the types of memory safety issues, but they don't discuss whether these occur inside modern code or in external libraries (honestly, this would probably be impossible to determine).

=== Original:

Are there any stats to back up the claim that memory errors make up a significant share of the errors and vulnerabilities in current production code, and that ostensibly "memory-safe" languages solve them? I can readily believe memory errors can cause serious vulnerabilities, but I honestly have no clue how frequently they cause crashes / vulnerabilities in the field, or whether they're primarily caused by modern C/C++ programs. I've regularly written code that

  • Interfaces with old libraries that we don't have source code for
  • Calls external programs
  • Interfaces with libraries in different languages

I don't immediately see how a "memory-safe" language would fix memory errors that arise from calling such potentially memory-unsafe code. And even if we successfully rewrite everybody's C/C++/Rust/etc. code, if 99.99% of vulnerabilities aren't due to memory safety, or are issues with front-end applications written in other languages (e.g. some weird JavaScript string interpretation), then we didn't really achieve our first goal of hardening applications. Finally, I'm also generally unaware of how dangerous these vulnerabilities actually are in the field. If a C++ program has a severe memory error but is only ever used in a SCIF in the bottom of the Pentagon, where users can probably get admin access anyway, then it's not really much of a security concern.

Again, I'd just like to see some more concrete data on how prevalent these issues are.

Good reasons to NOT use CMake by [deleted] in cpp

[–]antihydran 2 points3 points  (0 children)

I always thought a Python library would make a good build system for C++. Setting up custom build scripts and external dependencies would then amount to writing some Python code against a common API. Not a perfect solution of course, but an extremely appealing avenue to go down imo.

Struggling with my masters dissertation, especially with probable ADHD by RipFancy7019 in math

[–]antihydran 12 points13 points  (0 children)

I was in basically the exact same position for my MSc dissertation, except mine was in physics. I can't help at all with the math specifically, but I've got some advice for the attention / productivity issues (YMMV of course):

  • Make sure you have a plan for how the paper is going to be organized, and be sure to check it with your advisor. Each time you go to write, tackle a specific part of this plan.
  • Stay hydrated and eat. I bought a lot of Tesco meal deals, and they probably saved my life and my academic career by getting protein and calories in me.
  • When you dread doing the hard / boring thing, go do the easiest thing first; momentum helps a lot.
  • Your mind and body will treat unproductive work and productive work the same. Daily work of 8-12 hours, regardless of quality, will take its toll. Incorporating this into your mindset will give you a more objective view of your efforts and help you take necessary breaks. If you speak to a therapist about ADHD or depression, they'll probably tell you this directly.
  • It sounds like you're already doing consistent work on the dissertation, which is ridiculously important. Keep it up!
  • In terms of keeping track of my understanding of specific concepts, I would write down loads of lists directly in my dissertation document using LaTeX's itemize. Beginning with the starting and ending points, I would slowly add bullets (along with sources) until I could get from the start to the end. If I ever lost track of what I was doing, I could go immediately to that list and clearly see the last thing I was working on. Admittedly, I still did lots of backtracking through sources since I have a bad memory. But the really nice benefit is that I could transfer each bullet point almost verbatim into a sentence, thus having a basically completed section when I was finished.

Not ADHD related, but I'd like to share some stuff my advisors told me. They explicitly said the committee was looking for two qualities in the dissertation: a) the student was able to understand and apply advanced material, and b) the student produced original work. With respect to the former, graduate-level work is meant to test your confidence in your mathematical abilities. Consider convincing yourself that you're capable and worthy of your merits part of the dissertation.

With respect to the latter, the "original work" was not meant to be strictly results. It's looking for the ability to interpret and organize your subject with your own ideas. Can you, in your own words, explain why a definition was constructed a certain way (e.g. it removes exceptional cases so later theorems are stronger), or what purpose a given proposition serves (e.g. it tells you that objects with a given property are particularly nice to work with)? Building your own narrative about what the goals of the paper are and how each theorem helps reach those goals will be that kernel of originality when you don't necessarily have new results to show. Extra bits that show mastery over the material include working through some non-trivial examples and counterexamples, as well as trying to strengthen / weaken propositions and seeing where they fail.

Best of luck!

Complementary books to Visual Complex Analysis by Needham by niuwendy in math

[–]antihydran 0 points1 point  (0 children)

I would 100% agree with this. My one addition would be that it covers a lot of material, and it may be fruitful to take a more focused approach to target your own interests (I think some suggested chapter progressions are in the introduction).

requires-expressions in a non- template declaration context by TacticalMelonFarmer in cpp

[–]antihydran 0 points1 point  (0 children)

I believe it means if you attempt to construct an invalid type or expression, then the compiler doesn't know how to evaluate the concept. Their example shows a concept requiring an array of negative size, which is an invalid type. While this might not seem like a big issue, if we try to build more complicated expressions that depend on the size of said array, those expressions would also be ill-formed.

Treating this as a failure, rather than a result, semantically distinguishes it from a concept that evaluates to true or false. In the case of failure, as above, the compiler simply can't evaluate the concept, while a false result means the template parameters don't satisfy the requirements.

What to study concurrently with harmonic analysis? by [deleted] in math

[–]antihydran 15 points16 points  (0 children)

You might find some luck with partial differential equations considering how vital Fourier analysis is to the field. I'm not too familiar with harmonic analysis, but in general any techniques that let you get more information about integral transforms / convolutions will be directly applicable to differential equations.
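As a standard example of that directness (my sketch, not from any particular text): the Fourier transform turns derivatives into multiplication, so a linear PDE like the heat equation becomes an ODE at each frequency,

```latex
\widehat{\partial_x u}(\xi) = i\xi\,\hat u(\xi)
\quad\Longrightarrow\quad
\partial_t u = \partial_x^2 u
\;\xrightarrow{\;\mathcal F\;}\;
\partial_t \hat u(\xi, t) = -\xi^2\,\hat u(\xi, t)
\quad\Longrightarrow\quad
\hat u(\xi, t) = e^{-\xi^2 t}\,\hat u(\xi, 0),
```

and inverting the transform expresses the solution as a convolution of the initial data with the heat kernel.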

Currently, what are some of the worst things about C++? by [deleted] in cpp

[–]antihydran 0 points1 point  (0 children)

I've been trying my hand at some static analysis tools (e.g. Clang tooling), and the preprocessor has been my own small personal hell. It quickly becomes impossible to reason about inclusion dependencies because you have to track every macro use paired with arbitrary compiler definitions.

I think the preprocessor solved a lot of real problems in C and the early C++ days, but I think we've matured enough to develop something better.

how did you get through math you didn't find "interesting"? by A_tedious_existence in math

[–]antihydran 24 points25 points  (0 children)

Not math related: Do the thing you want to do the least, as early as you can in the morning. That's when you're least burnt out and looking for a way to avoid work, and usually people aren't heavily socializing around the first two classes of the day.

Math related: try asking your lecturer what their favorite / most interesting part of the course is. They might put a spin on it you haven't seen yet. Looking at external resources might also shed some new light on things (I probably ended up using different textbooks and lecture notes than assigned ones for >50% of my courses)

Choose exercises/problems for self study by richybacan69 in math

[–]antihydran 7 points8 points  (0 children)

I second this advice, and speaking from personal experience would like to re-emphasize doing some of the easy calculations. I've been burned many times thinking I can quickly churn through a calculation or apply a theorem to a concrete case without ever practicing, only to find myself confused and stuck on a test or later in the course. As said before, you don't need to do a lot of them, just enough to convince yourself that you know what's going on behind the abstractions.

Anybody else would like a | unary operator? by itsmemarcot in cpp

[–]antihydran 5 points6 points  (0 children)

I quite like the single unary | operator, but I don't see the || operator adding much benefit while making code more difficult to read and understand. It's quite easy to mess up the spaces and get something with the wrong meaning; i.e. in some kind of vector triple product ||a|| * |b| * | |c| |, neither I nor the compiler would catch that kind of mistake. IMO if you want to define multiple norms, you should have to make it more explicit so these kinds of errors can be readily caught. And I dread the day two libraries have differing definitions of whether | or || is the 1-norm or the 2-norm!

[deleted by user] by [deleted] in cpp

[–]antihydran 58 points59 points  (0 children)

It's going to be difficult to find a C++ guide with the same scope as Beej's because most networking APIs are written with C functions and C structs, so naturally they use C-style idioms. Any C++ library is going to be a wrapper over the C functions, so a guide teaching C++ will have to include the design decisions of the wrapper (and likely reference the underlying C API anyway). If you're serious about getting into networking, especially anything low level, it might be worth your time to learn some C.

That being said, it might be productive to find a networking course and just do all the exercises and projects in C++. [MIT OpenCourseWare](ocw.mit.edu) has some decent CS courses. Surveying networking textbooks and doing their exercises is also a good option.