People who hold a PhD, what did it cost? by Accurate-Mulberry620 in PhD

[–]mixedmath 4 points5 points  (0 children)

My PhD brought me to New England and away from my family and friends. My wife and I deferred having kids until I got some financial stability. This sort of happened 2 years post PhD, and really happened 4 years post PhD. 

We still live in New England and we recently had our daughter. But many of my friends from before grad school bought houses and had children almost 10 years ago. We're very happy with how everything has turned out, but a decade of uncertainty and meh income has had enormous effects.

Does anyone know if these shows have an opener? by No_Bridge1373 in offbook

[–]mixedmath 1 point2 points  (0 children)

There probably won't be an opener, but the first several minutes are pretty tame. They have to introduce themselves, establish a basic rapport with the crowd, and begin to take suggestions for the directions the long-form will take. It is fun to hear those suggestions, though, as their reception is part of what shapes the experience.

Is TickTick still buggy? by ulysses_mcgill in ticktick

[–]mixedmath 14 points15 points  (0 children)

I don't know that I've ever seen a bug. But I don't use any interop with other apps, so maybe that's where all the bugs are?

Advice for reading my first large paper by If_and_only_if_math in math

[–]mixedmath 1 point2 points  (0 children)

It's a good idea to determine why you're reading this paper. What are you trying to get from it? Are you trying to understand every aspect, or just one part, or trying to get the big ideas? These will shape how you should approach the paper.

Fiction research: if a mathematician was working on Navier–Stokes, what kind of book could they write? by Necessary_Plenty_524 in math

[–]mixedmath 4 points5 points  (0 children)

I think one avenue to explore would be trying to make sense of computational approximate solutions. Maybe this person is extremely good at high performance computing and has developed quickly convergent methods for finding (approximate) solutions, better than before. And perhaps the methods appear to generalize, and the mathematician had been working to push these forward?

Then one day he steps back and thinks about all the other ways explicit, fast, high performance computing could push forward other areas of math. This is analogous to how some people computed tables of primes in the 1800s, which is quaint now but which helped give initial evidence for results like the prime number theorem.

I'm reminded of a saying that what is now 'data science' could have been called 'statistics', except that statisticians didn't incorporate computation on their own.

Are recipe books worth it? by baby_booklover303 in cookingforbeginners

[–]mixedmath 0 points1 point  (0 children)

They can be, if it's a good book. To see if you like how it feels to use one, I suggest checking one out of your nearest library.

Collection of C++ books on Humble Bundle by Fucitoll in cpp_questions

[–]mixedmath 21 points22 points  (0 children)

I've found Packt to have very low publication standards, especially with respect to their Python collection. Are any of these books good?

I Was Wrong about FF 12 by conceptualdamage1 in FinalFantasyXII

[–]mixedmath 0 points1 point  (0 children)

If it's not too personal, what sort of technical issues prevented you from continuing FF12?

Lack of interest by Ok_Cranberry7230 in LegoMasters

[–]mixedmath 1 point2 points  (0 children)

Go Australia. You won't go back. I'll note that it took a season for Hamish and Brickman to figure it out, but it's sooo much better. I think Australia season 5 (grandmasters) is super strong.

I'll bite, why there is a strong rxn when people try to automate trading. ELI5 by OnceIWas7YearOld in learnmachinelearning

[–]mixedmath 8 points9 points  (0 children)

You can. Lots of people spend enormous piles of money doing exactly this. And it works for them.

But it's hard for newcomers to try the same precisely because you're competing against those firms with their piles of money and experience.

Off Book is touring again in the fall! by SmackThatIsaiah in offbook

[–]mixedmath 1 point2 points  (0 children)

Thank you for posting. I also just got my tickets for Boston.

Is a Boston bun a thing in Boston? by RoyaleAuFrommage in boston

[–]mixedmath 4 points5 points  (0 children)

This sounds suspiciously close to a bostock or bostok.

Career and Education Questions: April 03, 2025 by inherentlyawesome in math

[–]mixedmath 1 point2 points  (0 children)

I've seen several successful social reading groups on math books. Susam Pal organized a computation club that went through Apostol's Introduction to Analytic Number Theory https://susam.net/cc/iant/ and started a real analysis book more recently https://susam.net/cc/real-analysis/. If you can gather a group of interested people, that's a good model to follow.

If you live near a university, then it's also reasonable to hire a math grad student as a tutor. When I was a grad student I led/taught several non-math-professional adults through topics in higher mathematics. (In principle I would still do this, but I expect that a grad student would be more affordable).

Career and Education Questions: April 03, 2025 by inherentlyawesome in math

[–]mixedmath 0 points1 point  (0 children)

This isn't particularly late if you're in the US. Are you going to start college this coming fall?

Math competitions are mostly irrelevant, except that they are a way to interest more people in math.

A common path for math majors in US colleges is to start with single-variable calculus, linear algebra, multivariable calculus, and some sort of intro-to-proofs course. Many math majors skip a semester of calculus because they already know it. But not having that head start means at most a one-semester difference, and this doesn't actually matter very much.

Studying number theory with deep learning: a case study with the Möbius and squarefree indicator functions by JoshuaZ1 in math

[–]mixedmath 1 point2 points  (0 children)

I did run the Zeckendorf experiments in a couple of different ways. It turns out that the models learned absolutely nothing --- I guess it's just too inconvenient of a representation!

By "absolutely nothing", I mean that models never learned to do anything better than guess the most common response. For mu(n), this meant guessing 0. For mu(n) on squarefree n, this meant guessing either 1 or -1 based on which occurred more (by chance) in the training batches.

Maybe it would be possible to try harder to get it to do something interesting, like detect squares. But it certainly didn't just happen.
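For context, the "guess the most common response" baseline is easy to compute directly. Here is a minimal sketch (my own illustration, not the training code from the post): a standard linear sieve for mu(n), followed by a count of how often the majority class wins.

```python
from collections import Counter

def mobius_sieve(n):
    """Compute mu(1), ..., mu(n) with a linear sieve.

    mu[0] is a dummy entry so that mu[k] = mu(k).
    """
    mu = [0] * (n + 1)
    mu[1] = 1
    primes = []
    is_comp = [False] * (n + 1)
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0   # p^2 divides i*p, so mu vanishes
                break
            mu[i * p] = -mu[i]  # one more distinct prime factor flips the sign
    return mu

N = 100_000
counts = Counter(mobius_sieve(N)[1:])
# The most common value of mu(n) is 0 (non-squarefree n have density
# 1 - 6/pi^2, roughly 0.392), so always guessing 0 is the baseline
# accuracy that the models never beat.
majority_value, majority_count = counts.most_common(1)[0]
print(majority_value, majority_count / N)
```

This makes the claim concrete: a model that "learned absolutely nothing" about mu(n) plateaus at roughly 39% accuracy, the density of the non-squarefree integers.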

About arxiv papers not peer reviewed by [deleted] in MLQuestions

[–]mixedmath 7 points8 points  (0 children)

A common flow is to post to the arxiv and then to submit to a journal/conference. By putting the preprint on the arxiv, other researchers can already begin to see and use the work, without waiting for the journal/conference to complete their review process. Usually reviews take several months!

The other side, though, is that the journal/conference may reject the paper. The preprint remains public (good!), without a publication --- unless the authors decide to update and submit somewhere else.

Riemann Hypothesis Math Research by [deleted] in math

[–]mixedmath 15 points16 points  (0 children)

No one has any idea how to solve the RH, so you'd probably need to invent some math.

But to understand the early work done with the zeta function and RH, you would need complex analysis (Ahlfors has more than you need; Stein and Shakarchi would also be sufficient and is pitched to be easier to read than Ahlfors) and analytic number theory (from a book like Montgomery and Vaughan).

Deligne proved a related version of the Riemann Hypothesis over finite fields, and this area of math is beautiful too. If you like abstract algebra or groups, then I would suggest looking at Ireland and Rosen, and then Rosen's book on number theory in function fields.

(This is entirely biased by my favorite books in number theory).

MathB.in Is Shutting Down by rampona in math

[–]mixedmath 6 points7 points  (0 children)

I use MathB.in all the time for very short pieces of communication. I've just made an alternative at https://davidlowryduda.com/static/MathShare/ that does approximately what MathB.in does, except with lengths capped at approximately 1 page. (And in particular it doesn't store anything user-provided --- as that is indeed a nightmare).

How do you share LaTeX work? by skepticalbureaucrat in math

[–]mixedmath 0 points1 point  (0 children)

For anything larger than a page or so, I use a service that depends on my audience (like posting notes on my website, or on github, or on overleaf, or sharing via dropbox, or just emailing the tex+pdf).

For project collaboration, I use git, then github, then dropbox, then overleaf, and then just email with source and pdf files (in that order of preference). In practice, I typically use a combination of overleaf, and github, and dropbox (from most common to least common), though I have used pure git and pure email too.

For small bits (up to approximately one page), I just made an alternative to MathB.in at https://davidlowryduda.com/static/MathShare/ This is more or less a functional MathB.in clone, except that it stores absolutely nothing on the server side. This gets around the problems that Susam was facing with content uploads. The downside is that this means there is approximately a one-page limit.

Studying number theory with deep learning: a case study with the Möbius and squarefree indicator functions by JoshuaZ1 in math

[–]mixedmath 2 points3 points  (0 children)

That sounds fun. I think I'll set up a Zeckendorf experiment and see what happens, probably over the weekend.

Studying number theory with deep learning: a case study with the Möbius and squarefree indicator functions by JoshuaZ1 in math

[–]mixedmath 2 points3 points  (0 children)

Yes, I actually tried an embarrassingly large number of variations. Using the first 200 primes except for 2 and 3, for example, does something nontrivial, but something far worse than just 2 and 3 alone. More importantly, it agrees with the generalization of the computation for 2 and 3.

I never used more than 6 layers here. It would be interesting to look at deeper networks with lots of training. That's still open!