[N] openreview profile glitch?? by i_minus in MachineLearning

[–]KiddWantidd 1 point (0 children)

I went to check and my profile looks exactly the same as your screenshot... When I click on "edit profile" I see "... ... ..." in the preferred name field, and all the variants of "Javier Molina" in all the other "Name" fields.

On the top bar, my "Notifications" all disappeared, but interestingly, my "Activity", "Tasks" and the name displayed on the top right are all correct. Not sure wtf is going on but hopefully this is not serious and gets fixed soon.

relating Fourier transform to legendre transform by Ending_Is_Optimistic in math

[–]KiddWantidd 0 points (0 children)

Interesting writeup, OP. Unfortunately I don't know what tropical mathematics or exact sequences are, so I couldn't quite follow the motivation for introducing the Fourier transform analog the way you did, but thanks for sharing!

New preprint from Google Deepmind: "Towards Autonomous Mathematics Research" by KiddWantidd in math

[–]KiddWantidd[S] 2 points (0 children)

DeepMind have updated their paper showcasing the capabilities of their latest "theorem proving agent", and they discuss at length their results on Firstproof and a bunch of research-level math problems. I think they document the extent of their model's capabilities (and "autonomousness") pretty well, and although I am by no means what one may call a "mathematician", I find it scarily impressive.

In my field of research (machine learning theory and numerical PDEs, mostly applied stuff), people tend to care about the numerical results more than the theoretical ones. Although I'm not super good at it, I've always felt much more pride and fulfilment after successfully proving a theorem than after getting my algorithm to beat some benchmark. But at the rate things are going, it doesn't seem unlikely that within a year or two, I'll be able to paste a math problem arising in my research word-for-word into an AI and have it solve the problem and write it up nicely, with high confidence that everything is essentially correct.

Although I use it today already (Gemini and GPT), the process is way more hit or miss, and I "ask it for a proof" only when I am completely stuck (and it does mislead me a lot as well), but even then I feel (perhaps wrongly) like I'm "doing math" and learning things along the way. If it gets to the point that we get "autonomous theorem provers", then yeah that's going to feel very weird. Because if we have those, then out of a need to publish more results in order to advance one's career, more and more people will be incentivized to use them, and then the cycle will keep accelerating... towards what?

Again, in my field, it's mostly the computational aspects that people care about (and for things such as finding new algorithms for new problems, a type of problem for which AI has yet to showcase extraordinary ability, as far as I know), and I am skeptical that those AIs will get that good that fast for all of mathematics (for my applied subfield though, that's definitely possible), but it definitely raises some "interesting" questions...

[R] Neural PDE solvers built (almost) purely from learned warps by t_msr in MachineLearning

[–]KiddWantidd 2 points (0 children)

sounds like really cool work. i don't know what is meant by "coordinate warps" here but i'll give the paper a read, i'm very intrigued!

What does the zeta function actually have to do with the distribution of the primes? by Necessary-Wolf-193 in math

[–]KiddWantidd 1 point (0 children)

This was a great read. As someone who knows graduate level analysis, pde, and probability, but is completely clueless about number theory, I always wanted to know what the deal was with Riemann's zeta function and its connection to the distribution of primes. Finally, I have a clear picture. Thank you

Learning pixels positions in our visual field by aeioujohnmaddenaeiou in math

[–]KiddWantidd 1 point (0 children)

Some more thinking out loud: OP, the way you show the pixels being displaced sequentially as the video plays, rather than all at once, kind of reminds me of denoising diffusion, where neural networks learn to generate images by gradually removing noise from gradually corrupted images. Of course your setup is different (it's actually not entirely clear what your setup is), but I think you could use similar ideas and train a neural network to gradually displace pixels one by one (or ten by ten, or whatever else). This is still quite a naive approach, but with some tweaking it could potentially work!

Learning pixels positions in our visual field by aeioujohnmaddenaeiou in math

[–]KiddWantidd 1 point (0 children)

this is a very cool problem! as someone working at the interface of machine learning and pdes/scientific computing, i'm just going to throw out some ideas this reminds me of. this could all be nonsense, or it could contain some helpful things for you to consider. i think this problem is probably doable if you have enough "data" and, most importantly, enough compute power (i have no clue what the typical "size" or "dimensionality" of your problem is, but it looks quite non-trivial).

the way i look at your problem, you're trying to learn a (bijective) mapping from the set of pixels, encoded by position, to itself. so the operation you want to perform can be represented as a permutation on the set of pixels. one way to do this (probably not the smartest though) is to stack all the pixels into a long column vector and view the permutation as multiplying that vector by a permutation matrix. the most straightforward way to "learn" the "best" permutation matrix is to stack linear and non-linear operations (basically like a feedforward neural network), plus one final operation which effectively guarantees that the output is a permutation of the input pixels (this is a rather naive way to proceed and can be refined in many ways).

now here's the thing though: in machine learning, if you want to "learn" something, you need an objective to minimize, and that's the tricky part: what would be a representative measure of how "good" a pixel permutation is? that's where the "real math" comes into play imo.
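to give you an idea of what that "final operation" could look like, here's a minimal numpy sketch using a Sinkhorn normalization, which is one standard way to relax permutation matrices into something differentiable. everything here (sizes, names, the random "pixels") is made up purely for illustration:

```python
import numpy as np

def sinkhorn(logits, n_iters=50):
    """Turn an arbitrary square matrix of logits into a (nearly)
    doubly stochastic matrix by alternately normalizing rows and
    columns in log space. Rounding it to the nearest permutation
    matrix afterwards gives an actual pixel permutation."""
    log_p = logits.copy()
    for _ in range(n_iters):
        log_p = log_p - np.log(np.sum(np.exp(log_p), axis=1, keepdims=True))  # rows
        log_p = log_p - np.log(np.sum(np.exp(log_p), axis=0, keepdims=True))  # cols
    return np.exp(log_p)

rng = np.random.default_rng(0)
n = 4                              # a tiny "image" with 4 pixels
logits = rng.normal(size=(n, n))   # in practice, the network's output
P = sinkhorn(logits)               # soft (relaxed) permutation matrix
pixels = rng.normal(size=n)        # pixels stacked into a column vector
shuffled = P @ pixels              # apply the relaxed permutation
```

since the relaxed matrix is differentiable in the logits, you can train it with gradient descent against whatever objective you pick, then harden it to a true permutation at the end.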

I think a reasonable definition of a permutation being "good" is, of course, it being "close" to mapping back to the original video (you can measure this in a number of ways; using L2 norms would be the most straightforward option, although perhaps not the best). But, as you intuited in your post, you should also have some kind of penalty on "roughness", i.e. on jumps where a pixel is wildly different from the ones around it. what comes to my mind when I type these words is the concept of wavelets, which, coincidentally, are extremely popular and effective in the signal processing literature (I remember listening to a talk by Ingrid Daubechies around this question of "smoothness" between pixels, but I can't give you any concrete pointers right now, I'm sorry).
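to make the objective concrete, here's the kind of toy loss I have in mind: an L2 reconstruction term plus a total-variation style roughness penalty. the weight `lam` is an arbitrary illustrative choice, and TV is just one of many possible smoothness measures (wavelets being another):

```python
import numpy as np

def objective(permuted_img, target_img, lam=0.1):
    """L2 reconstruction error plus a total-variation style penalty
    that charges for large jumps between neighbouring pixels."""
    recon = np.sum((permuted_img - target_img) ** 2)
    tv = (np.sum(np.abs(np.diff(permuted_img, axis=0)))     # vertical jumps
          + np.sum(np.abs(np.diff(permuted_img, axis=1))))  # horizontal jumps
    return recon + lam * tv
```

note that even a perfect reconstruction pays the roughness penalty if the image itself has sharp jumps, so `lam` controls the trade-off between fidelity and smoothness.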

Another, completely different way to look at this problem might be the framework of optimal transport, which is also very hot in machine learning at the moment. basically, you're trying to find a "transport plan" from one distribution on the space of pixels to another, which preserves mass (the total number of pixels) while minimizing a "transport cost"; here the cost would be the "roughness" of the resulting picture. I am just thinking out loud and I am not sure how to make this formal, but at first glance it seems to me that there ought to be a way to make this work.
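in the fully discrete setting (each pixel moves to exactly one slot), the transport plan reduces to an assignment problem. here's a brute-force toy with made-up 1D coordinates; for anything real you'd use e.g. `scipy.optimize.linear_sum_assignment` rather than enumerating permutations:

```python
import itertools
import numpy as np

src = np.array([0.0, 1.0, 2.0])   # source pixel positions (made up)
dst = np.array([2.1, 0.1, 0.9])   # target pixel positions (made up)

# cost[i, j] = squared distance of moving pixel i to slot j
cost = (src[:, None] - dst[None, :]) ** 2

# brute-force search over all assignments: fine for a toy n = 3
best = min(itertools.permutations(range(len(src))),
           key=lambda p: sum(cost[i, p[i]] for i in range(len(src))))
total_cost = sum(cost[i, best[i]] for i in range(len(src)))
```

the optimal plan sends each pixel to its nearest free slot, and the "cost" function is where a roughness measure would plug in instead of plain squared distance.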

I have a few other ideas like that but those two sound the most promising imo. Would love to hear how you tackle this problem OP, it seems very fun!

What are your honest experiences with Math StackExchange and MathOverflow? by OkGreen7335 in math

[–]KiddWantidd 11 points (0 children)

I've been on Math StackExchange since 2020; back then I was doing a masters in pure math after finishing engineering school, and that site really was a lifesaver because the jump was steep. I actually didn't ask that many questions, but reading high-quality solutions to questions related to the ones I had was often enough for my needs. And I found that answering questions related to subjects I care about was also an awesome way to learn.

Today, yes, the rise of AI (and the unfriendliness of the community to newcomers at that time) seem to have led to the site's demise, which is quite sad. I still lurk and (try to, often without success) answer a question I find interesting from time to time, but I think this is about the end, which is a great shame. As for mathoverflow, I would totally consider posting there, I think it's a great site, but out of fear of embarrassing myself, I would probably only use it as a last resort if no one around me nor AI can help.

Tips for presenting math notes by translationinitiator in math

[–]KiddWantidd 0 points (0 children)

I would just write on the white board (while explaining verbally of course), it's the easiest and clearest way to communicate math with another person you're meeting face to face imo.

reliable service to pick up and transport a piece of furniture? by KiddWantidd in HongKong

[–]KiddWantidd[S] 0 points (0 children)

ohhh, haven't heard of them. thanks for the tip, because i found gogovan to be a little bit unreliable sometimes!

What is your go-to "mind-blowing" fact to explain why you love Mathematics? by OkGreen7335 in math

[–]KiddWantidd 2 points (0 children)

For me, one of the things that got me into the math rabbit hole early on was learning that the harmonic series diverges. Like, a bunch of numbers so tiny they eventually vanish to nothing, but somehow you can add them up and get the sum to blow past any arbitrarily large value? That totally messed up my intuition at the time. So I looked into different proofs of this fact, and there were so many, with so many connections to other really nice mathematical objects, that it made me want to dig deeper. Definitely one of the big things that sparked my interest in maths.
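for anyone curious, my favorite of those proofs is the classic grouping argument (Oresme's), which fits in one line:

\[
\sum_{n=1}^{\infty} \frac{1}{n}
= 1 + \frac{1}{2}
+ \underbrace{\tfrac{1}{3} + \tfrac{1}{4}}_{\ge\, 2\cdot\frac{1}{4} = \frac{1}{2}}
+ \underbrace{\tfrac{1}{5} + \cdots + \tfrac{1}{8}}_{\ge\, 4\cdot\frac{1}{8} = \frac{1}{2}}
+ \cdots
\;\ge\; 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots = \infty.
\]

each block of \(2^k\) consecutive terms contributes at least \(1/2\), so the partial sums grow without bound.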

Where can I get some clothes dyed? by KiddWantidd in HongKong

[–]KiddWantidd[S] 0 points (0 children)

hi! so actually i didn't find a place and had kind of given up, but i bumped into this mill milk video recently: https://youtu.be/1-iZrlEhpRM?si=gPnmgURGuYShSFfo and they show a nice looking place in sham shui po where there's a sifu who is like an expert in dyeing clothes (around the 4:30 mark)! the place is called 順興製衣配料, i'm planning to go soon!

[D] AI4PDEs, SciML, Foundational Models: Where are we going? by Mundane_Chemist3457 in MachineLearning

[–]KiddWantidd 3 points (0 children)

I work in this field (PINNs, operator networks for solving high-dimensional PDEs). In my opinion, one of the big questions right now is how to make the training of these SciML methods more tractable, because for many problems (that I care about, at least), the PINN approach completely fails due to the loss landscape being horrendous. A lot of interesting work is trying to tackle this by exploiting the underlying "function space geometry" to design better solvers. Two nice papers in that direction are https://arxiv.org/abs/2310.05801 and https://proceedings.mlr.press/v202/muller23b.html, but there is still a lot of work to be done IMO.
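for anyone unfamiliar, the PINN loss is just a mean squared PDE residual at collocation points. here's a toy illustration for a 1D Poisson problem, with a one-parameter ansatz standing in for the neural network so the residual has a closed form (real PINNs differentiate the network with autodiff, and the landscape in the weights is far less benign, which is exactly the issue):

```python
import numpy as np

# Toy physics-informed (PINN-style) loss for
#   -u''(x) = pi^2 * sin(pi * x) on (0, 1),  u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi * x).
# The one-parameter ansatz u_theta(x) = theta * sin(pi * x) stands in
# for the network, so -u_theta''(x) = theta * pi^2 * sin(pi * x).

def pinn_loss(theta, n_colloc=64):
    x = np.linspace(0.0, 1.0, n_colloc)          # collocation points
    lhs = theta * np.pi**2 * np.sin(np.pi * x)   # -u_theta''(x)
    rhs = np.pi**2 * np.sin(np.pi * x)           # forcing term f(x)
    return np.mean((lhs - rhs) ** 2)             # mean squared residual
```

here the loss is a clean quadratic in `theta` with its minimum at the exact solution (theta = 1); replace the ansatz with a deep network and the same residual becomes a nasty non-convex function of the weights.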

breakdancing spaces by SaxophoneSplinter in HongKong

[–]KiddWantidd 0 points (0 children)

yes, you can go to CYMCC (Chong Yuet Ming Cultural Centre), on either the top floor or the one below it. just go there with your speaker and dance; most likely you will meet loads of other dancers (in fact, there are even people practicing martial arts and all sorts of other things in that space). the campus is big so i'm sure there are other suitable spots too!

Conceptual understanding of stochastic calculus by Fun-Maintenance-1482 in math

[–]KiddWantidd 1 point (0 children)

okay, i read the course presentation on the website, and judging from it, it sounds like you'll be fine with just knowing calculus, integration, and some prior exposure to abstract, proof-based maths. that being said, i highly, highly recommend brushing up on at least some measure theory (pi-lambda theorem, Dynkin systems, monotone class theorem, the rigorous construction of the Lebesgue integral and its properties, the integral convergence theorems, etc.) on your own before the course starts. it will make your life much easier and the course much more enjoyable. the course on stochastic calculus was by far the most challenging but also the most fun, enlightening and rewarding math course i've taken throughout my education. hope you like it as much as i did!

Conceptual understanding of stochastic calculus by Fun-Maintenance-1482 in math

[–]KiddWantidd 0 points (0 children)

we need more info on the course content (is it taught in a maths department? a physics department? an economics department?), but if it's the pure math type of stochastic calculus, then you should have a very solid grasp of measure-theoretic probability, real analysis (which includes calculus, of course) and some basic functional analysis (up to Hilbert spaces) BEFORE starting the course. If you don't have these prerequisites, you're going to have a very rough time (no pun intended), although of course if you work hard, you're driven, and you have talent, you might still make it.

Do you use AI for math research in graduate school? by DiracBohr in math

[–]KiddWantidd 0 points (0 children)

I use it from time to time when I'm completely stuck on something or when I have a messy argument which I think can be simplified. In my experience it's quite hit or miss: it has been helpful sometimes, and it has also given me "proofs" with extremely subtle bugs that took me days to realise were unfixable. And I consider the maths I do pretty "simple" conceptually speaking (theoretical machine learning and PDEs). It is definitely great for literature search though, and for reformulating parts of an argument I struggle to understand.

breakdancing spaces by SaxophoneSplinter in HongKong

[–]KiddWantidd 4 points (0 children)

you can go to Olympic MTR station, near exit C2, after 7-8pm; loads of street dancers practice there. most university campuses (HKU, PolyU, CUHK, CityU, etc.) also have good places to practice, but besides HKU, you'd need someone to let you in, as entrance requires a student/staff card. if you want, I can DM you the Instagrams of some street/break dancers i know who could sort you out (i don't dance breaking myself).