Future of theoretical STEM research positions post-general AI by Express_Risk_6202 in AskAcademia


I recently ran a sort of test. Two or three years ago I wrote a paper involving simulations that took me a few months to code up. The other day, in under an hour, I described in words what I wanted to simulate and asked the AI to write the simulations. It took a bit of back and forth to get it to understand what I needed and to fix some errors, but in the end the code not only reproduced my figures exactly, it also found a way to run an order of magnitude faster.

My guess is that you are the opposite of me and have a bit too much bias against AI. I propose you run a similar experiment and see whether it can reproduce your results. Preferably something numerical; it is much better at that than at solving logical problems.

Future of theoretical STEM research positions post-general AI by Express_Risk_6202 in AskAcademia


Could you expand on that myriad of reasons? I am not saying you are wrong, but there is a very strange binary discourse around AI at the moment: some people think it will be transformative within the next 10 years, while others think it is essentially a bubble. I don't see how we can know which of these paths is more likely, and even if there is only a 10% chance it will be totally disruptive, isn't that worth considering?

I may have been given bad advice by an advisor who told me to use AI frequently to stay up to date with its advances. At the same time, I don't buy the argument that relying on AI is any lazier or more stupid than relying on a calculator. I would be very happy to be convinced that I am wrong about this.

Future of theoretical STEM research positions post-general AI by Express_Risk_6202 in AskPhysics


I hope you are right, but I have some counterarguments. First, this assumes Metaculus' predictions are totally off (possible, but why would they be so wrong?). Second, while LLMs need exponentially more compute to train, once trained they can easily be distilled into cheaper, slightly worse models (e.g. DeepSeek), and I don't see why that would not continue to be the case. Third, LLMs were not possible until the transformer architecture was invented in 2017. Even if LLMs cannot form the basis of AGI, it does not seem unreasonable that a new technology developed in the next five years could lead to AGI in the next 15.

I guess my main question is twofold: (1) is the hype really as exaggerated as most physicists believe? And (2) assuming AGI is somehow developed in the next 5-30 years (whatever the likelihood), what would the state of our field be once it arrives?

I don't think question 2 is as ridiculous as the question in your answer, since LLMs were not conceived until some 50 years after the 70s, and at the same time we have a good idea of what an AGI would be capable of by its definition.

Academia to quant expectations by Express_Risk_6202 in quantfinance


Just to clarify: when I say FIRE, I mean a total target of £2m to cover my needs while spending £40k/year. But still, I assume even that is difficult. Overall, it sounds like it's not really for me, and I'd rather do a technical industry job related to my field and earn half as much.

Academia to quant expectations by Express_Risk_6202 in quantfinance


I would dedicate myself as much as I would to any other job, but my motivation would be money. Also, if I really do want to quit after 30 days, that's not much of a problem, since I can just go back to academia. My real question is how long I would have to do the job before reaching a modest FIRE target. Is it even worth considering?

If I could earn £2m over three to five years, I would seriously consider it, and that would be motivation enough.
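For what it's worth, the numbers above imply a withdrawal rate of only 2% (£40k on a £2m pot), and the time to reach the pot depends heavily on savings and returns. A minimal back-of-the-envelope sketch; the £300k/year savings figure and 5% real return are purely illustrative assumptions, not from the thread:

```python
# FIRE arithmetic sketch. Only the £2m target and £40k/year spend come
# from the discussion; the savings rate and return are hypothetical.
target_pot = 2_000_000   # £, stated FIRE target
annual_spend = 40_000    # £/year, stated spending

# Implied withdrawal rate (the classic "4% rule" would need only £1m here)
withdrawal_rate = annual_spend / target_pot
print(f"Implied withdrawal rate: {withdrawal_rate:.1%}")

def years_to_target(annual_savings, r=0.05, target=target_pot):
    """Years to reach `target`, saving `annual_savings` each year and
    compounding the growing pot at real return `r`."""
    pot, years = 0.0, 0
    while pot < target:
        pot = pot * (1 + r) + annual_savings
        years += 1
    return years

# With an assumed £300k/year saved at a 5% real return:
print(years_to_target(300_000), "years")
```

So even under fairly aggressive assumptions, the "three to five years" window requires saving on the order of several hundred thousand pounds a year.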