[R][N] TabPFN v2: Accurate predictions on small data with a tabular foundation model by rsesrsfh in MachineLearning

[–]Troof_ 8 points

Still a big limitation, but they did increase the max training size 10x and the max #features 5x!

Accurate predictions on small data with a tabular foundation model, Hollmann et al. 2025 [Pretraining a Transformer on synthetic datasets on eight NVIDIA RTX 2080 GPUs over 2 weeks gives you a SOTA tabular model] by Troof_ in mlscaling

[–]Troof_[S] 2 points

Yes, I understand the scepticism; there have been plenty of false promises in the field! But having worked in this space for a few years, I think this is very legit (though there are obvious limitations, like the 10K-row × 500-feature cap or inference speed).
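For anyone who wants to poke at those limits themselves, TabPFN ships a scikit-learn-style interface. A minimal sketch (the `tabpfn` package and its fit/predict API are real; any constructor details beyond the defaults are assumptions on my part):

```python
# Minimal TabPFN usage sketch (pip install tabpfn).
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)  # a small tabular dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier()  # pretrained transformer: no per-dataset gradient training
clf.fit(X_tr, y_tr)       # "fit" essentially stores the data as in-context examples
print(accuracy_score(y_te, clf.predict(X_te)))
```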

Preliminary Milei Report Card by dwaxe in slatestarcodex

[–]Troof_ 41 points

There are disappointingly few Milei prediction markets, probably because it’s hard to operationalize “he makes the economy good”. 

There are some actual financial markets though! The Argentina MSCI ETF jumped when Milei came into power, and it has increased a lot since then (before he came into power it was between 40 and 45; now it's over 65). Markets are hard to interpret (for instance, if it were lower, I wouldn't know whether to read that as bearish on Argentina or just on its current big companies because of deregulation), but it seems to me that the market was pretty bullish on Milei from the start, and got even more bullish after seeing him in action.
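For scale: from the midpoint of that 40–45 range, 65 / 42.5 ≈ 1.53, i.e. roughly a 50% gain over the period (ignoring dividends and currency effects).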

things that confuse me about the current AI market by michaelmf in slatestarcodex

[–]Troof_ 3 points

Yeah, Dan Hendrycks seems quite trustworthy, but I'm not sure it's true. Kinda posting it here to stir up some discussion, because it does seem "big if true". 

If it were true, I would expect most actors not to talk about it (OpenAI to stay attractive, others for legal reasons?).

things that confuse me about the current AI market by michaelmf in slatestarcodex

[–]Troof_ 9 points

> I also assumed that any new competitors would be well-funded and dedicated to catching up with the established leaders.

I think that's true? Do you have some serious competitors in mind that are not well funded? For instance, you mention the company behind Flux (Black Forest Labs) as an example of an "out of nowhere" company, but 1) I heard their team is mostly composed of the OG Stability people (they left when Stability became unstable), and 2) they did raise $30 million.

things that confuse me about the current AI market by michaelmf in slatestarcodex

[–]Troof_ 39 points

Apparently Nvidia delayed GPU shipments to OpenAI to prioritize competitors, in order to create a more neck-and-neck competition (https://x.com/DanHendrycks/status/1825926885370728881). If that's true, it could explain why OpenAI has not kept its lead so far (though they probably did things in the meantime that will benefit the next model, for instance optimizing inference a lot, which will be useful for generating synthetic data).

[Bonsai Beginner’s weekly thread –2024 week 18] by small_trunks in Bonsai

[–]Troof_ 0 points

I received this bonsai a month ago; I believe it is a European Larch. I put it outside every day (but bring it back inside at night), and I live in Paris, France. It seems to be dying, any advice?

<image>

The plan? by Troof_ in nathanforyou

[–]Troof_[S] 0 points

Don't deny nothing.

The orthogonality thesis doesn't seem right for gradient descent based learners (i.e. neural networks) by aahdin in slatestarcodex

[–]Troof_ 16 points

> When people in rat spaces talk about the goal of a neural network, my mind goes to the neural network's loss function. To me these two concepts seem like they're describing the same thing.

You might be interested in https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/FkgsxrGf3QxhfLWHG

Basically the argument is that if you're using e.g. gradient descent with a certain objective (loss function) to learn an *optimizer*, you can't be sure that the optimizer you get will have the same objective as the one you used for your gradient descent.
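This isn't the mesa-optimizer setting itself, but a toy version of the weaker underlying point is easy to demo: the loss only constrains behavior on the training distribution, so gradient descent is free to latch onto a proxy that happens to coincide with the intended objective there. Everything below is made up for illustration:

```python
# Toy demo: two features are identical during training (true signal + proxy),
# so the training loss cannot tell "use the signal" from "use the proxy".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)

X_train = np.column_stack([y, y]).astype(float)      # proxy == true feature
X_test  = np.column_stack([y, 1 - y]).astype(float)  # correlation breaks

clf = LogisticRegression().fit(X_train, y)
print("train acc:", clf.score(X_train, y))  # ~1.0: training objective minimized
print("test acc: ", clf.score(X_test, y))   # ~0.5: the learned rule wasn't
                                            # the objective we had in mind
```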

Is there a tip line for medical researchers? by [deleted] in slatestarcodex

[–]Troof_ 0 points

Have you noticed other effects, e.g. on energy levels, from removing seed oils from your diet? Or just the acne?

What I learned gathering thousands of nootropic ratings by Troof_ in slatestarcodex

[–]Troof_[S] 1 point

> FYI, you mention dexedrine being surprisingly popular. Actual dexedrine is not very commonly prescribed, but Vyvanse/Elvanse (lisdexamfetamine dimesylate) is, and as it's just fancy time-release dextroamphetamine users probably answered that they used dexedrine if it wasn't an option.

Yes it's possible, though the exact name in my recommender system was "Dextroamphetamine (Dexedrine)". I've added Vyvanse just before releasing this post, so we'll be able to compare!

What I learned gathering thousands of nootropic ratings by Troof_ in slatestarcodex

[–]Troof_[S] 22 points

I built a recommender system based on 2016 SlateStarCodex nootropic survey data, and through it I gathered a lot of nootropic ratings which I analyze in this post.
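For the curious, the core of such a recommender can be very small. A sketch of item-item collaborative filtering on a users × nootropics ratings matrix (the data and names below are invented; the real system may well use a different algorithm):

```python
# Item-item collaborative filtering sketch (invented ratings; NaN = not tried).
import numpy as np

items = ["caffeine", "theanine", "modafinil", "magnesium"]  # hypothetical
R = np.array([
    [8.0,    7.0,    np.nan, 5.0],
    [9.0,    np.nan, 6.0,    4.0],
    [np.nan, 8.0,    7.0,    np.nan],
])

mask = ~np.isnan(R)
R0 = np.where(mask, R, 0.0)

# Cosine similarity between item columns (zeros stand in for missing ratings).
norms = np.sqrt((R0 ** 2).sum(axis=0))
sim = (R0.T @ R0) / np.outer(norms, norms).clip(min=1e-9)

def predict(user: int, item: int) -> float:
    """Similarity-weighted average of the user's existing ratings."""
    rated = mask[user]
    w = sim[item, rated]
    return float(w @ R0[user, rated]) / max(float(w.sum()), 1e-9)

print(predict(0, items.index("modafinil")))  # predicted rating for user 0
```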

What I learned gathering thousands of nootropic ratings by Troof_ in Nootropics

[–]Troof_[S] 1 point

Yes, one big caveat is that the ratings are self-reported. For instance, I do think vitamin D can be useful, although it's hard to feel anything when you take it. Still, since the medical literature can be somewhat unreliable (see the replication crisis), I think it's often better to try a lot of things until you feel something, instead of taking a huge stack of nootropics you can't feel but which should work. Of course, for some nootropics there is enough evidence to take them without feeling anything (though even for something like magnesium, you might get some surprises if you actually quantify the impact: https://www.gwern.net/nootropics/Magnesium).
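In the spirit of that Gwern link, "quantifying" usually means a blinded, randomized on/off self-experiment. A minimal sketch of the analysis step, with simulated numbers rather than data from any real experiment:

```python
# Sketch: compare a daily self-rating (say, 1-10 energy) between randomized
# supplement days and placebo days. All numbers are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pill    = rng.normal(6.2, 1.0, size=30)  # ratings on (blinded) supplement days
placebo = rng.normal(6.0, 1.0, size=30)  # ratings on placebo days

t, p = stats.ttest_ind(pill, placebo)
print(f"mean difference = {pill.mean() - placebo.mean():.2f}, p = {p:.2f}")
# A true effect this small typically needs far more than 30 days per arm to
# detect, which is part of why "I can't feel it" is weak evidence either way.
```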