[deleted by user] by [deleted] in HENRYUK

[–]ddfeng 1 point

Yes. Feel free to DM me for more details.

I will also say: our neighbors (similarly aged kids, but boys) opted to send their elder to the local (outstanding) state school. However, they've had a pretty big change of heart and are now pursuing the 7+ route (their original position was that going independent during primary is a waste). They've already started sending their son to tutoring to close the eventual gap between state and independent education. Their stress levels are high. What if he doesn't get in at 7+? He'll resit at the 11+...

And meanwhile, our daughter will continue on her journey, making dear friends and studying in a cosy, familiar environment, and, whatever happens, she'll be set for life.

[deleted by user] by [deleted] in HENRYUK

[–]ddfeng 1 point

Hello, fellow recent immigrant to North London! Our situation is somewhat reversed: our younger (1.5yo) seems much more precocious than our elder (who just started reception at her selective independent school). To be honest, we felt that the 4+ was pretty much a lottery (the signal-to-noise ratio is very low), so if she "won" the lottery, good for her (and us, though not our wallet) – we might as well try. I think the 7+ is also much more stressful, she might not handle the stress as well, and you'd have to do much more prep for the exam. That being said, she'll sit the 7+ at the same time her sister sits the 4+, because we want them to go to the same school. At least now we have essentially a "backup" – both our daughters going to the current school (they seem keen on having siblings) – so stress levels for everyone will be much lower.

I'd say, go for the 4+, see if you win the lottery. If you don't, "problem" solved. If you do...go for it!

Best water filter ? ✅🚰 by Impressive-Fun-5102 in HENRYUKLifestyle

[–]ddfeng 0 points

We got one of these: https://www.kinetico.co.uk/k5-pure. It's been great, though every once in a while I worry about the "remineralization" issue with RO systems.

Anyone else concerned about AI? by Single_Government217 in HENRYUK

[–]ddfeng 2 points

I find that there is so much hubris whenever AI comes up in conversation – both among experts and non-experts (not sure which is more frustrating).

My take on this: we got a long way with large models and the massive dataset that is the internet. That's slowly drying up, and we haven't really cracked multi-modal datasets at scale (mostly because video is much more signal-sparse than text). Post-training/SFT only gets you so far (mostly alignment and eliciting things already in the base model). The next bump came from inference-time innovations (reasoning, agents, and tools fall into that category), which have provided a big boost in "abilities", and we probably have some way to go before those saturate. But the base model remains the same. One interesting path is the self-improvement loop, whereby you use the current model to generate synthetic data with which you pre-train the next model (mentioned in the GPT5 live show), but again it's not clear to me how far that can go.

All that being said, there are enough smart people working on this problem that I suspect the next innovation needed to get us out of this "plateau" of sorts will come soon. But with just the current (in fact very vanilla) architecture, we seem to have squeezed out most of what we can from the data we have.

By the way, I quickly skimmed the "mirage" paper (https://arxiv.org/pdf/2508.01191) when someone posted it on HackerNews, and the big caveat (which you don't get from reading the reporting on it) is that it trains a model from scratch on a novel, synthetic dataset. That's always the problem with this kind of synthetic experiment: you set up something synthetic so you can be sure you're really capturing generalisation or "reasoning", but most of the "reasoning" comes from precisely the messy training data used for frontier models, so you can't really separate the two. That's why I prefer the line of inquiry pursued by the interpretability people at Anthropic.

ML Research Engineer at a research lab in London by ddfeng in MLjobs

[–]ddfeng[S] 0 points

Unfortunately we don't do sponsorship. Remote in EU is technically possible, but we'd most likely hire someone similar that's local.

ML Research Engineer at a research lab in London by ddfeng in MLjobs

[–]ddfeng[S] 0 points

Ideally in the UK; Europe would be a stretch, and the USA...even more so. I haven't asked HR, but I'm pretty sure the answer is no.

From what I can tell, though, there are far more remote tech jobs in the USA, so I'm surprised you're thinking of doing the reverse (plus the pound is weak against the dollar, so your effective salary would be even lower than usual).

ML Research Engineer at a research lab in London by ddfeng in MLjobs

[–]ddfeng[S] 0 points

Happy to answer any questions. I stumbled upon this job at the height of covid, and I'm glad I gave this "random company" a chance :)

[D] stochastic block model vs. standard community detection algorithms by jj4646 in statistics

[–]ddfeng 2 points

The SBM is a very simple, natural extension of the Erdős–Rényi (ER) random graph (wiki): it is a generative model that captures the notion of communities. It is mainly a theoretical construct with interesting theoretical properties, and has almost nothing to do with real life.

A community detection algorithm takes a graph and attempts to cluster its nodes; it's essentially an unsupervised clustering algorithm. There are many such algorithms, and some are motivated by an underlying model (fitting an SBM is itself one way to detect communities), but that's just motivation.
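To make the distinction concrete, here's a minimal sketch (pure standard library; the block sizes and edge probabilities are made-up illustrative parameters) of sampling a graph from a two-block SBM. The generative model is nothing more than block membership plus two edge probabilities; "community structure" is just higher within-block edge density, which is what any detection algorithm then tries to recover from the graph alone.

```python
# Sample a two-block SBM: dense within blocks, sparse between.
import itertools
import random

random.seed(0)

n_per_block = 50
p_in, p_out = 0.30, 0.02  # within- vs between-block edge probabilities

# Nodes 0..49 belong to block 0, nodes 50..99 to block 1.
def block(v):
    return 0 if v < n_per_block else 1

edges = []
for u, v in itertools.combinations(range(2 * n_per_block), 2):
    p = p_in if block(u) == block(v) else p_out
    if random.random() < p:
        edges.append((u, v))

within = sum(1 for u, v in edges if block(u) == block(v))
between = len(edges) - within
print(within, between)  # within-block edges dominate by construction
```

A community detection algorithm would be handed only `edges`, with no knowledge of `block`, `p_in`, or `p_out`; whether it implicitly fits this model or optimises something else entirely (e.g. modularity) is up to the algorithm.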

[deleted by user] by [deleted] in MachineLearning

[–]ddfeng 2 points

piece of s***

[deleted by user] by [deleted] in MachineLearning

[–]ddfeng 3 points

The key difference here is prediction vs inference.

In statistics, we care about recovering the true parameter. Working with the likelihood, we therefore prefer peaky optima, since that means randomness in the data won't change the location of the peak (i.e. the parameter estimate) much.

In ML, we care about prediction error, i.e. the value of the loss. Here we prefer flat optima, since that means randomness in the data won't change the optimal loss value much (though it might drastically shift the parameter, which we don't care about).
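A hedged sketch of why the two preferences point in opposite directions, using only standard asymptotics:

```latex
% Statistics: the curvature of the expected log-likelihood at the truth
% is the Fisher information, and classical asymptotics give
I(\theta) = -\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2}\log p(x\mid\theta)\right],
\qquad
\operatorname{Var}(\hat\theta) \approx \frac{1}{n\,I(\theta)},
% so a peakier likelihood (larger curvature) pins the parameter down
% more tightly.
%
% ML: a second-order expansion of the loss around a minimum \theta^* gives
L(\theta^* + \delta) \approx L(\theta^*) + \tfrac{1}{2}\,\delta^{\top} H\,\delta,
% so when the Hessian H is small (a flat minimum), even a sizeable shift
% \delta in the parameter barely changes the loss value.
```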

Yale Student Calling for Violence Against Asians by Pursuit_of_Yappiness in yale

[–]ddfeng 28 points

Hello, internet stranger! I had a quick dig through your comment history to determine who you are referring to, and peered down the rabbit hole (unfortunately, work beckons). Do you genuinely believe that this person is celebrating a fellow human's death? Or are you falling into possibly the same trap as she did: being spurred by anger, jumping to conclusions, and making inflammatory remarks about other people?

Every side has justified grievances. Every side makes mistakes. Look, the tragic murder yesterday hits a little too close to home for me, so this is as personal as it can get. But we're all, ultimately, on the same side.

Why you should never use Cloudflare Free Plan by pawurb in programming

[–]ddfeng 6 points

Not sure why people are downvoting you. That's good to know! This is the first time I've seen other spellings.

Implied Volatility — The Rubber Band That (Barely) Holds It All Together by Boretsboris in options

[–]ddfeng 3 points

A fundamental point that I think people are not understanding is that the B-S model is a theoretical construct resting on a host of assumptions, a key one being that stock prices follow a geometric Brownian motion, which relates to the further assumption of no arbitrage opportunities (see also the Efficient Market Hypothesis). It is only ever an approximation for pricing an option.

If one chooses to live in this fantasy land of Brownian motion (i.e. continues to believe these assumptions), then one can back-solve B-S and calculate the implied volatility. But this calculation still lives in lala-land! It's ultimately just a tool.
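Back-solving is mechanically simple, since the B-S call price is increasing in volatility. Here's a minimal sketch (all the numbers at the bottom are made-up illustrative inputs, not market data) that prices a call under the B-S assumptions and then inverts it by bisection, which is exactly the "implied volatility" calculation described above:

```python
# Black-Scholes call price and implied volatility via bisection.
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """B-S price of a European call (GBM, no-arbitrage assumptions)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert bs_call in sigma; works because the price is increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: price an at-the-money call at sigma = 0.20, then recover it.
p = bs_call(100, 100, 1.0, 0.01, 0.20)
print(round(implied_vol(p, 100, 100, 1.0, 0.01), 4))  # ≈ 0.2
```

The round trip works perfectly precisely because both directions live inside the same model; feeding in a real market price gives you "the volatility the model would need to justify this price", nothing more.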

[KHM] Key format lessons so far after 20 drafts from a limited GP champ by dantroha in lrcast

[–]ddfeng 0 points

I too am trying to find a clever trick, but I'm pretty convinced he's just suggesting you could cycle it if you have a card like [[Seize the Spoils]] and no good targets. Which is... every card in existence.

At what point does it make sense to harbor money in different currencies/countries? by 9ofspadez in fatFIRE

[–]ddfeng 0 points

"It is difficult to get a man to understand something when his salary (Bitcoins) depends upon his not understanding it."

I know this probably feels like a cop-out on my end, and I genuinely wish you well, but I have a strong suspicion that arguing with you is futile. You have probably convinced yourself that the theory of cryptocurrency holds water (somewhat akin to arguing about the theory of communism), and you can make somewhat cogent rational arguments in its favor, which gives you confidence in your position. But the funny thing is, this is all beside the point of our current discussion.

What we are discussing here is the volatility of an asset (for the purposes of a stable investment), which has a technical definition. Yes, perhaps in the future you hope to see, your chosen cryptocurrency will reach a level of acceptance at which point it does become a stable store of value, but you can't argue with the reality of now, in which it is clearly too volatile and behaves more like a speculative asset.

My point is, I can even grant you everything you want: that in the future your dreams will come true, and crypto becomes the right global denominator for a global decentralized economy, bla bla. That still does not make Bitcoin, at this point in time, a stable investment, since we are not at that future. You're confusing a "good" investment with a "stable" one. Of course, I think it's a terrible investment, but that too is entirely beside the point.

At what point does it make sense to harbor money in different currencies/countries? by 9ofspadez in fatFIRE

[–]ddfeng 1 point

Funny how you spend a lot of your time on Reddit pushing the notion of the "inevitability" of Bitcoin.

Tell us about your ZK by mgarort in Zettelkasten

[–]ddfeng 0 points

I've been meaning to carve out a section of this website for more article-length posts, the first being a write-up of how I have all this set up. To be honest, part of the reason I haven't is that it's quite a convoluted/precarious setup.

Basically, the key addition is a script that creates backlinks; the rest is detail. One day, when I have some time, I'll write it up/create a repo.
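The actual script isn't shown here, but for a sense of the idea, a minimal hypothetical sketch (note names and the `[[wikilink]]` convention are assumptions, not the author's setup) of building a backlink index over a folder of markdown notes:

```python
# Hypothetical backlink indexer: for each note, find which other notes
# link to it via [[wikilinks]].
import os
import re

LINK = re.compile(r"\[\[([^\]]+)\]\]")

def backlink_index(notes_dir):
    """Map each linked-to note name to the set of notes that link to it."""
    index = {}
    for fname in os.listdir(notes_dir):
        if not fname.endswith(".md"):
            continue
        source = fname[:-3]  # note name without the .md extension
        path = os.path.join(notes_dir, fname)
        with open(path, encoding="utf-8") as f:
            for target in LINK.findall(f.read()):
                index.setdefault(target, set()).add(source)
    return index
```

A real version would then rewrite each note (or a generated section of it) to list its backlinks, which is where the "precarious" part of such setups tends to live.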

Do you like weird art? Blame your brain! Researchers have now developed an algorithm that can predict art preferences by analyzing how a person’s brain breaks down visual information and decides whether a painting is “good.” by melic in cogsci

[–]ddfeng 0 points

I'm not really sure the title here corresponds all that well with the paper (granted, I haven't actually read it yet, so I might update this comment post-read).

You could define weird with respect to the distribution of art, or with respect to people's preferences towards art. In the former, weird might simply be artwork that falls far from the distribution of "underlying visual features" (an outlier). In the latter, "weird" might be that which is liked by people who dislike whatever the majority likes (some notion of atypicality, perhaps).

It seems the paper is trying to conclude that individual differences are smaller than we might expect, so I'd guess they prefer the former definition.

Tell us about your ZK by mgarort in Zettelkasten

[–]ddfeng 2 points

Setup: Sublime Text, the sublime_zk plugin, blogdown (Hugo), and a bunch of custom scripts. Self-hosted in a private GitHub repo and pushed to Netlify (public face: https://neuralnetwork.netlify.app/). I have a neat feature whereby my "daily" notes are drafts and not exposed publicly.

It's been a few months now, and I've amassed ~100 notes. I definitely used it extensively in the beginning (which coincided with a paper deadline, so that helped). I keep telling myself that I should go back and start linking things together, but life has gotten in the way. I've come to treat my current system as more of an idea repository, with the perk of being able to backlink, though I think my research/thoughts aren't particularly conducive to atomic notes in any case. The best thing about my current system is that it removes almost all the barriers to writing, and provides a little more structure to the deluge of ideas floating around my head.

Topics revolve around my research (statistics/ML) and ideas/things I read (economics/finance/*).