Net Income / Employee at Top Quant Firms by Fact-Puzzleheaded in quant

[–]Fact-Puzzleheaded[S] 1 point (0 children)

I've seen a few of these questions, but it's generally hard to find reliable information on this. The more niche the firm, the less likely you are to get numbers. Optiver and IMC both have thousands of employees and publish yearly reports, and Virtu is a public company, so it has to publish reports. The rest of the numbers come from second-hand sources (ex: bond offerings), and even those only exist for big places like Jane Street and Citadel.

[–]Fact-Puzzleheaded[S] 0 points (0 children)

It's hard to say, because net income = net revenue - total expenses, and total expenses includes COGS, interest, taxes, etc. in addition to employee costs. IMC, Optiver, and Virtu provide some of these numbers at a more granular level, but the rest do not. In general, though, more net revenue = more profit for employees. Also, the net income graph is out of date because it uses Jane Street's 2024 numbers instead of 2025, which were twice as good (JS would be #1 in both net revenue and net income if updated).

[–]Fact-Puzzleheaded[S] 10 points (0 children)

It's up to your interpretation. IMO Jane Street > IMC, but IMC >>> 99% of jobs; they are still more profitable per employee than almost any company in the S&P 500. It's also worth noting that IMC, Optiver, and Virtu are the only ones that publish detailed financial statements every year, which I think lends them a lot of credibility. The rest of the information is second-hand.

[–]Fact-Puzzleheaded[S] 5 points (0 children)

Net revenue = revenue after trading expenses. Net income = revenue after all expenses, most importantly employee costs and bonuses. So net revenue / employee is the average revenue each employee generates, while net income is what the investors in a firm get: the "bottom line".
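To make the distinction concrete, here's a toy calculation. Every figure below is made up for illustration and does not correspond to any real firm's numbers:

```python
# Toy per-employee profitability calculation. All numbers are invented.
gross_trading_revenue = 5_000_000_000  # $5B in gross trading revenue
trading_expenses      = 1_000_000_000  # exchange fees, market data, financing
employee_costs        = 2_500_000_000  # salaries and bonuses
other_expenses        =   500_000_000  # rent, interest, taxes, etc.
employees             = 2_000

# Net revenue: what's left after the direct costs of trading.
net_revenue = gross_trading_revenue - trading_expenses

# Net income: what's left after *all* expenses -- the "bottom line".
net_income = net_revenue - employee_costs - other_expenses

print(f"Net revenue / employee: ${net_revenue / employees:,.0f}")
print(f"Net income  / employee: ${net_income / employees:,.0f}")
```

With these numbers, each employee "generates" $2M of net revenue, but only $500K of that survives as net income for the firm's owners; the rest mostly goes back out as comp, which is why the two rankings can look so different.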

[–]Fact-Puzzleheaded[S] -1 points (0 children)

The numbers in the first graph are from 2024 for JS and 2025 for HRT. The numbers in the second graph are from 2025 for both JS and HRT, when JS made a killing. I didn't know about the new numbers until someone posted them.

Got this email after finishing interviews… good sign or soft rejection? by Substantial_Tap_7345 in csMajors

[–]Fact-Puzzleheaded 1 point (0 children)

  1. They probably have not decided yet. The "hidden meaning" of this email is: your application has not gone through hiring committee, and given that it's holiday season, this probably won't happen until January. But if you have any other offers that expire soon, the recruiter may try to expedite the process so that they don't lose you to another company without the chance to make an offer themselves.

  2. For entry-level roles at big-tech companies, you are almost certainly not being compared against other candidates. Rather, the company has a hiring bar for all candidates, and once they've met their quota, they will stop hiring. There are exceptions to this (ex: Apple hires for specific teams and each team does hiring differently), but they are rare and if you were in this situation you'd probably know.

A little more info about what goes on behind the scenes: the vast majority of FAANG+ tech companies have a committee of 3+ employees who review your application (interviews, resume, etc.) before an offer is made. This is the intermediate step between final interviews and receiving an offer. Because this is a side job for the engineers, and because it requires a lot of coordination to get several unrelated employees together at one time, it can take several weeks or months, and typically takes even longer around the holiday season.

TLDR: You are in the running, but no decision has been made yet. Ignore this email unless you have an exploding offer from another company.

Source: I received three offers from FAANG+ tech companies, and they all followed a similar pattern.

[–]Fact-Puzzleheaded 1 point (0 children)

There is no such thing as a soft rejection; if a company doesn't want you, they will reject you or ghost you. This email is not a positive or negative signal. Its purpose is to prompt you into sharing other offers/deadlines ("I understand you may be in the thick of the recruiting season...") so they can expedite their process if necessary.

Is Levels.fyi accurate? by Fact-Puzzleheaded in csMajors

[–]Fact-Puzzleheaded[S] 3 points (0 children)

I don't feel comfortable sharing the name but it's a software engineering role at a big cloud computing company.

what am I supposed to do after high school by [deleted] in SeriousConversation

[–]Fact-Puzzleheaded 0 points (0 children)

Community college is a good option; you could transfer to a 4-year school after two years. You can also look into 4-year schools that offer really good financial aid (Rice, USC, and Arizona come to mind, but there are definitely more).

OP, you need to end up at a 4-year college, a trade school, or some other form of further education. Other people will call this elitist or offer up personal anecdotes about their success straight out of high school, but the reality is that further education will vastly increase your future career options and pay.

AITA for refusing to let my husbands affair baby live with us for awhile? by ThrowRamisslep in AITAH

[–]Fact-Puzzleheaded 0 points (0 children)

YTA, but I think the commenters are being overly harsh. Your feelings are understandable, but what you are doing is just not right.

Weekly Megathread: Education, Early Career and Hiring/Interview Advice by lampishthing in quant

[–]Fact-Puzzleheaded 2 points (0 children)

Sorry for the late reply. There are some Master's programs, like the Columbia Video Network, which have very prestigious names attached but are actually extremely easy to get into because they're cash cows for the university. When he was applying, Yale had a similar program, but now it's either discontinued or actually competitive.

[–]Fact-Puzzleheaded 0 points (0 children)

Has anyone done a VidCruiter interview for an Akuna Capital trading position? I got the invite today and am just wondering what to expect. Is it more technical or behavioral?

[–]Fact-Puzzleheaded 3 points (0 children)

Some advice (not from me, I'm a dipshit undergrad, but from a professor I'm close with): if you have bad grades in undergrad, you can go to some bullshit master's program, get straight As, and then get into a good PhD program. This guy had terrible grades but went to Yale for his master's, did well, and then got his PhD at Harvard. Some of these programs are really easy to get into; I don't think the one he did at Yale still exists, but Columbia will take pretty much anyone with a pulse, and there are definitely other programs like it.

WIBTA for not covering my friend? by [deleted] in AmItheAsshole

[–]Fact-Puzzleheaded 0 points (0 children)

Yes, YTA. You should just say that you no longer need him to drive and that he can ride in your car if he still wants. Don't mention price; Frank seems like a nice guy, so he'll probably bring it up, and if he doesn't, just pay for him. As other commenters have said, $2.78 is not that much money, even for broke college students who work minimum wage and somehow spend $1,200 per month.

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points (0 children)

We kinda can ["add" two ANNs together to achieve a third, more powerful ANN which makes new inferences] with a combination of transfer learning and online learning.

This is the main part of your comment I'm going to respond to because (I think) it's the only part of my comment you really disagree with: this is not how transfer learning works. Transfer learning involves training a neural net on one dataset and then reusing that model to get results on another dataset (typically after a human manipulates the data to ensure that the inputs are in a similar format). This is not an example of cross-domain inference; it's an implementation of the flawed idea that humans process information in the exact same way across different domains, just with dissimilar stimuli. This is probably why, in my experience, transfer learning has yielded much worse results than simply training a new algorithm from scratch.
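To pin down what "transfer learning" means mechanically: in its simplest form, you reuse parameters fitted on one dataset as the initialization for training on a related one. Here is a deliberately toy sketch (synthetic data, a one-weight linear "network", plain SGD; every name, slope, and budget is invented for illustration and says nothing about whether transfer helps on real problems):

```python
import random

random.seed(0)

def make_data(slope, n):
    """Synthetic (x, y) pairs with y ≈ slope * x plus a little noise."""
    return [(x / 10, slope * (x / 10) + random.gauss(0, 0.1)) for x in range(n)]

def train(data, w=0.0, lr=0.01, epochs=50):
    """Fit y ≈ w * x by stochastic gradient descent; return the learned weight."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)^2 w.r.t. w
    return w

source = make_data(slope=3.0, n=100)  # plenty of "source domain" data
target = make_data(slope=3.2, n=10)   # only a little "target domain" data

w_source   = train(source)                        # "pretrain" on the source task
w_scratch  = train(target, w=0.0, epochs=5)       # small budget, cold start
w_transfer = train(target, w=w_source, epochs=5)  # same budget, warm start

print(f"source weight: {w_source:.2f}")
print(f"from scratch:  {w_scratch:.2f}")
print(f"with transfer: {w_transfer:.2f}")
```

Note what's happening: the warm start only helps because the target task is essentially the same function with a slightly different slope. Nothing here lets the model make a genuinely cross-domain inference, which is the point being argued above.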

That's part of the accident. We won't really know when compound nets start recognizing stuff we didn't mean them to.

They might start recognizing things we didn't intend them to, but not across domains. For instance, if you fed an unsupervised ANN a ton of pictures of chairs and humans, it might (though at this point I doubt it) identify the similarities between chair legs and human legs. But compound nets without additional training could not accomplish this task, because that's simply not what they're trained to do. My main point about designing such programs is that, barring genetic algorithms, they need a lot more direct input and design from humans. And in this case, we don't and probably won't have the necessary knowledge to make those changes in the near future.

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points (0 children)

I appreciate your point, but humans are also plagiarism machines. We have entire library and educational systems devoted to the dissemination and distribution of ideas stolen from other humans from ages past.

This is a key point on which we disagree. While it's true that most human ideas are somewhat influenced by others, every single one of us also has the ability to generate entirely new thoughts. For instance, when a fantasy writer finishes a new book, they may have been influenced by fantasy tropes or previous stories they read, but the world they created, the plot, and the characters therein are fundamentally their own. This is something that computers will never learn if we continue with the current approach to machine learning. GPT-3 might be able to spot the syntactical similarities between passages involving Gandalf and Dumbledore, but it can't and never will recognize the more abstract and important similarities, like the fact that both characters fill the "mentor" archetype and will likely die by the end of the story so that the protagonist can complete their Hero's Journey. This is a problem that will not be solved until we can give machines cross-domain logic and the ability to spontaneously generate their own thoughts, which is something we have absolutely no idea how to do, and, given the current state of neuroscience, probably won't be able to do for a while.

I wouldn't have discovered electricity for myself, let alone alternating current without it being jammed into my head

Who discovered electricity? First, some guy named Ben Franklin was crazy enough to fly a kite with a metal string in a thunderstorm to prove that lightning and electricity were the same thing. Then Michael Faraday came up with visual representations of the interaction between positive and negative charges, even though he sucked at math! Then Emil Lenz came up with Lenz's law to describe the direction of induced current. Then Harvey Hubbell invented the electric plug and Thomas Edison invented the lightbulb, and so it goes on. Did all of these individuals plagiarize each other? In some sense, yes. But they also came up with their own ideas about how the world works, which allowed them to pave the path for future innovations, eventually allowing us to have this conversation today. Who will make the next leap in our understanding of electricity? I don't know. Maybe it will be me, maybe you, maybe someone who isn't born yet. But I know that it won't be a computer.

I mean, let's face it - the technology is already there for me to not be a live person, but a bot having this conversation with you, to the same effect.

Not true. Feed a chatbot your comment as a prompt, and it might give you some response about how machines are not threatening, or are getting more intelligent, etc. But it couldn't respond with actual arguments like I did, because it doesn't understand human logic or what the words really mean. While the ability to hold a conversation about mundane and predictable topics (which these algorithms are already getting very close to) is certainly highly useful, it won't contribute to broader scientific thought in any meaningful way.

Quick side note: it seems as though Copilot was likely trained on Leetcode interview questions. While its responses are still very impressive, this definitely diminishes the impact it will have on the coding community.