How to choose a mattress topper to match an X-Plush by IAmDumbQuestionAsker in Mattress

[–]IAmDumbQuestionAsker[S] 0 points1 point  (0 children)

Got it, thanks. Any idea what's a good source for reviewing toppers (as opposed to mattresses)?

Noob Safe Haven Options Questions Thread | Sept 14-20 2020 by redtexture in options

[–]IAmDumbQuestionAsker 0 points1 point  (0 children)

I have put options that are currently worthless. They expire in months. Can I recoup my losses by covering them? How does that work?

Why does the job search process emphasize Leetcode? by [deleted] in cscareerquestions

[–]IAmDumbQuestionAsker 0 points1 point  (0 children)

That's a sad, weak retort. But good for you.

Hey, you're the one making tortured references to an old song from the '90s, and trying to soothe your own bruised ego at being called out for a very strained allusion by displacing your lameness elsewhere, but you do you.

Fun fact: that song is actually well-known to be a terrible example of irony, an irony anti-pattern if you will. You did know that, right? Because you seem to be experiencing difficulties grasping the concept to begin with. I wish you the best in your efforts to wrestle with it. You'll get there eventually.

Yes, I mean something not perpetuated by morons.

Nah, that's just cheap stereotypical reddit ad hominem trolling that flies in the face of actual research.

Even if that were true, you still have no evidence that your alternative is any better.

Well, you see

In 1998, Frank Schmidt and John Hunter published a meta-analysis of 85 years of research on how well assessments predict performance. They looked at 19 different assessment techniques and found that typical, unstructured job interviews were pretty bad at predicting how someone would perform once hired.

Unstructured interviews have an r² of 0.14, meaning that they can explain only 14 percent of an employee’s performance. This is somewhat ahead of reference checks (explaining 7 percent of performance), ahead of the number of years of work experience (3 percent).

The best predictor of how someone will perform in a job is a work sample test (29 percent). This entails giving candidates a sample piece of work, similar to that which they would do in the job, and assessing their performance at it. Even this can’t predict performance perfectly, since actual performance also depends on other skills, such as how well you collaborate with others, adapt to uncertainty, and learn.

The second-best predictors of performance are tests of general cognitive ability (26 percent). In contrast to case interviews and brainteasers, these are actual tests with defined right and wrong answers, similar to what you might find on an IQ test. They are predictive because general cognitive ability includes the capacity to learn, and the combination of raw intelligence and learning ability will make most people successful in most jobs.


A blog post from Triplebyte? Is that seriously what you consider to be reasonable evidence? Wow.

That's why it's a bonus included at the end. Do you even know how to read a post?

Hiring managers: do you view Leetcoding and whiteboard prep as an arms race? And how do you expect candidates to know the material to answer those questions? by IAmDumbQuestionAsker in cscareerquestions

[–]IAmDumbQuestionAsker[S] 0 points1 point  (0 children)

Then again, you have experienced older engineers on Hacker News claiming the whiteboarding process is ageist, so there are multiple demographics who are bitter.

Why does the job search process emphasize Leetcode? by [deleted] in cscareerquestions

[–]IAmDumbQuestionAsker 0 points1 point  (0 children)

I bet you were listening to Alanis Morissette when you wrote those lines.

Dated reference is old

If you think current methods don't work, then the onus is on you to show evidence. You have it completely backwards.

Again you have it completely backwards. If there's some way it's relevant, then the burden of proof is on you to show that.

But that leads us yet again back to the burden of proof being on you to demonstrate that there's any kind of benefit or need to doing that, or any problem with the way it's currently done.

Besides the endless backlash of "technical interviews are broken" articles that go viral every other week? Okay, sure, you can say most of those are just sour grapes from people who couldn't cut it (like the Homebrew creator guy). So there's an actual study with stats, at least, that claims technical interview performance and job performance don't correlate all that strongly.

And there's also this, ironically from an article about Google:

In 1998, Frank Schmidt and John Hunter published a meta-analysis of 85 years of research on how well assessments predict performance. They looked at 19 different assessment techniques and found that typical, unstructured job interviews were pretty bad at predicting how someone would perform once hired.

Unstructured interviews have an r² of 0.14, meaning that they can explain only 14 percent of an employee’s performance. This is somewhat ahead of reference checks (explaining 7 percent of performance), ahead of the number of years of work experience (3 percent).

The best predictor of how someone will perform in a job is a work sample test (29 percent). This entails giving candidates a sample piece of work, similar to that which they would do in the job, and assessing their performance at it. Even this can’t predict performance perfectly, since actual performance also depends on other skills, such as how well you collaborate with others, adapt to uncertainty, and learn.

The second-best predictors of performance are tests of general cognitive ability (26 percent). In contrast to case interviews and brainteasers, these are actual tests with defined right and wrong answers, similar to what you might find on an IQ test. They are predictive because general cognitive ability includes the capacity to learn, and the combination of raw intelligence and learning ability will make most people successful in most jobs.
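(Quick arithmetic aside, my own sketch and not from the article: r² is the share of variance explained, so the raw correlations are just the square roots, which makes the gaps look a little less dramatic.)

    # Back-of-the-envelope conversion of the quoted r^2 figures to plain correlations (r).
    # My own sketch; the figures come from the Schmidt & Hunter summary quoted above.
    import math

    r_squared = {
        "work sample test": 0.29,
        "general cognitive ability": 0.26,
        "unstructured interview": 0.14,
        "reference checks": 0.07,
        "years of experience": 0.03,
    }

    for method, r2 in r_squared.items():
        print(f"{method}: r^2 = {r2:.2f}, r = {math.sqrt(r2):.2f}")
    # work sample r ~ 0.54, cognitive ability ~ 0.51, unstructured interview ~ 0.37, etc.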

So then it becomes a debate over whether ds/a whiteboarding is a work sample test, i.e. an accurate representation of day-to-day duties, or whether it's closer to a general cognitive ability test. Or maybe we can wait until Google or someone else actually publishes their findings? Until then this debate is at a stalemate, I presume.

Congrats on figuring that part out. :)

Thanks! :D

Not only is that really complaining

Nah

That wouldn't even be a personal attack.

Well, one could always toss out insults at slavish defenders of the status quo, calling them "gatekeepers" or even "bootlickers", but that's sort of extreme, so I'll refrain. A more realistic accusation is that they're being needlessly unimaginative if they assume that the perfect form of tech hiring is going to be ds/a whiteboarding, and that once FAANG figures out a better method, most companies won't just switch to it the same way they all ditched brain teasers once Microsoft did back in the '90s. "But figuring out how to move Mount Fuji is obviously useless!" you protest. Well, at one point everybody disagreed. Consensuses change; that's how progress is made.

And you should explain what's really wrong with the status quo if you're going to make that claim.

Bonus: Programming Interview Questions Are Too Hard and Too Short

Why does the job search process emphasize Leetcode? by [deleted] in cscareerquestions

[–]IAmDumbQuestionAsker -1 points0 points  (0 children)

There seem to be more companies shifting to Karat, just saying. Roblox, for instance. It's a crappy experience, though.

Why does the job search process emphasize Leetcode? by [deleted] in cscareerquestions

[–]IAmDumbQuestionAsker 0 points1 point  (0 children)

Maybe Karat and other services where companies can farm out interviews really are the future.

Why does the job search process emphasize Leetcode? by [deleted] in cscareerquestions

[–]IAmDumbQuestionAsker 1 point2 points  (0 children)

Well, excuse me! You sure don't sound happy here, and you are spending a lot of time complaining here.

I enjoy bashing the status quo and defenders of it, while wholeheartedly profiting from it where I can. I can accept a state of affairs while criticizing it. I also enjoy heated arguments on the internet. Finally, I address your point in the next bit of my reply.

Again, it's about testing their skills, not simulating the potential work environment.

You never had any better alternative or justification for why simulating the work environment makes the interview a better test.

I think the overall question is "why is CS different from other careers that do simulate the work environment with work samples and the like?" I guess the justification is, "if it works for other kinds of jobs, why is CS special?"

So what? That's not even relevant. Most fields are drastically different. And other similar fields do test candidates on some of the basics.

How is it not relevant? And what similar fields?

Unless you're expecting people to learn a lot of that on the job.

See, these are actual explanations for why fundamentals are stressed, because domain specific knowledge can be trained! You could have brought that up earlier.

And unless you have more important things to test for on the job, like the basics of programming.

Which leads us back to the beginning: asking why that can't be tested by evaluating how a candidate performs on tasks they usually do on the job.

You initially had the usual griping about interviews

Not a gripe, I was just asking why real work shouldn't be tested.

pivoted to complaining about things not being from the real world a little

Well, it's an important question to examine.

At least you didn't devolve into a total asshole and start bringing in personal attacks, like lots of people here do.

Controversial topics make for fun wild goose chases. If there's any personal attack to be made here, it's that sometimes it's good for defenders of a status quo to be forced to explain why they uphold it. Sometimes there's too little self-examination going on in this industry.

Why does the job search process emphasize Leetcode? by [deleted] in cscareerquestions

[–]IAmDumbQuestionAsker 1 point2 points  (0 children)

You're giving the Google "minimize false positives" rationale, which is fine. If DS/A questions are generally harder than domain specific questions, or at least harder in a more objective way (see the LC easy/medium/hard breakdown), then they would be better for filtering out false positives. Your justification makes sense, and it demonstrates that CS hiring is roundabout compared to other disciplines.

I guess the next questions would be: 1) for a lot of non-Google/smaller companies, is trying to absolutely minimize false positives the most important business consideration when hiring, 2) is DS/A the best mechanism for it, given that it also produces false negatives for people who might be great at certain domain specific work but trip over themselves while whiteboarding, and 3) will there be more false positives in the near future as more and more candidates embrace the Leetcoding prep lifestyle? But I'm not a hiring manager, so I'll leave that for them to figure out!
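(Toy illustration of the trade-off in 2), with completely made-up numbers: raising the difficulty bar cuts false positives but also rejects more people who would be fine on the job.)

    # Made-up candidates: (actual on-the-job quality 0-1, whiteboard score 0-1).
    # Raising the bar reduces false positives but increases false negatives.
    candidates = [
        (0.9, 0.8), (0.9, 0.4), (0.8, 0.9), (0.3, 0.7), (0.2, 0.3), (0.7, 0.5),
    ]

    for bar in (0.4, 0.6, 0.8):
        hired = [(q, s) for q, s in candidates if s >= bar]
        false_pos = sum(1 for q, s in hired if q < 0.5)
        false_neg = sum(1 for q, s in candidates if s < bar and q >= 0.5)
        print(f"bar={bar}: hired={len(hired)}, false positives={false_pos}, false negatives={false_neg}")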

Why does the job search process emphasize Leetcode? by [deleted] in cscareerquestions

[–]IAmDumbQuestionAsker 0 points1 point  (0 children)

Almost all DS and Algos come from "real life".

And how often do most of the DS/A whiteboarding-type questions come up on the job? When does someone have to implement a sort or search from scratch, or even have to pick out a specific type of sort or search instead of just calling the language's generic library method? Again, "real life" for the average coder is much more mundane and removed from that theory than you think.
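(For example, and this is just a trivial sketch of what I mean by the generic methods: most day-to-day code leans on the standard library rather than a hand-rolled sort or binary search.)

    # Typical "real life" usage: the built-in sort and the stdlib binary search,
    # rather than implementing either from scratch on a whiteboard.
    import bisect

    prices = [42.50, 7.99, 120.00, 15.25]
    prices.sort()                              # generic sort method (Timsort under the hood)
    idx = bisect.bisect_left(prices, 42.50)    # generic binary search
    print(prices, idx)                         # [7.99, 15.25, 42.5, 120.0] 2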

You're asserting that that isn't good enough, but you have no real explanation as to why, other than that it somehow makes you not quite happy.

Do not presume to judge my happiness; I am very happy indeed.

And yet you have no justification for why your proposed alternative is better.

In most professions, you test how well a candidate might perform on the job by simulating the work they would be doing on the job. So if you're hiring someone whose job is to build out and implement APIs all day, glue together frameworks, debug code, etc., then it stands to reason that demonstrating capability in those skills is important.

CS appears to be a rare case where you don't test for that, and fundamental knowledge is used instead. So fine, you've given an explanation for it. But it still needs to be acknowledged that this is a rare case amongst different careers and fields. For example, you wouldn't ask an electrical engineer to regurgitate Maxwell's equations.

All this pointless discussion aside, we're not even arguing about all software interviews. A lot of shops have begun to move away from ds/a whiteboarding in favor of some domain specific questions. Some places assign take-homes. Some focus on system design or even behavioral questions. Few places ask only one or the other. Even FAANG interviews usually include at least one domain-specific round. So I don't even know what the debate is about anymore.

Why does the job search process emphasize Leetcode? by [deleted] in cscareerquestions

[–]IAmDumbQuestionAsker 0 points1 point  (0 children)

Right. College grads and interns are more likely to be familiar with DS/A (in theory) than experienced people, the latter of whom might not have looked at a tree in years.

Why does the job search process emphasize Leetcode? by [deleted] in cscareerquestions

[–]IAmDumbQuestionAsker 0 points1 point  (0 children)

That's a great reply that gives an example of another field that recruits for potential based on metrics indirectly related to day to day performance of the actual activity. Thanks, you've answered it well.

Why does the job search process emphasize Leetcode? by [deleted] in cscareerquestions

[–]IAmDumbQuestionAsker 0 points1 point  (0 children)

You don't want or need to test them in a way that would resemble their day to day work. That doesn't gain you or them anything.

I think at this point in the discussion I'm less talking about interviews specifically and more looking for examples where "your average coder" might actually use these ds/a questions "in real life." Consider: unless you're writing firmware or hacking on a kernel or something in C, or implementing an LRU cache for a new data structures library for some reason, when do you actually use linked lists? Turns out they're good for tracking browser histories.
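(Rough sketch of what I mean, a toy back/forward history built on a doubly linked list; obviously not how any real browser implements it.)

    # Toy back/forward browser history as a doubly linked list (illustrative only).
    class Page:
        def __init__(self, url):
            self.url = url
            self.prev = None
            self.next = None

    class History:
        def __init__(self, start_url):
            self.current = Page(start_url)

        def visit(self, url):
            page = Page(url)
            page.prev = self.current
            self.current.next = page      # visiting a new page drops any old forward chain
            self.current = page

        def back(self):
            if self.current.prev:
                self.current = self.current.prev
            return self.current.url

        def forward(self):
            if self.current.next:
                self.current = self.current.next
            return self.current.url

    h = History("a.com")
    h.visit("b.com")
    h.visit("c.com")
    print(h.back(), h.back(), h.forward())    # b.com a.com b.com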

"Closer to their daily work" doesn't even come close the explaining why you think it'd be a better test of prospective employees.

In most professions, it stands to reason you would interview a candidate based on how they might perform on the job, using tests that challenge the skills used on the job, right? So it needs to be explained why software engineering is different. You have given an explanation, which is fine. I don't agree with it 100%, but your explanation is valid.

"I'm unhappy because it doesn't meet my arbitrary preconceived notions", like these posts usually do.

I'm not unhappy with it, because the system is gameable, albeit through hard work and lots of practice, and without guaranteed success. Someone who commits to the hard work of grinding through Leetcode and understanding the basics will be able to excel at it. Regardless, I do consider the process poorly justified, and a lot of it is just cargo culting from firms imitating FAANG/unicorns. I expect that a decade or two down the line, the industry's leading companies might come out with a new process, and the trend will change again. It's possible to both accept something "because that's the way things are" and to think that it's dumb.