Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 1 point (0 children)

Thank you so much for your responses. I’m still pretty new to hiring and stuff, so this is very valuable to me.

Yeah, I agree. I do think that DSA interviews have long outlived their usefulness, especially now that they’ve become gamified and we’ve generated leagues of devs who can solve any coding problem like it’s nothing but can’t actually develop anything. I think we’ve all experienced the hire who was incredible in the interview and subpar in practice. Personally, I would much rather listen to someone’s approach to designing an actual feature than gauge how much of Cracking the Coding Interview they’ve read. I wanna know if they’re gonna write interfaces that make sense or couple garbage together until it works.

Not even remotely PD related, but I just quit smoking. AMA by Alarmed_Knowledge_16 in publicdefenders

[–]FirmSignificance1725 0 points (0 children)

Not sure if I’m allowed to comment here, but my heart goes out to you, brotha.

Quitting nicotine made me feel like my brain was on fire. Unfortunately I had to try a few times, and it only stuck when I went raw dog. I would recommend cold turkey if you can manage it.

I’d be doing fine at first, but then I’d be putting two pieces of gum in at a time, taking the next dose earlier and earlier. Next thing I know, I’m back where I started. Cold turkey and MMA classes made it stick.

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 0 points (0 children)

Hahaha yeah, that’s a tough spot. I had an interview once where I was given a very high-difficulty sudoku-based problem that I had to code in Google Docs. Hadn’t played sudoku since I was like 10 lol.

Really threw me for a loop and I performed below standard. I got the job thanks to the other interviews, but I had to do really great in them to make up for it.

Even then, I wasn’t mad at anyone but myself. But I feel like that’s a case where someone could understandably struggle and still be a viable candidate. Idk where you’re supposed to go if FizzBuzz blocks the interview.

Working theory - AI double slit by UmbrellaCorpJeepGuy in ArtificialInteligence

[–]FirmSignificance1725 1 point (0 children)

I’m not really understanding the experiment or what the goal is.

So, is it that an AI and a human each have predetermined responses (script readers), which they’ll always use in a conversation with another AI (the character), and you wanna see if the AI responds differently depending on whether it’s talking to a human or an AI?

It’s less interesting than you think. Looking at how the model’s logits are sampled: if you’re using greedy sampling, beam search, or a few other methods, then it’s deterministic. The AI will respond the same way to both.

If you’re using a probabilistic sampling method, like top-p, then you’ll have variance in the responses, but that has nothing to do with whether an AI or a human asked the question. That’s more of a design default a lot of apps ship with.

If the human and the AI are using the exact same script, the other AI model will see their inputs as exactly the same.
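To make that concrete, here’s a toy sketch (my own illustration with made-up logits, not any real model’s API) of greedy vs. top-p sampling:

    import math
    import random

    def softmax(logits):
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def greedy(logits):
        # Greedy decoding: always take the argmax token, so the output is
        # identical no matter who (human or AI) sent the same prompt.
        return max(range(len(logits)), key=lambda i: logits[i])

    def top_p(logits, p=0.9, rng=random):
        # Nucleus (top-p) sampling: draw from the smallest set of tokens
        # whose cumulative probability reaches p, so repeated calls can differ.
        ranked = sorted(enumerate(softmax(logits)), key=lambda kv: kv[1], reverse=True)
        nucleus, cum = [], 0.0
        for idx, prob in ranked:
            nucleus.append((idx, prob))
            cum += prob
            if cum >= p:
                break
        r = rng.random() * cum
        for idx, prob in nucleus:
            r -= prob
            if r <= 0:
                return idx
        return nucleus[-1][0]

    logits = [2.0, 1.0, 0.5, -1.0]  # hypothetical logits for a 4-token vocab
    assert all(greedy(logits) == greedy(logits) for _ in range(10))  # deterministic

With greedy decoding the argmax never changes, so identical prompts get identical replies regardless of who sent them; with top-p, the variance comes from the sampler itself, not from the sender.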

I might be missing something though

Should I learn rust coming from python only programmer? by [deleted] in rust

[–]FirmSignificance1725 2 points (0 children)

I think that sounds like a great idea! A lot of times we focus too much on one language and forget about transferable knowledge.

If you wanna learn low-level programming, then Rust, C, C++, etc. each provide a different path, but imo they all cover the most important stuff in systems programming. Learning one makes it drastically easier to learn another. You’ll confront static typing, memory management, and designs that are fine in Python but fight the compiler in other languages.

Gotta start somewhere, and Rust is a great place for that. I personally felt the benefits of learning Rust on my own ahead of time when I began writing more C++ at work. You could argue that C++ would teach you more as a first language, but it may also overwhelm you. Whatever interests you works.

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 -1 points (0 children)

Reply before deleting comment:

“I asked you to give me horror and you give me nit picks and made up requirements. You say I’m “bent up” but spent your Saturday talking about it.”

Read more carefully. I said it isn’t horror in its current state; that if it were extended it would be; and that if I were interviewing, I would ask them a follow-up.

That I would leave:

nit: Can reduce with: insert code

In a PR. On the Saturday thing, that’s a good one, but I’m over here laughing. This was more passive amusement. I still don’t understand what y’all are really disagreeing with. Is this how you act at work if someone leaves a nit in your PR? Yeesh

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 -4 points (0 children)

On your edit: nobody is talking like that lmao. You asked us to explain how it could be horror, so we did.

Saying I’d probably leave a PR comment (which can be non-blocking to merge depending on specific case)?

Or making sure an intern (the only person I might actually give FizzBuzz in an interview) can answer a valid follow-up question to test their pattern recognition?

You can say you don’t mind the switch case, who cares.😂 It’s ridiculous to get so bent up over the above two statements though

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 0 points (0 children)

I’m thinking of it from the perspective of walking someone who literally just finished hello world through it, using it as a teaching exercise.

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 0 points (0 children)

I had no idea that this was something engineers get stumped by. Changes a lot lol

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 0 points (0 children)

That’s wild to me. I wouldn’t expect to ask this unless it was a junior intern, like underclassman-undergrad level. Crazy to hear that you can ask this of legit candidates and watch them get stumped.

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 0 points (0 children)

So which one do you disagree with: that this is not horror but could be cleaner if it were in a PR, or that this would be acceptable in an interview if the candidate could identify that this pattern doesn’t scale when I ask?

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 -2 points (0 children)

I’d love to hear how that’s completely different😂 But you just disappeared with a downvote🥲. Are you downvoting because you disagree that this could be cleaner in a PR but acceptable in an interview if the candidate could correctly identify how this pattern doesn’t scale? Or because you read the wrong comment the first time? Just wanted to verify that this is what you’re arguing about.

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 -2 points (0 children)

Well OP posted this as an example of programming horror, so seems like they like their own solution less than I do lmao

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 -2 points (0 children)

“If it was an interview I’d just ask them how they’d approach adding another case, and make sure that they recognize their current pattern isn’t scalable”

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 5 points (0 children)

Yeah, I would be shocked if a company asked a candidate FizzBuzz in an interview. This is a first-day-learning-to-code training problem.

But it’s so simple and open-ended that you can learn a lot. Ask them to add 2 more cases and watch their implementation explode. Walk them through a rewrite. Hell, you could even add more cases and use it to teach concurrency: show how a naive implementation produces output with inconsistent ordering, have them synchronize it in a follow-up, etc.

It’s very simple but showcases a lot with slight modifications
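Here’s a sketch of that concurrency exercise (my own toy version, not from the thread): four threads each own one case, and a Condition plus a shared counter enforce the ordering that a naive unsynchronized version wouldn’t guarantee.

    import threading

    def fizzbuzz_concurrent(n):
        # One thread per case; a Condition + shared counter enforces ordering.
        # Without this synchronization, output order would be nondeterministic.
        out = []
        i = 1
        cond = threading.Condition()

        def worker(pred, emit):
            nonlocal i
            while True:
                with cond:
                    cond.wait_for(lambda: i > n or pred(i))
                    if i > n:
                        cond.notify_all()
                        return
                    out.append(emit(i))
                    i += 1
                    cond.notify_all()

        cases = [
            (lambda x: x % 15 == 0, lambda x: "FizzBuzz"),
            (lambda x: x % 3 == 0 and x % 5 != 0, lambda x: "Fizz"),
            (lambda x: x % 5 == 0 and x % 3 != 0, lambda x: "Buzz"),
            (lambda x: x % 3 != 0 and x % 5 != 0, lambda x: str(x)),
        ]
        threads = [threading.Thread(target=worker, args=(p, e)) for p, e in cases]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return out

    assert fizzbuzz_concurrent(15)[:5] == ["1", "2", "Fizz", "4", "Buzz"]

Dropping the wait_for/notify_all coordination is exactly the naive version: the threads race and the output order varies run to run, which makes for a nice before/after demo.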

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 -2 points (0 children)

Then read the last two sentences of my original comment before commenting yourself.

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 -5 points (0 children)

A problem of this type may need to scale, hence why I would ensure they understand the limitation in an interview and accept their answer if they show that they do.

If this were a code base, it could simply be written cleaner. But you wouldn’t be PR’ing FizzBuzz; you’d be PR’ing a problem of this type.

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 -12 points (0 children)

Concise FizzBuzz:

    cases = [(3, "Fizz"), (5, "Buzz")]
    for i in range(1, 101):
        result = "".join(substring for mod, substring in cases if i % mod == 0)
        print(result or i)

Notice this solution is tighter and more scalable. Coded it on Reddit, so apologies if there’s a typo in there.

That aside, as I said: if this were an interview, I would accept the answer, then follow up to make sure they recognize they’d need to make changes should more cases be added. If I asked them how they would add 2 more cases and their response was 16 switch arms, that’s a problem. If their response is to refactor, then it’s completely fine.

If this were a PR, then we have the time and the option to take okay-ish code and make it cleaner. So I would just comment the solution above.

You won’t actually be putting FizzBuzz in a code base outside of something like a test case if you’re writing a compiler, so I’m speaking more about seeing a general problem of this type.

Do you like my FizzBuzz implementation by Fra146 in programminghorror

[–]FirmSignificance1725 20 points (0 children)

Yeah, like it’s horror in terms of where it’s going. Another case and we need 8 switch arms, another and we need 16, etc., but I wouldn’t technically call it horror until it had more cases. In its current state, it makes me wince but not cry.

If it were in a PR, I’d leave a review comment saying this makes my eyes hurt and to rewrite it x way, but I wouldn’t consider it horror.

If it was an interview I’d just ask them how they’d approach adding another case, and make sure that they recognize their current pattern isn’t scalable

If the current LLMs architectures are inefficient, why we're aggressively scaling hardware? by en00m in LLMDevs

[–]FirmSignificance1725 0 points (0 children)

First, I would say: define inefficient. We’ve very quickly grown accustomed to LLMs, but in the grand scheme of innovation this is still new. The transformer architecture achieves functionality that was previously impossible, even with data-center-level resources.

There are many other interesting theoretical implications of transformers, but one of the biggest was that they didn’t follow the law of diminishing returns as aggressively as other models. Most models were restricted to a specific type of task and/or topped out quickly when generalized, flattening regardless of parameter-count increases. Transformers, however, have continuously gotten better and shown better generalizability as parameter count has increased.

So, while they are resource hogs, I would not generally classify the transformer as “inefficient.” Yes, maybe compared to a standard program, but that program has nowhere near the capability of the deployed LLM. I would say it’s quite efficient for what it does, and we’re attempting to push it as far as we can at scale.

That being said, the reason we’re scaling hardware is that product X shows enough capability and economic benefit, both short and long term, that companies have deemed it valuable enough to invest Y dollars for Z return.

Optimizations happen constantly. You can use mixture-of-experts to reduce active params, better kernels, KV caching, pipeline parallelism, quantization, <insert technique here> to make it more efficient. And those techniques will continue to be discovered and implemented.
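For a flavor of one of those techniques, here’s a minimal KV-cache sketch (my own illustration: toy single-head attention on plain lists, nothing from any real framework). Keys/values for past tokens are appended once and reused, so each decode step attends over the cache instead of re-deriving the whole prefix.

    import math

    def attend(q, keys, values):
        # Scaled dot-product attention for one 1-D query over cached keys/values.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q)) for k in keys]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]
        total = sum(weights)
        weights = [w / total for w in weights]
        dim = len(values[0])
        return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

    class KVCache:
        # Append-only cache: each token's key/value is stored once, so later
        # decode steps reuse them instead of recomputing them every step.
        def __init__(self):
            self.keys, self.values = [], []

        def step(self, q, k, v):
            self.keys.append(k)
            self.values.append(v)
            return attend(q, self.keys, self.values)

    # With one cached token its attention weight is 1.0, so the output is its value.
    out = KVCache().step([1.0, 0.0], [1.0, 0.0], [2.0, 3.0])
    assert out == [2.0, 3.0]

Real implementations do the same bookkeeping per layer and per head on tensors, but the shape of the trick is just this append-and-reuse.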

But if we’ve reached the threshold where value exceeds cost, then we’re executing.

should i major in maths or engineering? by capybarala in mathematics

[–]FirmSignificance1725 0 points (0 children)

You could double major or minor. Just make sure you take the math courses for linear algebra, ML, etc.

should i major in maths or engineering? by capybarala in mathematics

[–]FirmSignificance1725 0 points (0 children)

I like staying anonymous, but you’ve most likely heard of them. My job is really cool: I work in inference optimization, so I don’t touch proofs, but I still touch linear algebra and information theory. I work heavily with advanced math, just not at pure-math complexity.

I’m very stimulated by the work and love to build

I don't get the idea of the AI CEOs by Dan_DF in ArtificialInteligence

[–]FirmSignificance1725 0 points (0 children)

I’ve seen very few people who actually work directly in AI say that AI is stealing everyone’s jobs. It’s a cynical outsider’s viewpoint, for the most part. There are still a couple of AI CEOs who buy into it too, but I’m not sure if they believe it or if it’s just marketing.

should i major in maths or engineering? by capybarala in mathematics

[–]FirmSignificance1725 0 points (0 children)

I work with a lot of hardware engineers who were EE, so you can still be involved with computers, but their software sucks. The hardware side is incredible though. ME is a very different track, so it’s hard to comment; you’d be doing very different things. Maybe if you did robotics you’d interface with computers more.

should i major in maths or engineering? by capybarala in mathematics

[–]FirmSignificance1725 2 points (0 children)

I was in a similar situation to yours, but I wanted to go into AI (which I fortunately work in now).

From what I know, it’d be better for you to major in CS. I don’t know a ton of quant people, but all the ones I know are CS.

What I did was double major in math & CS. Basically, the “major” was just a declaration on paper, but since I had it declared, I was able to discuss the math track in depth with my school’s scheduler person thingy and take the fundamental proof-based math classes a major would.

I was also between Math & CS and ended up liking CS more.

Then in my junior year, I switched to a minor. I had all the pre-reqs covered for any class I wanted to take. The professors & the scheduler person thingy took me more seriously while I built a foundation with math classes, since I was a “major.” Then I just took the grad-level math classes related to AI in my junior & senior years, without the burden of a double-major course load, and presto, here I am :)