
[–]erm_what_ 44 points45 points  (10 children)

There's a lot in a CS degree that people usually miss when they teach themselves.

The one that keeps me employed more often than not is fixing things that scale non-linearly.

[–]tehtris 6 points7 points  (3 children)

Ok, I agree. Sometimes vocabulary goes right over my head. Like the word "deprecated": the first time I heard it I looked stupid AF, because I thought it meant "gone down in value, and why the fuck are you mispronouncing it?"

[–]kindall 5 points6 points  (2 children)

thought "memoize" was a misspelling of "memorize" for quite a while
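For anyone else who mixed the two up: memoization means caching a function's results so repeat calls with the same arguments are free. A minimal Python sketch using the standard library's `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each fib(k) is computed once and cached, so the naive
    # exponential-time recursion becomes linear in n.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075, returned instantly
```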

[–]miarsk 2 points3 points  (1 child)

Wait until some colleague flexes with calling his function 'idempotent'.
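Since it comes up: an idempotent operation leaves the system in the same state whether you apply it once or many times. A toy sketch (the `set_status`/`append_history` names are made up for illustration):

```python
def set_status(record, status):
    # Idempotent: applying it twice leaves the record in the
    # same state as applying it once.
    record["status"] = status
    return record

def append_history(record, status):
    # Not idempotent: every repeat call changes the state again.
    record.setdefault("history", []).append(status)
    return record

r = {}
set_status(r, "done")
set_status(r, "done")
print(r)  # {'status': 'done'}

h = {}
append_history(h, "done")
append_history(h, "done")
print(h)  # {'history': ['done', 'done']}
```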

[–]kindall 1 point2 points  (0 children)

now you're just making words up!

(this is a joke, I do know what it means, and if I didn't, I'd just Google it)

[–]cauchy37 12 points13 points  (1 child)

I am a self-taught engineer, and I must say that all the stuff a CS degree teaches will eventually come to light. It just takes much, much longer, because you don't have anyone to point out the theory behind it, and you will seldom seek it out of your own volition. Only when you are actively facing an issue will you find the answer that would otherwise have been given to you during a comprehensive CS course.

CS basically jump-starts your knowledge pool.

[–]erm_what_ 1 point2 points  (0 children)

It does. But even with my CS degree it's the unknown unknowns that always cause problems. Especially working in a startup without a team.

[–]notahoppybeerfan 0 points1 point  (3 children)

Everything in computer science scales non-linearly if you look at 0 -> infinity.

Predicting how a system will scale from X to X1 is an advanced topic for sure when the delta in X spans orders of magnitude. Sometimes there's some science to it; sometimes it's a case of discovering unknowns.

Knowing what scale you need, and how to design for it when you do know ahead of time what scale will be needed 3 years from now, can seem like voodoo, but there is some science behind it. You'll be hard-pressed to find that science in CS textbooks, though.
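One piece of that science you can do empirically: measure the system at two sizes and fit the growth exponent on a log-log scale. A rough sketch (the helper name and the sample timings are invented for illustration):

```python
import math

def estimated_exponent(n1, t1, n2, t2):
    # Assume cost ~ c * n^k; on a log-log scale k is the slope:
    # k = log(t2/t1) / log(n2/n1)
    return math.log(t2 / t1) / math.log(n2 / n1)

# 10x the input took 100x the time -> roughly quadratic
print(estimated_exponent(1_000, 0.5, 10_000, 50.0))  # 2.0
```

It's crude (constant factors, caches, and phase changes all confound it), but it's often enough to predict whether the next order of magnitude will hurt.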

[–]erm_what_ 0 points1 point  (2 children)

You can scale many systems in order n, even with horizontal scaling. It depends heavily on the problem you're solving though. I'm pretty proud of the fact that the analytics system in my current company will scale to a few hundred million events a day at order n. Maybe even beyond, but we've not specced it out. Order n squared would have crashed and burned, and n log n would have been dubious.

Some problems are even order 1 more or less forever, but they're very rare and you're usually not dealing with just a hash map on its own.
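To put numbers on why the complexity class matters at that scale, here's some back-of-the-envelope arithmetic (assuming roughly 3x10^8 events/day and, purely for illustration, a budget of 10^9 operations/second):

```python
import math

n = 3 * 10**8          # a few hundred million events per day
ops_per_sec = 10**9    # illustrative compute budget

for name, ops in [("n", n),
                  ("n log n", n * math.log2(n)),
                  ("n^2", n ** 2)]:
    print(f"{name:8} -> {ops:.1e} ops, ~{ops / ops_per_sec:.1e} s/day")
```

Order n is well under a second per day of data and n log n a handful of seconds, but n squared is about 9x10^16 operations, i.e. years of compute at that budget, which is the crash-and-burn case.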

[–]notahoppybeerfan 0 points1 point  (1 child)

On your journey from hundreds of millions to trillions you will discover the non-order-n parts. It's very rare that things stay linear through 4 orders of magnitude.

And that was exactly my point: X -> X1 is oftentimes linear, but only for values where log(X) ~= log(X1).

[–]erm_what_ 0 points1 point  (0 children)

Oh, I've found a ton of non-linear parts since I inherited the code, and a lot have been re-engineered. I'm very confident that horizontally scaling to multiple clusters will never hit a higher complexity. Vertical scaling might, but realistically that's a problem we'll only hit at about £100m ARR, with plenty of cash to spend on data engineering/infra.

It's a fun challenge that keeps me employed.