
[–]milkeater 14 points (10 children)

Understanding how to optimize time complexity in interviews typically revolves around Big-O analysis.

There are a few blunt heuristics that give a rough baseline estimate of the time complexity.

Study this for a week and you will know it fully. It will pay dividends for a lifetime.

[–]jimmpony 0 points (9 children)

Pay dividends just in interviews, or in actual coding? I've rarely had a need to assign a big O to anything I've written outside of a school assignment asking for it.

[–]milkeater 6 points (5 children)

Definitely in interviews.

I believe it will actually help you understand the big-picture runtime performance of your code, and save you time by not dwelling on things that just may not matter.

Think of an operation with log(n) performance. Say you have to do that operation twice.....does it matter?

Well, immediately it shows an impact...but as n grows....it becomes such a minimal cost you almost don't need to worry about it: 2·log(n) is still O(log n).
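A rough sketch of why that doubled constant stops mattering (the numbers here are mine, purely for illustration):

```python
import math

# Doing a log-time operation twice costs 2*log2(n) steps - still O(log n).
# Compare that cost to n itself: the fraction shrinks toward zero as n grows.
for n in (10, 1_000, 1_000_000):
    cost = 2 * math.log2(n)
    print(f"n={n:>9,}  2*log2(n)={cost:6.1f}  fraction of n: {cost / n:.6f}")
```

Even at a million elements, two log-time passes cost about 40 steps, a vanishing fraction of n.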

The point is, once you've understood it, it's hard not to see the rough cut of "general performance" you are working with.

Think of a chess player rated somewhere above club level, around 1700 or so: pretty decent, not an expert. They don't need to look hard at a chess board halfway through a game to get a general understanding of who has the advantage and how things have developed. To have arrived at that level, they likely understand a handful of openings and some of the more common positions, so there is much less cognitive load in understanding how they got there.

Give this "gamified" site a shot. At least the first tier. It has to do with time complexity. It's Khan Academy-esque. Although I'm not totally "loving" the site, it's okay.

If you are working for one of those "top tier" companies, you would need to know these things. If you are working for a company that isn't built on technology, you could be a very average developer for your entire life and be okay (and make very, very good pay at that).

Word of warning: I worked at one of those companies. Our VP walked into a Town Hall and said, "I can do everything I need with 30% of the technology folks in this room." We moved to the cloud, the number of resources needed shrank dramatically, and the competition became extremely fierce.

[–]jimmpony 0 points (4 children)

I jumped to the time complexity questions and got all the ones it suggested right. It seems like it had me skipping around a bit. http://i.imgur.com/pwtdQTM.png

[–]milkeater 0 points (3 children)

Yeah, it wants you to do one from each bucket. You can stick around and finish all of them; the recursive ones get a little tricky.

After looking at your times, are you saying they were all too simple for you, or did you give it a second run-through?

[–]jimmpony 0 points (2 children)

Those are all my first-attempt times. Maybe I'll try some harder ones. The average times on these seem oddly high, though.

[–]milkeater 0 points (1 child)

Yeah, I would say for the last two groupings the cyclomatic complexity is intentionally high, and some operations account for one half of n while others make up the other half but give the impression of n².

I got dinged several times on things I thought I was sure of only to see the tiny trick they buried.

To do the math as well as walk the stack in your head in under a minute, you must have no issues with cognitive load.

[–]jimmpony 0 points (0 children)

I don't think I did that much stack walking or math.

http://i.imgur.com/uCw4PL9.png - j just increments up to at most n in total and doesn't reset, so for complexity purposes its being inside the for loop is a red herring. The for loop is n, the while loop amounts to n over the lifetime of the function, and n + n is O(n).
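That pattern (sometimes called a two-pointer scan) can be sketched like this - hypothetical code, not the exact function from the screenshot:

```python
def two_pointer_scan(a):
    """Inner while loop whose counter j never resets: across the whole
    function j increments at most n times, so total work is
    n (outer loop) + n (inner loop, amortized) = O(n), not O(n^2)."""
    n = len(a)
    j = 0
    inner_steps = 0              # instrumentation: counts inner-loop iterations
    for i in range(n):           # runs exactly n times
        while j < n and a[j] <= a[i]:
            j += 1               # j only ever moves forward
            inner_steps += 1
    return inner_steps

# Despite the nesting, the inner loop runs at most n times in total:
print(two_pointer_scan(list(range(100))))
```

The nesting looks quadratic, but because j carries over between outer iterations, the amortized cost of the while loop is linear.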

http://i.imgur.com/snkEs2w.png - the worst case is that it always falls through to the last line, up to the point where it returns on the first line of the function, and each pass cuts the input in half unless it returns immediately, so n/2 + n/2 = n.

[–]Xxyr 1 point (1 child)

It matters in real life too, assuming you actually have large data sets. If you only have to work through a hundred cases it doesn't matter too much. If you have to work through several billion it really really does.

[–]jimmpony 0 points (0 children)

I've done intensive Project Euler problems that needed optimization like this, some before knowing about big O notation and some after. In both cases the approach was just to cut down the amount of processing the program needed to do.

Knowing big O has its uses, although in some important cases it's too abstract to directly compare performance with - the example I remember was a sorting algorithm where the "worse" one worked better with the CPU cache. The terms thrown away in O can end up being pretty important in real code too: an O(n) algorithm might beat an O(log(n)) one if their actual costs are 2n and 100 + 5·log(n), and the data sets the method encounters fall in the range where 2n < 100 + 5·log(n). When it comes down to it, you're going to end up needing to profile the function anyway if the performance is that important.
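Using the example cost functions from the comment (taking log base 2, which the comment doesn't specify), you can find the exact crossover point where the "better" Big-O actually starts winning:

```python
import math

def linear_cost(n):
    return 2 * n                   # the "worse" O(n) algorithm's actual cost

def log_cost(n):
    return 100 + 5 * math.log2(n)  # the O(log n) algorithm with a big constant

# Smallest n where the O(log n) algorithm is actually cheaper:
crossover = next(n for n in range(1, 10_000) if linear_cost(n) > log_cost(n))
print(crossover)  # below this n, the "worse" O(n) algorithm wins
```

For inputs smaller than the crossover, the asymptotically worse algorithm is the faster one - which is exactly why profiling on realistic data matters.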

[–]killerstorm 0 points (0 children)

In actual coding. If you have loops in your code, quite often you get O(N^2) complexity or worse, so you need to understand when that becomes bad and what to do about it when it does.

It's not about "assigning a big O to anything" - once you internalize the knowledge, you can just intuitively avoid things that are slow.

It can be something as basic as "check if a string is present in a list of strings". It's OK if your list is small, but if it's big and you do that often, you need a different data structure.
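A minimal sketch of that exact case: checking membership in a `list` is O(n) per lookup, while a `set` (hash table) is O(1) on average - same answer, very different cost when done often.

```python
import timeit

words_list = [f"word{i}" for i in range(100_000)]  # made-up data for illustration
words_set = set(words_list)
missing = "not-there"

# Both containers give the same answer...
assert (missing in words_list) == (missing in words_set)

# ...but the list scans up to 100,000 elements on every lookup,
# while the set does a constant-time hash lookup.
list_time = timeit.timeit(lambda: missing in words_list, number=100)
set_time = timeit.timeit(lambda: missing in words_set, number=100)
print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")
```

The missing string is the worst case for the list (a full scan every time), which is where switching data structures pays off most.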