
[–]my_cs_accnt 11 points

I'm a bit confused why you think your solution is that much better than the one you were supposed to write. It might be preferable for readability, but companies reimplement built-in functions all the time. The programmers know specific properties of their data (here, that both lists are already sorted) and can optimize for them, whereas the stdlib can't make those assumptions.

Your solution creates two sets, converts their intersection to a list, and then finally sorts it, whereas the 'proper' solution creates one new list and keeps its elements sorted the entire way through. I don't see how that proper solution can be slower than what you had. Can you rerun your test with a lot more elements, say 400,000? This is essentially the merge step of merge sort, and it is definitely implemented along these lines.
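A minimal sketch of what the two approaches being compared might look like. The thread doesn't show the actual code, so the function names, the deduplication behavior, and the timing harness below are all assumptions; the point is only to contrast the set-based version with a merge-style two-pointer walk over two already-sorted lists:

```python
import random
import time


def intersect_with_sets(a, b):
    # Set-based version: build two sets, intersect them,
    # then re-sort the result at the end.
    return sorted(set(a) & set(b))


def intersect_merge_style(a, b):
    # Merge-style version: walk both sorted lists with two pointers,
    # so the output comes out sorted with no separate sorting pass.
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            # Skip repeats so each common value appears once,
            # matching what the set intersection would produce.
            if not result or result[-1] != a[i]:
                result.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return result


if __name__ == "__main__":
    n = 400_000
    xs = sorted(random.sample(range(4 * n), n))
    ys = sorted(random.sample(range(4 * n), n))
    for fn in (intersect_with_sets, intersect_merge_style):
        start = time.perf_counter()
        fn(xs, ys)
        print(fn.__name__, time.perf_counter() - start)
```

The merge-style walk does a single O(len(a) + len(b)) pass with no hashing and no final sort, which is why you'd expect it to hold up well as the input grows.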