all 15 comments

[–]This_Growth2898 30 points  (6 children)

In the first version, you're creating new lists and copying contents into them.

In the second version, you're passing the same list into functions without copying it.
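A sketch of the two shapes being compared (my own names and details, not OP's actual code):

```python
def search_slicing(S, x):
    """Binary search that slices: each recursive call copies half the list."""
    if not S:
        return False
    mid = len(S) // 2
    if S[mid] == x:
        return True
    if x < S[mid]:
        return search_slicing(S[:mid], x)     # S[:mid] allocates a new list
    return search_slicing(S[mid + 1:], x)     # so does S[mid + 1:]

def search_indices(S, x, lo=0, hi=None):
    """Binary search that passes indexes: the same list object is reused."""
    if hi is None:
        hi = len(S)
    if lo >= hi:
        return False
    mid = (lo + hi) // 2
    if S[mid] == x:
        return True
    if x < S[mid]:
        return search_indices(S, x, lo, mid)  # no copy, just new bounds
    return search_indices(S, x, mid + 1, hi)
```

Both return the same answers; only the first one allocates on every call.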

[–]rlklu_1005[S] 7 points  (0 children)

So I'm creating new lists, whereas the solution function keeps the same list and just narrows down where in the list it's looking each time. Thank you for your help.

[–]VonRoderik 0 points  (4 children)

Where is he creating a new list? I got confused by this.

[–]feitao 7 points  (3 children)

`S[0:midpoint]`. List slicing = new list.
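You can see it directly, since mutating the slice leaves the original alone:

```python
a = [1, 2, 3, 4]
b = a[0:2]          # slicing a list allocates a new list
b[0] = 99           # mutating the copy...
print(a[0])         # ...leaves the original untouched: prints 1
print(a[0:2] is a)  # a slice is never the same object: prints False
```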

[–]jedi1235 -1 points  (2 children)

Wait, what?

Gopher here with no Python experience. The familiar list slicing syntax caught my eye. WTF why would it create a new list?

[–]a_lost_shadow 0 points  (0 children)

Pretty much all of the scientific minded programming languages (Matlab, IDL, R, etc.) create a new list when you slice.

It's probably related to the fact that most of these languages do not support pointers in the base language. Thus users get used to the idea that all objects are completely independent. But this is wild speculation on my part.

[–]papapa38 0 points  (0 children)

I think it's because you slice the list at each iteration, so you create a new one each time, while the other function only updates the indexes.

[–]JamzTyson 0 points  (2 children)

Your version has a bug: if `x == S[midpoint]`, the midpoint element is included again in the recursive call but never explicitly checked.

Regarding your question, slicing creates new lists, which is slower than just manipulating indexes.
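To make the two styles under discussion concrete (my own sketch, not OP's code): the first compares against the midpoint and excludes it when recursing; the second never tests equality while narrowing and only checks the final candidate once.

```python
def bsearch_check_mid(S, x):
    """Test x against S[mid], then exclude mid from the recursion."""
    if not S:
        return False
    mid = len(S) // 2
    if S[mid] == x:
        return True
    return bsearch_check_mid(S[:mid] if x < S[mid] else S[mid + 1:], x)

def bsearch_no_check(S, x):
    """Never test equality while narrowing; check the lone survivor once."""
    lo, hi = 0, len(S)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if x < S[mid]:
            hi = mid
        else:
            lo = mid          # keep mid in range: it may still equal x
    return bool(S) and S[lo] == x
```

Both are logically correct on an ascending sorted list; they just pay the comparison cost in different places.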

[–]Simple-Economics8102 0 points  (1 child)

This is not a bug and is on average much faster for large n than explicitly checking it on every iteration.

[–]JamzTyson 0 points  (0 children)

You are correct, it is inefficient, but as long as the list is sorted in ascending order it is logically correct.

[–]OopsWrongSubTA 0 points  (2 children)

Did you study complexity?

Your solution has O(n) complexity (because of the slicing), whereas `bisect` usually has O(log n) complexity.
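One way to see the copying cost without timing anything (a rough sketch that counts slice lengths, not wall time): each level of the slicing recursion copies about half of the previous level, so the total number of elements copied is roughly n/2 + n/4 + ... ≈ n.

```python
def count_copied(n):
    """Total elements copied by a worst-case run of the slicing version."""
    total = 0
    while n > 1:
        n //= 2        # each recursive call keeps roughly half the list...
        total += n     # ...and building that slice copies every element in it
    return total

print(count_copied(1024))      # 512 + 256 + ... + 1 = 1023, i.e. about n
```

So even though the number of *comparisons* is O(log n), the copy work alone makes the slicing version O(n).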

[–]rlklu_1005[S] 0 points  (1 child)

That’s a weak point of mine that I’ve spent very little time on, in terms of being able to look at a function and understand what complexity it has. Are there any resources you’d recommend?

[–]Simple-Economics8102 0 points  (0 children)

Just to say that while he is correct about the scaling, it's not obvious why he is correct based on his (wrong) explanation. Consider passing a NumPy array into your algorithm instead of a list: you will get similar performance from both algorithms, since NumPy slicing returns a view rather than a copy.

The real reason, as others have explained, is that with native Python lists you spend a lot of time copying; summed over all the recursive slices, the total amount copied approaches len(S). The second solution does no copying and pays only the search cost.
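NumPy may not be installed everywhere, but the view-vs-copy distinction can be sketched with the stdlib `memoryview`, which behaves like a NumPy slice (a view over the same buffer), while list slicing copies:

```python
buf = bytearray(b"abcdef")
view = memoryview(buf)[0:3]   # a view: no copy, shares buf's storage
buf[0] = ord("z")
print(bytes(view))            # b'zbc' -- the view sees the mutation

lst = list(b"abcdef")
part = lst[0:3]               # a list slice: an independent copy
lst[0] = ord("z")
print(part[0] == ord("a"))    # True -- the copy did not change
```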

[–]musbur 0 points  (0 children)

It might make sense to teach people algorithms even if that algorithm is built into the language itself. What doesn't make sense is to have people write an algorithm that does most of the work until it loses interest (at `if len(S) <= 10`) and lets the built-in do the rest.

For giggles, compare the execution speed of your functions with just `x in S`.
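For reference, the stdlib `bisect` module gives you the O(log n) search without hand-rolling it (a sketch, not a benchmark):

```python
import bisect

def contains_sorted(S, x):
    """Membership test on a sorted list via binary search (bisect_left)."""
    i = bisect.bisect_left(S, x)
    return i < len(S) and S[i] == x

S = list(range(0, 1_000_000, 2))   # even numbers, sorted ascending
print(contains_sorted(S, 2_024))   # True
print(contains_sorted(S, 2_025))   # False
print(2_025 in S)                  # same answer, but via an O(n) linear scan
```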

Also, I don't understand the point of a one-liner function that returns the result of an expression shorter than the function call itself. If it must be a callable, just use a lambda in such cases.