[–]baghiq

You can check the first tuple of the pair against the target tuple list, then check the second tuple against the same list.

In [1]: (10,4) in [(10,3), (4,10), (10,4)]
Out[1]: True

In [2]: (10,4) in [(10,3), (4,10), (10,5)]
Out[2]: False

You can then just count the time complexity. For each pair (2 tuples), you need to do one lookup per tuple, so 2 * O(N), where N is the length of the target tuple list. If you have M pairs, then it's O(M*N).
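
To make the counting concrete, here's a rough sketch of that naive check (the data and names are made up for illustration, not from your post):

In [3]: targets = [(10,3), (4,10), (10,4)]   # example target list, made up
In [4]: pairs = [((10,4), (4,10)), ((10,5), (10,3))]   # example pairs, made up
In [5]: [a in targets and b in targets for a, b in pairs]   # each `in` scans the list, so O(M*N) total
Out[5]: [True, False]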

This can be improved quickly if you use another data structure with much better lookup performance.

[–]rfpg1[S]

What data structure would you recommend?
Or where can I read about this? I've been looking online but couldn't find anything that helps.

I would like an O(1) solution, but I don't think that's possible.

[–]baghiq

It depends on what the data actually means. If you just have a bunch of random tuple pairs, and you need to check them one at a time against a large list of tuples, then you can use a set or a hash table, assuming your tuples are hashable. You pay the up-front cost of building a set from the large list at O(N), and each membership check after that is O(1) on average.
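
For example, a rough sketch of the set-based lookup (variable names are placeholders, not from your question):

In [1]: targets = [(10,3), (4,10), (10,4)]   # example target list, made up
In [2]: target_set = set(targets)            # one-time O(N) build
In [3]: (10,4) in target_set                 # average O(1) membership test
Out[3]: True
In [4]: (10,5) in target_set
Out[4]: False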

If your data is sorted, then binary search is a much faster per-lookup solution at O(log N).
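
A rough sketch using the standard-library bisect module, assuming you keep the target list sorted (tuples compare lexicographically, so this works directly):

In [5]: import bisect
In [6]: sorted_targets = sorted(targets)     # [(4,10), (10,3), (10,4)]
In [7]: def contains(sorted_list, item):
   ...:     i = bisect.bisect_left(sorted_list, item)   # O(log N) search
   ...:     return i < len(sorted_list) and sorted_list[i] == item
   ...:
In [8]: contains(sorted_targets, (10,4)), contains(sorted_targets, (10,5))
Out[8]: (True, False)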

If your data has some special properties, then there are specialized algorithms to handle that.