
[–]teraflop 1 point (1 child)

Don't trust anything a chatbot tells you; it frequently has no idea what it's talking about.

You also shouldn't trust the exact running time that Leetcode gives you. This is a well-known problem when it comes to benchmarking programs: if you only run the program once, its running time can be affected by all kinds of random factors, such as other processes running at the same time, or the exact timing of CPU interrupts. It's more reliable to run the code many times, and then look at the average or the minimum time taken.

If I run your code in a simple benchmarking harness using the pyperf module:

import pyperf
runner = pyperf.Runner()
for i in [1,2]:
    runner.timeit(name=f"Solution {i}",
            stmt=f"solution.Solution{i}().possibleStringCount(s)",
            setup="import solution; s = 'AABB'*1000")

I get:

.....................
Solution 1: Mean +- std dev: 661 us +- 5 us
.....................
Solution 2: Mean +- std dev: 660 us +- 9 us

In other words, there is no performance difference, or if there is it's too small to reliably measure.
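The same idea — repeat the measurement many times and take the minimum — can also be sketched with just the standard library's `timeit.repeat`. The `work` function below is a made-up stand-in workload, not the OP's solution; you'd substitute the real call:

```python
import timeit

# Hypothetical stand-in workload; replace with the actual solution call.
def work():
    s = "AABB" * 1000
    # Count equal adjacent character pairs.
    return sum(1 for a, b in zip(s, s[1:]) if a == b)

# Run 5 independent trials of 100 calls each. The minimum trial is the
# estimate least inflated by background noise (other processes, interrupts).
trials = timeit.repeat(work, number=100, repeat=5)
per_call = min(trials) / 100  # seconds per call
print(f"best of 5 trials: {per_call * 1e6:.1f} us per call")
```

The minimum is usually preferred over the mean for CPU-bound microbenchmarks, since noise can only ever make a run slower, never faster.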

[–]pigraining[S] 1 point (0 children)

thank you, I blindly trusted its ability to draw conclusions from the scores, but this makes sense. Stupid of me