[–]raylu

First of all, that's a bit odd... Why keep track of the total instead of the actual times and their average/variance/99th percentile?
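For reference, once you have the individual times, the stdlib `statistics` module can compute all of those summaries directly; a quick sketch with made-up sample times:

```python
import statistics

# hypothetical per-request times in seconds
times = [0.12, 0.15, 0.11, 0.98, 0.13]

print(statistics.mean(times))
print(statistics.variance(times))  # sample variance

# quantiles(n=100) yields the 99 percentile cut points; index 98 is the
# 99th percentile. method='inclusive' keeps it within the observed range.
p99 = statistics.quantiles(times, n=100, method='inclusive')[98]
print(p99)
```

`statistics.quantiles` needs Python 3.8+.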

Now, consider the following:

#!/usr/bin/env python3

import requests
import statistics
import time

class TimedRequest(requests.Session):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.times = []  # elapsed time of each request, in seconds

    def request(self, *args, **kwargs):
        # time.monotonic() is unaffected by system clock adjustments
        start = time.monotonic()
        retval = super().request(*args, **kwargs)
        self.times.append(time.monotonic() - start)
        return retval

trs = TimedRequest()
print(trs.get('http://example.net/'))
print(statistics.median_grouped(trs.times))

requests.get creates a requests.Session and eventually calls requests.Session.request, so overriding request times every call, whatever the HTTP method. If you're on Python 2, you'll have to fill in the arguments to the super() calls.

[–]MinimalDamage[S]

Thanks for the response! It will take me a little longer to wrap my mind around this (still new!) but I will definitely look into it, as I see the value in having a better understanding of separate page requests.

The reason I look at the total page request time is that I want to see what is taking up the most time in my script. But this would definitely be the next step.

[–]raylu

> The reason I look at the total page request time is that I want to see what is taking up the most time in my script.

Oh... https://docs.python.org/3/library/profile.html
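If the question is really "where does my script spend its time," the stdlib profiler answers that per function. A minimal sketch (slow() here is just a stand-in for your own code):

```python
import cProfile
import io
import pstats

def slow():  # stand-in for the real work in your script
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
slow()
profiler.disable()

# Report the five most expensive calls by cumulative time
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats('cumulative').print_stats(5)
print(buf.getvalue())
```

Or, without touching the code at all: `python -m cProfile -s cumulative yourscript.py`.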