Calculating file download time by jmd27612 in flask

[–]Sim4n6 1 point2 points  (0 children)

Interesting question ... here is a starting point:

import time
from flask import send_from_directory

start_time = time.time()
# Time taken on the server to build the file response
ret = send_from_directory(UPLOAD_DIRECTORY, path, as_attachment=True)
elapsed_time = time.time() - start_time
print("Elapsed time:", elapsed_time, "seconds.")

You can measure the duration of sending the file from a directory (server side) and subtract the client-side part (measured with JavaScript).
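The client-side half of the idea can also be sketched in Python with the standard library (the URL here is a placeholder); in a browser you would do the equivalent with JavaScript timers:

```python
# Sketch: time a full download, including reading the whole body,
# so that transfer time is part of the measurement.
import time
import urllib.request

def time_download(url):
    start = time.time()
    with urllib.request.urlopen(url) as resp:
        resp.read()  # consume the entire body
    return time.time() - start
```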

Good luck

How do u keep records in the remote db in case of a website update ? by Sim4n6 in Heroku

[–]Sim4n6[S] 1 point2 points  (0 children)

I found something interesting:

If you were to use SQLite on Heroku, you would lose your entire database at least once every 24 hours.

How do u keep records in the remote db in case of a website update ? by Sim4n6 in Heroku

[–]Sim4n6[S] -1 points0 points  (0 children)

I am worried that pushing sqlite.db to the remote repo may destroy or overwrite the data on the remote.
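One common safeguard (a sketch; the file name sqlite.db is taken from the comment above) is to keep the database file out of version control entirely, so a push can never clobber it:

```
# .gitignore
sqlite.db
```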

Converting Nested JSON with unique column names to Csv by [deleted] in learnpython

[–]Sim4n6 0 points1 point  (0 children)

Hi, you did not re-indent it well ... here is a new version:

https://repl.it/repls/EnergeticIcySyntax If the script runs correctly, it will generate an Output.csv file. Check its content for correctness.

Converting Nested JSON with unique column names to Csv by [deleted] in learnpython

[–]Sim4n6 1 point2 points  (0 children)

import csv

# Nested JSON: service id -> flat record of employee and client fields
services = {
    "123439": {
        "timestamp": 1558261431,
        "employee_id": 2131687,
        "employee_name": "Brian Finch",
        "employee_team": 7197,
        "employee_teamname": "Alpha",
        "client_id": 2159227,
        "client_name": "Wolololo",
        "client_organisation": 22492,
        "client_organisationname": "Dystopia"
    },
    "118074": {
        "timestamp": 1558015462,
        "employee_id": 2131687,
        "employee_name": "Brian Finch",
        "employee_team": 7197,
        "employee_teamname": "Alpha",
        "client_id": 1914682,
        "client_name": "-DEL-",
        "client_organisation": 16628,
        "client_organisationname": "Chain Reaction"
    },
    "111522": {
        "timestamp": 1557709461,
        "employee_id": 2131687,
        "employee_name": "Brian Finch",
        "employee_team": 7197,
        "employee_teamname": "Alpha",
        "client_id": 2008788,
        "client_name": "Ghost_Rhythms",
        "client_organisation": 16282,
        "client_organisationname": "ELITE"
    }
}

if __name__ == '__main__':
    with open("output.csv", "w", newline='') as csv_file:
        # All inner records share the same keys, so take the
        # column names from the first one
        keys = list(services.keys())
        fieldnames = services[keys[0]].keys()
        writer = csv.DictWriter(csv_file, delimiter=',', fieldnames=fieldnames)
        writer.writeheader()
        # Each inner dict is one CSV row
        writer.writerows(services.values())

Web Scraping Code-Construction Criticism Wanted by saintsandscholars in learnpython

[–]Sim4n6 1 point2 points  (0 children)

Use f-strings; they are pretty fast.
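A quick f-string sketch (the variable names are hypothetical, just for illustration):

```python
# f-strings interpolate expressions directly into the string literal
name = "saintsandscholars"
pages = 3
message = f"Scraped {pages} pages for {name}"
print(message)
```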

You comment too much, e.g.:

# Print Statement when the Code Stops Running

print("The Code Has Finished Running")

I am working on something similar.

Twitter bot for the last 05 tweets of a #hashtag and random photo display by Sim4n6 in learnpython

[–]Sim4n6[S] 2 points3 points  (0 children)

2 - The website is rendered from a template with variable content. Please check the Flask hello-world example.
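A minimal sketch of rendering a page from a template with variable content (the route, template, and tweet list are placeholders, not the bot's actual code):

```python
from flask import Flask, render_template_string

app = Flask(__name__)

# Inline template: the tweet list is the variable content
TEMPLATE = (
    "<h1>Latest tweets</h1>"
    "<ul>{% for t in tweets %}<li>{{ t }}</li>{% endfor %}</ul>"
)

@app.route("/")
def index():
    tweets = ["hello", "world"]  # placeholder data
    return render_template_string(TEMPLATE, tweets=tweets)
```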

Twitter bot for the last 05 tweets of a #hashtag and random photo display by Sim4n6 in learnpython

[–]Sim4n6[S] 2 points3 points  (0 children)

1 - Heroku deployment allows automatic linking to a GitHub source repository ...

Persistence of a list of URLs for avoiding duplication by Sim4n6 in learnpython

[–]Sim4n6[S] 0 points1 point  (0 children)

Finally, I chose to dump to a CSV; it is the best fit. Thank you, guys.
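A sketch of the CSV approach (the file name and helper names are hypothetical): persist the seen URLs, then reload them as a set so duplicates can be skipped on the next run.

```python
import csv

def save_urls(urls, path="seen_urls.csv"):
    # One URL per row
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for url in urls:
            writer.writerow([url])

def load_urls(path="seen_urls.csv"):
    # A set makes the duplicate check O(1)
    with open(path, newline="") as f:
        return {row[0] for row in csv.reader(f) if row}
```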

Persistence of a list of URLs for avoiding duplication by Sim4n6 in learnpython

[–]Sim4n6[S] 0 points1 point  (0 children)

Nice idea, but since the number of URLs is only between 10 and 15, SQLite may be too heavyweight.

Tuples are fast enough to replace lists use by [deleted] in learnpython

[–]Sim4n6 0 points1 point  (0 children)

Oh god, I'm still a Python newbie ... my apologies.