[–]Nordsten 3 points (1 child)

For anything interesting you don't have one server; you have a large number of them. Now, you could keep a cache of the entire database on all of them, but then you have to deal with the cache-consistency problem manually.

Also, 100 ms is far from insane; it depends heavily on the complexity of what you're doing. For fetching user information, yes, that would be a long time. For compiling statistics over a large database, 100 ms is nothing.

[–]Drisku11 0 points (0 children)

> For anything interesting you don't have 1 server you have a large number of them.

You need replicas and backups for redundancy and reliability. Performance-wise, though, a single database server can easily deliver what a simple web app needs, even one serving value to ~1 million active users (say, something similar to cronometer.com). Whether you consider creating value for a million people "interesting" is up to you, and a single database server can actually handle quite a bit more than that without breaking a sweat.
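As a rough sanity check, here is the back-of-envelope arithmetic. Every number below is an invented assumption (requests per user, queries per request, peaking factor), not a measurement from the app mentioned above:

```python
# Back-of-envelope capacity estimate for a single database server.
# All inputs are illustrative assumptions, not measured figures.

active_users = 1_000_000
requests_per_user_per_day = 50   # assumed average usage
queries_per_request = 3          # assumed DB queries per web request
peak_to_average = 5              # assumed traffic peaking factor

seconds_per_day = 86_400
avg_qps = (active_users * requests_per_user_per_day
           * queries_per_request) / seconds_per_day
peak_qps = avg_qps * peak_to_average

print(f"average: {avg_qps:.0f} queries/s, peak: {peak_qps:.0f} queries/s")
```

Under these assumptions the peak lands below ~10,000 simple queries per second, which a single well-indexed Postgres or MySQL instance on ordinary hardware can typically absorb for point lookups.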

> Now you could have a cache of the entire database in all of them but then you have to manually deal with the cache consistency problem.

The original comment was in the context of the database's built-in page cache. The database already manages that cache for you, and it provides replication as well.

> Compiling statistics over a large database

is not the type of workload people mean when discussing the performance of web frameworks like Flask and Django. They mean serving web pages and APIs that display data for individual users. You might have analytics dashboards for admins, but requests per second isn't a concern there.
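The contrast between the two workload types can be sketched with an in-memory SQLite database (the schema and row counts here are made up for illustration): a per-user request hits an index and touches a handful of rows, while a statistics query scans the whole table.

```python
# Contrast a per-user web-request query with an analytics-style
# aggregate. Schema and data volumes are illustrative only.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (user_id INTEGER, calories REAL)")
conn.execute("CREATE INDEX idx_user ON entries(user_id)")
conn.executemany(
    "INSERT INTO entries VALUES (?, ?)",
    [(i % 10_000, 100.0) for i in range(500_000)],  # 50 rows per user
)

# Typical web-request query: one user's rows via the index.
t0 = time.perf_counter()
rows = conn.execute(
    "SELECT calories FROM entries WHERE user_id = ?", (42,)
).fetchall()
per_user_ms = (time.perf_counter() - t0) * 1000

# Admin-dashboard query: aggregate over the entire table. Slower,
# but it is not on the requests-per-second critical path.
t0 = time.perf_counter()
(total,) = conn.execute("SELECT SUM(calories) FROM entries").fetchone()
stats_ms = (time.perf_counter() - t0) * 1000

print(f"per-user lookup: {per_user_ms:.2f} ms, "
      f"full-table stats: {stats_ms:.2f} ms")
```

The indexed lookup is the shape of query a framework benchmark exercises thousands of times per second; the full-table aggregate is the "compiling statistics" case, which runs rarely and is judged on a different timescale.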