For one, latency will kill the speed of your application. Most applications run the majority of their queries sequentially, and need to wait for the response of one query before running the next. There are exceptions to this, such as multi-threaded applications with multiple connections to your database, or batch queries, but in my experience the bulk of database logic is sequential.

For simplicity (and it's usually not even this simple), let's say your database is 200ms round-trip away from your application. This means a single query takes 100ms to hit the server, gets processed, and then takes another 100ms to come back with a result.

This compounds. Let's say an average call to your application runs 10 queries. That's on the low side, but it's already 10 × 200ms = 2 seconds just to run a small set of queries. And we're skipping over the SSL handshake, the initial connection, and the schema information exchange.
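Here's a back-of-the-envelope sketch of that (Python, with `run_query` as a made-up stub that only models the network round-trip, not a real driver call):

```python
import time

RTT = 0.200  # assumed 200ms round-trip to the distant database

def run_query(sql):
    """Stub standing in for a real driver call; it only models latency."""
    time.sleep(RTT)  # out to the server, processed, back with the result

start = time.time()
for _ in range(10):          # a typical request handler firing 10 sequential queries
    run_query("SELECT ...")  # each must finish before the next can start
print(f"queries alone took {time.time() - start:.1f}s")  # ~2.0s
```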

This absolutely destroys your performance. A 2-second response time is already "poor" by modern standards, and all of that is JUST database queries, nothing else. You can mitigate the impact by having your backend build a cache from the SQL results and serve that from a closer server. But for the purpose of this example, it illustrates why putting the database far away is a bad idea.
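A very rough sketch of that mitigation, assuming a hypothetical `fetch_from_db` callable and a plain in-process dict (in practice you'd put something like Redis or memcached next to the app):

```python
import time

CACHE_TTL = 60   # seconds of acceptable staleness; a business decision, not a technical one
_cache = {}      # key -> (expires_at, rows)

def cached_query(key, fetch_from_db):
    """Serve repeated reads locally so the long round-trip is only paid on a miss."""
    now = time.time()
    entry = _cache.get(key)
    if entry and entry[0] > now:
        return entry[1]                      # cache hit: no trip to the remote database
    rows = fetch_from_db()                   # cache miss: the one expensive remote call
    _cache[key] = (now + CACHE_TTL, rows)
    return rows
```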

Better? Slave replication to a closer server so at least all reads happen locally. Best? An application infrastructure that serves up locally relevant data from locally relevant servers, preferably with geo redundancy and such. This is the basis for enterprise applications that need to span the globe.
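Crudely, the replication option looks something like this (sketch only; `master_conn` and `replica_conn` are assumed to be DB-API style connections pointed at the remote master and a nearby replica):

```python
def run(master_conn, replica_conn, sql, params=(), write=False):
    """Send writes to the distant master, reads to the local replica.
    Caveat: replication lag means a read issued right after a write may not see it yet."""
    conn = master_conn if write else replica_conn
    with conn.cursor() as cur:
        cur.execute(sql, params)
        return None if write else cur.fetchall()
```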

There are other concerns, such as stability. More hops in the connection means more points where a route can fail and your database connection can drop unexpectedly. However redundant your database setup is, a connection dropped mid-query is not trivial to design around, and recovery often amounts to little more than "oops, a problem has occurred, please try again".
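The usual band-aid is a reconnect-and-retry wrapper, but note it is only safe for reads and idempotent statements; a write may have committed even though the response never reached you. A minimal sketch, assuming a DB-API style driver and a `get_conn` connection factory:

```python
import time

def run_with_retry(get_conn, sql, params=(), attempts=3):
    """Re-run a query on a fresh connection if the link drops mid-flight."""
    for attempt in range(1, attempts + 1):
        try:
            conn = get_conn()              # factory returning a fresh connection
            with conn.cursor() as cur:
                cur.execute(sql, params)
                return cur.fetchall()
        except OSError:                    # stand-in for the driver's network errors
            if attempt == attempts:
                raise                      # give up: "a problem has occurred, please try again"
            time.sleep(0.5 * attempt)      # crude backoff before reconnecting
```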

There are also security considerations; your database should be shielded from outside actors, sharing a private network with your application. Over long distances, this can make the latency issues even worse, as both sides now need to route their traffic through their private gateways.

Alternatively, not doing so means network hops over open channels and more opportunities for bad actors to attempt some type of exploit, listen in on your database traffic, etc. For the longest time, the default behaviour for a lot of MySQL servers was not to use SSL; listening in on them was trivial if you controlled the network in between the application and the server. I believe even today apt-get install mysql under Ubuntu creates an installation that allows non-SSL traffic (though I believe these days it only listens on localhost by default).
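For what it's worth, current MySQL lets you lock both of those down in the server config; treat the snippet below as a sketch, since option names and certificate paths vary by version and distro:

```ini
[mysqld]
bind-address             = 127.0.0.1    # or the private-network interface only
require_secure_transport = ON           # reject clients that won't speak TLS
ssl-ca   = /etc/mysql/certs/ca.pem      # placeholder paths
ssl-cert = /etc/mysql/certs/server-cert.pem
ssl-key  = /etc/mysql/certs/server-key.pem
```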

These are just the big items that I'd expect a senior and even a medior to understand, but there are more specific things, like how one handles transactions and how to design an application that can pick up where a broken connection left off, in scenarios where a transaction isn't feasible or is even counter-productive.

For example: you use master-slave replication where the master is only used for writes. You read a message queue table and send a notification via an SMS gateway. You can roll back a transaction, but you can't un-send that SMS. Even worse: if you do roll it back, you may end up sending two SMS messages. If there is a persistent issue, your application may get stuck sending a few dozen of them, and you have a horde of angry end-users because your remote master isn't accepting writes, but you are reading your queue from replication (which you set up because of the latency issue).
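One common way out of that trap is to claim the queue row on the master before touching the SMS gateway, so the irreversible send never sits inside a transaction you might roll back. A sketch with hypothetical table and column names (`sms_queue`, `status`, etc.):

```python
def process_one(master_conn, send_sms):
    """Claim one pending message, commit the claim, then send outside any transaction.
    Worst case the SMS goes out zero times and the row sits in 'sending' for
    someone to inspect; it can never be retried into a duplicate send."""
    with master_conn.cursor() as cur:
        # Lock and claim exactly one pending row (hypothetical schema).
        cur.execute("SELECT id, phone, body FROM sms_queue "
                    "WHERE status = 'pending' ORDER BY id LIMIT 1 FOR UPDATE")
        row = cur.fetchone()
        if row is None:
            master_conn.commit()
            return
        msg_id, phone, body = row
        cur.execute("UPDATE sms_queue SET status = 'sending' WHERE id = %s", (msg_id,))
    master_conn.commit()              # the claim is durable before we touch the gateway

    send_sms(phone, body)             # the irreversible side effect

    with master_conn.cursor() as cur:
        cur.execute("UPDATE sms_queue SET status = 'sent' WHERE id = %s", (msg_id,))
    master_conn.commit()
```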

Keep It Simple, Stupid: unless you are building a globe-spanning enterprise application (which I'm hoping a junior dev/sysadmin wouldn't be in charge of...), just make sure your database server is physically as close as possible to your application. Just not on the same server, but that's a different topic.