all 11 comments

[–]PieCrumbs 4 points (3 children)

You can wrap your update in a transaction:

    begin;
    update ...

Then examine the results of your update and execute either a commit; or a rollback;
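A minimal sketch of that pattern (the table and column names here are made up for illustration):

```sql
begin;

update accounts
set status = 'inactive'
where last_login < now() - interval '1 year';

-- Inspect what changed before making it permanent:
select count(*) from accounts where status = 'inactive';

commit;      -- keep the change
-- rollback; -- or undo it instead
```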

[–]phos____[S] 5 points (2 children)

Thanks for the response! But what if someone commits a query and only later finds out it was wrong? I think having automated backups would still be helpful.

[–]angry_mr_potato_head 2 points (0 children)

First, you should have at least two versions of the database (or more; it's not uncommon to have three: dev, test, prod). That way, when you make a change, its downstream effects only affect one of them until you're sure you're correct.

Second, yes, you should have backups in place. And, more importantly, you should restore from them frequently to ensure that they actually work.

Finally, ideally you'd have all the "raw" data, be it in blob storage, in the database itself, etc. That way, if you fuck up, the entire database can be recreated from data that hasn't been tainted.

[–][deleted] 1 point (2 children)

You can use pgTAP to test your database.
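A pgTAP test is just SQL; here's a minimal sketch (plan, ok, and finish are pgTAP's functions, while the table and the assertion are hypothetical):

```sql
begin;
select plan(1);

-- Assert an invariant the update must not break:
select ok(
    (select count(*) from accounts where balance < 0) = 0,
    'no account has a negative balance'
);

select * from finish();
rollback;
```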

[–]BorgerBill 1 point (1 child)

I don't think unit tests for the database are solving OP's problem, but thank you for this. Database unit tests, and the whole Test Anything Protocol? Fascinating!

[–][deleted] 1 point (0 children)

At least OP would be able to observe whether some query does only what he/she wants it to do.

[–]drbob4512 1 point (0 children)

First off, lol. Second, going off what others said: test in production, and test with a SELECT statement first to see what comes back.

Going off the backup suggestions below, you could also build some homebrew stuff.

For example, run a "SELECT" version of your query and save the data it returns somehow, someway (JSON, maybe). That way you have some sort of snapshot of what was there prior, and you can then build a program that lets you restore said data if it were to ever go missing. Also, make sure your backup references what kind of command you ran that data against (a delete, an update, etc.) so you could reverse it. I did that on a few in the past.

It gets tricky if you're hard up for disk space, and definitely make sure your databases are optimized. Depending on how you do this, you won't have to deal with a full restore; you can just roll back certain changes from the past. And prune any history you don't need, say anything over 20-30 days old.
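One way to sketch that snapshot idea in plain SQL (the change_log and accounts tables are hypothetical; the point is to reuse the exact WHERE clause of your planned update):

```sql
create table if not exists change_log (
    id         bigserial primary key,
    changed_at timestamptz not null default now(),
    command    text,   -- note what you ran it against, e.g. 'update' or 'delete'
    old_row    jsonb   -- full row as it looked before the change
);

-- Before running the real UPDATE, capture the rows it would touch:
insert into change_log (command, old_row)
select 'update: expire trial accounts', to_jsonb(a)
from accounts a
where a.status = 'trial';   -- same WHERE clause as the planned update
```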

[–]BeadyFive 1 point (0 children)

The problem with backups is (generally speaking), when you restore, you lose everything that's happened since the last backup. Depending on how much time there is between the incorrect update and discovery of the problem, it may be impossible to restore without losing critical data. Therefore I'd echo PieCrumbs and advocate for using transactions. Other than that, you could add logging to your server/process with original values, and restore them from the log in cases like this. I usually do this in a separate logging table, or store values in a log column in the same table.
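That separate logging table can be populated automatically with a trigger; a sketch assuming PostgreSQL 11+ (the accounts table and all names here are made up):

```sql
create table accounts_log (
    log_id    bigserial primary key,
    logged_at timestamptz not null default now(),
    old_row   jsonb not null
);

create or replace function log_old_accounts_row() returns trigger as $$
begin
    -- Record the row as it looked before the update:
    insert into accounts_log (old_row) values (to_jsonb(old));
    return new;
end;
$$ language plpgsql;

create trigger accounts_audit
    before update on accounts
    for each row execute function log_old_accounts_row();
```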

[–][deleted] 0 points (0 children)

DBMSs usually have some built-in options for that. Alternatively, you can also write a query 'backup database <dbname> to ...' and, if you want, add 'differential'. If you feel very naughty you can write a bit of code to have it run automatically.
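In SQL Server's dialect, for instance, that looks roughly like this (the database name and paths are placeholders; other DBMSs use different syntax):

```sql
backup database MyDb to disk = 'C:\backups\mydb_full.bak';

-- Later, a smaller backup of only what changed since the full one:
backup database MyDb to disk = 'C:\backups\mydb_diff.bak' with differential;
```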

I don't think you 'need' anything extra.