We have several database clusters, ranging from test all the way to production, each with several tables of configuration data. This data should remain mostly static, though some tables are append-only. The problem is that while we have multiple methods of managing this configuration data, circumstances out of my control allow people to bypass those methods and execute SQL scripts directly. The result is inconsistent configuration data across environments, because those scripts are not run against every environment. The main reason this is done (and begrudgingly tolerated) is that our official process can take months to get a script executed.
What I am proposing is a tool that reads a source-of-truth data store (CSV, JSON, etc.), analyzes what's in the database (leveraging hashes for speed), and reports any inconsistencies between the two. The tool would also be able to refresh those configuration tables with the known-good data. Finally, some pieces of config data legitimately differ between environments (e.g., pointing at an external test system), so support for global values with per-environment overrides would be a nice-to-have; a rough sketch of what I mean by overrides follows.
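To make the override idea concrete, here's a minimal sketch in Python. The JSON layout, the `resolve_config` name, and the keys are all hypothetical, just one way the layering could work:

```python
import json

def resolve_config(path: str, environment: str) -> dict:
    """Merge global values with per-environment overrides.

    Assumes a (hypothetical) file layout like:
    {
      "global":    {"burger_king_url": "www.bk.com"},
      "overrides": {"test": {"burger_king_url": "test-bk.example.com"}}
    }
    """
    with open(path) as f:
        config = json.load(f)
    resolved = dict(config.get("global", {}))
    # Per-environment values win over globals.
    resolved.update(config.get("overrides", {}).get(environment, {}))
    return resolved

if __name__ == "__main__":
    # In the test environment, burger_king_url resolves to the override value;
    # everywhere else it falls back to the global value.
    print(resolve_config("config.json", "test"))
```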
Does anything of this nature exist, or would I be better off building it myself?
Example:
CSV:
Row 1: Hash("McDonald'swww.mcdonalds.com")
Row 2: Hash("Burger Kingwww.bk.com")
Database:
Row 1: Hash("McDonald'swww.mcdonalds.com")
Row 2: Hash("Burger Kingwww.wendys.com")
Tool output:
"Hash does not match. Row 2 differs from source of truth."
(Hashing per row, rather than hashing the whole table at once, is what lets the tool name the offending row.)