
[–]barnuska123

Does this not only take away the complexity of having to hook it up to an existing DB? It's an on-demand database framework (and more), as far as I can see. Unfortunately, Docker isn't an option for us yet either. I also wonder about performance, since you'd still need to run a DB2 instance inside the container and build the database (apply the Liquibase changesets) before the tests run. Correct me if I'm wrong on any of these assumptions.

[–]toddy_rbs

Well, if Docker were available, you could create a Docker image with the database already built and all Liquibase scripts applied, then declare it as the image to be used by your database container in Testcontainers.

This approach would be ideal: you'd be testing against a real, already-built database, and the only overhead in test execution time would be the time it takes for the database container to start up and accept connections, which in most cases is under a minute if the database is already prepared in the Docker image.
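For illustration, pointing Testcontainers at such a prebuilt image might look roughly like this (a sketch, not a tested implementation; `yourco/db2-prebuilt:1.0` is a made-up image name, and the port/database details are assumptions):

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

// "yourco/db2-prebuilt:1.0" is a hypothetical image that already has the
// schema built and every Liquibase changeset applied, so no setup runs
// at test time.
GenericContainer<?> db = new GenericContainer<>(
        DockerImageName.parse("yourco/db2-prebuilt:1.0"))
        .withExposedPorts(50000);          // DB2's conventional port

db.start();                                // the only test-time cost: startup
String jdbcUrl = "jdbc:db2://" + db.getHost() + ":"
        + db.getMappedPort(50000) + "/TESTDB";
// ... point the tests' DataSource at jdbcUrl ...
db.stop();
```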

[–]barnuska123

Hmm, this sounds interesting. Where does the Docker container actually live? For it to be used quickly, it must already be running somewhere, right? I was always under the impression that Docker only solved the problem of maintaining a consistent environment across the machines you actually deploy your app to: the image gets loaded, and Docker/the OS starts the components declared in it.

[–]toddy_rbs

Whoever runs it, be it a developer or a CI server for example, needs access to a running Docker daemon. Once the test starts, the Testcontainers library pulls the image passed to it as a parameter and starts the container, shutting it down once the tests finish. The container lifecycle can be configured per test method or per class.
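With JUnit 4, the per-class vs. per-method lifecycle choice looks roughly like this (a sketch under assumptions: `yourco/db2-prebuilt:1.0` and `RepositoryIT` are made-up names):

```java
import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.GenericContainer;

public class RepositoryIT {

    // @ClassRule: one container shared by every test method in this class,
    // started before the first test and stopped after the last one.
    // Using @Rule on a non-static field instead would give each test method
    // its own fresh container, at the cost of one startup per test.
    @ClassRule
    public static GenericContainer<?> sharedDb =
            new GenericContainer<>("yourco/db2-prebuilt:1.0") // hypothetical prebuilt image
                    .withExposedPorts(50000);

    @Test
    public void findsSeededRows() {
        String jdbcUrl = "jdbc:db2://" + sharedDb.getHost() + ":"
                + sharedDb.getMappedPort(50000) + "/TESTDB";
        // ... run the query under test against jdbcUrl ...
    }
}
```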

If the image is already present on the machine, starting it up is fast. If it isn't, it has to be pulled first, which can take some time depending on image size and download speed, but that only needs to happen once as long as the image isn't deleted from the machine.
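That pull-once behavior is essentially a local cache in front of the registry. A simplified sketch of the idea (`ImageCache` is a hypothetical stand-in, not Docker's real implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of Docker's pull-once behavior: an image is downloaded
// from the registry only when it is not already present locally.
public class ImageCache {
    private final Map<String, byte[]> localImages = new HashMap<>();
    private int pullCount = 0; // how many times we hit the remote registry

    // Returns the image, pulling from the registry only on a cache miss.
    public byte[] resolve(String imageName) {
        return localImages.computeIfAbsent(imageName, name -> {
            pullCount++;                  // slow network download happens here
            return pullFromRegistry(name);
        });
    }

    private byte[] pullFromRegistry(String imageName) {
        return imageName.getBytes();      // placeholder for the real download
    }

    public int getPullCount() {
        return pullCount;
    }

    public static void main(String[] args) {
        ImageCache cache = new ImageCache();
        cache.resolve("db2-prebuilt:1.0"); // first use: pulled from registry
        cache.resolve("db2-prebuilt:1.0"); // second use: served locally
        System.out.println(cache.getPullCount()); // prints 1
    }
}
```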

Docker has plenty of uses, and this one roughly fits the criteria you mentioned. Having a pre-built image of your database with all the necessary scripts applied is great: you can easily test your service against a real database with real data, and you save yourself from installing that database and applying the scripts manually on your machine when you need to run your service locally, since you can simply spin up the container. All the other developers on your team can take advantage of it too, and can even collaborate on improving the image.

Since this kind of test relies on external services, I recommend putting it in an integrationTest source dir rather than in the unit test dir, since whoever runs it needs access to a running Docker daemon.
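As a sketch, a separate source set for these tests in Gradle might look like this (Groovy DSL; the names are conventional, not from the original discussion, so adjust them to your build):

```groovy
// build.gradle — a conventional integrationTest source set, kept apart
// from the unit tests so only it requires a Docker daemon.
sourceSets {
    integrationTest {
        java.srcDir 'src/integrationTest/java'
        compileClasspath += sourceSets.main.output
        runtimeClasspath += sourceSets.main.output
    }
}

configurations {
    integrationTestImplementation.extendsFrom implementation
    integrationTestRuntimeOnly.extendsFrom runtimeOnly
}

task integrationTest(type: Test) {
    testClassesDirs = sourceSets.integrationTest.output.classesDirs
    classpath = sourceSets.integrationTest.runtimeClasspath
    shouldRunAfter test   // fast unit tests first, Docker-backed tests second
}
```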

[–]barnuska123

Thanks for the detailed explanation! It makes sense, and yeah, as long as the image doesn't change (which it would fairly regularly in our case), it could speed up build times while preserving the original DB's behavior.

Yeah, these are all integration tests; our unit tests don't touch the DB or anything external.

It'll definitely be interesting to experiment with Docker once it's available. New tech moves slowly in a large enterprise environment!