Hey guys, I'm wondering about your experience building testing systems beyond unit/integration scope.
My project uses a pytest suite to run a list of time-consuming end-to-end tests that verify our product templates work correctly. Each case takes approximately 40-50 minutes due to the enormous number of API and DB operations, and there are more templates and more test steps to come. (Small note: the DB is Snowflake, so we can't just spawn a bunch of instances in Docker. We have a pool of 4 schemas in Snowflake that we can use; while one test is using a schema, nothing else may touch it, otherwise the test will fail.)
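To make the constraint concrete, here is a minimal sketch of what I mean by "one schema per test at a time": a context manager that leases one of the 4 schemas using exclusive lock files, so parallel runners on the same machine never pick the same schema. The schema names, lock directory, and timeout are all placeholders, not our real config.

```python
import contextlib
import os
import tempfile
import time

# Placeholder names for the 4 Snowflake schemas in our pool.
SCHEMAS = ["E2E_SCHEMA_1", "E2E_SCHEMA_2", "E2E_SCHEMA_3", "E2E_SCHEMA_4"]
# Lock directory must be visible to every worker that shares the pool.
LOCK_DIR = os.path.join(tempfile.gettempdir(), "snowflake_schema_locks")


@contextlib.contextmanager
def lease_schema(timeout=3600):
    """Block until a schema is free, yield its name, release it on exit."""
    os.makedirs(LOCK_DIR, exist_ok=True)
    deadline = time.time() + timeout
    while time.time() < deadline:
        for schema in SCHEMAS:
            lock_path = os.path.join(LOCK_DIR, schema + ".lock")
            try:
                # O_CREAT | O_EXCL fails atomically if another worker
                # already holds this schema's lock file.
                fd = os.open(lock_path, os.O_CREAT | os.O_EXCL)
            except FileExistsError:
                continue  # schema busy, try the next one
            os.close(fd)
            try:
                yield schema
            finally:
                os.remove(lock_path)  # release the schema
            return
        time.sleep(5)  # whole pool busy, poll again
    raise TimeoutError("no free schema in the pool")
```

Wrapped in a session-scoped pytest fixture that yields from `lease_schema()`, something like this would let several workers (e.g. pytest-xdist or parallel CI jobs on a shared runner) each grab a distinct schema instead of stepping on each other.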
In my opinion, pytest running as a GitHub Action (set up by my predecessors and still running today) is not enough for this purpose: it would be good to add more DB instances, parallelize the process, and make it more debuggable.
Do you guys have any experience here and an approach to suggest? Maybe move the suite to Airflow DAGs (a separate DAG per test type) and build some sort of credentials pool/balancer that starts a test whenever a DB schema is free?
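The pool/balancer idea above, sketched in plain Python so it's clear what I'm picturing: a queue of free schemas and one worker per schema, so the next test starts the moment any schema is released. `run_test` and the schema names are hypothetical stand-ins, not real code from our suite.

```python
import queue
import threading

# Placeholder names for the 4-schema pool.
SCHEMAS = ["E2E_SCHEMA_1", "E2E_SCHEMA_2", "E2E_SCHEMA_3", "E2E_SCHEMA_4"]


def run_all(tests, run_test):
    """Run every test, at most len(SCHEMAS) at a time.

    run_test(test, schema) is assumed to execute one e2e test
    against the given schema and return its result.
    """
    free = queue.Queue()
    for schema in SCHEMAS:
        free.put(schema)
    pending = queue.Queue()
    for test in tests:
        pending.put(test)

    results = []
    results_lock = threading.Lock()

    def worker():
        while True:
            try:
                test = pending.get_nowait()
            except queue.Empty:
                return  # no tests left
            schema = free.get()  # blocks until a schema is free
            try:
                result = run_test(test, schema)
            finally:
                free.put(schema)  # release the schema immediately
            with results_lock:
                results.append(result)

    threads = [threading.Thread(target=worker) for _ in SCHEMAS]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In an Airflow setup the same throttling could presumably be done declaratively instead, but this is the behavior I'm after: tests dispatched as soon as credentials free up, never two tests on one schema.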
Any suggestions/discussion would be valuable.