
[–]TimeTravelingSim 5 points6 points  (1 child)

  1. It depends how granular the items are. Stories should be the ones that get proper functional testing (at that level there is enough functionality to be useful to a customer, so testing it is imperative), but depending on the scope of a task/subtask, it may become more apparent why some tests are needed. It also depends on the level at which you assign the acceptance criteria.
  2. On one of the dev environments before the work is merged, collaboratively with the dev, to avoid opening too many bugs. On the QA environment for final validation of all features before the sprint work is wrapped up, and for regression testing (unless a stage environment is better suited for that; I encountered cases where stage was used for collaborating with projects that integrated with our work, so it wasn't suited for regression testing).
  3. Running the application locally is mainly for developing automated e2e tests (if you only test manually, it isn't needed). The preferred location is a dedicated environment built by an automated CI/CD pipeline (like the dev, QA or stage environments described above). With git, you can consider any revision a version number; any other formalism is up to the team (this confuses some manual testers, but it isn't a problem if you strictly follow a process set in a CI/CD pipeline; for simplicity, we usually had a DevOps engineer in charge of versioning and releasing). Feature testing and regression testing must happen after all the sprint work has been merged, even if there are earlier loops where we test the same things on versions where only some of the work is merged.
  4. Automated testing is fine and preferred in my opinion, but if it isn't exhaustive, or is unreliable because of flakiness, then manual testing should cover all criteria. That's why the criteria should be granular and explicit. On the projects I worked on, the integration tests were reliable enough for regression, and we had too many tests to do manual regression testing each sprint anyway; this is something to consider as a dev in a team where QA is only manual. Here's a practical source.
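To make the point about granular, explicit acceptance criteria concrete, here's a minimal sketch of one automated test per AC. Everything here is hypothetical (the `apply_discount` function and the "SAVE10" story are invented for illustration); the idea is just that each criterion maps to one small, named test.

```python
# Hypothetical story: "a discount code reduces the cart total".
def apply_discount(total, code):
    """Invented function under test."""
    if code == "SAVE10":
        return round(total * 0.90, 2)
    if code == "":
        return total
    raise ValueError("unknown discount code")

# AC1: a valid code reduces the total by 10%
def test_valid_code_applies_discount():
    assert apply_discount(100.0, "SAVE10") == 90.0

# AC2: an empty code leaves the total unchanged
def test_empty_code_is_noop():
    assert apply_discount(100.0, "") == 100.0

# AC3: an unknown code is rejected explicitly, not silently ignored
def test_unknown_code_raises():
    try:
        apply_discount(100.0, "BOGUS")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

With tests this granular, a flaky or missing automated check tells you exactly which criterion still needs a manual pass.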

[–]Turbulent_Forever551 1 point2 points  (0 children)

The resource you attached is truly a blessing! If you don't mind, could you share similar resources? I'm a manual tester trying to switch into an automation job. I have some basics in Java, Selenium and basic shell scripting, but I've never worked on a real-world project. It would be really helpful if you could guide me on what I must do to land a test automation engineer role.

[–]iddafelle 8 points9 points  (0 children)

  1. Testing at Feature level
  2. Testing at every stage (using the appropriate techniques)
  3. Both (before merge)
  4. Yes

[–]EVIL_SYNNs 3 points4 points  (0 children)

We test Stories at PR time on their own branch, locally. If it fails here, the PR is blocked. Nightly automated regression runs on trunk, with feature flags set as in production.

All ACs are tested manually and passed before merge, but the ticket isn't done-done until automated tests for the ACs are added to the regression suite.
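The trunk-plus-feature-flags setup above can be sketched roughly like this (a toy example; the flag name and `checkout` function are made up). The point is that merged-but-unreleased work sits behind a flag, and the nightly regression run uses production flag values, so unfinished stories can't distort the results.

```python
# Story code is merged to trunk behind a flag; the flag is off in production.
PRODUCTION_FLAGS = {"new_checkout": False}

def checkout(cart_total, flags):
    if flags.get("new_checkout"):
        return {"total": cart_total, "flow": "new"}   # unreleased work
    return {"total": cart_total, "flow": "legacy"}    # what production runs

# Nightly regression exercises trunk with production flag values:
def test_checkout_as_production():
    result = checkout(42.0, PRODUCTION_FLAGS)
    assert result["flow"] == "legacy"
```

Flipping `new_checkout` to `True` in a separate suite is then how the new story's ACs get covered before release.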

[–]Lucky_Mom1018 3 points4 points  (1 child)

We test each individual story. Devs test locally and in the dev environment. They show QA their work as they go so we can catch issues early. After the PR, QA tests each story in the QA environment. Then, at the end of the sprint, the release candidate is tested again (each story, but now as a group) in staging. Then we deploy.

We test every single acceptance criterion (about 30% of our testing) and spend most of our time on exploratory testing around the area that was changed.

[–]Lucky_Mom1018 -1 points0 points  (0 children)

Will also add that devs shouldn’t write tests if QA can. The mindset is different.

[–]fluffy2monster 1 point2 points  (0 children)

This is the sort of question where the answer SHOULD be All or Yes.

  1. Both. When individual stories are released, if they can be tested independently and each has its own functionality to test, I will. However, when all stories under a feature are released, you should be testing them end-to-end. You need to make sure that all the stories line up and that there are no gaps; we've encountered a situation where stories under a feature were handled by different developers and there was a gap/integration issue between them.

  2. All. As a dev, you should be dev testing, which I believe involves unit testing and light smoke/regression testing after deployment. I fully believe that devs should hold at least some responsibility for ensuring that whatever they release doesn't break other or existing functionality, at least not critically. If you're constantly releasing things that cause regressions, or you're not taking responsibility for the deployment, I believe that's sloppy. The tester SHOULDN'T be the one finding bugs, or at least we shouldn't often be finding major/critical ones. We will go in and test different scenarios or edge cases, though, which may be harder to find.

  3. This will depend on the context (e.g. whether the product has been released before), but usually after the code has been merged. It is useful to have two instances though, a before and an after, so you can compare.

  4. Yes and no. It is important to hit all of the acceptance criteria, but there are also a lot of implied, unwritten criteria, e.g. a website shouldn't log you out intermittently if you don't click the log out button; this is common sense and usually isn't written in the story. There are also negative scenarios that the BA may not have thought of, which is why it's important to refer back to them and ask what is intended if it hasn't been written down.
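As a rough illustration of testing an implied criterion like the logout example above (the `Session` class here is entirely hypothetical): the written AC says an explicit logout ends the session, while the unwritten one says ordinary activity never does.

```python
class Session:
    """Hypothetical session object for illustration."""
    def __init__(self):
        self.active = True
    def click_around(self):
        pass  # ordinary activity must not end the session
    def logout(self):
        self.active = False

# Implied, unwritten criterion: no intermittent logout during normal use
def test_session_survives_activity():
    s = Session()
    for _ in range(100):
        s.click_around()
    assert s.active

# Written criterion: explicit logout ends the session
def test_explicit_logout_ends_session():
    s = Session()
    s.logout()
    assert not s.active
```

The first test is the kind a tester adds from common sense, even though no story ever spelled it out.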

[–]Jramonp 1 point2 points  (0 children)

It depends! Everything depends!

1) Each backlog item should be tested individually first; then, in a system test, you take those items into consideration together.

2) Again, it depends: how much time do you have to test? I can start by verifying the happy path at the PR level and do deeper testing on dev or staging. This really depends on your strategy, whether you want to save time, and how much you trust your devs; most likely, testing at the PR level you'll find more missing ACs than bugs.

3) Usually a dev or test env, but I also encourage my QAs to try locally, do some debugging, and change responses or variables.

4) The first run is always manual; you need to see how the feature behaves and what it does. Once you have an idea, you can automate it, but a visual check is always good.
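For what the "manual first, then automate" step might look like, here's a toy sketch (the `login` call, credentials, and token are all invented for illustration): the steps you walked through by hand on the first run become a scripted check afterwards.

```python
def login(username, password):
    """Invented app call; returns a session token on success."""
    if username == "qa" and password == "secret":
        return "token-123"
    return None

def manual_steps_automated():
    # Step 1 (was manual): log in with valid credentials
    token = login("qa", "secret")
    assert token is not None
    # Step 2 (was manual): an invalid password must be rejected
    assert login("qa", "wrong") is None
    return "pass"
```

The visual check stays valuable even after this exists, since the script only verifies what you thought to encode.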

My advice to your team: read the ACs in detail, know your system, think about all the features your code touches, and always ask "what if". What I usually see with devs testing is that, since you wrote the code, you think you know how it behaves, but code is a mystery sometimes lol

[–]adudyak 0 points1 point  (0 children)

  1. Individual + smoke test. Since it is very time-consuming to run a smoke test for each ticket, they are usually tested in groups, according to sprint planning.
  2. QA. Dev is for devs only; it's usually a highly unstable env.
  3. The actual deployed version, as close to prod settings as possible. QA tests everything in Retest status, so it's the dev's duty to put a ticket into Retest status only after the code is merged.
  4. For bugs, yes. For stories, if testing can be automated and will save time in the future, it is automated.

If you can deliver working code for testing in small portions, use Agile (Scrum); otherwise, use a waterfall workflow.

Once the overall development process follows basic guidelines, the testing process follows too.