Software Quality Assurance & Testing
Asked by gb2d on October 25, 2021
I’m a developer trying to find some guidance for the testers on our team. Our strategy for Selenium automation testing has been rocky up to this point. (It's a .NET project, for context.)
We started off with an automated test for each story, linking each test to its story via the story number and the name of the test method. Most stories were tested; some just had stub tests where the functionality either wasn't suitable for automation or didn't require it.
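For illustration, a story-linked test looked roughly like the following (a simplified, hypothetical sketch; the story key, URL, and element IDs are made up):

```csharp
// A simplified, hypothetical sketch of the old per-story convention.
// The story key (JIRA-1234), URL, and element IDs are made up.
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class LoginTests
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp() => _driver = new ChromeDriver();

    [TearDown]
    public void TearDown() => _driver.Quit();

    [Test]
    [Property("Story", "JIRA-1234")]      // machine-readable link to the story
    public void JIRA1234_UserCanLogIn()   // story number repeated in the method name
    {
        _driver.Navigate().GoToUrl("https://example.test/login");
        _driver.FindElement(By.Id("username")).SendKeys("user");
        _driver.FindElement(By.Id("password")).SendKeys("secret");
        _driver.FindElement(By.Id("submit")).Click();
        Assert.That(_driver.Url, Does.Contain("/dashboard"));
    }
}
```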
This resulted in a lot of code and became unmanageable.
Later we decided to scale the automated tests back to cover only core functionality.
What I’d like to know is: how do you track what functionality is and isn’t tested, so that we as developers have some confidence that our code is being exercised correctly in key areas?
For reporting purposes: I find it useful to link everything in a Requirements Traceability Matrix. It gives an at-a-glance view of how use cases break down into requirements, which break down into features (tracked by development). You can add an extra column for automated tests plus the number of tests implemented. If that cell is not checked, the feature requires additional testing and cannot be regression-tested by just running the automated test suites.
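To make that extra column concrete, here is a minimal sketch of keeping RTM rows as data and flagging the features that still need manual regression (all use case, requirement, and feature IDs are invented):

```csharp
// A minimal sketch of keeping RTM rows as data and flagging gaps.
// All use case, requirement, and feature IDs here are invented.
using System;
using System.Collections.Generic;
using System.Linq;

public record RtmRow(string UseCase, string Requirement, string Feature, int AutomatedTests);

public static class RtmReport
{
    public static void Main()
    {
        var rows = new List<RtmRow>
        {
            new("UC-01 Login",    "REQ-101", "FEAT-11 Login form",     AutomatedTests: 3),
            new("UC-01 Login",    "REQ-102", "FEAT-12 Password reset", AutomatedTests: 0),
            new("UC-02 Checkout", "REQ-201", "FEAT-21 Payment",        AutomatedTests: 5),
        };

        // Features with no automated tests still need manual regression before release.
        var uncovered = rows.Where(r => r.AutomatedTests == 0).Select(r => r.Feature);
        Console.WriteLine("Needs manual regression: " + string.Join(", ", uncovered));
    }
}
```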
For work planning and tracking purposes: how you structure the tests and test suites is up to you. The tests can be planned at the same time as other sprint activities, but the work should be tracked in its own user stories.
Answered by Archie on October 25, 2021
Use the tagging feature of defect-tracking systems like Jira or TFS to add tags such as 'automated' to stories, then use tag-based reporting to get the coverage reports you want, assuming there is no issue with the quality of the tests themselves.
I would also suggest not trying to cover every small validation with automation; focus only on the happy path to avoid unnecessary maintenance overhead.
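As a rough sketch of how the story's tag can be mirrored on the test side (the category name, story key, and filter command below are assumptions, not a prescribed convention):

```csharp
// A rough sketch of mirroring the story's 'automated' tag in the test code.
// The category name, story key, and filter command are assumptions, not a prescribed convention.
using NUnit.Framework;

[TestFixture]
public class CheckoutTests
{
    [Test]
    [Category("Automated")]        // mirrors the 'automated' tag on the story in Jira/TFS
    [Property("Story", "SHOP-42")] // hypothetical story key, usable in reports
    public void HappyPath_CheckoutSucceeds()
    {
        // Drive the happy path with Selenium and assert on the order confirmation.
    }
}

// Tagged tests can then be run on their own, e.g.:
//   dotnet test --filter TestCategory=Automated
```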
Answered by Vishal Aggarwal on October 25, 2021
The first question to ask yourself is whether this level of traceability is necessary. For some people, maybe the answer is yes. For others, perhaps not. If you don't need traceability between story and test, you can focus on better ways to measure test coverage and correctness.
The second question to ask is how you intend to implement automation. There are different strategies, each of which lends itself to different methods of tracking the work. For example, requiring all necessary automation as a completion criterion for the story is different from requiring test cases to be written with automation following at a later date. Who is responsible for automation (a developer, a test specialist embedded with the team, or a separate test automation team) also drives how you trace and track the work.
Ultimately, though, I consider "trace changes to tests" to be a very different question from "track what functionality is and isn't tested".
Test coverage is an excellent first step toward identifying what functionality is and isn't covered. Depending on your tools, you may be able to combine coverage from different levels of tests to see what is exercised by unit tests, integration tests, and acceptance tests. Using coverage, you can identify what parts of the system need more tests.
However, coverage doesn't say anything about how good your tests are. Just because they cause lines of code to be executed doesn't mean they assert anything valuable about the code or its intended effects on the system. Peer reviews of tests and test suites can help with this. Mutation testing can also provide insight into whether your tests actually detect changes in the software under test and fail when they should.
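As a contrived illustration (the PriceCalculator class is hypothetical, not from the question), the first test below contributes to line coverage but would pass even if the logic were broken, while the second asserts the intended behaviour and is the kind of test a mutation tool such as Stryker.NET can validate:

```csharp
// A contrived contrast; PriceCalculator is a hypothetical class, not from the question.
using NUnit.Framework;

public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent) => price - price * percent / 100m;
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void CoversLines_ButAssertsNothing()
    {
        // Counts toward line coverage, yet passes even if the discount logic is wrong.
        var _ = new PriceCalculator().ApplyDiscount(100m, 10m);
    }

    [Test]
    public void AssertsTheIntendedBehaviour()
    {
        // A mutation tool such as Stryker.NET would expect this test to fail
        // if, say, the subtraction were mutated into an addition.
        Assert.That(new PriceCalculator().ApplyDiscount(100m, 10m), Is.EqualTo(90m));
    }
}
```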
How you monitor coverage, carry out peer reviews, and otherwise assess the quality of your tests depends on who is responsible for creating and maintaining the test suite.
Answered by Thomas Owens on October 25, 2021
Don't track and link stories to tests this way. It will lead to a massive mess.
Treat your automation itself as the product.
Apply the Agile principle of valuing working software (the automation, in this case) over comprehensive documentation such as detailed stories and links.
Reset. Separate your tests from your stories. Create a test suite.
Use test suites to validate functionality. Stories will require adjusting, changing, adding new cases, and so on; the test suites themselves remain a separate entity. There will be varieties such as smoke tests for deployments, browser and device testing, etc.
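For example, a smoke suite might live entirely outside any story and run the same checks against several browsers; a rough NUnit/Selenium sketch (the URL and the title check are placeholders):

```csharp
// A sketch of a suite that lives outside any story: the same smoke test runs
// against more than one browser via NUnit generic fixtures.
// The URL and the title check are placeholders.
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;

[TestFixture(typeof(ChromeDriver))]
[TestFixture(typeof(FirefoxDriver))]
[Category("Smoke")]
public class SmokeSuite<TDriver> where TDriver : IWebDriver, new()
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp() => _driver = new TDriver();

    [TearDown]
    public void TearDown() => _driver.Quit();

    [Test]
    public void HomePage_Loads()
    {
        _driver.Navigate().GoToUrl("https://example.test/");
        Assert.That(_driver.Title, Is.Not.Empty);
    }
}
```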
Also make sure you are educated in, and promoting, good test automation practices and approaches.
At the end of the day, remember that tests are about adding confidence in functionality and protecting your users. Focus less on direct links to specific stories.
Answered by Michael Durrant on October 25, 2021