Software Quality Assurance & Testing Asked on December 12, 2021
Our team currently uses Selenium with C# (and NUnit) to run automated UI tests. All tests have been written by hand; no recorders were used.
Issue: We now have a request that these tests track their own runtime (including past runs) and raise a warning when a test's runtime increases by more than x% (5%, 10%, etc.).
Question: What would be the best way to accomplish this? Should we build a tool from scratch to analyze the performance history of these UI and API tests, or are there existing tools we can leverage?
Blogs and Stack Exchange questions discussing load/performance testing usually reference three main tools for C# (NeoLoad, Silk Performer, LoadRunner Professional). However, I'm not sure that what I'm being asked to do is performance testing (load testing) in the purest sense, so I'm not sure those tools will help achieve the overall goal. Those discussions also usually treat performance/load testing as separate from UI/API testing.
The test results file (.trx) is an XML file you could write a parser for. You could scrape each test's name and duration and save them to a database (or similar). Then, when the next run completes, your script compares the two runs and "flags" any tests that slowed down. You can decide what to do with the flagged tests, such as emailing the list to interested parties. I used a similar script in the past to do all my morning checks and send me an email to review when I got in each morning.
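As a rough sketch of that parse-and-compare idea, here it is in Python for illustration (the same logic ports straight to C#). The `baseline` dictionary stands in for whatever database you keep history in, the 5% threshold and test names are made up, and the element/attribute names (`UnitTestResult`, `testName`, `duration`) are from the VS 2010 TRX schema:

```python
import xml.etree.ElementTree as ET

# .trx files use the VS 2010 team-test XML namespace.
TRX_NS = "{http://microsoft.com/schemas/VisualStudio/TeamTest/2010}"

def parse_trx_durations(trx_xml: str) -> dict:
    """Return {testName: seconds} scraped from <UnitTestResult> elements."""
    root = ET.fromstring(trx_xml)
    durations = {}
    for result in root.iter(f"{TRX_NS}UnitTestResult"):
        # duration is formatted hh:mm:ss.fffffff
        h, m, s = result.get("duration", "0:0:0").split(":")
        durations[result.get("testName")] = int(h) * 3600 + int(m) * 60 + float(s)
    return durations

def flag_regressions(current: dict, baseline: dict, threshold: float = 0.05):
    """Return (name, old, new) for tests whose runtime grew by more than threshold."""
    return [
        (name, baseline[name], secs)
        for name, secs in current.items()
        if name in baseline and secs > baseline[name] * (1 + threshold)
    ]

# Minimal fabricated .trx fragment for demonstration.
sample = """<TestRun xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Results>
    <UnitTestResult testName="LoginTest" duration="00:00:02.5000000" />
    <UnitTestResult testName="CheckoutTest" duration="00:00:10.0000000" />
  </Results>
</TestRun>"""

baseline = {"LoginTest": 2.4, "CheckoutTest": 8.0}  # previously stored runtimes
for name, old, new in flag_regressions(parse_trx_durations(sample), baseline):
    print(f"REGRESSION: {name} went from {old:.1f}s to {new:.1f}s")
```

In practice you would glob the latest .trx from your results directory after each run, insert the parsed durations into your database keyed by test name and date, and compare against the previous row (or a rolling average, which is less noisy for UI tests).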
Answered by kirbycope on December 12, 2021