Sauce Labs Customer Idea Portal

Submit an idea and make Sauce Labs even better!

Include metadata for test re-runs

It is an irritating fact of web development that tests are sometimes flaky.  Many customers deal with this by re-running failed tests one or more times within a given build.

There is currently no way to tell from the Sauce UI whether or not a test was re-run.  You can guess by checking whether a test with a given name and platform ran more than once in a given build, and whether a failure was followed by a pass, but this is frustrating.  In addition, re-runs change the number of tests in a build, making it harder to compare what should be identical builds.

It would be great if Sauce Labs annotated re-run tests somehow.

Possible Annotation methods:

  1. Allow customers to use the REST API to mark a test as a re-run, including the Sauce ID of the original test (a stopgap version is sketched after this list)
  2. Allow customers to set a desired capability asking Sauce Labs to guess whether a test is a re-run, by comparing platform and name, start and finish times, and pass/fail statuses; only do this for tests in the same build (roughly the grouping heuristic sketched after the outcome list)
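
As a stopgap for method 1, something similar can be approximated today with the v1 job-update endpoint and its custom-data field.  A minimal sketch in Python (the re_run_of key is an invented convention for illustration, not something Sauce Labs interprets):

    import requests

    SAUCE_USER = "my-user"        # assumption: your Sauce Labs username
    SAUCE_KEY = "my-access-key"   # assumption: your Sauce Labs access key

    def mark_as_rerun(job_id, original_job_id):
        """Store the Sauce ID of the original test in the re-run job's custom-data."""
        url = f"https://saucelabs.com/rest/v1/{SAUCE_USER}/jobs/{job_id}"
        # custom-data is free-form JSON attached to the job; Sauce Labs
        # stores it but does not act on it, so the key name is up to us.
        payload = {"custom-data": {"re_run_of": original_job_id}}
        resp = requests.put(url, json=payload, auth=(SAUCE_USER, SAUCE_KEY))
        resp.raise_for_status()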

Possible Annotation outcome:

  1. Re-run tests show the number of re-runs on the Test Details page
  2. Re-run tests include a tab linking to the other instances of that test
  3. "Collapsing" re-run tests into a single Test Details page
  4. Filters in the archives to show re-run tests
  5. Counters on the Build tab to show re-run tests
  6. Excluding all but one test result on the Build Details page, to keep numbers consistent (potentially allowing customers to choose whether to use the last result, or any successful result)
  7. The ability to find tests that re-run often via Analytics (to help address flaky tests)
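
For reference, the guess described in method 2 amounts to grouping a build's jobs by name and platform and ordering each group by start time.  A rough sketch of that heuristic, assuming job dicts roughly in the shape the v1 jobs API returns (id, name, os, browser, start_time, passed):

    from collections import defaultdict

    def find_reruns(jobs):
        """Flag likely re-run chains among the jobs of a single build."""
        groups = defaultdict(list)
        for job in jobs:
            groups[(job["name"], job["os"], job["browser"])].append(job)

        reruns = {}
        for key, attempts in groups.items():
            attempts.sort(key=lambda j: j["start_time"])
            # A chain counts as a re-run only if an earlier attempt failed;
            # two passing runs with the same name are probably just duplicates.
            if len(attempts) > 1 and not all(j["passed"] for j in attempts[:-1]):
                reruns[key] = [j["id"] for j in attempts]
        return reruns
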
  • Dylan Lacey
  • Jul 31 2017
  • Future consideration
  • Evan Giordanella commented
    9 Jul, 2019 08:08pm

    I was also thinking about this. Most unit test frameworks collect retry information (Mocha, for example), so support in Sauce Labs would give better parity with the tools we are using to actually execute the tests.

  • eric twilegar commented
    8 Nov, 2017 11:03pm

    In thinking about this issue, it might be nice to be able to set the status of a build. If I could tell Sauce directly that a build passed, I could then query for builds with a status of passing but failed > 0 and do some analysis.
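
    For what it's worth, the job-level version of this already exists in the REST API; a build-level equivalent of the same call is what I'm after. A rough sketch of the job-level call (credentials are placeholders):

        import requests

        SAUCE_USER = "my-user"        # placeholder username
        SAUCE_KEY = "my-access-key"   # placeholder access key

        def set_job_status(job_id, passed):
            """Mark a single job as passed or failed via the v1 job-update endpoint."""
            url = f"https://saucelabs.com/rest/v1/{SAUCE_USER}/jobs/{job_id}"
            resp = requests.put(url, json={"passed": passed},
                                auth=(SAUCE_USER, SAUCE_KEY))
            resp.raise_for_status()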

    As for deeper investigation, I've taken to uploading events about the test itself to Keen.io. I'm sure you could write to another analytics database or service instead. That way I can do analysis on my own, since I don't always run tests through Sauce.
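
    A minimal sketch of the kind of event upload I mean, using Keen.io's HTTP event API (the project ID, write key, and collection name are placeholders):

        import requests

        KEEN_PROJECT_ID = "PROJECT_ID"   # placeholder
        KEEN_WRITE_KEY = "WRITE_KEY"     # placeholder

        def record_test_event(name, passed, attempt, sauce_job_id=None):
            """Post one test-result event to a Keen.io collection for later analysis."""
            url = (f"https://api.keen.io/3.0/projects/{KEEN_PROJECT_ID}"
                   "/events/test_results")
            event = {
                "name": name,
                "passed": passed,
                "attempt": attempt,             # which re-run this was
                "sauce_job_id": sauce_job_id,   # None when not run through Sauce
            }
            resp = requests.post(url, json=event,
                                 headers={"Authorization": KEEN_WRITE_KEY})
            resp.raise_for_status()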