Testim Documentation

Welcome to the Testim documentation. You'll find comprehensive guides to help you get started quickly, example code snippets, and tips for being more productive. You will also find details about advanced features that allow you to customize Testim to fit your environment or special testing conditions. Don't worry, we'll support you if you get stuck. Let's jump right in!


Tagging failed runs with failure types

Add failure types to failed runs to capture data and provide reporting insights

On remote runs, you can tag the failed tests with failure types tags. Tagging helps provide a historical record of why tests failed to identify trends and provide insights that aid process improvement.

Tagging a test failure from the test result screen

You can tag the failed test that ran remotely, add a description, and link it to a previously reported issue or create a new issue for it. If you want to publish new issues to your bug/issue tracking system, you will have to first configure the connection between Testim and your issue tracking system (a.k.a Bug Tracker). To learn more, see Bug Tracker Settings.

To add a test failure tag:

  1. After a remote run, if the test has failed, click the Tag Test Failure link.

❗️ Make sure you run the test using the "Run on grid" option (not "Run locally").

The following dialog is displayed:

  2. In the Failure type field, select one of the following options:
  • Bug in app
  • Environment issue
  • Invalid test data
  • Test design
  • Other
  3. In the Description field, enter a specific reason or context for the failure (optional).
  4. In the Link to issue field, do one of the following:
  • If there is an existing issue in your issue tracking system (e.g. Jira), add the URL of the issue.
  • Click Create issue to create a new issue in your issue tracking system. This requires connecting Testim to the issue tracking system (a.k.a. Bug Tracker). The bug details are automatically created and the Publish Bug screen is displayed:

In the Summary field, fill in a descriptive summary. You can modify the Project and Type selection and edit the suggested texts. When finished, click Publish to publish the issue.
The Link to issue field will include the URL of the newly created issue or the existing issue.

  5. Click Add to save.
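The steps above attach three pieces of data to a failed run: a failure type (from the fixed list), an optional description, and an issue link. As a minimal sketch of that record (this is not Testim's actual data model; every name below is a hypothetical illustration):

```python
from dataclasses import dataclass
from typing import Optional

# The five failure types offered in the dialog.
FAILURE_TYPES = {"Bug in app", "Environment issue", "Invalid test data",
                 "Test design", "Other"}

@dataclass
class FailureTag:
    """Hypothetical record mirroring the dialog fields described above."""
    failure_type: str                  # must be one of FAILURE_TYPES
    description: Optional[str] = None  # optional reason or context
    issue_url: Optional[str] = None    # link to an existing or newly created issue

    def __post_init__(self):
        if self.failure_type not in FAILURE_TYPES:
            raise ValueError(f"unknown failure type: {self.failure_type}")

# Example: tagging a run that failed because of an environment problem.
tag = FailureTag("Environment issue",
                 description="Grid node ran out of memory",
                 issue_url="https://example.atlassian.net/browse/QA-123")
print(tag.failure_type)  # → Environment issue
```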

Editing an existing test failure tag

After tagging a test failure, you can edit an existing test failure tag.

  1. From the Test List screen, click the relevant test.
    On the test editor screen you will see your previously selected test failure tag.
  2. Hover your mouse over the tag and click the Edit Tag icon.
  3. Edit the existing test failure tag.
  4. Click Add to save.

Tagging multiple test failures from the Test Runs screen

To tag multiple tests on the Test Runs screen:

  1. Go to Runs -> Test Runs.
  2. Use the time-frame drop-down menu to set the relevant time-frame.
    A list of test runs will appear at the bottom.
  3. Click the bug icon at the top of the list.

The following dialog is displayed:

  4. Select the failure type, optionally add a description, and link to an issue in your issue tracking system (e.g. Jira) by entering the issue's URL.
  5. Click Add.

Suggested failure tags

After tagging multiple failed tests, if Testim recognizes a recurring issue, it suggests a failure tag based on your previous selections. The suggested failure tag appears at the top of the test result screen.
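Testim does not document how it matches recurring issues, but the idea behind a suggestion can be sketched as a simple recurrence check over earlier tagging decisions. The function name and threshold below are assumptions for illustration, not Testim's implementation:

```python
from collections import Counter

def suggest_failure_tag(previous_tags, min_occurrences=2):
    """Suggest a failure type when the same tag recurs for similar failures.

    `previous_tags` is a list of failure-type strings chosen for similar
    past failures; a suggestion is made only if one tag appears at least
    `min_occurrences` times.
    """
    if not previous_tags:
        return None
    tag, count = Counter(previous_tags).most_common(1)[0]
    return tag if count >= min_occurrences else None

# "Environment issue" was chosen twice before, so it is suggested.
print(suggest_failure_tag(
    ["Environment issue", "Bug in app", "Environment issue"]))  # → Environment issue
```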
To use the suggested failure tag from the test result screen:

  1. At the top of the test result screen, hover your mouse over the suggested failure tag.
    The following dialog is displayed:
  2. Do one of the following:
  • Click Confirm to accept the suggestion. The test is then tagged with the suggested selection and includes the description and issue link from the previous failure tags (the failure tags that were used as the basis for the suggestion).
  • Click Edit to select another tag. The tagging screen is displayed. Follow the instructions in the Tagging a test failure from the test result screen section above.

To use the suggested failure tag from the Test Runs screen:

  1. Go to Runs -> Test Runs.
  2. Use the time-frame drop-down menu to set the relevant time-frame.
    A list of test runs will appear at the bottom. Tests that include a suggested failure tag are labeled "Suggested".
  3. Hover your mouse over the suggested failure tag.
  4. Do one of the following:
  • Click Confirm to accept the suggestion. The test is then tagged with the suggested selection and includes the description and issue link from the previous failure tags (the failure tags that were used as the basis for the suggestion).
  • Click Edit to select another tag. The tagging screen is displayed. Follow the instructions in the Tagging a test failure from the test result screen section.

Viewing Failure Reports

You can view statistics about failures by failure type in the reports view. As you build up data about your failures, you can identify trends, helping you target remediation or process improvement.
To view the failure report:

  1. Go to Runs -> Reports.
    The failure report is the second report from the top.
  2. Click the drop-down menu to select the time-frame of the report.
  3. The report has two views: a donut chart and a line graph. Click the view mode icons to toggle between them.

Failure by Type – Donut Chart

The donut chart displays the distribution of failure types by tag. Each segment shows the tag's occurrence percentage out of the total number of runs, alongside its percentage in the previous period. Clicking a failure tag takes you to the Test Runs screen, which lists all the runs tagged with that failure type.
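The per-segment numbers reduce to a simple calculation: each tag's count divided by the total number of runs, compared against the same figure for the previous period. A minimal sketch of that arithmetic, using made-up sample data (not pulled from Testim):

```python
from collections import Counter

def failure_distribution(tagged_runs, total_runs):
    """Percentage of runs per failure type, as shown per donut-chart segment."""
    counts = Counter(tagged_runs)
    return {tag: round(100 * n / total_runs, 1) for tag, n in counts.items()}

# Hypothetical sample: 10 runs in each period.
current = failure_distribution(
    ["Bug in app", "Bug in app", "Environment issue"], total_runs=10)
previous = failure_distribution(["Bug in app"], total_runs=10)

for tag, pct in current.items():
    delta = pct - previous.get(tag, 0.0)
    print(f"{tag}: {pct}% ({delta:+.1f} vs previous period)")
```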

Failure by Type – Line Graph

The line graph displays the number of occurrences of each type of failure throughout the specified period of time. Each tag is color coded according to the legend on the left. You can hover your mouse over the graph to see additional information.

Updated 10 months ago
