Let's start with the high-level flow of how to create, edit, run, and see the results of your test.
These core concepts are specific to Testim Automate.
There are two ways to create a test:
- by recording the test in the browser
- by writing the test in code
For users who want to write tests in code, we recommend a hybrid approach that starts by leveraging the Chrome Extension to record a user flow. This approach can jumpstart your development while also increasing the resiliency of your test.
Once the test has been recorded, you can either edit the test in the Testim visual editor or export the test to code and edit it in your IDE (Integrated Development Environment).
Editing the test allows you to customize it to meet the needs of the web application under test. You can add conditions, validate output, adjust parameters, set up data-driven testing, and more.
A test grid is a set of resources, on-premises or in the cloud, that can spin up multiple browsers and configurations so that you can fully test your application in (simulated) real-world conditions.
Once your test runs successfully on your local system, it is ready to run on a grid. Tests can be run in parallel on Selenium-based grids, including Testim's hosted grid, third-party services, and self-managed private Selenium Grids.
Test runs can be triggered in a number of ways, including manually from your IDE or the Testim test editor, at a scheduled time using the Testim scheduler, or, most commonly, at the completion of a Continuous Integration (CI) server task.
Finally, when the test has run, you will see the results in the Testim reports. This includes summary data of successes and failures, as well as details about test failures. Testim aggregates error types so that you can troubleshoot problems quickly. We also collect the information that you need to identify the root cause of the problem, including screenshot comparisons, HTML DOM, and console logs.
Recording a test occurs in the Chrome browser.
Before recording a test, you must first sign up for an account and install the Chrome extension. Don't worry: we use the extension only for recording tests.
Recording a test is simple. Enter the target web URL in the Testim editor and press the record button. Then go to the web application and click, scroll, enter data, tab, and so on to complete a common task like logging in. When you are done, stop recording and view your test.
Give it a try here.
Once you have your project set up, you can start creating your first coded test. As mentioned above, you can start by writing code directly using the Dev Kit API. Or you can record a test, export it to code, and start editing in your IDE (our recommended approach).
Give it a try here.
Smart Locators are super cool! They are where a lot of the magic of Testim's AI happens.
When you record a test, you need to identify the elements on the page. Some test automation tools identify visual elements by a single locator, such as a property ID, text, or class. However, if this locator changes due to a feature update, the test won't find the element and will fail.
Testim's Smart Locators solve this problem by taking a more holistic approach to identifying the element. When an element is selected during a recording (click, hover, enter, etc.), Testim's algorithm analyzes potentially hundreds of attributes associated with the element. It then assigns weights to these attributes to uniquely identify the element.
And the Smart Locators learn with each test run. If some attributes change, the Smart Locator will use other attributes to identify the element. This way, if the element is changed but still functioning, Testim's Smart Locators will find it and keep the test from failing.
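As a rough intuition for weighted attribute matching, here is a deliberately simplified sketch. The attributes, weights, and threshold are illustrative inventions, not Testim's actual algorithm or values; the point is only that a match can survive a change to any single attribute.

```javascript
// Highly simplified sketch of weighted attribute matching — NOT Testim's
// actual algorithm or weights. Each candidate element is scored against a
// "fingerprint" captured at recording time; the best match above a
// threshold wins, so no single attribute change can break the lookup.
const recorded = {            // attributes captured during recording
  id: 'login-btn',
  text: 'Log in',
  className: 'btn primary',
  tag: 'button',
};
const weights = { id: 0.4, text: 0.3, className: 0.2, tag: 0.1 }; // illustrative only

function score(candidate) {
  let s = 0;
  for (const [attr, w] of Object.entries(weights)) {
    if (candidate[attr] === recorded[attr]) s += w;
  }
  return s;
}

function findElement(candidates, threshold = 0.5) {
  const best = candidates.reduce((a, b) => (score(a) >= score(b) ? a : b));
  return score(best) >= threshold ? best : null;
}

// The id changed in a release, but text, class, and tag still match,
// so the button is still found.
const candidates = [
  { id: 'signin-btn', text: 'Log in', className: 'btn primary', tag: 'button' },
  { id: 'help-link',  text: 'Help',   className: 'link',        tag: 'a' },
];
console.log(findElement(candidates).id); // prints "signin-btn"
```

A single-locator strategy (matching on `id` alone) would have failed here; the weighted match degrades gracefully instead.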
If you are wondering how a Smart Locator identifies an element, you can look in the Properties panel. You can also change how Smart Locators are weighted and override them if desired.
A key advantage of managing tests-as-code is that you can store them in your version control system next to your application code. This makes it easier to keep tests up-to-date as application code changes. When you create a branch in GitHub you can automatically branch your tests. Then when you issue a pull request to merge the code into the main branch, you can merge in your updated tests.
After you have written your test, you will want to run it locally to verify that it works. If it doesn't, you can troubleshoot by examining comparison screenshots, HTML DOM data, and console logs to identify the problem.
If you are working in code, you can use your IDE debugger. If you are working in the visual editor, you can use Testim's debugging tools.
You will want to run your tests in parallel on a test grid. This allows you to test multiple browsers and configurations and run entire test suites faster.
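The speed win from parallelism can be sketched with a generic concurrency pattern. The browser configurations and the `runTest` stub below are illustrative stand-ins, not Testim's grid API; the point is that concurrent dispatch makes total wall-clock time approach the slowest single run rather than the sum of all runs.

```javascript
// Generic sketch of parallel test execution across configurations — the
// config list and runTest stub are illustrative, not Testim's API.
const configs = [
  { browser: 'chrome',  width: 1366 },
  { browser: 'firefox', width: 1366 },
  { browser: 'chrome',  width: 375 },   // mobile-sized viewport
];

// Stand-in for dispatching one test run to a grid node.
async function runTest(config) {
  // A real grid would launch the browser remotely; here we just resolve.
  return { config, passed: true };
}

async function runSuite() {
  // All configurations run concurrently rather than one after another.
  const results = await Promise.all(configs.map(runTest));
  return results.filter(r => r.passed).length;
}

runSuite().then(passed => console.log(`${passed}/${configs.length} passed`));
```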
See how to set up your grid
While you have different options for starting a test, including using the Testim scheduler, most customers integrate with their CI server and trigger test runs upon completion of CI tasks.
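As one possible shape for this, here is a hypothetical GitHub Actions fragment that runs the suite after the build job finishes. The job names are invented, and the CLI flags shown (`--token`, `--project`, `--grid`) are assumptions based on our reading of Testim's CLI; verify them against the current Testim CLI documentation before use.

```yaml
# Hypothetical CI fragment — job names and flags are illustrative.
jobs:
  e2e-tests:
    needs: build            # trigger tests only after CI build tasks finish
    runs-on: ubuntu-latest
    steps:
      - run: npm install -g @testim/testim-cli
      - run: >
          testim
          --token "${{ secrets.TESTIM_TOKEN }}"
          --project "${{ secrets.TESTIM_PROJECT }}"
          --grid "Testim-Grid"
```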
See how to set up your continuous integration (CI)
Regardless of how you created your test (in code or codeless) or how you edited it (in your IDE or in the visual editor), the results are shown in Testim reports when you run the tests.
You can add labels to your test results to help sort and filter them.
You can see managerial reports that show how many tests have run and how they are performing.