Once you have your log files, it’s time to evaluate the information they contain. This shows you where improving your test will yield the greatest benefit. In this chapter, we’ll show you a few examples of evaluations that focus on typical causes of unnecessarily long execution times.
These short examples can give you an idea of how to approach the wealth of information contained in the log files.
Time-by-event evaluation
A time-by-event evaluation is a good way of finding out which event types take the most time.
In the example above, you can see that accessing repository items takes up half of the test execution time, while mouse clicks and keyboard sequences take up another 30 % combined. Here, it may be a good idea to do another test run with the repository tracer and the all-input tracer to see in more detail which repository items and actions take the most time, so you can optimize those, e.g. by adjusting the RanoreXPaths of the repository items.
Delay actions also take up 12 %. It’s worth checking where you can replace those with Wait for actions, which only pause the test for as long as necessary instead of for a fixed amount of time.
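The following is a minimal sketch of how such a time-by-event evaluation could be scripted, assuming the performance log has been exported to a CSV file with the hypothetical columns "event_type" and "duration_ms" (adapt the column names and file path to your actual export format):

```python
# Time-by-event evaluation: sum up the logged duration per event type and
# print each type's share of the overall execution time, slowest first.
import csv
from collections import defaultdict

def time_by_event(log_path: str) -> None:
    totals = defaultdict(float)
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # "event_type" and "duration_ms" are assumed column names.
            totals[row["event_type"]] += float(row["duration_ms"])

    grand_total = sum(totals.values()) or 1.0
    for event_type, duration in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{event_type:<25} {duration:>10.0f} ms  {duration / grand_total:6.1%}")

time_by_event("performance_log.csv")
```

Sorting by total duration puts the biggest optimization targets, such as repository access or delay actions, at the top of the output.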
Time-by-repository-item evaluation
Using the repository log, you can evaluate which repository items take the longest to find and identify. This tells you which items to take a closer look at, for example to adjust their RanoreXPath so they can be found more quickly.
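As a sketch of this evaluation, assume the repository log has been exported to CSV with the hypothetical columns "repository_item" and "search_duration_ms"; again, adjust the names to your actual export:

```python
# Time-by-repository-item evaluation: rank items by total search time so the
# RanoreXPaths worth tuning stand out, with average lookup time as context.
import csv
from collections import defaultdict

def slowest_repository_items(log_path: str, top_n: int = 10):
    totals = defaultdict(lambda: [0.0, 0])  # item -> [total ms, lookup count]
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            entry = totals[row["repository_item"]]
            entry[0] += float(row["search_duration_ms"])
            entry[1] += 1

    ranked = sorted(totals.items(), key=lambda kv: kv[1][0], reverse=True)
    for item, (total_ms, count) in ranked[:top_n]:
        print(f"{item:<40} total {total_ms:8.0f} ms  avg {total_ms / count:6.0f} ms  ({count} lookups)")
    return ranked[:top_n]

slowest_repository_items("repository_log.csv")
```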
Time-by-test-container/module evaluation
The time it takes individual test containers or modules to execute is also a useful metric. Use it to find out where your test suite is slowest, so you can, for example, make structural improvements or remove clutter from test cases and modules.
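A corresponding evaluation could group execution time hierarchically by test container and module. The sketch below assumes a CSV export with the hypothetical columns "test_container", "module", and "duration_ms", one row per executed module:

```python
# Time-by-test-container/module evaluation: list the slowest containers first,
# then the slowest modules inside each, so structural problem areas stand out.
import csv
from collections import defaultdict

def time_by_container(log_path: str) -> None:
    containers = defaultdict(lambda: defaultdict(float))  # container -> module -> ms
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            containers[row["test_container"]][row["module"]] += float(row["duration_ms"])

    for container, modules in sorted(containers.items(), key=lambda kv: sum(kv[1].values()), reverse=True):
        print(f"{container}: {sum(modules.values()):.0f} ms")
        for module, ms in sorted(modules.items(), key=lambda kv: kv[1], reverse=True):
            print(f"    {module:<35} {ms:8.0f} ms")

time_by_container("module_log.csv")
```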