On this page, we explain the preprocessing filters and the similarity value in more detail. For how to apply them, please refer to the first section in Advanced image-based testing.
Downsize
Downsizing means reducing the scale of an image. With vector graphics, this causes no loss of quality. However, with raster graphics (such as screenshots), downsizing means reducing the number of pixels. Therefore, quality is lost.
The downsizing filter’s purpose is to reduce an image's size while retaining its unique characteristics. This way, superficial differences are less likely to cause a test failure, and test execution is sped up because fewer pixels need to be searched and compared.
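To make this concrete, here is a minimal sketch of downsizing with the Pillow library. It illustrates the general technique, not Ranorex Studio's internal implementation; the file names and the 0.5 scale factor are arbitrary example values.

```python
from PIL import Image

# Illustrative sketch of downsizing; file names and scale factor are examples.
def downsize(path: str, scale: float = 0.5) -> Image.Image:
    """Reduce the pixel count of a raster image by the given scale factor."""
    img = Image.open(path)
    new_size = (max(1, int(img.width * scale)), max(1, int(img.height * scale)))
    # High-quality resampling keeps the image's characteristic shapes
    # while discarding superficial pixel-level detail.
    return img.resize(new_size, resample=Image.LANCZOS)

downsize("screenshot.png").save("screenshot_small.png")
```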
Edges
Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting discontinuities in brightness.
The Edges filter reduces the amount of information in an image, which makes for faster test execution. It also makes image recognition more robust against changes in color and brightness.
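The sketch below shows the general idea of an edge filter using Pillow's built-in FIND_EDGES kernel. It approximates what such a filter does and is not Ranorex Studio's own filter; the file names are placeholders.

```python
from PIL import Image, ImageFilter

# Sketch of a generic edge filter: flat color areas become dark,
# discontinuities in brightness remain visible.
img = Image.open("screenshot.png").convert("L")   # work on brightness only
edges = img.filter(ImageFilter.FIND_EDGES)
edges.save("screenshot_edges.png")
```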
EdgesSobel
The EdgesSobel filter creates an image emphasizing edges. It’s similar to the Edges filter and also makes image recognition more robust against changes in color, brightness, and complexity.
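As an illustration of the Sobel technique (again, not Ranorex Studio's code), the sketch below computes horizontal and vertical brightness gradients with SciPy and combines them into an edge-strength image. File names are placeholders.

```python
import numpy as np
from PIL import Image
from scipy import ndimage

# Sketch of Sobel edge emphasis: horizontal and vertical brightness
# gradients are combined into a per-pixel edge strength.
gray = np.asarray(Image.open("screenshot.png").convert("L"), dtype=float)

gx = ndimage.sobel(gray, axis=1)      # gradient along x (horizontal)
gy = ndimage.sobel(gray, axis=0)      # gradient along y (vertical)
magnitude = np.hypot(gx, gy)          # edge strength per pixel

# Rescale to the 0-255 range and save as an 8-bit image.
magnitude = magnitude / magnitude.max() * 255.0
Image.fromarray(magnitude.astype(np.uint8)).save("screenshot_sobel.png")
```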
Grayscale
The Grayscale filter reduces color and brightness information to 256 shades of gray, ranging from black at the weakest intensity to white at the strongest. Grayscaling makes image recognition more robust against changes in color.
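For illustration, the same kind of conversion can be expressed with Pillow, whose "L" mode uses the standard ITU-R 601 luminance weights. This is a sketch of the general technique, not Ranorex Studio's implementation; the file names are placeholders.

```python
from PIL import Image

# Sketch: collapse the three RGB channels into 256 shades of gray.
# Pillow's "L" mode uses the ITU-R 601 luminance weights
# L = 0.299*R + 0.587*G + 0.114*B, so color information is discarded
# while perceived brightness is kept.
gray = Image.open("screenshot.png").convert("L")
gray.save("screenshot_gray.png")
```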
Threshold
Thresholding is the simplest method of image segmentation. The simplest thresholding methods replace each pixel with a black pixel if its intensity is below a fixed constant, or with a white pixel if its intensity is above that constant. Applied to a grayscale image, thresholding therefore produces a binary image. In the example image below, this results in the dark tree becoming almost completely black and the white snow becoming almost completely white.
The Threshold filter makes image recognition more robust against changes in color, detail, and brightness.
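A minimal sketch of global thresholding with Pillow is shown below. It illustrates the technique rather than Ranorex Studio's filter; the cutoff value of 128 and the file names are arbitrary examples.

```python
from PIL import Image

# Sketch of global thresholding; the cutoff of 128 is an arbitrary example.
THRESHOLD = 128

gray = Image.open("screenshot.png").convert("L")
# Pixels darker than the cutoff become black (0),
# pixels at or above it become white (255).
binary = gray.point(lambda p: 255 if p >= THRESHOLD else 0)
binary.save("screenshot_threshold.png")
```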
Similarity
The similarity property controls how similar the comparison image and the actual image need to be for Ranorex Studio to consider them a match. It can be adjusted from 0.0 to 1.0, corresponding to 0% similarity (completely different images) and 100% similarity (identical images). It may be tempting to use values like 0.8 or 0.9 to ensure the image is found even if some superficial changes occur. However, these values only appear high; in practice, they are already very low.
At 0.9 similarity, Ranorex Studio would consider an entirely white 100-pixel picture identical to a picture with 90 white and 10 black pixels. That is already quite a difference. When you compare images on the order of several thousand pixels, this becomes even more of an issue.
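The arithmetic behind this example can be checked with a few lines of NumPy, assuming the mean-squared-difference interpretation described under Similarity example #2 below. This is an approximation for illustration, not Ranorex Studio's exact algorithm.

```python
import numpy as np

# Back-of-the-envelope check of the 100-pixel example.
# Pixel values are normalized so that white = 1.0 and black = 0.0.
all_white = np.ones(100)
mostly_white = np.ones(100)
mostly_white[:10] = 0.0               # 10 of the 100 pixels turned black

similarity = 1.0 - np.mean((all_white - mostly_white) ** 2)
print(similarity)                     # 0.9 -> a setting of 0.9 still matches
```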
Similarity example #1
Consider the icons of Microsoft Edge and Internet Explorer in the image below. They each consist of around 2000 pixels and are markedly different from each other. A similarity value of 0.9 would not catch these differences; it would consider the icons a match. You would need a value of at least 0.95 for them to be treated as different.
For this reason, we recommend you always use as high a similarity value as possible. If 1.0 doesn’t work, 0.9999, 0.99, or 0.98 should normally be enough. You should rarely go below 0.95, as this will ignore significant differences. To ensure your images are found at these high values, use lossless image formats such as .png or .bmp. The artifacts created by lossy compression make formats like .jpg unsuitable.
For large pictures on the order of several thousand pixels or more, we also recommend turning off similarity reporting, as it can take a very long time to compute even on fast machines.
Similarity example #2
Similarity defines (as a percentage) how similar the compared images need to be for the check to pass. For each pixel, the color difference between the two images is calculated; these differences are squared and averaged, and the similarity is 1 minus this mean squared difference.
Example:
- Imagine that we compare 10×10-pixel color images
- If all pixels have the same color except for one pixel that is white (RGB 255,255,255) in picture A and black (RGB 0,0,0) in picture B, then the similarity is 99%
- If all pixels have the same color except for one pixel that is black in picture A and gray (RGB 128,128,128, i.e. a 50% color difference) in picture B, then the similarity is 99.75% (because of the squared error)
- Simply speaking, a similarity of 99% is already quite low if you compare large images and want to find small differences
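Both cases can be reproduced with a short NumPy sketch that implements this mean-squared-difference interpretation. It is an approximation for illustration, not Ranorex Studio's exact implementation; the uniform base color (200,200,200) is arbitrary.

```python
import numpy as np

# Mean-squared-difference similarity over all pixels and color channels.
def similarity(a: np.ndarray, b: np.ndarray) -> float:
    diff = (a.astype(float) - b.astype(float)) / 255.0   # per-channel difference in [0, 1]
    return 1.0 - float(np.mean(diff ** 2))

base = np.full((10, 10, 3), 200, dtype=np.uint8)

# Case 1: one pixel white in picture A, black in picture B -> 99%
a1, b1 = base.copy(), base.copy()
a1[0, 0] = (255, 255, 255)
b1[0, 0] = (0, 0, 0)
print(round(similarity(a1, b1), 4))   # 0.99

# Case 2: one pixel black in picture A, gray in picture B -> ~99.75%
a2, b2 = base.copy(), base.copy()
a2[0, 0] = (0, 0, 0)
b2[0, 0] = (128, 128, 128)
print(round(similarity(a2, b2), 4))   # 0.9975
```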