Imagine you are asked to develop an app for all platforms (iOS, Android, macOS, etc.) that must render correctly on every device. You would have to test it on every device and cover most resolutions. Now imagine running those tests every time you make a change to your app. This can easily become time-consuming, and if you needed the results quickly, you would have to invest in a large team.
A while back, we created a website to gather data on apps that are being released with visual errors. We were surprised by some of the apps containing errors: many are run by large corporations and are used by thousands of people daily.

If companies of this size, with large QA budgets, are releasing products that contain these visual errors, how can a smaller team hope to release with confidence that their app contains no visual defects?
A great start to preventing these kinds of visual errors is to automate some UI tests using one of the tools on the market that validate your app doesn't change unexpectedly. This ensures that once you have manually verified that everything looks good, your tests will keep expecting it to look that way.
Using baselines for comparison is a great start, but it is still not optimal when you need to scale your tests across different devices. The tools available today focus on image comparison, which means that every time you want to validate a specific view, you must manually set a baseline. If baselines were accepted automatically instead, you would risk approving captures that shouldn't be baselines (for example, a section of your app that already contains a visual error). And if your app's screen size changes, the new capture cannot be compared against the existing images, so you have to manually create a new baseline for that resolution. The process of manually defining baselines defeats the purpose of automating the tests.
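To make the per-resolution problem concrete, here is a minimal sketch of baseline management keyed by screen resolution. This is a hypothetical illustration, not Oculow's actual implementation: the `BaselineStore` class, its `check` method, and the digest-based comparison are all assumptions made for the example.

```python
# Hypothetical sketch: one approved baseline per screen resolution.
# Not Oculow's real API; names and logic are illustrative assumptions.
import hashlib

class BaselineStore:
    """Keeps one approved baseline per (width, height) resolution."""

    def __init__(self):
        # Maps (width, height) -> digest of the approved screenshot.
        self._baselines = {}

    def check(self, screenshot: bytes, resolution: tuple) -> str:
        digest = hashlib.sha256(screenshot).hexdigest()
        if resolution not in self._baselines:
            # First capture at this resolution becomes the baseline
            # (in practice, only after a human or detector approves it).
            self._baselines[resolution] = digest
            return "baseline-created"
        # Later captures at the same resolution are compared byte-for-byte.
        return "match" if self._baselines[resolution] == digest else "mismatch"
```

Note how a capture at a new resolution never matches an existing baseline; it simply starts a new one, which is exactly the manual bookkeeping the article says you end up doing by hand.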
Using Oculow to reduce visual defects
We can’t solve the problem for every developer or every company, but we can help reduce the amount of software released with visual defects. By combining automated tests, image comparison, and error detection, we can verify that an app has no visual defects, and that every deployment after that doesn’t change it unexpectedly.
Let’s expand on how this is achieved. We automate UI tests for a platform; let’s use Android as an example. This gives us a functional, step-by-step script of how to navigate the app and what to look for. During the automation, we take a screenshot of each section of the app and run it through Oculow to detect visual errors. If anything looks like it could be an error, we warn the user that they might be releasing a faulty product. If there is no visual error, Oculow sets that capture as the baseline for the resolution it was taken at.
We can then reuse the same automated test on a different device, saving the developer the time of running it manually. Since the test is the same but runs on a different device, it captures the screen and validates that there is no error at that device’s resolution.
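The two steps above can be sketched as a single decision flow: run the error detector first, and only fall back to per-resolution baseline comparison when no error is flagged. Again, this is an assumed illustration of the described workflow, not Oculow's real code; `process_capture` and its `detect_errors` callback are hypothetical names.

```python
# Hypothetical sketch of the capture workflow described in the article.
# detect_errors is a stand-in for Oculow's error-detection step and
# returns a (possibly empty) list of suspected visual issues.

def process_capture(capture: bytes, resolution: tuple,
                    baselines: dict, detect_errors) -> str:
    # Step 1: run the visual-error detector on the screenshot.
    if detect_errors(capture):
        # Warn instead of silently accepting a faulty baseline.
        return "possible-visual-error"
    # Step 2: no baseline yet for this resolution -> this clean
    # capture becomes the baseline.
    if resolution not in baselines:
        baselines[resolution] = capture
        return "baseline-set"
    # Step 3: an existing baseline means we check for unexpected change.
    return "pass" if baselines[resolution] == capture else "unexpected-change"
```

Running the same test on a second device simply calls this flow with a different resolution key, so each device's first clean capture seeds its own baseline without any manual step.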
Our system is not perfect and is still under active development, so expect some issues when running the error-detection algorithm. But this is still better than forcing everything into a baseline, or doing it all manually. Once the baselines are set, subsequent executions compare the current captures against them, all managed by our services, letting you focus on deploying and fixing issues rather than maintaining baselines.
If you’re interested in trying out our technology, you can get started for free. It will remain free until we polish it up a lot more, and once we start charging for our tool, we won’t ask for anything absurd (you can check our prices on our page). We will keep improving it based on your feedback, so make sure to let us know your opinion!