
Automatic Test Case Generation Using Machine Learning

QA testing is evolving with machine learning. Technologies like natural language processing can generate test cases automatically.

Contributed by Rishav Dutta, Machine Learning Engineer here at Sofy. Rishav is a Computer Engineering graduate with a focus on Machine Learning and AI. He believes the next level of human potential will be unlocked using these technologies.

Automatic test case generation is the process of identifying and creating test cases for an application without any human intervention. This can refer to both visual testing (app visuals) and functional testing (app behavior). For example, an automated test case generator may take a design document as input and output test cases that validate whether the corresponding components of the application have the appropriate design, or take written test cases and create the automated steps needed to run them against the application.

Because the platform can learn from different iterations and builds of the app, automatic test case generation allows for robust app testing without the need for user input. Furthermore, by learning from related apps, an automatic test case generation platform can suggest test cases borrowed from other apps, holding apps to a higher standard.


Test case generation is not a simple task. A generic solution must be intelligent enough to adapt test cases to a wide variety of applications and functions. Current approaches use several techniques to tackle this problem: harnessing machine learning to learn pathways through the app, analyzing code to create test cases automatically, and tracking user pathways to learn and identify tasks.

Automatic test cases can be either rigid (unit tests that are required to pass before each release) or fluid (informative test cases that tell developers about the experiences of the user base). A generation system needs to be sophisticated enough to satisfy the requirements of any team developing applications.

Automatic test case generation is currently in its infancy. Some research has used NLP (Natural Language Processing) methods to take structured sentences and create test cases (Wang et al., 2020). These NLP-based methods take input from a test case document that contains a list of tests and use language analysis to extract the steps into a programmatically understandable form.

For example, a step such as "open the app and then select the first item from a list" can be converted into a sequence of programmatic actions. This method is interesting because it lets users write test cases in their own words and provides flexibility for different writing styles.
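To make the idea concrete, here is a minimal sketch of how a keyword-based parser might map written steps to programmatic actions. The patterns and action names (`launch_app`, `tap_list_item`) are hypothetical illustrations; a real NLP pipeline would rely on trained language models rather than regular expressions:

```python
import re

# Hypothetical mapping from natural-language phrasings to programmatic
# actions; a production NLP system would use a learned model instead.
PATTERNS = [
    (re.compile(r"open (?:the )?app", re.I),
     lambda m: ("launch_app", {})),
    (re.compile(r"select the (\w+) item", re.I),
     lambda m: ("tap_list_item", {"position": m.group(1)})),
]

def parse_step(sentence: str):
    """Convert one written test step into an (action, args) tuple."""
    for pattern, build in PATTERNS:
        match = pattern.search(sentence)
        if match:
            return build(match)
    return ("unknown", {"text": sentence})

def parse_test_case(text: str):
    """Split a sentence like 'open app and then select the first item'
    into individual steps and parse each one."""
    steps = re.split(r"\s*(?:and then|then|,)\s*", text, flags=re.I)
    return [parse_step(s) for s in steps if s.strip()]
```

Running `parse_test_case("open app and then select the first item from a list")` would yield a structured step list that an automation driver could execute.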

A number of no-code automation tools currently exist that try to identify elements of the application and generate tests in response to user input. Other methods use current user behavior to estimate the impact of changes (Silva et al., 2018).

These behavioral signals include user behavior within the app, data analytics from app usage, and the build status of the app, which together are used to generate test suites for user testing. While this is an interesting approach, it has notable drawbacks: it works best only when on-site data analytics tracking is in place. Many of these methods are still in development and are not yet fully fleshed-out public products. However, the field is clearly improving.



Sofy approaches this problem through ML and AI to generate context-based pathways in an application. By generating contexts, Sofy can learn certain flows and apply them to different applications of the same type. For example, the act of adding an item to a cart is similar among different retail apps.

The first step is to search for a product, the second step is to click on the product page, and the third is to add that product to your cart. The final step is to verify the item was added to the cart. This flow can be applied to the apps of major US retailers like Amazon, Target, and Walmart. Where they differ is in slight variations in visual context clues.
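The shared flow above can be sketched as a generic step sequence with per-app context filled in. The step names and button labels below are illustrative assumptions for the example, not Sofy's actual data model or the real selectors used by these retailers:

```python
# A reusable "add to cart" flow, parameterized per app.
GENERIC_FLOW = [
    "search_product",     # step 1: search for a product
    "open_product_page",  # step 2: click on the product page
    "add_to_cart",        # step 3: add the product to the cart
    "verify_in_cart",     # step 4: verify the item was added
]

# Hypothetical context clues per app; real apps differ in exact labels.
APP_CONTEXTS = {
    "amazon":  {"add_to_cart": {"button_text": "Add to Cart"}},
    "target":  {"add_to_cart": {"button_text": "Ship it"}},
    "walmart": {"add_to_cart": {"button_text": "Add to cart"}},
}

def generate_test_case(app: str):
    """Instantiate the shared flow with app-specific context clues."""
    context = APP_CONTEXTS[app]
    return [(step, context.get(step, {})) for step in GENERIC_FLOW]
```

The point of this structure is that only `APP_CONTEXTS` varies between retailers; the flow itself is learned once and reused.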

These differences can be subtle, but by harnessing machine learning, Sofy can identify the elements as long as they follow the same basic pattern. The Target app might have a red button that says "Ship it," while Amazon might have a button that says "Add to Cart," but the context in both cases is that the user is attempting to add an item to a cart. By identifying context clues, such as the "add to cart" button carrying text related to those words or an icon conveying that information, Sofy can infer and create the test case without any extra user input. Sofy for UX works in tandem with this by allowing Sofy to identify visuals to use as context for specific applications and sites.
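One simple way to illustrate this kind of context inference is fuzzy string matching between a button's label and known phrases for an intent. This is only a sketch under that assumption; a production system would combine learned text embeddings with icon recognition rather than raw string similarity:

```python
from difflib import SequenceMatcher

# Hypothetical intent vocabulary for the "add to cart" context.
INTENT_PHRASES = {
    "add_to_cart": ["add to cart", "ship it", "add to bag", "buy now"],
}

def infer_intent(button_label: str, threshold: float = 0.6):
    """Guess which intent a button label conveys via string similarity;
    return None when no known phrase is close enough."""
    best_intent, best_score = None, 0.0
    for intent, phrases in INTENT_PHRASES.items():
        for phrase in phrases:
            score = SequenceMatcher(None, button_label.lower(), phrase).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None
```

Under this scheme, both "Ship it" and "Add to Cart" resolve to the same `add_to_cart` intent even though their labels share no words.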

Furthermore, Sofy can track app users, identify the most popular pathways, and create automatic test cases to verify that those pathways remain stable between releases. This validates that the user experience is not damaged by new builds in a constantly changing environment.

For example, if users follow a certain pathway through an app, Sofy keeps track of dynamic path changes between builds and verifies that the path changes do not significantly impact the user experience. Sofy treats these new pathways as test cases, automatically validates that these test cases pass and reports them as “user experience tests”. This provides information about how many users would be impacted by certain builds and changes.
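As an illustration, popularity-based test generation can be sketched as counting identical screen sequences across sessions and emitting a test for each of the most common ones. The log format, field names, and impact metric here are assumptions made for the example, not Sofy's internal representation:

```python
from collections import Counter

def popular_paths(session_logs, top_k=2):
    """Count identical screen sequences across sessions and return
    the most frequent ones with their counts."""
    counts = Counter(tuple(session) for session in session_logs)
    return counts.most_common(top_k)

def build_ux_tests(session_logs, top_k=2):
    """Emit one 'user experience test' per popular path, tagged with
    the share of sessions that would be impacted by a regression."""
    total = len(session_logs)
    return [
        {"name": "user experience test: " + " -> ".join(path),
         "steps": list(path),
         "impacted_share": count / total}
        for path, count in popular_paths(session_logs, top_k)
    ]
```

The `impacted_share` field captures the idea in the paragraph above: a failing test on a path used by 75% of sessions matters far more than one on a rarely used path.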

By automatically creating test cases, Sofy identifies the most important parts of an application and provides information about different builds to users quickly and efficiently. Sofy, in effect, becomes a powerful testing assistant, creating reports, monitoring app usage, and verifying that customer experience is maintained, all without the need for a huge team of testers.

Sofy integrates with powerful DevOps tools to ensure that builds are created without regressions, and sends reports to the appropriate locations so that teams can speed up releases and testing. Sofy empowers you to build quality applications without an extensive toolset or additional personnel.

In conclusion, automatic test case generation amplifies the power of testers and helps you avoid what would otherwise be the necessity of hiring more test engineers. Applications will offer better user experiences, better functional performance, and better visual design because Sofy monitors the apps through intelligent test case creation. If you’d like to learn more about how AI can improve your mobile app testing experience, download our free guide.