Automatic Test Case Generation Using Machine Learning

 

This blog post is written by Rishav Dutta, Machine Learning Engineer here at Sofy. Rishav is a Computer Engineering graduate and is keen on exploring the realms of Machine Learning and AI. He believes the next level of human potential will be unlocked using these technologies.

 

Automatic Test Case Generation is the process of identifying and creating test cases for an application without the need for any human intervention. This applies both to visual testing (app visuals) and to functional testing (app behavior). For example, an automated test case generator may take in a design document as input and output test cases that validate whether the corresponding components of the application match the design, or take in written test cases and create the necessary automated steps for the application. Automatic test case generation would allow for robust testing of apps without the need for user input, as the platform would be able to learn from different iterations and builds of the app. Furthermore, by learning from related apps, an automatic test case generation platform can suggest test cases extended from other apps, improving quality and holding apps to a higher standard.

Generation of test cases is not a simple task. A generic solution would have to be intelligent enough to adapt test cases to a wide variety of applications and functions. Current attempts use a range of techniques to tackle this problem, including machine learning to learn pathways through the app, code analysis to automate the creation of test cases, and tracking of user pathways to learn and identify tasks. Automatic test cases can be either rigid, such as unit tests that must pass before each release, or fluid, informative test cases that tell developers about the experience of the user base. The generation of such test cases needs to be flexible enough to satisfy the requirements of any team developing applications.

Presently, automatic test case generation is in its infancy. Some research has been done using NLP (Natural Language Processing) methods to take structured sentences and create test cases (Wang et al. 2020). These NLP-based methods take as input a test case document that lists tests and use language analysis to extract the steps into a form that is programmatically understandable. For example, “Open app and then select the first item from a list” can be converted into programmatic steps such as “app.open()” and “list.select(item)”. This method is interesting because it allows users to write test cases in their own language and gives flexibility across different writing styles. Some software uses no-code automation to identify elements of the application and generates tests in response to user input. Other methods use current user behavior to estimate the impact of changes (Silva et al. 2018). These signals include user behavior in the app, data analytics from app usage, and the build status of the apps, which are combined to generate test suites for user testing. While this is an interesting approach, it has notable drawbacks, as it requires on-site analytics tracking to work best. Many of these methods are still developing and hence are not fully fleshed-out public products yet, but the field is clearly improving.
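As a rough illustration of how such sentence-to-step extraction could work, here is a minimal, hypothetical Python sketch that maps written phrases to programmatic actions using hand-written patterns. Real NLP-based approaches like the one cited use far richer language models; the patterns and action names here are illustrative assumptions only.

```python
import re

# Hypothetical, minimal sketch of rule-based step extraction: each pattern maps a
# natural-language phrase to a programmatic action. Real NLP-based approaches use
# richer language models; the patterns and action names here are illustrative only.
STEP_PATTERNS = [
    (re.compile(r"open (the )?app", re.I), "app.open()"),
    (re.compile(r"select the first item from a list", re.I), "list.select(item)"),
]

def extract_steps(sentence):
    """Split a written test step into clauses and map each clause to an action."""
    steps = []
    for clause in re.split(r"\band then\b|\bthen\b|,", sentence):
        for pattern, action in STEP_PATTERNS:
            if pattern.search(clause):
                steps.append(action)
                break
    return steps

print(extract_steps("Open app and then select the first item from a list"))
# -> ['app.open()', 'list.select(item)']
```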

Sofy approaches this problem through ML and AI to generate context-based pathways in an application. By generating contexts, Sofy can learn certain flows and apply them to different applications of the same type. For example, the act of adding an item to a cart is similar across retail apps: the first step is searching for a product, the second is opening the product page, the next is adding that product to the cart, and the final step is verifying the item was added to the cart. This flow can be applied to Amazon, Target, or Walmart; the only differences are slight variations in visual context clues. These differences can be insignificant, but by using machine learning, Sofy can identify the items as long as they follow the same basic pattern. The Target app might have a red button saying “ship it” while Amazon might have a button that says “add to cart”, but the context is the same: the user is attempting to add the item to a cart. By identifying such context clues, such as the “add to cart” button having text related to those words or an icon conveying that information, Sofy can infer and create the test case without the need for any extra user input. Sofy for UX works in tandem with this by allowing Sofy to identify visuals to use as context for specific applications and sites.
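To make the idea of context clues concrete, here is a small, hypothetical sketch of matching on-screen elements to an abstract intent such as “add item to cart” using simple text similarity. This is not Sofy’s actual model, which is ML-based; the synonym lists, element structure, and scoring are assumptions made purely for illustration.

```python
# Hypothetical synonym table for one abstract intent. A real system would learn
# these associations (including icons) with machine learning rather than a list.
INTENT_SYNONYMS = {
    "add_to_cart": {"add to cart", "add to bag", "ship it", "buy now"},
}

def score_element(label, intent):
    """Crude similarity between a button label and an intent's known phrasings."""
    label = label.lower().strip()
    phrases = INTENT_SYNONYMS[intent]
    if label in phrases:
        return 1.0
    label_words = set(label.split())
    # Partial credit for shared words, e.g. "add item to cart" vs. "add to cart".
    return max(len(label_words & set(p.split())) / len(p.split()) for p in phrases)

def find_intent_target(elements, intent):
    """Pick the on-screen element most likely to fulfil the given intent."""
    return max(elements, key=lambda e: score_element(e["text"], intent))

screen = [{"id": "btn1", "text": "Add to Cart"}, {"id": "btn2", "text": "Save for later"}]
print(find_intent_target(screen, "add_to_cart"))  # -> {'id': 'btn1', 'text': 'Add to Cart'}
```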

Furthermore, Sofy can track app users, identify the most popular pathways, and create automatic test cases to verify that those pathways remain stable between releases. This validates that the user experience is not damaged by different builds in a constantly changing, dynamic environment. For example, if users follow a certain pathway through an app, Sofy can keep track of how that path changes between builds and verify that the changes do not significantly impact the user experience. Sofy treats these pathways as test cases, automatically validates that they pass, and reports them as “user experience tests”, giving an indication of how many users would be impacted by particular builds and changes.
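As a rough sketch of how popular pathways could be mined and turned into “user experience tests”, the hypothetical snippet below counts identical screen sequences across usage sessions and wraps the most frequent one as a replayable test. The session format and helper names are illustrative assumptions, not Sofy’s internal representation.

```python
from collections import Counter

def popular_pathways(sessions, top_n=3):
    """Count identical screen sequences across sessions and return the most common."""
    counts = Counter(tuple(session) for session in sessions)
    return [path for path, _ in counts.most_common(top_n)]

def to_ux_test(path):
    """Wrap a pathway as a 'user experience test' that replays each step in order."""
    return {
        "name": "ux_" + "_".join(path),
        "steps": [f"navigate('{screen}')" for screen in path],
        "expectation": "every step still succeeds on the new build",
    }

sessions = [
    ["home", "search", "product", "cart"],
    ["home", "search", "product", "cart"],
    ["home", "settings"],
]
for path in popular_pathways(sessions, top_n=1):
    print(to_ux_test(path))
# -> {'name': 'ux_home_search_product_cart', 'steps': [...], 'expectation': ...}
```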

By automatically creating test cases, Sofy can identify the most important parts of an application and give people information about different builds quickly and efficiently. Sofy, in effect, becomes a much more powerful testing assistant, creating reports, monitoring app usage, and verifying that the customer experience is maintained without the need for a huge team of testers. Sofy can integrate with powerful DevOps tools to make sure builds are created without regressions and that reports are sent to the appropriate locations, so teams can speed up releases and testing and ship quality applications without an extensive set of tools or personnel.

In conclusion, automatic test case generation will increase the power of testers without the need to hire more test engineers. Applications will have a better user experience, better functional performance, and better visual design because Sofy will monitor them through the intelligent creation of test cases.

 

Sign up for a 14-day trial now!

 
