As a Microsoft SDET and an Engineering Manager, I had lots of ideas about better ways to create test automation. I think the natural evolution of a tester is that you start with manual testing. This gets monotonous, so you start to recognize where test automation could take over repetitive and tedious tasks. You also find out that automated testing of APIs is much easier than automated testing of the UI.
UI test automation looks beautiful when it runs, but it can easily be thrown off course, and then you spend your day investigating failures not in the product code but in your test automation. You later learn better approaches to writing robust test automation, such as abstraction layers or the Page Object pattern, to isolate breaking changes.
And eventually, you ask yourself the question, “Why can’t I create an automated testing system that just crawls the app and tests everything?”
Smart Monkeys or Chess Programs?
So, you eventually create your first of several “Smart Monkeys”. The term Smart Monkey refers to the premise that if you placed typewriters in front of enough monkeys, their random banging on the keys would eventually produce the works of Shakespeare.
Each Smart Monkey ended with the same result: technically awesome, but it found very few issues. The problem is that while they were smart enough to navigate, they had no real intelligence behind their actions (i.e., like monkeys at a typewriter). Crawling the UI, clicking, and typing looks pretty, but it takes too long because so many of the actions executed have no effect.
At this point, I changed the way I thought of Smart Monkeys and looked to gaming to approach the problem. I thought about computer chess programs and started to think that an application like Microsoft Word was just a giant game, and maybe I could create a “Strategic Bot” to test it the way a computer chess program plays chess.
I read about how chess programs worked. It seemed they assigned a point value to every piece remaining on the chessboard and then, after a move, calculated whether the result improved the point total (a seemingly good move) or decreased it (a seemingly bad move). In other words, when you start the game and both players have all their pieces on the board, the point total is zero, since your total minus the other player’s total equals zero. Now if you make a move that captures an opponent’s piece, thus removing it from the board, you improve your point total, so it seems like a good move. You must also account for the opponent’s next move: if she can capture a piece of yours of higher point value than what you captured, then that pair of moves seems bad, since it lowers your overall point total.
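That scoring idea is simple enough to sketch in code. Here is a minimal material-count evaluation in Python; the board representation and piece values are my own assumptions for illustration, not any particular chess engine’s API.

```python
# A minimal sketch of the material-count evaluation described above.
# The board is a hypothetical list of (piece_type, owner) tuples.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def evaluate(board, me):
    """Score = my material total minus my opponent's total.

    At the starting position both sides hold identical material, so the
    score is zero. Capturing an opponent's piece removes it from the
    board and raises the score; losing one of mine lowers it.
    """
    score = 0
    for piece_type, owner in board:
        value = PIECE_VALUES.get(piece_type, 0)  # the king is not scored
        score += value if owner == me else -value
    return score
```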
Now a seasoned chess player will tell you that they will gladly make a move that loses points if it sets up a later move that wins the game (checkmate). That smacks of strategy and intelligence. At the novice level, a computer chess program may consider only the immediate impact of a move, so it makes moves that immediately increase the point total; a moderate-difficulty program may look ahead a couple of moves before assessing the point total. In expert mode, the program would try to look all the way ahead to game completion to make that assessment. That would make it virtually impossible to beat, as even the best human chess champions can reportedly only think ahead about 10-15 moves.
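That lookahead knob is easy to sketch too. Below is a plain, depth-limited minimax sketch in Python, where `depth` plays the role of difficulty; `legal_moves` and `apply_move` are assumed helpers (and `evaluate` is the material function sketched above). A real engine would add alpha-beta pruning and a far richer evaluation.

```python
# A sketch of depth-limited lookahead: plain minimax, no pruning.
# depth=1 is roughly "novice", a few plies is "moderate", and searching
# toward game completion is the (impractical) "expert" mode.

def minimax(board, me, opponent, depth, maximizing=True):
    moves = legal_moves(board, me if maximizing else opponent)
    if depth == 0 or not moves:
        return evaluate(board, me)  # static score at the search horizon
    if maximizing:
        return max(minimax(apply_move(board, m), me, opponent,
                           depth - 1, False) for m in moves)
    return min(minimax(apply_move(board, m), me, opponent,
                       depth - 1, True) for m in moves)
```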
So my testing thought was: if I could establish some point value for the current state of Microsoft Word, and then perform a random action such as typing, selecting text, or clicking a menu option, I could recalculate the new point total and decide whether that action had some impact. I thought I could somehow remember that, from this state to that state, these actions had an impact and those did not, and then my Strategic Monkey would do more of the impactful things and along the way uncover issues like debug asserts or app crashes. I never succeeded beyond a Smart Monkey, and I believe it was because I tried to program this with my limited if-then-else logic, whereas with Machine Learning I believe a Strategic Bot can be created today.
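For what it’s worth, here is roughly the loop I had in mind, sketched in Python with hypothetical helpers (`app_state`, `possible_actions`, `perform`, `state_value`) standing in for the real automation hooks, and assuming states are hashable identifiers:

```python
# A sketch of the "Strategic Monkey" loop: act at random, score the
# state before and after, and remember which actions had an impact.

import random
from collections import defaultdict

impact = defaultdict(list)  # (state, action) -> observed score deltas

def explore(steps):
    for _ in range(steps):
        state = app_state()
        action = random.choice(possible_actions(state))
        before = state_value(state)
        perform(action)                        # may surface asserts or crashes
        delta = state_value(app_state()) - before
        impact[(state, action)].append(delta)  # remember what mattered
```

A smarter second pass could then favor the (state, action) pairs with the largest recorded deltas instead of choosing uniformly at random.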
Apps Tested with ML-Powered Bots
Imagine an Android phone app being launched and tested by an intelligent Bot. At each screen, it enumerates all the UI actions it can take, and since it has not yet learned about this application, it simply starts to walk the UI, probably using a depth-first traversal. It then creates an ML data record of Previous State, Action Taken, Next State. As it processes a screen, it runs a series of tests; the results are reported, and any failures are associated with the states in which they occurred and added to the ML data set.
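A rough sketch of that crawl-and-record loop, again with hypothetical driver helpers (`current_screen`, `enumerate_actions`, `perform`, `run_checks`, `navigate_back_to`) standing in for a real UI automation framework:

```python
# A sketch of the depth-first crawl that builds the ML data set.
# Screens are assumed to be hashable identifiers.

records = []   # (previous_state, action_taken, next_state) training rows
failures = []  # failures associated with the state where they occurred

def crawl(screen, visited):
    visited.add(screen)
    failures.extend((screen, f) for f in run_checks(screen))
    for action in enumerate_actions(screen):
        perform(action)
        nxt = current_screen()
        records.append((screen, action, nxt))
        if nxt not in visited:
            crawl(nxt, visited)       # depth-first: follow new screens first
        navigate_back_to(screen)      # assumed helper; reset before next action
```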
This is where the Machine Learning comes in. ML processes all this data and begins building an intelligent action-choice engine that has learned from the past. The more the application is used, the more state data is captured and the better the ML engine becomes. Snapshots of the ML engine can be captured, enabling several types of simulated users: a novice user bot that tends to take random actions, a moderate user bot that takes a higher percentage of impactful actions, and an expert user bot that uses the product in a seemingly perfect way.
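One simple way to picture those personas is as the same learned policy differing only in how often it explores at random. In the sketch below, `model.best_action` is an assumed stand-in for the ML engine trained on the recorded transitions:

```python
# A sketch of persona bots: one learned policy, three exploration rates.

import random

class PersonaBot:
    def __init__(self, model, explore_rate):
        self.model = model
        self.explore_rate = explore_rate  # near 1.0 ~ novice, near 0 ~ expert

    def choose(self, state, actions):
        if random.random() < self.explore_rate:
            return random.choice(actions)             # random, novice-like
        return self.model.best_action(state, actions) # learned, expert-like

# novice   = PersonaBot(model, 0.9)
# moderate = PersonaBot(model, 0.4)
# expert   = PersonaBot(model, 0.05)
```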
Now that we have intelligent bots, we can also add logic that recognizes build-to-build code changes and uses them to influence where the bot focuses its efforts. The bots can be continuously taught new issues to report on, and they can use their intelligence to retest the application as efficiently and effectively as a senior SDET would.
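As a small illustration of that idea, a bot could bias its crawl toward screens touched by the latest build; `changed_areas` here is a hypothetical diff report, not a real API:

```python
# A sketch of biasing the bot toward screens affected by a new build.

def prioritize(screens, diff_report):
    changed = changed_areas(diff_report)  # e.g. {"checkout", "login"}
    # Visit screens touched by this build first, then everything else.
    return sorted(screens, key=lambda s: s not in changed)
```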
Sofy.ai is an intelligent bot product that keeps getting smarter and more efficient through advanced uses of ML and AI. Get a demo with Sofy to learn how to get started and how you can join us on this journey.