In a project planning meeting I once attended, I recall that someone said, "We need to start automating tests for the user interfaces during the continuous integration stage." This was met with a few seconds of silence.
And then all hell broke loose.
The argument quickly grew heated, all due to a difference of opinion among developers, testers, and product managers.
Three roles, three perspectives
According to the testers, automated checks shouldn't be placed on user interfaces while the team is still integrating code. The checks would change too rapidly, so it would be better to run them only after the development team gives a feature-complete green flag.
According to the development team, if we are fully engaged in integration activities, we don't want tester interference in UI updates and related issues. This will only consume our time, eventually impacting sprint timelines.
Finally, the product manager disagrees with testers and the development team alike, because we'll need to start planning and writing tests as we go. That way, at delivery time, we'll save effort and have complete, reliable tests in hand. We should therefore start planning and writing cases for upcoming features.
Each stakeholder presents a valid argument. But what is unquestionable here is that, before making a decision, there's a crystal-clear need to understand software design and its relationship to test automation. The wrong decision could jeopardize the whole project, injecting chaos into delivery timelines.
Due to conflict between QA, engineering, and product, organizations are now moving test engineering into the development pod, therefore making the process extremely collaborative right from the beginning. Testing becomes a component of sprint planning itself and the much-desired shifting left—more on this later—is finally achieved.
Considering software design
Way back during the primordial dawn of software development, the very first applications consisted of a set of functions intended for calculating numeric data. These were built around a simple cycle of information input, processing, and output. Back then, programmers were focused entirely on the accuracy of formulae rather than the need to create eye-pleasing interfaces and themes.
Today, almost half a century later, software is not just a set of sequential instructions. It is now a cognitive space where users rely heavily on application design.
Placing context in software design means introducing flexibility. It lets the user avoid the feeling of operating a complex logical solution built from thousands of lines of code and instead focus on an objective. Modern software design guides the user to a point where the user can solve problems, process data, and produce results. (In other words, abstraction in action!)
To achieve this, it's typical for teams to develop design objects, elements, and software that allow design engineers to better construct user interfaces and allow all stakeholders to provide their recommendations. You can see the result of this user-focused effort in the form of thoughtfully considered and intuitive websites, mobile applications, and anywhere else people are expected to approach something with as little trouble as possible.
Three overlapping spheres
No matter how complex an app is, it must exist in some sort of phase space. But what exactly is phase space? English Wikipedia offers a nice and concise definition:
In dynamical systems theory, a phase space is a space in which all possible states of a system are represented, with each possible state corresponding to one unique point in the phase space.
The application ecosystem presents a very interesting mix of human and machine coexistence. On one side we have high-end techies, such as software developers and engineers, user experience (UX) designers, business analysts, and others. On the other, one can find enthusiastic users who are very familiar with apps, sometimes to the point that they may in fact know more about them than their creators.
In between these two circles one can place a third circle consisting of hardware like laptops, mobile devices, cameras, smart watches, processors, touch sensors, QR codes, scanners, printers, and so forth.
Any app design outcome needs to cater to these three entities. It needs to respond to the technical implications of the system, embrace user-side aesthetics, and accommodate the equipment and interfaces for which it was designed. Testing, too, must address these three spheres as much as possible.
Time and resources
The main reason motivated test teams take up the task of automating application design tests is that they must otherwise spend a tremendous amount of time manually testing GUI elements in both functional and non-functional categories. This makes for a big resource drain.
This process sometimes includes several elements that can become extremely time consuming for a QA team, who must work around looming sprint deadlines and parallel product delivery schedules. Another aspect is the matter of hardware compatibility and the necessity of GUI modifications in response to requests from a client (or clients) and/or input from internal teams during stand-ups.
Some would argue that the best time during product development to introduce design-based test automation is when a certain feature is marked complete by the product owner. Here the definition of done comes into play. Both teams—product and quality assurance—have to sit, think, and sign off on every feature considered done. Any ambiguity in that definition can raise more issues than answers.
Testing teams produce test cases around a planned feature with a "sprint plus one" approach, meaning they draft test cases for the feature before its sprint begins. This lets them tune the cases during story grooming sessions. And this is also the core reason one might propose automating cases during the system integration phase rather than at delivery time.
When it comes to functional and business logic, test cases and scripts are notably easier to maintain. But when it comes to maintaining a GUI, the test team usually arrives at a crossroads where they must decide what to automate and what not to automate. This question needs to be answered with great care. Any misstep can cost a team precious time and other resources; the wrong decision can even impact product delivery schedules.
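One common way to keep whatever GUI automation you do take on maintainable is the page-object pattern. The sketch below is illustrative only: `LoginPage` and `FakeDriver` are invented names, and the stubbed driver stands in for a real browser driver so the example runs anywhere. The point is that selectors live in one place, so a UI change during integration means updating the page object, not every test that touches the screen.

```python
class FakeDriver:
    """Stand-in for a real WebDriver, so the sketch runs without a browser."""

    def __init__(self, fields):
        self.fields = fields  # element id -> current value on the "page"

    def type(self, element_id, text):
        self.fields[element_id] = text

    def read(self, element_id):
        return self.fields.get(element_id, "")


class LoginPage:
    # Selectors are defined once, here. When the UI changes mid-sprint,
    # only these constants need updating.
    USERNAME_FIELD = "login-username"
    STATUS_BANNER = "login-status"

    def __init__(self, driver):
        self.driver = driver

    def enter_username(self, name):
        self.driver.type(self.USERNAME_FIELD, name)

    def status(self):
        return self.driver.read(self.STATUS_BANNER)


driver = FakeDriver({"login-status": "ready"})
page = LoginPage(driver)
page.enter_username("alice")
print(page.status())  # -> ready
```

Tests written against `LoginPage` never mention element ids directly, which is exactly what makes GUI cases cheaper to keep alive while the interface is still in flux.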
Testing has always been a crucial factor in software development: Poor testing can only lead to problems, and those problems can and will rapidly compound. Today, apps are more complicated than ever before, and the pressure and scrutiny on an app's development team to release a flawless product are intense: Users expect their apps to work flawlessly under any condition and at all times.
So, when should I test? The answer really is as simple as "as soon as possible" (the aforementioned shifting left). You should automate testing as soon as it is feasible for your team. By embracing no-code automated app testing, for example, your team can expect to save time, money, and other resources for your next big project, and engineering teams can automate without treating time and resources as limiting factors.
Not only does no-code allow non-specialists to execute tasks, it allows your team to complete testing in a much shorter timeframe than manual approaches. It's a complex world out there, full of surprises and the unexpected—why make things more complicated than they need to be?