The basis for the MetaAutomation pattern language, and the reason for it, is at the core quite simple.
Everybody on the team (or almost everybody) does manual or exploratory testing with the SUT. When we do manual or exploratory testing, we pay attention to our actions and how the product responds to them. We’re detectives, and we gather information from all available sources to decide whether SUT behavior is acceptable or not. If it might not be acceptable, we use our powers of observation and detective smarts to enter a bug. The team decides whether the bug is actionable.
Then there’s “test automation,” which on the surface appears to automate that process, so that what the manual testing role might do is now handled by automation that runs at all hours, in the lab or in the cloud, and so on… except that it obviously does not replicate the skills or the value of a good tester.
Quality Automation done well can be very powerful, but unfortunately, the way it’s generally done, the observational powers of a manual tester are replicated poorly or not at all. When the product is driven by automation, most of the data from driving the product, and from how the product responds, is simply dropped on the floor. It’s nobody’s fault; it’s just the way it’s done, because the available tools are poorly suited to recording this data.
For example, automation code or tools often use logging. Logging is a great tool for instrumenting your software, because logs are simple and lightweight, but for quality automation, logs are poorly suited: by design, they drop all context other than the timestamp. Some context can be re-created with unique identifiers that show up across multiple log statements, but that can’t fully compensate for the fact that logs lose context and, by design, are poorly suited to performance measurement.
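To make the problem concrete, here is a minimal sketch (the `log_line` helper, the messages, and the `run-042` identifier are all hypothetical) of what flat log output gives you: each line stands alone with a timestamp, and a correlation ID recovers some context, but not which step contained which, or where a step began and ended.

```python
# Hypothetical sketch: flat log lines discard the structure of a check.
from datetime import datetime, timezone

log = []

def log_line(message: str, run_id: str) -> None:
    # A unique identifier ties related lines together, but the
    # hierarchy of steps and their start/end boundaries are gone.
    log.append(f"{datetime.now(timezone.utc).isoformat()} [{run_id}] {message}")

run_id = "run-042"  # hypothetical correlation identifier
log_line("open login page", run_id)
log_line("enter credentials", run_id)
log_line("submit", run_id)

# Each line stands alone; nesting and per-step durations must be
# reverse-engineered from timestamps, if they can be recovered at all.
print("\n".join(log))
```

Reconstructing step durations from timestamp deltas like these is fragile, which is one reason logs are a poor fit for performance measurement.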
BDD and keyword-driven automation are a good effort, but what happens behind the keywords? Maybe nothing; you have to look at the source code to see what they are supposed to do, but that’s not good enough either. Stepping through the code shows what runs for that particular run, but it’s risky due to timing, and what the code does might vary from one check run to the next as the context changes. Personally, I’ve modified keyword implementations in order to make a keyword reusable, but such code changes aren’t reflected in the output from automation at all.
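A toy sketch makes the point (the keyword names, their implementations, and the report format are all invented for illustration): the report records only the keyword names and pass/fail, so anything the implementations actually do, including changes made behind the keywords, never surfaces in the output.

```python
# Hypothetical keyword-driven check: the report shows keyword names
# and pass/fail only; implementation details stay hidden.

def login(user: str) -> None:
    # Whatever happens here (retries, waits, workarounds, or edits
    # made to make the keyword reusable) never reaches the report.
    pass

def place_order(item: str) -> None:
    pass

KEYWORDS = {"Login": login, "PlaceOrder": place_order}

def run_check(script):
    report = []
    for keyword, arg in script:
        KEYWORDS[keyword](arg)          # behavior behind the keyword is opaque
        report.append(f"{keyword} ... OK")  # all the report ever records
    return report

result = run_check([("Login", "alice"), ("PlaceOrder", "widget")])
print(result)
```

Two check runs with different keyword implementations can produce exactly the same report, which is the visibility problem described above.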
Typical automation that drives the SUT drops most of the data on the floor, whether the check passes or fails. As a result, if a check passes, nobody cares about anything more than the “Pass!”
The lost information is a huge lost opportunity, with huge opportunity cost.
If we record that data in a format that works well with automation, we can do powerful things with it, including shipping software faster and at higher quality with happier teams…
But, how? Log statements are the wrong tool. Keywords hide at least some details of what is really going on with the SUT, potentially important ones.
I will address the solution in a future post. If you are curious enough to do some reading on this, the answer is to apply the ubiquitous Hierarchical Steps pattern by implementing it as part of the Atomic Check pattern. Both patterns are part of MetaAutomation, and both are implemented in the GitHub software samples that the MetaAutomation.net web site points out here.
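To give a rough flavor of the Hierarchical Steps idea (this is my own minimal sketch, not the MetaAutomation samples’ actual API): each step records its name, its duration, and its child steps, so the check’s output preserves the full context that flat logs discard.

```python
# Hypothetical sketch of hierarchical step recording:
# every step keeps its name, timing, and nested child steps.
import time
from contextlib import contextmanager

class Step:
    def __init__(self, name: str):
        self.name = name
        self.children = []
        self.millis = 0.0

_stack = [Step("root")]  # root holds the top-level steps of the check

@contextmanager
def step(name: str):
    s = Step(name)
    _stack[-1].children.append(s)  # attach to the current parent step
    _stack.append(s)
    start = time.perf_counter()
    try:
        yield s
    finally:
        s.millis = (time.perf_counter() - start) * 1000
        _stack.pop()

with step("Log in"):
    with step("Open login page"):
        pass
    with step("Submit credentials"):
        pass

root = _stack[0]

def render(s: Step, depth: int = 0) -> None:
    # Print the recorded hierarchy with per-step timings.
    print("  " * depth + f"{s.name} ({s.millis:.1f} ms)")
    for child in s.children:
        render(child, depth + 1)

render(root.children[0])
```

Because every step carries its own start and end, per-step performance measurement comes for free, and the recorded hierarchy is data that downstream automation can consume, rather than text to be parsed.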