7 Key Considerations For Automation From A Developer’s Perspective By Andrés Felipe Gómez
Ever since I started working in the software development industry, circa 2005, I’ve been in a love/hate relationship with QA: it tends to bruise your ego as a developer, but at the same time it guarantees that you don’t have to revisit your code, and it earns you some goodwill regarding your deliverables. Over the years I’ve worked as a full-stack developer on several technologies, including .NET, Java, and PHP-based platforms, as well as mobile. I got used to dealing with QA the way all developers do: we write the code, they test the functionality and get back to us, and we rewrite the code. That was it.
About two years ago, I was given the task of refurbishing my client’s automation framework. At the beginning of the project, automation was done by developers, and so the framework was made for developers. As the program got bigger, CI/CD became more important and automation tasks were handed to testers (some of them with little coding experience), so developers could concentrate on writing features.
Handing over the framework was a problem: automation folks have very different needs and approaches to coding, so they never got comfortable with it. The framework went from being a useful testing tool to being a restriction that teams used as an excuse to avoid automating tests. So, I was brought into the QA team to refactor the framework and make it more “QA-friendly”, so to speak.
I was also in charge of training functional testers to become effective in automation while developing tools to fit their needs. They became my users, many of them reluctant to go into automation, others eager to start automating, and all of them trying to adapt automation to old-fashioned software practices while avoiding new ones. So, after learning from testers’ day-to-day struggles, having had failures and successes in automation processes, and hearing third-party automation framework providers sell us the Eden of automation and fail to deliver, I bring you some considerations I found important. They might come in handy if you are a software developer or architect trying to set up an automation process in your project, or a functional tester starting out in automation.
1. Automating IS programming
Whether you like it or not.
In recent years TDD and BDD have gained an important place in CI/CD. Paired with data-driven approaches, they allow you to use languages like Gherkin, or even plain configuration files, to enable code reusability when designing tests. A test can be designed using previously developed steps, without writing a single line of code.
With such platforms at hand, there’s a huge temptation to design your testing framework so that testers or product owners can create tests out of configuration files with definitions, without learning programming skills. The strategy is so attractive that some frameworks will offer you the ability to define a test’s steps in an MS Excel file (or CSV, JSON, etc.), including waits. Other third-party frameworks will offer to record your manual test, so the framework generates the automated code and you don’t even have to look at it.
What you have to understand is that, by using any of those approaches, you are still programming. You’re just putting your code in a configuration file, or a macro, instead of writing it in a programming language. Keep in mind that eventually your automated test will fail (too often, in fact) and you will have to debug it, because the target application is constantly changing and you need to know whether it has a bug or your test has simply become obsolete. When it fails, you don’t want to be the person who spends several hours trying to debug a configuration file. Instead, you want a debugging platform that lets you stop the execution, assign values on the fly, and inspect states and call stacks.
Of course, there’s a place for BDD and TDD in automation: providing a channel of understanding between product owners, testers, and developers. Data-driven testing is also important, since it lets you run the same test over many data scenarios. But you want to keep most of your automation platform as close to a debugger as possible.
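To make the point concrete, here is a minimal, hypothetical sketch of a config-driven test runner (all names are invented for illustration). The test itself is “just data”, but every step behind it is code, and when the test fails, that code is what you end up debugging:

```python
# Minimal sketch of a config-driven test runner (hypothetical names).
# The "test" is a data structure, but each step is still a function
# that must be written, maintained, and debugged.

STEP_REGISTRY = {}

def step(name):
    """Register a function as a named, reusable test step."""
    def decorator(fn):
        STEP_REGISTRY[name] = fn
        return fn
    return decorator

@step("open_page")
def open_page(ctx, url):
    ctx["current_url"] = url  # stand-in for a real driver.get(url)

@step("assert_url_ends_with")
def assert_url_ends_with(ctx, expected):
    # stand-in for inspecting the real page state
    assert ctx["current_url"].endswith(expected), f"expected {expected}"

def run_test(steps, ctx=None):
    """Execute a test described as data: a list of (step_name, args)."""
    ctx = ctx if ctx is not None else {}
    for name, args in steps:
        STEP_REGISTRY[name](ctx, *args)
    return ctx

# The "configuration file" version of a test:
home_test = [
    ("open_page", ("https://example.com/home",)),
    ("assert_url_ends_with", ("home",)),
]
```

Notice that if `home_test` fails, you still need a debugger over `run_test` and the step functions, not just over the data.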
2. Crappy fast testing vs slow maintainable testing
Any software developer will think I’m nuts for saying it, but in automation testing having a messy solution is a perfectly valid option, even if it’s full of spaghetti code.
In an ideal world, a development team would get together at the beginning of the iteration to plan architectural and functional changes, keeping in mind how the automated test suite will be affected. In the real world, however, testers rarely take part in architectural decisions or are even informed of them. Automation is rarely taken into consideration either, and keeps breaking with every new change.
What often happens is that the developer doesn’t know how the final code will look until it’s ready. Furthermore, such discovery happens pretty much at the end of the iteration, leaving the tester little time to automate. Because of that, some teams leave feature automation for a later iteration, when they know it will be stable enough. It gets worse if you are writing integration tests: Teams will come to you only a couple of days before deployment, demanding you put together some automation because they didn’t do any of it.
Under such scenarios, you want to be capable of writing test cases as fast as you can, regardless of how well written they are. If you get to a point where you can write a test in say, twenty minutes, you will find it cheaper to replace your obsolete test cases with new ones, rather than writing exquisite maintainable tests that might take you longer.
That said, a 20-minute test might also be unrealistic. A seasoned automation tester takes about an hour to write a test, and even then other issues like context injection, testing platform (local, cloud, agents), and the application performance itself can increase the time required to write a test. In some cases, a maintainable and well-developed base framework can probably help support your messy testing and eventually might make it cheaper to write maintainable tests. So you might want to focus on finding the right balance between maintainability and speed. That process will happen naturally, so long as you keep the following in mind:
- Your client is paying to have a high-quality application, not a high-quality test suite.
- Your primary goal should be testing the target application on time for deployment, and making the results visible to management.
3. DO NOT model the world
A very common standard in UI automation is the use of Page Object Models. Having a POM architecture not only allows you to keep a conceptually centralized directory of controls that accurately reflects your target application, it also allows you to set reusable business rules that will come in handy on future tests.
You should definitely attempt to model your target app with POM, or any other model, whenever possible. Even a simple control index sheet will do if you think you don’t need to debug how controls are being accessed (you will probably need to debug it, though). What you should not do, is spend too much time modeling all the controls in your app. The same applies to other types of testing, like unit tests, in which the tester spends too much time mocking all possible responses from a service.
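As a sketch of the POM idea, here is a deliberately small page object (names and the driver interface are hypothetical; a real suite would wrap something like Selenium’s WebDriver). Only the controls the tests actually use are modeled:

```python
# Minimal Page Object Model sketch (hypothetical locators and driver).
# Locators live in one place, so a UI change touches one class only.

class LoginPage:
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#login-btn"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        """Reusable business action built on top of the locators."""
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


class FakeDriver:
    """Stand-in driver so the sketch runs without a browser."""
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator))
```

The point is the shape, not the coverage: model `LoginPage` when a test needs to log in, and stop there.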
One big mistake inexperienced architects make is to think they can foresee every possible scenario in which their framework or model will be used. Two outcomes can come of it, and both are bad for you. In the best case, you’ll waste precious automation time modeling things nobody is going to use, instead of writing actual test cases. The worst case, and a very common one I’m afraid, is that you get cocky about your architecture, and it ends up conditioning the way you, or other testers, use your code, to the point that it blocks further testing. I’ve seen testers refactor their whole suite just because a new requirement didn’t fit their beloved architecture. Needless to say, they failed to deliver on time.
There’s no place for narcissism in test architectures. Stick to the controls you know you are going to use and the models you really need. That way, if you build a model that doesn’t generalize well, at least you spent little time on it, and you can spend the remaining time modeling the new requirement that didn’t fit your first model as a separate one.
4. Strive for single responsibility
You should try to comply with all the SOLID principles when writing any software, at least if you want to keep it maintainable. But if you have to stick with just one in your automation suite, pick Single Responsibility. Another common mistake of inexperienced architects is to have functions cover several requirements at once. It is rather common to find test suites, especially UI test suites, where a single function navigates to a state, changes the page somehow, adds a log, and performs assertions, all in one package. This is most evident when working with BDD, since adding many Then statements to your user story takes only one line of code each (a well-known BDD anti-pattern). Similarly, when writing unit and API tests, there’s a natural tendency to pile up many validations at once.
The main problem with multiple-responsibility functions, in terms of automation, is that they hurt your ability to perform exploratory testing. When you assign too much responsibility to a single method, the method will only work when many business conditions are met. Hence, if you want to try a business rule under different conditions, you’ll have to write yet another method, almost always duplicating code. It also hurts performance, which becomes critical when running your tests in the cloud.
Instead of trying to model the world, assume that you will never know what your function will be used for. You’ll find that making your functions as simple as possible makes more sense, and it will open a whole new world of possibilities to enrich your test cases. Split your method into several, and then add a wrapper method so you don’t have to copy five lines every time you want to try a business rule. Just make sure that the wrapper method has a single responsibility: to wrap.
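The split-plus-wrapper idea can be sketched as follows (a hypothetical checkout flow, with invented names). Each small function does one thing, so exploratory tests can recombine them freely, while the wrapper covers the common path:

```python
# Hypothetical sketch: one multi-responsibility step split into small,
# single-purpose functions, plus a thin wrapper whose only job is to
# chain them for the common case.

def navigate_to_cart(ctx):
    """Navigation only: bring the app to the cart state."""
    ctx["page"] = "cart"

def apply_discount(ctx, code):
    """Action only: apply a discount code."""
    ctx["discount"] = code

def log_state(ctx, log):
    """Logging only: snapshot the current context."""
    log.append(dict(ctx))

def assert_discount(ctx, expected):
    """Assertion only: verify the business rule."""
    assert ctx.get("discount") == expected, f"expected {expected}"

def checkout_with_discount(ctx, code, log):
    """Wrapper with a single responsibility: to wrap the common path."""
    navigate_to_cart(ctx)
    apply_discount(ctx, code)
    log_state(ctx, log)
    assert_discount(ctx, code)
```

An exploratory test can now call `apply_discount` twice, or skip `log_state`, without writing a new monolithic method.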
5. Stay in touch with your developer
If you’ve ever had automation frameworks sold to you, you probably noticed that the vendors always use very simple examples to show off the product (e.g., navigating to Google, performing a search, and validating some simple rule). There’s a reason for it: they want to show that automating with their framework is super easy, full of unicorns and rainbows.
Your average application is rarely that easy to automate. To get to the page you want to test, your test has to log into the system, navigate through other pages, and add items like photos, products, locations, or any other requirement for the test’s initial state, and only then can you start testing. It gets worse if your application is only one part of a bigger program, because then you have external dependencies to deal with, and you’ll be limited to integration tests.
One alternative to integrating your test with the rest of the program is to add context injection to your test suite. For unit tests, context injection usually means mocking calls to external libraries. For UI and API tests, it can range from inserting data into your database or generating certain cookies, to having secure back-doors in the selected environment to make sure remote calls return the expected states. Context injection will make your tests independent, but it requires deep knowledge of the application, and nobody knows better where to inject context than the developer.
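For the unit-test flavor of context injection, a minimal sketch with Python’s standard `unittest.mock` looks like this (the service and gateway names are invented for illustration). The external payment call is replaced with a mock, so the test runs without the rest of the program:

```python
# Hypothetical sketch of context injection in a unit test: the external
# payment gateway is injected, so a mock can stand in for it.

from unittest.mock import MagicMock

class OrderService:
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway  # injected dependency

    def place_order(self, amount):
        receipt = self.payment_gateway.charge(amount)
        return {"status": "placed", "receipt": receipt}

def test_place_order_without_real_gateway():
    gateway = MagicMock()
    gateway.charge.return_value = "receipt-123"  # injected context
    result = OrderService(gateway).place_order(42)
    assert result == {"status": "placed", "receipt": "receipt-123"}
    gateway.charge.assert_called_once_with(42)
```

The developer is the one who knows that `payment_gateway` is the seam worth injecting; that is exactly the knowledge this section is about.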
Good developers will not only help you find a way to inject context for your test or mock your dependencies. They will help you model your testing architecture in a way that gets more maintainable over time. They can advise you in terms of performance and good practices. Also, since they are the ones changing the application, making them aware of the likelihood of breaking your context injection will prevent you from spending several days figuring out why your tests suddenly stopped working.
6. Comment as much as you can
One of my fears as a developer has always been falling victim to my success. When you take care of a certain system that no one else understands, and you do it too well, you get stuck with it. Hence, you won’t be considered for that new exciting project, mainly because no one else can maintain your current system better than you.
As a tester, you get transferred more often, but you have it worse. Management could not care less about your test suite, and they will transfer you in the blink of an eye if they feel another project demands better quality assurance. Thus, you won’t get promoted to the new exciting project either, but moved to that old crucial application, full of bugs and technical debt. To make matters worse, your replacement is likely to be some junior developer who will stare at your test suite as if it were written in ancient Sanskrit. Of course, you will be expected to support your replacement while you practically start a new suite from scratch (or worse, while you try to decipher the messy, outdated test suite the last tester left behind).
So, if you don’t want to spend most of your time trying to explain your suite to your replacement, the best advice I can give you is to be very generous when commenting your code. Try to make it so self-explanatory that your replacement practically doesn’t need you there, or at least make it verbose enough to help you quickly remember what it does, so you can provide quick support to your replacement and go back to work.
7. Go into white-box as soon as possible
UI automation is the natural entrance to test automation in general, mostly because it is a black-box approach. That means you don’t need deep knowledge of the application to test it and you can concentrate on testing business rules instead. However, there are two problems with UI automation.
UI automation takes too long to write and too long to run
Most automated UI tests rely on waits to ensure the test won’t break on a slow agent or a slow network, so they can take minutes, even hours, to run. A test suite that takes several hours makes a poor candidate for CI/CD. Furthermore, if most of your testing is done at the UI level, you might be falling into the Ice Cream Cone anti-pattern (see: Test Automation & The Ice-Cream Cone Anti-Pattern).
In contrast, unit testing takes milliseconds to run, requires less intrusive context injection, and can be effectively integrated into CI/CD processes. Moving automation down to the unit level has become a major trend in the automation community (see: Shifting left your UI tests, by Arjan Blok).
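The waits mentioned above usually follow the explicit-wait pattern: poll a condition instead of sleeping for a fixed worst-case time. A minimal, framework-free sketch (the helper name is invented; Selenium’s `WebDriverWait` plays this role in real UI suites):

```python
# Sketch of the explicit-wait pattern UI suites rely on: poll a
# condition until it holds or a timeout expires. This is why a UI
# test's runtime is bounded by real application latency, while a
# unit test returns in milliseconds.

import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns a truthy value, else raise."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")
```

Every such wait in a suite adds up to minutes or hours; a unit test has no condition to wait for in the first place.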
UI automation might be dying
I hate to break it to you, but you might be late to the automation party. Cloud computing has enabled staged deployments, which in a way makes production the new QA environment: companies deploy their application in small, controlled zones and wait for incidents to be reported before rolling out to critical zones. That will kill manual testing, and it will most likely kill a big chunk of UI automation too (see: James Whittaker: Google, Microsoft, Future of testing).
Meanwhile, unit and component testing are being sold to developers as the new essential good practice, so I would argue they are expected to take over most automation. That in itself can be an opportunity, given that developers tend to think only in happy paths and are therefore limited to regression testing. A good unit tester can fill that skills gap by adding negative and exploratory testing to the test suite, spotting errors earlier and more cheaply.
I think a good way to start is to keep close contact with your developer. Let the developer do the first round of unit tests for a new feature, and have them show you how they built some of the mocks, along with other important details. From then on, you can pretty much copy/paste your way into unit testing to increase the business coverage of the code. Repeat that process until you get acquainted with the target application.
Test automation is a different kind of monster from other software development endeavors. It is still a programming skill, something you cannot run from, but it takes a different approach to what defines a good test suite, in comparison to a good software application.
When developing a test suite, keep in mind that your primary goal is to test the target application, not to have a pristine test suite, so the level of maintainability should be measured in terms of how fast and cost-effective your tests can be written. Therefore, overdoing modeling can become an anti-pattern and eventually block you from testing. My advice is to stick to what you need to complete your tests while keeping single responsibility when writing methods.
Also, developers can be a source of good practices, context injection tips, and information about upcoming changes that may impact your suite. They can be a good gateway into white-box testing as well.
That said, you should seriously consider going into white-box testing. Contrary to UI testing, white-box testing is more suitable for CI/CD and appears to have a future in software development that UI testing might not have.
Good luck with your journey through automation, and I hope these considerations are useful for you.