It is pretty common these days to use Selenium together with NUnit or another unit testing framework to create an automated regression solution. If you are not familiar with this approach, it is essentially centered around automating/simulating a tester using your application and confirming, case by case, through defined steps and assertions, that a particular piece of functionality works as expected. If you have ever used this approach, you may have noticed that it is fairly simple to start with. While the beginning is simple, after some time you may reach hundreds or thousands of tests. If the growth of your test cases is not approached very carefully, you may end up in the same situation one of my projects did.
Let me sketch the scenario and approach which unfortunately led us to our EFM point – the Edge of Feasible Maintainability. The automation team for our solution consisted of six testers and two supporting developers. We knew we couldn’t just start coding, and some of us were aware that many large automation endeavours end up way past the EFM point without even realizing it, which is just as bad as it sounds. To shield us from such a scenario, my fellow developers and their team of testers prepared a plan in which they:
Focused on utilizing the Page Object pattern (you can read more about it in Anton Angelov’s great article here).
Agreed on peer reviews and a weekly analysis of the progress and state of the solution.
Despite their preparations, after some time reality set in: code had been written and hundreds of classes created, all obviously trying to utilize some common features and the SOLID principles. Unfortunately, at a certain point it became clear that the testers were getting confused and losing grasp of the big picture. The solution was growing, and so was its complexity, at the cost of maintainability. Luckily, we realized in time that we were headed straight for the EFM point. If our project was a large enterprise ship, EFM was our iceberg. Thanks to the quick feedback from the supporting devs and the testers themselves, we realized what was happening and scheduled an evasive maneuver meeting (well, more than one, really). My goal was to help find the root causes of our situation and rectify it with a clear and simple solution guaranteeing maintainability for years to come.
So let me break down my thought process there:
In our automation solution I noticed a lot of “attempts” at SOLID coding. We had to face the facts at this point – writing SOLID code is simple and complicated at the same time, and it may take a few development-focused years of experience to fully grasp what it is all about. Not all of our testers possessed this deep understanding of SOLID foundations, so attempts at reusability were flawed by mistaken assumptions. Clearly, reviews didn’t pick up on this. As it turned out, while it is easy to confirm that a particular reviewed piece of code looks good in its immediate context, it is much more difficult to assess whether it fits into the overall code base without being familiar with most of that code base.
So at this point it became clear that meaningful, solid, reusable code could not be achieved with the sole help of reviewers. There was a clear need for some kind of architectural pattern which would enforce the right solutions to common problems. The goals of this pattern were to:
Achieve a standardized mindset, so that all of our testers can understand each other’s code and reuse it wisely.
Show a clear way of dealing with common problems.
I needed an architecture in which some well-known and accepted testing pattern would be enforced. I also had to allow enough flexibility so that our complex enterprise solution could actually be automated within this type of restrictive architecture. A sweet spot between a set of restrictions and flexibility had to be found. I remembered seeing some good test case examples based on the triple-A principle, which says that each test should be composed of three stages: Arrange -> Act -> Assert. This is the kind of clear separation I like in software, so I started with this assumption in my head. I could also recall using a pretty cool pattern called “fluent API” which could potentially be helpful here. I decided to enforce the triple-A principle with the help of the fluent API pattern. Once I had reviewed my assumptions, I was able to create a working prototype with the help of anonymous methods and some C# generics magic. Here is a basic example of what I managed to create:
[Some attributes helping inject test data from XML]
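To give an idea of the shape of such a prototype, here is a minimal sketch assuming a simple fluent builder driven by anonymous methods; `TestFlow` and its members are illustrative names I have made up, not our actual implementation:

```csharp
using System;
using System.Collections.Generic;

// A tiny fluent frame that enforces the Arrange -> Act -> Assert order:
// steps can be chained in any order, but Run() always executes the
// stages in the fixed triple-A sequence.
public class TestFlow
{
    private readonly List<Action> _arrange = new List<Action>();
    private readonly List<Action> _act = new List<Action>();
    private readonly List<Action> _assert = new List<Action>();

    public TestFlow Arrange(Action step) { _arrange.Add(step); return this; }
    public TestFlow Act(Action step)     { _act.Add(step);     return this; }
    public TestFlow Assert(Action check) { _assert.Add(check); return this; }

    public void Run()
    {
        foreach (var step in _arrange) step();
        foreach (var step in _act) step();
        foreach (var check in _assert) check();
    }
}
```

A test case would then read as a single chained expression, e.g. `new TestFlow().Arrange(() => …).Act(() => …).Assert(() => …).Run();`.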
While the triple-A approach was a good foundation, at this point we only had a frame for it. More had to be done to achieve readability and to ease the analysis of each test case in future code reviews. I am sure you have noticed that each line in the “Arrange” phase uses .Do. actions and the “Assert” phase uses .Assert. methods. In our initial approach we had utilized a class-per-page approach, in which there was a class with properties for every DOM element on the page, plus methods to act on them and assert some assumptions. For complex pages this effectively yielded classes with hundreds of lines of code, hard both to maintain and to understand. To fix this potential mess I came up with a section-based approach to minimize the amount of code per class. I also created the concepts of the ActionController and the AssertController. There are a few guidelines that had to be followed here:
A page only consists of properties defining its sections.
Each section contains a property defining its Action and Assertion controller and obviously a property per DOM element it contains.
Action controllers only consist of Arrange/Act methods – any clicking, selecting or submitting methods.
Assertion controllers only validate content, as per test case requirements.
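A minimal sketch of these guidelines, with hypothetical names (CheckoutPage, AddressSection and so on) standing in for real pages and sections, and the WebDriver plumbing stubbed out:

```csharp
// Action controller: Arrange/Act methods only (clicking, typing, submitting).
public class AddressSectionActions
{
    public void FillAddressFields() { /* type into the address inputs via WebDriver */ }
    public void SubmitAddress()     { /* click the submit button */ }
}

// Assertion controller: validation only, per test case requirements.
public class AddressSectionAsserts
{
    // Stub: would read a confirmation element on the real page.
    public bool AddressSaved() { return true; }
}

// A section exposes its two controllers plus a property per DOM element it contains.
public class AddressSection
{
    public AddressSectionActions Do { get; } = new AddressSectionActions();
    public AddressSectionAsserts Assert { get; } = new AddressSectionAsserts();
    // IWebElement properties for each DOM element would live here.
}

// A page only consists of properties defining its sections.
public class CheckoutPage
{
    public AddressSection Address { get; } = new AddressSection();
}
```

This is what makes the fluent test lines read as `…Address.Do.FillAddressFields()` and `…Address.Assert.AddressSaved()`.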
What is worth noticing is that for this architecture to flourish, it is of crucial importance that pages are properly separated into sections – if sections are correctly based on their capabilities, the architecture will force code to be separated correctly, and the triple-A fluent API will stay readable and expressive. We decided on section separation at weekly meetings, instead of allowing a single person to define it, to avoid potential havoc.
Moving on: at this point we had hidden the details of each action and assertion in well-named methods. After all, we are not interested in the details unless we need to change something in them, so it is best to hide them in lower levels, yet keep each just a click away. The most important thing in maintainability is simplicity, which I think we had achieved at this point.
Obviously, the “Don’t repeat yourself” (DRY) principle had to be addressed. It is clear that we will have tests that do pretty much the same things with some alterations. My solution is the Action-Pack concept. Basically, action packs are just arrays of delegate-based classes. I added a T4 template, invoked automatically, which generates partial classes expanding our action controllers. The template uses reflection to add extra properties to the ActionControllers, one per controller method, each suffixed with the word (…)Action. So if an action controller contains a FillAddressFields() method, it now also gets a FillAddressFieldsAction property, which we can use to define reusable action packs like these:
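Since I cannot reproduce our generated code here, the following is a hand-written sketch of what the T4 template would emit and how a pack might be composed from it; TestAction, AddressActions and CompleteAddressForm are hypothetical names for illustration:

```csharp
using System;

// A delegate-based wrapper: an action pack is just an array of these.
public class TestAction
{
    public string Name { get; }
    public Action Invoke { get; }
    public TestAction(string name, Action invoke) { Name = name; Invoke = invoke; }
}

public partial class AddressActions
{
    public void FillAddressFields() { /* type into the address inputs */ }
    public void SubmitAddress()     { /* click the submit button */ }
}

// What the T4-generated partial class would add: one (…)Action property per method.
public partial class AddressActions
{
    public TestAction FillAddressFieldsAction =>
        new TestAction(nameof(FillAddressFields), FillAddressFields);
    public TestAction SubmitAddressAction =>
        new TestAction(nameof(SubmitAddress), SubmitAddress);
}

public static class ActionPacks
{
    // A reusable, descriptively named sequence of repeating steps.
    public static TestAction[] CompleteAddressForm(AddressActions address) => new[]
    {
        address.FillAddressFieldsAction,
        address.SubmitAddressAction,
    };
}
```

A test can then replay the pack by iterating it and calling `Invoke()` on each element, instead of repeating the individual steps.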
Once defined, we could use this action pack instead of listing each set of repeating steps, thus making our test cases smaller while still maintaining the expressiveness which was one of the main goals of this architecture. You may have noticed I am using Page.Get… here. Get is a locator method that uses a dependency injector to resolve a particular page and its dependencies, so we can access a section of this page and, by extension, each section’s action definitions.
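A toy sketch of such a Get locator, using a hand-rolled dictionary registry in place of the real dependency injection container we used; OrderPage and the registration API are assumptions made for the example:

```csharp
using System;
using System.Collections.Generic;

public static class Page
{
    // Stand-in for a DI container: type -> factory.
    private static readonly Dictionary<Type, Func<object>> Factories =
        new Dictionary<Type, Func<object>>();

    public static void Register<TPage>(Func<TPage> factory) where TPage : class
    {
        Factories[typeof(TPage)] = factory;
    }

    // Resolves the requested page (and, via its factory, its dependencies),
    // so a test can reach page.SomeSection and its action definitions.
    public static TPage Get<TPage>() where TPage : class
    {
        return (TPage)Factories[typeof(TPage)]();
    }
}

// A hypothetical page used for illustration.
public class OrderPage { }
```

In a test this reads as `Page.Get<OrderPage>().SomeSection.Do.…`, keeping page construction out of the test cases themselves.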
So let’s list a few advantages of this approach:
Actual test case files are written and composed in a very expressive way; it really doesn’t require dev knowledge to understand what each test is doing, so mistakes in assumptions are easy to notice and could possibly even be analyzed by business people if extracted to a PDF or Word format.
Logic for each test is separated into sections, so it is difficult to create spaghetti methods.
Reusability is provided via action packs: if we repeat a dozen steps in a few tests, we just need to create an action pack with a descriptive name and use it in each test case.
It has now been months since we introduced this approach, and while some things had to be adjusted, the main concept was well received by our testers and is serving its purpose – keeping our solution maintainable and expressive even after we passed the thousand-test-case point! The next step will be adding automated documentation building, also based on T4, so that we can have living documentation divided into sections. The documentation will be generated mostly from the attributes and XML comments decorating test cases. You will be able to read more about how this can be achieved in my next article.
Do not hesitate to leave your comments and share any thoughts you may have on this subject.
Software developer at Goyello. Problem solver. The more complicated the problem is, the more motivated he gets. Whether it’s designing, improving processes, architecture or coding, he will be the first one to jump right in.