Consultancy

Case Study: Standardizing Testing at PostNL

PostNL, a leading postal and parcel service provider, faced challenges in standardizing testing practices across multiple teams. As the SDET/QA Lead, my goal was to streamline testing methodologies and ensure consistency while catering to diverse team needs. When I started at PostNL, teams were testing with a wide variety of tools: anything from Cypress to SpecFlow with C#, all the way to legacy testing tools I had personally never heard of.


The Goal

The goal was to get everyone working the same way. The more people work the same way, the easier knowledge sharing becomes. This approach also creates an environment where one team solves a problem and can make the solution available to everyone else.

The process was slow and plenty of people were against the idea of changing tools, so the task was a monumental undertaking. I needed to gather information on a huge array of tools and decide which would be best for PostNL. With so many new and emerging tools, I think it is very important to stick to something with a proven track record: new testers should be able to search for solutions and find answers reasonably easily. Tool selection was therefore not based purely on my own skill set and expertise, but on how fast new and junior testers could be trained and brought up to speed with automation.

The solution I settled on was Java with Maven + Cucumber (sorry, Playwright users: a year and a bit ago, Playwright was still very new, and problem-solving with it seemed far more tedious than with Selenium, which has been around for a very long time). The idea of bringing a new language into a C#-based company was met with some skepticism. Nevertheless, I pushed on and got to work creating the framework.

The design philosophy

The design philosophy was rather simple: build a framework that many teams can use without interfering with each other. Each team needs its own pipeline and way of testing, but must still conform to a standard. Teams should not worry about the Selenium, Appium, or RestAssured implementation; they should only focus on writing tests. Teams should also not worry about dependency management. A great example: if Team A never updates its dependencies for months, security issues from outdated libraries and compatibility conflicts between libraries start popping up, and pipelines start spitting out warnings and security findings.
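To illustrate what that looks like from a team's perspective, here is a minimal sketch of a Cucumber step definition that leans on such a wrapper API. The class and method names (`WebCore`, `WebActions`) are placeholders invented for this article, not the framework's real internals.

```java
// A hypothetical team-facing step definition. All Selenium details
// (waits, logging, reporting) live inside the framework's wrappers;
// the team only writes steps.
package com.example.steps;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import org.openqa.selenium.By;

public class LoginSteps {

    // 'WebCore' and 'WebActions' are placeholder names for this sketch.
    private final WebActions web = WebCore.actions();

    @Given("the user opens the login page")
    public void openLoginPage() {
        web.open("https://example.test/login");
    }

    @When("the user logs in as {string}")
    public void logsInAs(String username) {
        web.enterText(By.id("username"), username);
        web.click(By.id("login-button"));
    }

    @Then("the dashboard is shown")
    public void dashboardIsShown() {
        web.assertVisible(By.id("dashboard"));
    }
}
```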

The solution

Going forward, I will be referencing different cores. A core is a dependency I created.
The cores are as follows:


  • ParentPomManager

  • Web core

  • Mobile core

  • Utility core

  • API core

  • UI testing client (the cores get added into this client)

Maven has an amazing and extensive set of features that I could leverage.

  1. With Maven, I could create my own dependencies, and among them a parent POM. (Not to say that C# with NuGet, or Gradle, was not an option.) This allowed me to pull ALL dependencies, such as Selenium, Appium, RestAssured, logging, recording, and reporting libraries to name a few, out of the team projects and into the ParentPomManager core. Each project then has only the ParentPomManager as a dependency. By doing this, I removed the dependency problem from all teams: we can now install Renovate on the ParentPom project, and it keeps all dependencies updated automatically, so teams do not need to worry about anything in that regard. (A minimal sketch of such a parent POM follows this list.)

  2. Now that the dependencies are handled for all teams, how am I going to manage the full web implementation, hide the code from the teams, and only allow them to interact with it? The solution was a Web core dependency. How does this work? I created a full project running on Selenium and wrapped Selenium's click, wait, and enter-text actions (keeping it simple for this article) in my own wrappers, adding reporting, asserts, and everything needed to log all errors, actions, and stats. The Web core also manages the browsers, starting and stopping them based on the config class in the UI testing client. I then packaged it up as a dependency and uploaded it to GitLab for teams to use as a library. (A rough sketch of one such wrapper appears a little further below.)
    With this process, I can easily add new actions such as double click, drag and drop, or file upload to the Web core. I simply build the Web core, and all teams get notified to update their dependency version, along with a changelog describing new and improved features or removals.

  3. Now that web UI testing was handled, I did the same for mobile testing. The same concept applied 100%, excluding a few minor changes due to how mobile testing works. I also added a BrowserStack profile with the help of a colleague, which allows us to switch easily between local runs and BrowserStack runs. All of this configuration happens per team in the UI testing client.

  4. Now that the ParentPom, Web core, and Mobile core have been created and uploaded to GitLab, it is time for the Utility core. This core is exactly what the name implies: anything that could help the end user write tests goes into the Utility core and is pushed to all teams. It contains things such as Appium server managers, barcode scanners/creators, JSON object mappers/writers, OTP helpers for Google and Amazon, as well as date-time pickers, class readers, video recorders, test timers, and a few other neat little classes that make testing simpler for all teams. (Side note: when I see that a tester has created something really unique and useful, I usually grant them access to the Utility core so their implementation can be abstracted into it for other teams to use.) The result is a huge helper library that all teams can use comfortably without spending time creating these functions themselves.

  5. For those keeping track, there are now a total of four dependencies. These four dependencies are added to the UI testing client. The client is the end user's interface/dashboard/workspace for the entire framework I have built, and it works exactly as you would expect: it is where the feature files, JSON files, mobile app files, step definitions, page objects, and everything else unique to a team and the features they are working on are stored.
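To make step 1 concrete, here is a minimal sketch of what such a parent POM could look like. The coordinates and versions are illustrative, not PostNL's actual ones; the point is that Renovate only has to watch this one project to keep every team current.

```xml
<!-- Hypothetical ParentPomManager: every shared library is pinned here,
     so team projects inherit versions instead of declaring them. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example.testing</groupId>
    <artifactId>parent-pom-manager</artifactId>
    <version>1.4.0</version>
    <packaging>pom</packaging>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.seleniumhq.selenium</groupId>
                <artifactId>selenium-java</artifactId>
                <version>4.21.0</version>
            </dependency>
            <dependency>
                <groupId>io.rest-assured</groupId>
                <artifactId>rest-assured</artifactId>
                <version>5.4.0</version>
            </dependency>
            <!-- Appium, logging, reporting, recording, etc. pinned the same way -->
        </dependencies>
    </dependencyManagement>
</project>
```

A team project then declares this as its `<parent>` and picks up every version automatically.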

Before continuing, I need to point out a few things. Not all teams want to do mobile testing, and not all teams want to do web testing. So how do we fix that problem? The answer lies in the design philosophy and the solution. Because I abstracted away the web and mobile implementations, and thanks to the way object-oriented programming works, I was able to write fully independent pieces of code that can live on their own: no hard-coded references or class creations. The magic happens in the BeforeAndAfter step classes. The UI testing client comes pre-packaged with both mobile and web BeforeAndAfter steps; it is as easy as deleting the class you do not need, along with the corresponding dependency in Maven. That is the beauty of the implementation: you choose what you would like to subscribe to, and you choose how to do your own testing.
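As a sketch of that subscription model (again with invented names), the web BeforeAndAfter class shipped with the UI testing client could look roughly like this. A team that does no web testing simply deletes the class and the Web core dependency:

```java
// Hypothetical web hooks shipped with the UI testing client. Deleting
// this class (plus the Web core dependency in the POM) opts a team out
// of web testing without touching anything else.
package com.example.hooks;

import io.cucumber.java.After;
import io.cucumber.java.Before;

public class WebBeforeAndAfterSteps {

    @Before("@web")
    public void startBrowser() {
        // The Web core reads the team's config class and starts the browser.
        WebCore.startBrowser();
    }

    @After("@web")
    public void stopBrowser() {
        WebCore.stopBrowser();
    }
}
```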

In short, the entire web, mobile, and API testing implementation is packaged up into libraries that are installed based on each team's needs and wants.
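To make the wrapping described in step 2 concrete, here is a rough sketch of what one wrapped action inside such a Web core could look like, assuming Selenium 4. The `Report` class stands in for whatever reporting library the real core uses; it is an assumption, not the actual implementation.

```java
// Hypothetical Web core wrapper: Selenium is hidden behind a small API
// that also waits, asserts, and reports on every action.
package com.example.webcore;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

public class WebActions {

    private final WebDriver driver;
    private final WebDriverWait wait;

    public WebActions(WebDriver driver) {
        this.driver = driver;
        this.wait = new WebDriverWait(driver, Duration.ofSeconds(10));
    }

    /** Click with a built-in explicit wait, logged to the report. */
    public void click(By locator) {
        wait.until(ExpectedConditions.elementToBeClickable(locator)).click();
        Report.step("Clicked " + locator); // hypothetical reporting hook
    }

    /** Clear a field and type into it, logged to the report. */
    public void enterText(By locator, String text) {
        WebElement element =
                wait.until(ExpectedConditions.visibilityOfElementLocated(locator));
        element.clear();
        element.sendKeys(text);
        Report.step("Typed '" + text + "' into " + locator);
    }
}
```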

The Impact & Challenges

Now I think it’s time to look at the impact and challenges. The framework is great and I have received huge praise and amazing feedback from many testers, but I have also had a fair amount of pushback from others. So I think it’s fitting to start with the challenges.
The biggest challenge faced so far is the skill gap. I took Cypress, C# frameworks, and JavaScript-based frameworks away from teams and gave them a framework and a language they were not familiar with. That alone is an issue that needs to be overcome. “Well okay, let’s give training. Problem solved?” Not quite. The next hurdle was convincing teams to take automation seriously. At this point, teams had been working a certain way for a long time, and some teams had ZERO automation?! How on earth was I going to get teams to start automating when they had nothing and their testers were not experienced in programming?

This is where the impact comes in. Rolling out this framework has created a willingness and an eagerness to learn. Manual testers want to learn how to code and how to write tests, so that they can spend less time manually clicking through screens and more time on things that have a bigger long-term impact. And it is not only from the testing perspective: teams are starting to realize that they can release faster. Regression testing takes 10-30 minutes as opposed to hours of manually testing everything. Costs are being reduced and everything is more efficient.

Adoption

As mentioned before, adoption was difficult and slow. But as adoption improves and testers start to create automated regression packs, releases get better, faster, and more cost-effective.

Challenges

Developing the framework did not come without challenges. Development took 6-9 months. Most people reading this will see that 6-9 months is a very long development cycle for a testing framework, and you are right: the cycle was painful and long, and it was just me. The idea of splitting the framework into cores started around four years ago, in 2020, when I was working for a consultancy. The massive issue I saw with consultancies was that they gave away IP, intellectual property, to clients. What do I mean by that? Although most of the testing tools I like to use are individually open source, the way you package them and make them available is not; it is IP because it is uniquely built, special, or otherwise hard to create. A contract is signed, and the tester and the testing framework are sent to the client to start testing. Six months to a year goes by and the client wants to end the testing contract. The tester leaves the framework and all artifacts with the client. If the client is wise (most are), they can hire a junior tester at much cheaper rates and have that tester continue on the IP that arrived with the consultancy's tester.

Now, what if I could build a framework that can still be used, but where access to new versions is revoked? That sounds a lot better, right? The consultancy locks away the IP of the framework, while the client keeps the framework and all artifacts once the contract ends. The pipeline will still run and the tests will still execute, but the framework will get outdated, and fast. So how does the client continue writing tests? Short term, they can still write test cases using the version they received with the consultancy tester, but very soon issues will appear, versions will go stale, and the client will need to hire a consultancy to update, manage, or ultimately rewrite all tests.


That is where my idea started. On my own time, I wanted to try to solve this problem, but before I could present my project to my old employer, the consultancy, I left, and the project sat in my personal repo for two years.

When I started at PostNL, although the teams were not clients, I could still treat them like clients: seven "clients", each with its own UI testing client and artifacts, while the Selenium and Appium implementation stays hidden. I pulled up my old project from a few years back and got to work. There were a bunch of issues to iron out, such as hosting the dependencies privately on GitLab, rolling out updates via Slack messages, and ensuring everything worked the way I intended.

Integration

One of the things I bet a lot of people are thinking about is integration. (If you weren't thinking about it, you are now!) Integration was the cornerstone of this framework's design philosophy: I needed a framework that integrates into the current workflow of all teams. Luckily, I had a few DevOps engineers at my disposal, and together we wrote a gitlab.yml file that integrates perfectly into the current pipeline as a build step. Maven is fully configured to run goals from the pipeline that switch .xml files, browsers, users, environments, and so on. Because GitLab is so configurable, I also managed to get SonarQube and Datadog fully integrated.
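As a rough sketch of that build step, the gitlab.yml entry could look something like the following. The job name, image, profile, and properties are illustrative assumptions, not PostNL's actual pipeline configuration.

```yaml
# Hypothetical GitLab CI job: the tests run as one build step, and Maven
# profiles/properties switch suite .xml files, browsers, users, and
# environments from the pipeline.
ui-tests:
  stage: test
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn clean verify -Pchrome -Dsuite=regression.xml -Denv=acceptance
  artifacts:
    when: always
    paths:
      - target/reports/
```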

Upcoming features

Currently, I am working on a performance testing core and a contract testing core. These cores will install just like the other cores and will allow teams to easily do performance testing as well as contract testing. The contract testing core is pretty much done: it uses Swagger + RestAssured to run contract tests, which are also added as a build step in the release pipelines of all teams.
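The article does not show the core's internals, but one common way to wire Swagger/OpenAPI and RestAssured together is Atlassian's swagger-request-validator, used as a RestAssured filter. The endpoint and spec path below are invented for illustration; this is a sketch of the technique, not the actual core.

```java
// Sketch of a contract test: RestAssured fires the request, and the
// OpenApiValidationFilter fails the test if the request or response
// does not match the OpenAPI (Swagger) specification.
package com.example.contract;

import com.atlassian.oai.validator.restassured.OpenApiValidationFilter;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;

public class ParcelContractTest {

    private final OpenApiValidationFilter validationFilter =
            new OpenApiValidationFilter("specs/parcel-api.yaml");

    @Test
    void getParcelMatchesContract() {
        given()
            .filter(validationFilter) // validates against the spec
        .when()
            .get("https://api.example.test/parcels/123")
        .then()
            .statusCode(200);
    }
}
```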

The performance testing core will simply be a package that can execute JMeter files. JMeter is still the preferred open-source performance testing tool, and I doubt this will change in the future. The performance core will be able to read the JMeter .jmx files and pull data from them for the pipeline.
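A .jmx file is plain XML, so pulling data out of it can be as simple as a DOM parse. This sketch, with an invented file name and extracted fields, prints each thread group's name; the actual core may work differently.

```java
// Sketch of reading a JMeter .jmx test plan. JMeter stores elements such
// as ThreadGroup as XML elements carrying a 'testname' attribute.
package com.example.perf;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

import javax.xml.parsers.DocumentBuilderFactory;
import java.io.File;

public class JmxReader {

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("load-test.jmx"));

        NodeList threadGroups = doc.getElementsByTagName("ThreadGroup");
        for (int i = 0; i < threadGroups.getLength(); i++) {
            Element group = (Element) threadGroups.item(i);
            System.out.println("Thread group: " + group.getAttribute("testname"));
        }
    }
}
```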

Conclusion

I’d like to thank you all for taking the time to read this case study. I do believe there is plenty of room for improvement, and my way is only one of many ways to achieve the same results; in programming, there are many roads to the same destination. I hope this gives you an idea of how to create a framework that serves many different teams and their needs.


About The Author

“My name is Garreth. I am the SDET/QA lead for the PostNL consumer arts platform team. I was tasked with standardizing the way teams in the consumer arts do testing. My role shifted from tester to developer to mentor. As of the time of writing, I am the Software Developer in Test looking into standardizing contract and performance testing,” says Garreth Dean, TA Engineer at spriteCloud.


Ready to revolutionize your testing process?
Reach out to
projects@spritecloud.com for a free consultation and discover more!
