Performance Testing – A new world
I moved into a new role as a Performance Engagement Manager at a leading bank a few years back and have been trying to pen down my experience since. It is an entirely new world out there, and yet most of it remains unexplored in the realm of Performance. This is my attempt at writing about Performance Testing from a Developer’s point of view.
Most software individuals are (or used to be) broadly categorised into Developers or Testers. Developers make things. Testers break things. That is their primary objective. Of course, Developers also need to try breaking things to make them better, and Testers sometimes need to make things in order to break them. However, most professionals from one area find it quite difficult to understand the other, and that is where most assumptions are born.
When I moved into Performance, I found it quite interesting to learn that many of the design principles we (developers) consider “best practices” are not particularly performant and are, at times, a leading cause of performance degradation in an application. In addition, designing an architecture to be scalable and keeping its software components loosely coupled (little or no dependency on each other) makes it even more complex, in turn contributing to the performance downfall (if not designed well).
On the contrary, if we design a very performance-oriented architecture, then at times we end up without a scalable system, or with components that are tightly coupled or depend heavily on operations and frequent maintenance. (I’m sure most senior architects will disagree with the above statement. I’ve had endless debates on this topic, and experienced architects will agree that between the best design and high performance we have to find some middle ground.)
This led me to conclude that both areas have valid arguments and justifications; however, for the benefit of the product or software, one has to meet in the middle. One has to understand what is important for the software: scalability is quite important and loosely coupled systems are also a priority, yet one needs to consider how often a change will actually occur, and when a design choice or development best practice can be overlooked in order to achieve an operational goal or non-functional requirement, i.e. Performance. Performance is equally important, especially for a customer-facing application; in this attention-diminishing era of faster networks and devices, every piece of software is expected to be highly performant.
What is Performance Testing all about?
Performance testing is an area of non-functional testing where the performance of an application is evaluated against a set of KPIs (Key Performance Indicators). The keyword “Performance” is quite relative and broad: it can mean something entirely different from one organisation to the next. It is therefore important to take into consideration the software, the business, the audience, the environments and infrastructure, and to perform a thorough risk assessment before putting a test strategy in place to drive the performance objective to success. Moreover, it is critical to define the business priorities before jumping in to improve the response time of every user activity or to focus on memory usage. The business must provide, as input, the critical business processes: the ones that cover the heaviest and most frequent user activities and carry high business value $$$.
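As an illustration, KPIs can be captured in a simple, reviewable form and checked against measured results. The transaction names and thresholds below are purely hypothetical; real figures have to come out of the risk assessment and business priorities described above.

```python
# Hypothetical KPI definitions; names and numbers are illustrative only.
KPIS = {
    "login":   {"p95_response_ms": 2000, "max_error_rate": 0.01},
    "search":  {"p95_response_ms": 1500, "max_error_rate": 0.01},
    "payment": {"p95_response_ms": 3000, "max_error_rate": 0.001},
}

def meets_kpi(transaction, p95_ms, error_rate):
    """Compare measured results for one transaction against its KPI."""
    kpi = KPIS[transaction]
    return p95_ms <= kpi["p95_response_ms"] and error_rate <= kpi["max_error_rate"]

# Example: figures measured during a test run.
print(meets_kpi("payment", p95_ms=2750, error_rate=0.0004))  # True
```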
Why do most organisations get it all wrong?
It took me about two years and multiple projects to understand the importance of live metrics, which contribute to better system design and help focus the effort on specific pain areas rather than spreading it everywhere. After all, there is a cost involved. With enough metrics, the business can set realistic expectations and KPIs, and can prioritise key business processes rather than carrying a huge scope.
Doing it the right way
There are several approaches to performance testing; however, certain steps are key to producing conclusive results. From my experience these are quite generic and apply to most situations:
- Ensure a code baseline is in place. The Agile folks are not going to be very happy, but we have to meet in the middle.
- A dedicated environment (no access for anyone else).
- Same or similar environment builds (same as production or scaled down); this also includes software configurations and their versions. I find organisations at times do NOT understand what we mean by ‘SAME’. We would like a clone of production; however, due to the cost factor this is not always feasible, and so we need to be able to scale down based on CPUs and other contributing factors.
- Realistic workload model – this is the most important aspect of performance testing. If we are unable to get the workload right, there is little point in testing, as the results will be quite misleading (see the workload sketch after this list).
- Realistic test data, along with residual data in the database – I have always stressed that clients/businesses should produce realistic test data, and writing scripts to generate it makes life easier (a small example follows this list). Residual data in the database helps reproduce the real impact of a ‘heavy’ database.
- Test execution and monitoring – it is important to monitor not only the system metrics and application response times but also the load generator metrics themselves (see the monitoring sketch after this list).
- Analysis and Reporting – It is critical to analyse all the collected metrics for any issues that may impact application performance, discuss these with the business, and ensure we are focusing on the genuine pain areas and communicating them clearly.
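To make the workload model point concrete, here is a minimal sketch of how a workload can be expressed as a transaction mix and translated into concurrent virtual users using Little’s Law. The transaction names, volumes and timings are assumptions for illustration, not figures from any real engagement.

```python
# Hypothetical peak-hour workload model: volumes and timings per business process.
WORKLOAD = {
    "browse_products": {"per_hour": 18000, "avg_response_s": 1.2, "think_time_s": 10},
    "add_to_cart":     {"per_hour": 4000,  "avg_response_s": 0.8, "think_time_s": 8},
    "checkout":        {"per_hour": 1200,  "avg_response_s": 2.5, "think_time_s": 15},
}

def required_users(model):
    """Little's Law: concurrent users = arrival rate x time a user spends per iteration."""
    total = 0.0
    for txn in model.values():
        rate_per_s = txn["per_hour"] / 3600.0
        total += rate_per_s * (txn["avg_response_s"] + txn["think_time_s"])
    return total

print(f"Approximate concurrent virtual users needed: {required_users(WORKLOAD):.0f}")
```

The numbers themselves matter less than the fact that the mix and volumes are agreed with the business up front, so the load we generate resembles what production actually sees.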
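Likewise, realistic test data is usually easier to produce with a small script than by hand. The sketch below writes a CSV of fabricated customer records using only the Python standard library; the field names and row count are hypothetical.

```python
import csv
import random
import uuid
from datetime import date, timedelta

# Generate a CSV of fabricated customer records for data-driven test scripts.
FIRST_NAMES = ["Asha", "Ben", "Chloe", "Dev", "Elena", "Farid"]
LAST_NAMES = ["Khan", "Smith", "Okafor", "Lee", "Garcia", "Novak"]

with open("customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["customer_id", "name", "email", "date_of_birth", "balance"])
    for _ in range(10_000):
        first, last = random.choice(FIRST_NAMES), random.choice(LAST_NAMES)
        dob = date(1950, 1, 1) + timedelta(days=random.randint(0, 20000))
        writer.writerow([
            uuid.uuid4().hex,
            f"{first} {last}",
            f"{first.lower()}.{last.lower()}{random.randint(1, 999)}@example.com",
            dob.isoformat(),
            round(random.uniform(0, 50000), 2),
        ])
```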
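Finally, the load generators themselves need watching, because a saturated injector produces misleading response times. A bare-bones sampler using the third-party psutil package might look like this; the interval, duration and plain-text output are just one way of wiring it up.

```python
import time
import psutil

def sample_load_generator(duration_s=60, interval_s=5):
    """Sample CPU and memory on the load generator so injector saturation
    can be spotted and ruled out during analysis."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        mem = psutil.virtual_memory().percent
        samples.append((time.strftime("%H:%M:%S"), cpu, mem))
        print(f"{samples[-1][0]}  cpu={cpu:5.1f}%  mem={mem:5.1f}%")
    return samples

if __name__ == "__main__":
    sample_load_generator(duration_s=30, interval_s=5)
```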
The above approach is just a brief, high-level outline that can change with every engagement and every application, but these key items broadly remain the same.