This blog is the third in a little series where I write about my thoughts on increasing test maturity. Previously, I went into a bit of detail on the importance of winning the hearts and minds of the various stakeholders impacted by the change in approach.
It’s easy to assume that in an organisation that has decided to take a journey of this kind, the leadership team is behind the decision and is fully supportive of the change. Unfortunately, this isn’t always the case, and sometimes dark, politically motivated forces are at work, whereby a senior individual’s self-interest takes priority over that of the wider organisation.
These need to be identified and managed. It won’t always be possible to rationalise with, motivate or influence these people; sometimes recognising the problem so you can work around it is the best you can do. Getting support from where it matters is the key, and this means ensuring the improved policies and processes that cover testing get mandated. Culture will eat process for breakfast every day of the week, so without strong leadership that will keep the change on the straight and narrow, it’s very easy to slip back into old habits.
Today, I want to take a step back and look at how to get started on a journey to increase test maturity.
What are the drivers for change?
Firstly, I like to find out what’s driving the need to change and increase test maturity. In the early stages of an engagement, this is sometimes conflated or confused with the root cause of the quality and testing problems. Keeping this in mind, I like to avoid jumping to conclusions and, for the sake of argument, I’d propose that the problem can be summarised as “testing isn’t good enough in one form or another”. Ultimately it always comes down to part(s) of the Time, Cost, Quality Triangle and someone, somewhere thinking testing is taking too long, costing too much or that quality isn’t good enough.
That could manifest itself as the test scope being too narrow, tests not being executed quickly enough, deadlines being missed and, ultimately, too many bugs leaking into production or some other project stage. It might be confined to one specific type of testing, such as performance testing or systems integration testing, or it could be across the board. It’s always possible to argue that these problems are purely symptoms of a problem occurring elsewhere in the software delivery lifecycle (hence the long list of stakeholders in my first post in this series), but I choose to ignore that at the beginning and take ownership. By doing so, testing gets the opportunity to do the analysis, collect the evidence and shape the conversation about the root cause from our perspective. Without taking ownership, this isn’t possible and testing becomes a stakeholder in someone else’s analysis.
Simple, high-level questions can be asked to elicit the specific time/cost/quality needs, but in the majority of circumstances, I find this information is readily volunteered and easily recognised by senior management. Understanding the specific pains the organisation is experiencing helps to clarify the drivers for change and articulate the problems we need to solve.
What and Why are you Testing?
I need to understand the strategic test objectives to determine if testing is fit for purpose. It’s important to understand why we are testing and what the perceived risks are. Similarly, I need to know the technology, functionality and use of the system under test. The email system for the NHS has a different testing need to the online booking system of a gym. Similarly, a cloud-hosted SaaS system has a different testing need to a custom-built, on-premises client-server product. I always reserve judgement of the current testing model until I know the nature of the system under test and the risk that a broken system creates.
What is the “AS IS” model?
Before testing can be improved, I need to understand what needs improving. That means understanding the current practices and operating model by reviewing the current test strategy, test approach, practices and coverage (depth and breadth) across the delivery lifecycle. Invariably I get directed to a load of documentation. Invariably, it does not reflect reality, and a review of test output, together with the interviews I’ve mentioned before, quickly highlights that there’s deviation from the documented approach. That in itself is OK; things change with time. However, if the change is not having a positive outcome, then that’s not OK. Often I find that the test documentation is meaningless: refactored, plagiarised documentation that someone has found on the internet or their hard drive, and so high-level that it adds little value.
Even in an agile world, I firmly believe that documenting the strategy and approach to be adopted is a worthwhile exercise when done properly. It helps formulate ideas, clarify opinions and communicate the intention. Unfortunately, it is often performed as a box-ticking exercise, and I find that the large System Integrators are the worst for this. Either way, the objective is to understand what is being done in practice, what’s working and what’s not.
At a high level, I start off by looking at the people, the process, and the tools & technology. Some example questions might be:

People
- What is the team size, skill set and level of experience of the test and project team?
- Where are the touch-points between teams and individuals, what’s communication like and how is this managed for project activities such as estimating, backlog refinement, bug fixing etc.?
- How does location affect the operation of the test activity?

Process
- What is the SDLC, how is it being applied, and how and when is testing conducted?
- How are tests categorised, and how is regression testing being done?
- What is the level of automation and how is it being done?
- What testing processes and practices have been implemented?
- Where are corners being cut – and why?

Tools & Technology
- What’s the physical and logical architecture of the system under test?
- What tools are being used for development and test?
- What do the test environments look like and how are they used, controlled and managed?
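To keep the answers comparable across interviews, it can help to record findings against each dimension in a consistent structure. Here is a minimal sketch in Python; the `Finding`/`Dimension` names, the 1–5 maturity scale and the example observations are all illustrative assumptions, not a prescribed model:

```python
from dataclasses import dataclass, field

# Hypothetical structure for recording "AS IS" assessment findings.
# The dimensions, scoring scale and observations below are illustrative only.

@dataclass
class Finding:
    question: str      # the interview question asked
    observation: str   # what was actually found in practice
    score: int         # 1 (weak) to 5 (strong) -- an assumed maturity scale

@dataclass
class Dimension:
    name: str                          # e.g. "People", "Process", "Tools & Technology"
    findings: list = field(default_factory=list)

    def average_score(self) -> float:
        # Simple average across findings; 0.0 when nothing recorded yet.
        if not self.findings:
            return 0.0
        return sum(f.score for f in self.findings) / len(self.findings)

process = Dimension("Process")
process.findings.append(Finding(
    question="What is the level of automation and how is it being done?",
    observation="UI-only regression pack, run manually before release",
    score=2,
))
process.findings.append(Finding(
    question="Where are corners being cut - and why?",
    observation="Exploratory testing skipped whenever deadlines slip",
    score=1,
))

print(f"{process.name}: {process.average_score():.1f}")  # prints "Process: 1.5"
```

A structure like this makes it easy to spot which dimension is dragging maturity down and to track the same questions across follow-up assessments.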
I’ll quickly drill into the specifics of these questions and start to pull at the threads that appear through the interviews. As such, it’s crucial that all the relevant SMEs are included in the sessions. It’s not always obvious where the root cause of the problem lies, even if the senior stakeholders are under the impression that they already know, so the net must be cast far and wide.
I compare the strategic test objectives and priorities with the practical execution of the tests in order to identify divergence and corrective actions that need to be taken. There are typically many problems in all areas that need to be prioritised and assessed for their impact. Often problems are multi-faceted and are not simple to resolve.
It’s important to remember that people, teams and organisations act and behave in a certain way for good reason. There are constraints that prevent best practices from being followed and stop everything going as originally intended. For example, a decision to test in the development environment doesn’t usually get made unless there’s a budget constraint that makes a controlled, dedicated test environment unfeasible. These constraints need to be taken into consideration before solutions can be identified; otherwise those solutions will be purely theoretical and impractical.
In the next and final part of this series, I’ll discuss how to make use of this information and how to get the testing improvements you need.