
Prioritisation, Impact Analysis & Dependency

Posted by Jane Kelly on 27/07/2023

How to Utilise Impact Analysis & Dependency for Maximum Benefit

Hello everyone, I wanted to share a few musings around prioritisation, impact analysis and dependency in testing. This topic is of great interest to me because it affects so much of our lives; we all do it naturally, with varying success, depending upon our core values and many other variables. These include aspirations, age, upbringing, genetics, level of responsibility, consequences and experience. As consumers and users, we are all subconsciously testing the products available on the market, prioritising those which are important to us according to our perception of how much value we will receive from using those carefully selected products.


What is the point of prioritising?

This has been said on many occasions in a variety of settings: many boast that they can multitask and produce results on parallel priorities in the same period, simply by changing their approach to achieve more. John Staughton, writing in ScienceABC, notes: "Multitasking is when someone does more than one thing at the same time. It is impossible for someone to focus on more than one thing at the same time because the brain cannot handle it. When people try to do more than one thing at once, they usually make more mistakes, and it takes them longer to do everything."

Research in this field has been extensive and has found that attempting to do more than one thing at once usually results in more mistakes. It also causes a slower total time for goal achievement than if a person had fully completed one task before moving onto the next. 

Prioritisation is one of the first and most important things I learnt in software testing, and yet, surprisingly, I work with many clients where it has not happened at all. In other cases, it may have happened inconsistently or been done only once before becoming outdated.

One of the consistent themes is that testing time always gets squeezed. Testing activities can be estimated, but there are unknowns relating to elapsed times, and problems we must work around that cannot be measured or anticipated ahead of time.

If we prioritise the work and begin the most important things first, we lessen the risk of key features and functionalities not working in live production. Therefore, it's imperative to ensure the right people make the judgment call about what the priorities are and why.
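One common way to put this into practice is risk-based prioritisation: score each test by how likely the feature is to fail and how much damage a live failure would cause, then run the highest-risk tests first. The sketch below illustrates the idea; the test names, scores and the simple likelihood-times-impact formula are illustrative assumptions, not a prescribed method.

```python
# Illustrative sketch of risk-based test prioritisation.
# Test names and scores are invented for the example.
from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    likelihood: int  # how likely the feature is to fail (1-5)
    impact: int      # business damage if it fails in live production (1-5)

    @property
    def risk(self) -> int:
        # A simple multiplicative risk score; real schemes vary.
        return self.likelihood * self.impact


backlog = [
    TestCase("cosmetic label check", likelihood=2, impact=1),
    TestCase("payment processing", likelihood=3, impact=5),
    TestCase("customer login", likelihood=2, impact=5),
]

# Run the riskiest tests first, so a squeezed testing window
# still covers the features that matter most.
ordered = sorted(backlog, key=lambda t: t.risk, reverse=True)
for t in ordered:
    print(f"{t.name}: risk {t.risk}")
```

The pay-off is that if the testing window is cut short, the tests that were skipped are, by construction, the ones with the least risk attached.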

It's also especially important that the person(s) making those calls are equipped with the most accurate and detailed knowledge, facts and onward intentions, so that they can confidently make and maintain informed decisions. It may take more than one person to accurately set out what the priorities are.

Another aspect, often overlooked yet crucial, is that prioritisation is reviewed at intervals. As priorities change, scope shifts and additional information emerges, the solution and objectives often need to be aligned to a different trajectory or outcome. That means checking that the programme or project activity is on track and still meeting the cost-benefit analysis results; the green light given in the first instance.

Of course, if that review shows a negative result, then there are intrinsic costs and impacts associated with undoing the work. Activity generated at the point of that decision will also have to be carefully managed. Some priorities are unlikely to change, but others may inherit priority from a related deliverable. That is why we often see project requirements set out at the beginning with all the bells and whistles of the ideal outcome, only to be whittled down to the essential core deliverables, the minimum viable product (MVP), when the project gets bogged down with cumulative unknowns.

Unknowns can be, for example, but not limited to:

  • misinterpreted communication
  • a delayed start to testing
  • unplanned absences
  • technical complexity
  • defect resolution
  • the volume of defects
  • low-level queries that come out of testing
  • scope creep
  • changes in expected elapsed time
  • budgetary constraints
  • shared resource constraints
  • environmental instability and unknown environment/code-based clashes
  • data issues and technical architecture issues
  • lack of third-party availability or single point of contact availability
  • lack of breadth and depth of knowledge
  • security access, licence management and technical incompatibilities
  • the accumulation of sequential events

Communication is key to ensuring that the right people are engaged and that, where possible, planning is thorough, agreed and understood. Contingency and back-up plans need to be considered from the outset, and key milestone checkpoints put in place at the relevant stages. Often, the very reason that things get missed is that we are expected to manage many priorities simultaneously, which means fair, clear and uninterrupted consideration cannot realistically be applied.

Prioritisation, I find, is inextricably linked to impact analysis and dependency, which together determine the consequences. Here is just one example to illustrate this and bring it to life for you in a business context.

I previously worked with a sizeable number of highly experienced technical and business colleagues, all focused on a programme of work. The agreed and reviewed scope was disseminated to us along with the development order. The delivery lifecycle work was well underway.

It was a sequential development lifecycle:
  1. New business generation
  2. Amendments
  3. Renewals and Cancellations

The plan was not to test until the code delivery had been completed and had been through unit testing. It was only after completing the testing on Stage 1 that we discovered that, due to the laws around the consumer 14-day cooling-off period, we were legally obliged to work in a different order; one that facilitated cancellations back to day one at the point of new business take-up.

There was panic, followed by a rapid impact assessment and a hurried plan of action. Eventually this resulted in a successful delivery, but it involved a huge amount of re-work, delays and re-planning of the impact analysis, which cost money and time. This was due to having to consider things such as people's availability, skills, linked code and limited environments.
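One way to surface this kind of constraint before committing to a delivery sequence is to write the dependencies down explicitly and derive a safe ordering from them. The sketch below does this with a topological sort; the stage names echo the lifecycle above, but the dependency edges are illustrative assumptions, not the actual programme's dependency map.

```python
# Illustrative sketch: derive a safe delivery order from explicit dependencies.
# Stage names follow the example above; the edges are assumed for illustration.
from graphlib import TopologicalSorter

# Each stage maps to the set of stages it depends on.
dependencies = {
    "new business": set(),
    "cancellations": {"new business"},  # cooling-off rules: needed from day one
    "amendments": {"new business"},
    "renewals": {"amendments"},
}

# static_order() yields stages only after all their dependencies.
safe_order = list(TopologicalSorter(dependencies).static_order())
print(safe_order)
```

Had the cooling-off dependency been captured up front, an ordering check like this would have flagged that cancellations could not wait until the final stage.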

So, next time you tackle a new project, think 'prioritisation', 'impact analysis' and 'dependency'! Ask questions and speak out if you think there is a new risk or something that will likely impact the critical path.

Revisit this when new variables enter into the equation. Stop and think about any potential slippages or any contingencies that might impact the dependencies if the plan shifts right.

Here at nFocus Testing, we are very well placed to help you with managing your priorities for your strategic objectives. Contact our team on 0370 242 6235 or info@nfocus.co.uk.

Topics: Software Testing, Software Test Process
