<img alt="" src="https://secure.wauk1care.com/164394.png" style="display:none;">

Quality Assurance Software Testing: Right Thing, Right Time

Posted by Jane Kelly on 4/07/2024

Reflections on 20 Years of Testing: Adapting and Optimising Practices

In my testing career, I’ve witnessed significant changes over the past 20 years. However, certain constants remain: the pursuit of working smarter, managing risks efficiently, and optimising testing efforts. We strive to test faster while minimising costs to stay competitive in the market. Flexible testing tasks allow us to adapt to changing schedules.

Additionally, creating knowledge documents, conducting peer reviews, and asking critical questions contribute to effective testing. This blog will explore insights from my testing journey, emphasising three key areas: what, when, and why. The “what”, “when”, and “why” determine what falls into the category of doing the right thing, but the three are inextricably linked, and I expand on each below.

Doing the ‘Right Thing’

When determining the right course of action, I consider its relevance, context, necessity, impact, and the value it adds.

In the early testing stages, this involves static testing, gathering test data, assessing the environment's availability and suitability, and identifying risks, dependencies, and constraints.

Static testing is easily overlooked: it involves a manual review of documentation to identify potential issues before test preparation and execution begin. During execution, I verify that the code aligns with the technical specifications. In the User Acceptance Testing (UAT) phase, I oversee the validation of changes to ensure they meet end-user needs and operational requirements.

Doing ‘The Thing’ at the ‘Right Time’

Timing matters. Considering perspectives from various stakeholders, such as sponsors, project managers, analysts, architects, developers, and end customers, is essential. However, stakeholders frequently have conflicting needs. For instance, the business wants changes quickly, but scope creep occurs when new functionality is added after initial planning. Balancing these demands while meeting the original target date can be challenging.

Early test preparation reduces costs, but timing is crucial for maximising value. For example, starting work on a change request before approval risks wasted effort if it’s not approved or requires design changes, leading to rework and impacting test scenarios, design, data, and estimates.

In situations where risks were preferable to the alternatives, I’ve accepted them. For instance, my team was about to start test execution, but we faced an unexpected technical issue that delayed the project by 2–3 days. With no warning, I had a supplier resource team waiting for work allocation.

To keep them productive, I assigned them the preparation of high-priority changes and fixes awaiting formal acceptance in an upcoming, separate BAU (Business as Usual) release. This way, they were used effectively rather than on tasks of lesser priority. In that scenario, I delegated the completion of some of the BAU-scope work, then set it aside until it resurfaced during our scoping finalisation sessions. I was also able to adjust resources to accommodate late code delivery.

In another instance, I suspended work on a small enhancement that needed further detail, without being given a clear timeline for resuming it. I used my experience to determine which tasks were likely to come into scope and chose not to work on that enhancement. In those scenarios, gathering information and considering other upcoming deliveries is essential. We can take what we know and make an educated bet, knowing it is a gamble and agreeing collectively to take that risk. It pays to keep an ear to the ground and build relationships with change and release management teams, as they can serve as your early-warning radar.

To ensure I am doing the right thing and performing my job properly, I need clear reference points against which to compare and evaluate expected system behaviour and test results. When you provide requirements, it must be clear to me what classifies as a passed test result and what the results should look like when complete. If that information is not provided promptly, it's a sure-fire way to increase your costs and timelines, as I will need to spend time finding that detail out.

Typically, this is a versioned set of documentation with context and supporting evidence, diagrams, caveats, and rationale. It's crucial to understand stakeholder expectations and requirements from the outset, so that misinterpretations are not inadvertently built into the delivery!

In my role, I typically receive user stories, requirements, or similar for functional, non-functional, business, and technical specifications. They are usually ambiguous, extremely high-level, and/or incomplete.

These can relate to minor change requests, problem fixes, small enhancements, or large-scale enterprise programmes. I analyse these and get answers to questions to drive out and prioritise the tests I need to run and the data I need to prepare.

Effective project planning involves detailed scheduling and consideration of elapsed time, risks, assumptions, issues, dependencies, and constraints. Collaboration with teams and third-party organisations is crucial to getting the cooperation and shared effort needed to make things happen, ready for when they are needed! Effective prioritisation of tasks ensures optimal use of time and prevents unnecessary work, helping to absorb fluctuations in the schedule.

Test Execution, Defects, and Completion

Test execution requires good-quality preparation in advance. Preparation involves baselining, prioritising, and ensuring we have complete and testable requirements. I create a traceability matrix to demonstrate testing coverage against those requirements. Next, I set about preparing test scenarios, scripts, and data; executing tests; raising, retesting, and closing defects; getting test scenarios signed off; and addressing risks, issues, dependencies, and constraints along the way!
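
To make that traceability matrix concrete, here is a minimal sketch of how the mapping might be represented and checked for gaps. The requirement and test case IDs are invented for illustration, not taken from a real project.

```python
# A minimal sketch of a requirements-to-tests traceability matrix.
# Requirement and test case IDs are illustrative only.

coverage = {
    "REQ-001": ["TC-101", "TC-102"],  # each requirement maps to the tests covering it
    "REQ-002": ["TC-103"],
    "REQ-003": [],                    # no tests yet: a coverage gap to flag
}

uncovered = [req for req, tests in coverage.items() if not tests]
if uncovered:
    print(f"Requirements with no test coverage: {', '.join(uncovered)}")
```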

Test execution involves running critical, high-, and medium-priority test cases, subject to stakeholder agreement. While not all my tests may pass, I typically attempt each of them before moving on to lower-priority tests.
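
As a simple picture of that ordering, here is a sketch that ranks a test queue so critical, high, and medium tests are attempted before lower-priority ones. The priority labels and test IDs are assumptions for the example, not a formal scheme.

```python
# A sketch of ordering a test queue by priority before execution.
# Priority names and test cases are illustrative.

PRIORITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

test_queue = [
    ("TC-201", "low"),
    ("TC-202", "critical"),
    ("TC-203", "medium"),
    ("TC-204", "high"),
]

# Attempt every test, highest priority first.
for test_id, priority in sorted(test_queue, key=lambda t: PRIORITY_RANK[t[1]]):
    print(f"Run {test_id} ({priority})")
```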

At each stage, I have regular touchpoints, focus on priorities, ask questions when in doubt, and communicate anything deemed relevant to stakeholders. For example, many projects I work on have useful “nuggets” of information on known issues and workarounds held by live operations teams, helpdesks, and training teams.

When low-priority tests fail, I assess their impact on business operations, and they are triaged with the wider team. Peer review of tests, defects, and similar artefacts is most valuable in driving out a shared understanding. If multiple tests fail, I explore workarounds and document the issues in defect records. These records are maintained throughout the lifecycle until resolution.
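
To illustrate what maintaining a defect record through its lifecycle can look like, here is a hedged sketch. The statuses, fields, and example defect are assumptions, not the schema of any particular defect-tracking tool.

```python
# A sketch of a defect record tracked from being raised to closure.
# Fields, statuses, and the example defect are illustrative only.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

VALID_STATUSES = ("open", "in_progress", "retest", "closed")

@dataclass
class Defect:
    defect_id: str
    summary: str
    priority: str
    workaround: Optional[str] = None    # documented workaround, if one exists
    status: str = "open"
    history: List[Tuple[str, str]] = field(default_factory=list)

    def move_to(self, new_status: str) -> None:
        """Record each status transition so the record keeps its history."""
        if new_status not in VALID_STATUSES:
            raise ValueError(f"Unknown status: {new_status}")
        self.history.append((self.status, new_status))
        self.status = new_status

defect = Defect("DEF-042", "Report totals mismatch", "medium",
                workaround="Export the figures and total them manually")
defect.move_to("in_progress")
defect.move_to("retest")
defect.move_to("closed")
print(defect.status, defect.history)
```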

In summary, during the test phase I focus on closing critical and high-priority tests and defects. Occasionally, the client may accept some medium- or even high-priority defects into live operations, subject, for example, to a future deadline for the fix. Our “done” criteria are determined through daily monitoring by the tester, test manager, project manager, and sponsor. During test execution, I prioritise and run tests, providing input for daily reporting and keeping stakeholders informed about progress and any challenges.

After testing concludes, a completion report is shared with key stakeholders. This report contains selected data to inform decision-making. Testing does not always reach 100%, as risk-based decisions may lead to delivery even with low-priority functionality or minor defects outstanding. Regression tests verify that previous functionality remains unaffected after changes.
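
As an illustration of the kind of selected data a completion report might summarise, here is a minimal sketch; the result records are invented for the example.

```python
# A sketch of summarising execution results for a completion report.
# The result data is invented for illustration.

from collections import Counter

results = [
    {"id": "TC-301", "priority": "critical", "status": "passed"},
    {"id": "TC-302", "priority": "high",     "status": "passed"},
    {"id": "TC-303", "priority": "low",      "status": "failed"},
    {"id": "TC-304", "priority": "low",      "status": "not_run"},
]

by_status = Counter(r["status"] for r in results)
executed = by_status["passed"] + by_status["failed"]
print(f"Executed {executed}/{len(results)} tests: "
      f"{by_status['passed']} passed, {by_status['failed']} failed, "
      f"{by_status['not_run']} not run (risk-accepted low priority).")
```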

Is the right ‘thing’ being ‘done’ in manual testing? Each stage has a unique focus and purpose; this is where the action is, and where clarity and priorities can shift.

System Testing is where I run functional tests manually to verify changes have been built according to the functional and technical specifications. Systems Integration Testing is where I run tests that explore the behaviour of integrated components and the interactions between system interfaces, their requirements, and data, to make sure they behave as expected and are handled and processed correctly.

The UAT and Operational Acceptance Testing (OAT) phases are where I oversee the exploration of system behaviour against the documented business, user, and operational requirements. This stage must be owned and managed by the business, as they are usually the end users of the system and understand the impacts and business needs.

It's why we must always try to run a UAT phase after we are ‘done’: it gets the system changes reviewed by genuine, experienced business users. It's this set of people who work with the systems to service their customers, and they will have their own ideas of how things should work and a closer awareness of business impact. UAT will typically involve a smaller subset of acceptance tests, but they will follow a similar pattern in the way they are run, reviewed, and handled.

Regression testing is where I run checks on core areas of functionality that are not expected to change but may be inadvertently affected, to ensure they have not been broken by the latest changes. Our customers increasingly automate these tests. The regression phase is especially well suited to automation because the functionality changes less frequently, and the return on investment (ROI) lies in the repeatability of the scripts and the speed of the machines running them unattended.
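
To show what such an unattended check can look like, here is a pytest-style sketch. The function under test, `calculate_invoice_total`, is a stand-in for unchanged core functionality, not real system code.

```python
# A pytest-style sketch of an automated regression check.
# `calculate_invoice_total` is a stand-in for stable core functionality.

def calculate_invoice_total(lines, vat_rate=0.20):
    """Core behaviour that is not expected to change between releases."""
    net = sum(qty * unit_price for qty, unit_price in lines)
    return round(net * (1 + vat_rate), 2)

def test_invoice_total_regression():
    # Pinned expected values: if a change elsewhere breaks this behaviour,
    # the unattended run flags it without a tester stepping through scripts.
    assert calculate_invoice_total([(2, 9.99), (1, 5.00)]) == 29.98
    assert calculate_invoice_total([]) == 0.0
```

Once scripted, a check like this reruns for free on every release, which is where the repeatability ROI comes from.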

The Role of Testers in Enhancing Efficiency

One of the key skills of a tester is knowing, or exploring, where things can be done faster, better, or smarter, and understanding the impacts of those different options. A further key skill is choosing the most appropriate way of communicating the necessary information to others in a timely manner. An example of this is when an experienced tester can run high-level tests without needing to refer to them step by step. We cannot do that all the time, as often the test steps are handed over to in-house, less experienced teams or to people who are new to testing.

Now for my favourite part: how do I measure when it is ‘done’, and make sure that my expectations of ‘done’ match those of the project? That is, checking whether the thing being done is right in a different sense, namely verification (are the changes as specified?) and validation (do the changes meet the business need?), followed by implementation (publishing to live operational systems). This is straightforward, in theory, provided the previous preparatory and test execution stages have been ‘done’, and ‘done’ well.
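
As a rough illustration of that ‘done’ check, here is a minimal sketch of evaluating pre-agreed exit criteria before recommending sign-off. The specific criteria and sample data are assumptions for the example; real exit criteria are agreed with stakeholders up front.

```python
# A sketch of evaluating pre-agreed exit criteria before sign-off.
# The criteria and sample data are illustrative assumptions.

def exit_criteria_met(results, open_defects):
    # Every test must have been attempted at least once.
    all_attempted = all(r["status"] != "not_run" for r in results)
    # No critical or high-priority defects may remain open.
    no_blockers = not any(
        d["priority"] in ("critical", "high") for d in open_defects
    )
    return all_attempted and no_blockers

results = [{"id": "TC-401", "status": "passed"},
           {"id": "TC-402", "status": "failed"}]
open_defects = [{"id": "DEF-050", "priority": "low"}]
print("Recommend for sign-off:", exit_criteria_met(results, open_defects))
```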

Conclusion

To ensure we are doing the right thing at the right time, our testing team employs several strategies. We create detailed test strategies and use automated testing tools to speed up our processes, saving time and resources. We maintain robust test environments aligned with the development cycle and the technology of the application under test, and we use test automation alongside an effective test management tool to streamline our efforts, making them more cost-effective and efficient. Collaboration among team members and clear communication within the QA team are crucial to meeting these objectives.

As a tester, I assess efficiency, explore options, and communicate effectively. To determine when a project is “done,” I rely on thorough preparation and successful test execution, then present my recommendation to the stakeholder for their decision. This means that once all tests have been attempted and all defects and observations closed or remediated, subject to pre-agreed exit criteria, we can issue the report and submit it for review and sign-off along with our recommendations.

Sometimes, automated testing can really come into its own, repeatedly returning value for customers. In my experience, a blended test approach, encompassing several types of testing and resources tailored to the delivery, the client, the content, deadlines, size, scale, and type of change, is the best strategy.

In summary, doing the right thing at the right time involves careful planning, strategic use of tools, and effective communication. By continuously refining our processes and adapting to new challenges, we can ensure the delivery of high-quality software products that meet the evolving needs of our clients.

Topics: Software Testing, Quality Assurance, Time Management
