
Importance of Software Quality - What Does ‘Quality’ Mean?

Posted by Tony Simms on 17/04/2023

Introduction: Exploring the Significance of Software Quality

We talk about 'quality' quite a bit in our line of work. We talk about:

  • Quality assurance
  • QA & Test
  • Improving quality
  • Ensuring quality
  • The quality of the software

But What Does “Quality” Mean?

The Cambridge Dictionary defines 'quality' as:

1: How good or bad something is.

2: A high standard.

3: A characteristic or feature of someone or something. 

(source: QUALITY | English meaning - Cambridge Dictionary)

I am not sure that the second definition is at all helpful for us as software testers, but the first and third definitions help me to begin to understand what quality means to us. For me, the definition I most often use in regard to the 'quality' of what I am testing is:

“The measure to which an object or service meets the expected or desired requirements.”

That definition, of course, gives rise to the following questions:

  • How do we measure?
  • What do we measure?
  • Who decides what equals ‘Good’?

The Problem With Measuring Quality

If software were gold, it would be easy, right? We measure gold in karats. Pure gold is 24 karats; that is to say, it contains 100% gold. A product that contains 18 parts gold and 6 parts other metals is said to be 18-karat gold, or in other words 75% gold and 25% other metals.

Would you agree that 100% gold is of better quality than 75% gold? Well, it depends on what is important to you. Gold is soft, and 24-karat gold will wear quickly. 18-karat gold, on the other hand, has a 25% mix of other metals (zinc, copper, silver etc.), which results in the item being more durable.

So, which is the better quality? The softer 100% gold or the more resilient 75% gold? It all depends on what the gold item is being used for and what expectations the stakeholders hold.

Quality, it would appear, is in the eye of the beholder. It's this that makes defining, and therefore measuring, software quality so difficult. It's a tough job, but someone has to do it. Measuring quality is challenging: how do we agree on what we mean by quality, when different stakeholders hold differing opinions on what makes a good or bad product? Part of our job as software testers is to work with the team and the wider stakeholders to agree on what we are going to measure and report on.

To make a judgement call about the quality of the product or solution, we need to establish some attributes to measure against. In effect, we need a “Gold Standard” for each of the attributes we are going to measure against. An integral part of our job is to ensure that we get the right set of standards.
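As a minimal sketch of this idea, a "gold standard" can be captured as a set of measurable attributes, each with an agreed target, so that reporting becomes a simple comparison of measured values against those targets. The attribute names and thresholds below are invented for illustration, not taken from any real project:

```python
from dataclasses import dataclass

@dataclass
class QualityAttribute:
    """One attribute of our 'gold standard': what we measure and the agreed target."""
    name: str
    target: float           # the value stakeholders agreed is acceptable
    higher_is_better: bool  # direction of the comparison

    def meets_standard(self, measured: float) -> bool:
        """Compare a measured value against the agreed target."""
        if self.higher_is_better:
            return measured >= self.target
        return measured <= self.target

# Hypothetical attributes agreed with stakeholders
standard = [
    QualityAttribute("test pass rate (%)", target=98.0, higher_is_better=True),
    QualityAttribute("p95 response time (ms)", target=500.0, higher_is_better=False),
]

measurements = {"test pass rate (%)": 99.1, "p95 response time (ms)": 620.0}

for attr in standard:
    ok = attr.meets_standard(measurements[attr.name])
    print(f"{attr.name}: {'meets' if ok else 'misses'} the standard")
```

The value of the structure is less in the code than in the conversation it forces: for each attribute, the team must agree a name, a target and a direction before any measuring starts.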

Our Gold Standard

All testing, whether waterfall or agile, manual or automated, functional or non-functional, needs standards to measure against. Without something to relate back to, it's impossible to report how well we are doing (or not!).

Early on in a project, it often falls to the test team to review and refine the information to create a robust and comprehensive set of requirements to test for and report against.

Whilst it's easy to think that review is all about reading documents put in front of us, this is far from the whole story. To ensure that we get the full story, our review may be more about engaging stakeholders in conversation than about reading documents. Our aim should be to ensure that "we have the right requirements", not only that the "requirements we have are right!" We need to ensure that all the system requirements have been identified and that we have a comprehensive understanding of everyone’s expectations. 

Workshops documenting workflows, functions, regulatory requirements, users, actions and expected results can be incredibly effective in identifying the gaps. This allows for acceptance criteria for each item to be identified, discussed and agreed.

Where to start:

The ISO/IEC 25010 standard provides a useful starting point. It defines a software product quality model with eight characteristics to consider (each with its own sub-characteristics):

  • Functional suitability
  • Performance efficiency
  • Compatibility
  • Usability
  • Reliability
  • Security
  • Maintainability
  • Portability

Each of these areas provides an opportunity for us to consider what we are going to measure (over and above functional requirements) and what our stakeholders consider an acceptable outcome.
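One way to start that conversation is to pair each characteristic with a candidate measurement for stakeholders to accept, adjust or reject. The metrics below are illustrative suggestions only; they are not part of the ISO/IEC 25010 standard itself:

```python
# Illustrative pairing of ISO/IEC 25010 characteristics with example
# measurements a team might agree on (the metrics are suggestions,
# not drawn from the standard).
ISO_25010_CANDIDATE_METRICS = {
    "Functional suitability": "requirements coverage of the test suite",
    "Performance efficiency": "p95 response time under expected load",
    "Compatibility": "pass rate across supported browser/OS versions",
    "Usability": "task completion rate in usability sessions",
    "Reliability": "mean time between failures in soak tests",
    "Security": "open findings from penetration testing",
    "Maintainability": "static-analysis issues per 1,000 lines",
    "Portability": "effort to deploy to a new environment",
}

def checklist() -> str:
    """Render the characteristics as a workshop review checklist."""
    return "\n".join(
        f"[ ] {characteristic}: {metric}"
        for characteristic, metric in ISO_25010_CANDIDATE_METRICS.items()
    )

print(checklist())
```

Printed as a checklist, the mapping becomes a workshop artefact: each line is an invitation for stakeholders to say what "acceptable" looks like for that characteristic.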

How to continue

Of course, the quality of the final product is only one aspect of the overall quality challenge for software testers. We also want to look at the quality of the team and the processes producing the product. We should continuously be looking to improve the way we do things. 

Oddly, one way to improve the team is to ensure that regular retrospectives and reviews look closely at what did not go too well to understand what needs to change. Many organisations are poor at this and “lessons learned” soon become “lessons ignored”. 

Whilst the focus should be on failing processes, not failing people, a thorough review can be intimidating if the culture is not right. Developing a culture where failure is seen as a positive learning experience is a big step towards developing a process of continuous quality improvement. It can sometimes help to have deep-dive retrospectives facilitated by someone external to the organisation.

nFocus has identified several factors that influence the effectiveness of reviews and quality improvement programs and whilst we have done this from an ‘outsider’s perspective', we believe they apply equally to internal reviews:

  • Upper management is thoroughly on board and supportive of the review.
  • Clear scope for the review.
  • The review and its purpose are widely communicated. 
  • Sufficient time for the required parties to participate and for the results to be analysed, documented and presented.
  • The budget (time and money) is earmarked to allow recommendations to be acted upon.


Topics: Software Testing, Quality Assurance, Test Teams
