We’re fortunate in testing that we generally follow a well-defined process, have clear role definitions and have test tools to support us. Our test tools record activity, manage planned and current activity, and support communication between teams and individuals in the form of bug reports and passed and failed tests. The process is there to steer us safely through the difficult periods of a project, when many people around us may be making irrational decisions.
Topics: Software Testing
I’m not sure why, but automated security testing is, without doubt, the poor relation to all other types of automated testing. The software testing industry has been trying hard to automate functional testing for well over 20 years – and the results have been patchy at best. I see all sorts of attempts, but it’s rarely questioned as a sensible aspiration, even in situations where the return on investment (ROI) is nowhere to be seen. We relish the thought of automating unit tests and even have whole conferences dedicated to test-driven development. Automated integration testing is considered an absolute necessity for DevOps and Continuous Integration (CI). We absolutely love to have automated build, deploy and test capability. Unless performance and load testing are automated, we don’t even consider doing them. We even have automated code review tools. Why is it, then, that whenever I recommend automating security testing to my clients, it feels like I really have to sell the idea? More often than not, they choose to do it manually. And I’m always surprised when they do.
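To show how low the barrier actually is, here is a minimal sketch of one kind of automated security check: asserting that a response carries a set of recommended HTTP security headers. The header list, function name and canned response below are my own illustration, not a standard tool or API.

```python
# Minimal sketch of an automated security check: verify that a set of
# commonly recommended HTTP security headers is present in a response.
# The header list and function are illustrative, not a standard library.

REQUIRED_SECURITY_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_security_headers(response_headers):
    """Return the recommended headers absent from a response (case-insensitive)."""
    present = {name.lower() for name in response_headers}
    return [h for h in REQUIRED_SECURITY_HEADERS if h.lower() not in present]

# In a real pipeline the headers would come from an HTTP client call;
# a canned response keeps the example self-contained.
example_response = {
    "Content-Type": "text/html",
    "X-Content-Type-Options": "nosniff",
}

print(missing_security_headers(example_response))
```

A check like this, wired into CI, fails the build the moment a deployment drops a header – exactly the kind of repetitive verification that automation does better than a manual tester.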
One of the perks I’ve enjoyed about being a consultant is that I’ve been able to work in a number of different organisations in a range of roles. I’ve had the pleasure of working in everything from some very small private companies to massive companies with offices around the world, as well as a number of public and government organisations, again both large and small. One would think that each of these different environments would have its own unique challenges, and they do to a certain degree, but you’d be surprised how many things are exactly the same across the board.
So, the big question is ‘Why test?’ Let’s face it, we do take it for granted that things just work, or at least should work, all the time. But products, services and applications are generally all thoroughly tested before they reach you, giving you a great user experience. It’s very easy to take things for granted, like taking a flight; you simply book your flight online, print or download your tickets and you’re off… But in the background, there are a million other things happening that you probably don’t even realise. So let’s start with the basics: your plane will have been pre-scheduled and have a flight time allocated, but this is done months before you’ve booked your flight, so the number of passengers, meals, drinks, cabin crew, ground staff, the amount of aviation fuel required (based on plane weight incl. luggage etc.), the wind direction and the general weather are all unknown until pretty much an hour or so before take-off. So there are a lot of variables that need to be monitored and accounted for before you jet off to sunnier climates.
There are many different types of performance test – sometimes referred to as performance testing techniques. It’s not always easy to know which you need, so this article aims to give some guidance on the performance testing techniques you might want to consider for your system.
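To make the idea concrete before diving into the techniques, here is a minimal load-test sketch: it fires a number of concurrent calls at an operation and reports latency statistics. The operation under test, request counts and thresholds are hypothetical stand-ins for a real system.

```python
# Minimal load-test sketch: run an operation concurrently and
# collect latency statistics. The operation here is a stub that
# simulates ~10 ms of work; a real test would call the system under test.

import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def operation_under_test():
    """Stand-in for a real request to the system under test."""
    time.sleep(0.01)

def run_load_test(n_requests=50, concurrency=10):
    """Fire n_requests with the given concurrency and collect latencies."""
    def timed_call(_):
        start = time.perf_counter()
        operation_under_test()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_call, range(n_requests)))

    return {
        "mean": statistics.mean(latencies),
        "p95": latencies[int(0.95 * len(latencies)) - 1],
    }

results = run_load_test()
print(f"mean={results['mean']:.4f}s p95={results['p95']:.4f}s")
```

The same harness covers several of the techniques discussed here just by varying the parameters: ramp `concurrency` up gradually for a stress test, or hold a steady load for hours for a soak test.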
That’s not technically true… I’d love to spend time with family, eat chocolates and open a small selection of functional, relevant and meaningful gifts along with a stack of Christmas Pudding and brandy butter and maybe user acceptance test a glass or two of Baileys!
- Customer satisfaction by early and continuous delivery of valuable software
- Welcome changing requirements, even in late development
- Deliver working software frequently (weeks rather than months)
- Close, daily cooperation between business people and developers
- Projects are built around motivated individuals, who should be trusted
- Face-to-face conversation is the best form of communication (co-location)
- Working software is the primary measure of progress
- Sustainable development, able to maintain a constant pace
- Continuous attention to technical excellence and good design
- Simplicity—the art of maximising the amount of work not done—is essential
- Best architectures, requirements, and designs emerge from self-organising teams
- Regularly, the team reflects on how to become more effective, and adjusts accordingly
It’s interesting that, despite the first principle being ‘Customer satisfaction by early and continuous delivery of valuable software’, UAT as we have traditionally known it doesn’t fit well into an Agile delivery model. Many Agile teams dispense with UAT and rely more heavily on the Show and Tell session to get customer sign-off. In Scrum, this is done in the Sprint Review Meeting and involves a demonstration of the user stories that have been delivered (according to the Definition of Done) in the sprint. One of the objectives is to elicit stakeholder feedback. This is good practice, fosters collaboration and creates a high level of discipline, while also meeting the objectives of the first principle. Product demonstrations should be interactive so that stakeholders have the chance to provide feedback; however, I often wonder, ‘Is this enough?’ Participants in the Show and Tell and the Sprint Review Meeting should include, amongst others, the Product Owner, stakeholders, sponsors and customers. In practice, however, I tend to find two problems:
Regression testing is performed to verify that a code change made to the software does not impact the existing functionality of the product. By regression testing you are making sure that the product still works correctly once new functionality, bug fixes or changes to existing features have been introduced. Previously executed test cases are re-executed in order to verify that the change has not adversely affected existing behaviour.
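In practice that means keeping the tests that passed before a change and re-running them afterwards. The sketch below shows the shape of a regression suite; the discount function and its test cases are hypothetical, invented purely for illustration.

```python
# Minimal sketch of a regression suite: cases that passed before a change
# are re-run after it to confirm existing behaviour still holds.
# The function and cases are hypothetical, for illustration only.

def apply_discount(price, percent):
    """Apply a percentage discount; imagine this was recently changed."""
    return round(price * (1 - percent / 100), 2)

# Regression cases: (arguments, expected result) pairs that passed
# before the latest change was made.
REGRESSION_CASES = [
    ((100.0, 10), 90.0),
    ((19.99, 0), 19.99),
    ((50.0, 50), 25.0),
]

def run_regression_suite():
    """Re-execute every recorded case; return the ones that now fail."""
    failures = []
    for args, expected in REGRESSION_CASES:
        actual = apply_discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

print(run_regression_suite())  # an empty list means no regressions
```

Because the suite is just code, it can run on every commit, which is what makes regression testing such a natural fit for automation.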
Like engineers in other disciplines, software engineers should be responsible for delivering high-quality, bug-free products that work under all conditions. I wholeheartedly agree that developers (and the whole team) should be accountable for product quality, and there needs to be a mind and culture shift so that the responsibility for quality control is not abdicated to the testing team.
As testers we are inclined to try to test everything; however, factors such as timescales, available resources, technical complexity and cost can prevent this from happening.
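One common response to that constraint (my own illustration here, not something prescribed above) is risk-based prioritisation: score each test area by impact and likelihood of failure, then spend the limited time on the highest-scoring areas first. The areas and scores below are made up for the example.

```python
# Illustrative risk-based prioritisation: order test areas by
# impact x likelihood so limited testing time goes where failure
# would hurt most. The areas and scores are invented for the example.

test_areas = [
    {"name": "payment processing", "impact": 5, "likelihood": 4},
    {"name": "report export",      "impact": 2, "likelihood": 2},
    {"name": "user login",         "impact": 5, "likelihood": 3},
    {"name": "help pages",         "impact": 1, "likelihood": 1},
]

def prioritise(areas):
    """Return areas sorted by descending risk score (impact * likelihood)."""
    return sorted(areas, key=lambda a: a["impact"] * a["likelihood"], reverse=True)

for area in prioritise(test_areas):
    score = area["impact"] * area["likelihood"]
    print(f"{score:2d}  {area['name']}")
```

The scoring model is deliberately crude; the point is that when you can’t test everything, an explicit ordering beats an implicit one.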