Tuesday, 31 July 2012

Creating build and test labs in the sky

Thought I would share a great blog post by fellow ALM Ranger Mathias Olausson. In it, he talks about how to use the power of Azure and the hosted TFS service, "Team Foundation Service", to execute Build-Deploy-Test scenarios.

The article is essentially about using Team Foundation Service to manage the build and deployment of a solution to an Azure test environment and then having an Azure test client run tests against that environment. The benefit here is that the only on-prem requirement is a client workstation to orchestrate the scenario using Visual Studio 2012 - although I suppose you could even put that in the cloud too...

You can read the full article here : Using Visual Studio 2012 Lab Manager to Create a Build Lab in the Sky

Friday, 27 July 2012

Exploratory testing in Microsoft Visual Studio 2012

What is exploratory testing?
Some people think exploratory testing is carried out in an uncontrolled manner: the tester sets out to determine whether the software actually works without giving much thought to his or her goals.

For exploratory testing to be considered a success, the tester must have a certain level of knowledge about the application under test (AUT) as well as a creative approach. Unlike scripted testing, an exploratory testing session may not be linked to any form of requirement, user story or backlog item, and so it can become an exercise which fails to deliver an audit trail.

Without the test steps and testing context that come with requirements, user stories and backlog items, exploratory testing can result in bugs being logged that are difficult for developers to reproduce: without test steps, how is the bug reproduced? And unless the tester is capturing the exploratory test steps as they go, which becomes a burden as well as a time overhead, how do they demonstrate what has been done?

As testing is an activity which produces information about the AUT, in my opinion exploratory testing must be given the same level of control as any other testing approach. Exploratory testing sessions should be focused on product backlog items, and each should have a direction and clear goals. It is also useful when performing an exploratory testing session to capture which steps are being taken and which data is being used; this information helps the developer reproduce a bug when one is uncovered. Capturing this level of information along the way also provides an opportunity to quickly convert an exploratory test into a scripted test in the event of a bug being found, which in turn enables the bug to be retested once it is fixed.
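To make the idea concrete, here is a minimal sketch, assuming a hypothetical session recorder (the ExploratorySession and Step names are invented purely for illustration and are not how MTM implements its recording), of how capturing each action and the data it uses lets a session be promoted into a reproducible scripted test:

# Illustrative sketch only: a hypothetical recorder, not MTM's internal implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    action: str         # what the tester did, e.g. "Enter quantity"
    data: str = ""      # the test data used for the step
    expected: str = ""  # filled in when the step is promoted to a scripted test

@dataclass
class ExploratorySession:
    backlog_item: str   # the work item giving the session its focus
    steps: List[Step] = field(default_factory=list)

    def record(self, action: str, data: str = "") -> None:
        # Capture each action (and the data used) as the tester explores.
        self.steps.append(Step(action, data))

    def to_scripted_test(self, title: str, expected_results: List[str]) -> dict:
        # Promote the recorded steps into a scripted test case linked to the same
        # backlog item, so a bug found during the session can be retested once fixed.
        scripted = [Step(s.action, s.data, exp)
                    for s, exp in zip(self.steps, expected_results)]
        return {"title": title, "linked_to": self.backlog_item, "steps": scripted}

# The recorded actions double as the repro steps for any bug raised during the session
# and as the basis of a new scripted test case.
session = ExploratorySession(backlog_item="PBI 42: Order quantity validation")
session.record("Open the order form")
session.record("Enter quantity", data="-1")
session.record("Click Submit")
test_case = session.to_scripted_test(
    "Negative quantity is rejected",
    expected_results=["Form opens", "Value is accepted into the field",
                      "A validation error is shown"],
)
print(test_case["linked_to"], "->", len(test_case["steps"]), "scripted steps")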

Introducing MS VS2012
Microsoft Visual Studio 2012 has now reached Release Candidate, and this release brings some exciting new features to its quality tools. One of the more notable is an exploratory testing component in Microsoft Test Manager (MTM).

With the exploratory testing component we can make our exploratory testing sessions more focused while also delivering full traceability. We still have the option to run a session in a completely maverick manner, or, from within the component, we are presented with a full list of pre-defined Work Items. If our Work Items have been created correctly they will have acceptance criteria, which give the exploratory testing session its focus.

The user can select a Work Item and begin the exploratory testing session. This ensures that every action carried out during the test is associated back to the Work Item, providing full traceability. The session stays in context with the focus on testing, and the information gathered during the session is automatically linked to the Work Item; for example, bugs and test steps are generated without any extra effort from the tester.

As with MTM in VS2010, where executing manual test scripts presents a "Run options" dialog, a similar "Explore options" dialog is now presented when starting an exploratory test, allowing the tester to choose which data and diagnostics to capture during the session.

During execution, by default, the user is presented with a window similar to the test runner, docked on the left, which leaves the remaining desktop real estate free for the application under test. The exploratory test execution window gives the user many opportunities to capture information as they complete the exploratory test, again delivering a huge efficiency gain as well as an opportunity to maintain auditability.

When a bug has been identified, it is extremely simple to create it from the test execution window and capture all of the standard bug information. One of the most powerful aspects of this new component is that, as the tester completes the exploratory test, it records the actual test steps being executed, so when a bug is raised all of the steps to reproduce are automatically populated, along with other information such as screenshots and user notes.

MTM's exploratory testing feature can re-use the information captured during the exploratory test to quickly generate a test case, with steps taken from the tester's actions that were recorded by MTM. Both the new scripted test case and the newly identified bug are associated back to the Work Item, providing a closed loop that ensures traceability and auditability of the exploratory testing process.

Another piece of captured information which is extremely valuable to the resolution of any new bug comes from the IntelliTrace feature, which helps pinpoint the actual line of code where the bug occurred. This, coupled with test steps and screenshots, reduces the negotiation time between tester and developer for every bug - no more bug "ping pong"!

Microsoft Visual Studio – Microsoft Test Manager…… Don’t do exploratory testing without it!

Tuesday, 17 July 2012

ALM- an opportunity for testing?

nFocus' very own Sam Clarke will be speaking at the Software Testing Club Meetup being held at Birmingham Science Park Aston on Wednesday 25 July 2012.

Sam's session is titled, “ALM- an opportunity for testing?”

Abstract
Application Lifecycle Management is increasingly seen as delivering significant benefits to a business, by enabling governance throughout the life of a business application from conception through application retirement. ALM is independent of development methodologies and scales according to the size of the project.

This short workshop delivered by Sam Clarke from nFocus will look at how the testing and quality assurance profession can step up to the challenges of working in this environment, by being an integral part of the governance process driving quality throughout the application lifecycle.

For more information visit the Meetup page here.

nFocus attends Microsoft’s Worldwide Partner Conference 2012

Following on from another brilliant Microsoft Worldwide Partner Conference (WPC), I thought it would be good to update you with the latest and greatest news that emerged during the week. This year, WPC had over 16,000 delegates attending from over 156 countries, making it the largest WPC yet!

Sunday
nFocus landed on Saturday afternoon (07/07) and stayed up rather late to ensure we were on the local time zone ASAP. This was particularly tough as we knew we had an early start on Sunday, since we wanted to attend the pre-conference Application Lifecycle Management day.

One of the key announcements from the Visual Studio ALM airlift session was around the changes to the ALM Competency. From later this year there will be more assessments that organisations have to pass to renew the competency, including new testing assessments (something that we at nFocus have been campaigning for). ALM competency holders will need to be well rounded in this space to become or remain Gold Partners!

There were some informative sessions during the day, including Karthik Ravindran talking about modernising software development with ALM.

We also discovered on Sunday why Microsoft had picked Toronto and its convention centre this year. Check out this photo of Danny Crone and Scott Summers at a restaurant next door!

Monday
Steve Ballmer opened WPC with a motivating session stating “It’s not just talk. It’s real. It’s happening. It’s here.” He talked about a new era, and how Microsoft partners are at the very core of the story.

One of the first announcements of the conference proper was the launch of Office 365 Open.

Interesting stat alert: Office now has 1 billion users worldwide, and Windows 7 has reached 630 million users! Microsoft is not resting on its laurels, and Steve made it quite clear that there are another 6 billion potential customers out there that he wants to grab.

Another exciting announcement came on the Windows 8 front. Windows 8 is on track for release to manufacturers during the first week of August, with general availability expected by the end of October. Enterprise customers could get Windows 8 as early as August!

Day one finished with a number of short demonstrations of practical applications of Kinect for Windows, such as using it with a projector on a wall to recognise hand gestures - effectively turning the wall into a huge touch screen. However, the announcement of Microsoft's acquisition of Perceptive Pixel really stole the show. Perceptive Pixel is a hardware supplier of large touch displays of up to 72 inches! A demo showed how seamlessly Windows 8 can be used on such a large touch screen display, including practical applications of Bing maps and interactive slideshows. Very cool indeed and a must for any boardroom (but perhaps once they have dropped from the current price tag of $80k!!).

Tuesday
The Switch to Hyper-V Program was announced, which aims to help customers transition from VMware infrastructure to Microsoft. It will provide guidance, training and software tools, including the VM Migration Toolkit.

Tuesday also included a fascinating demo of the hardware capabilities coming to Windows Phone 8, including interesting features enabled by integrating Windows Phone 8 and Windows 8. Whilst a load of new apps and games were shown, one of the unique features we noticed was the ability to customise live tiles to receive varying levels of real-time information, providing a completely personalised phone interface. The phone really does get tailored into a tool that suits each user's preferences and interests.

One of the last noteworthy announcements from day two was the launch of the "What's your cause challenge". Microsoft has challenged its partners to nominate 500 eligible non-profit organisations between July 10 and August 31 2012. For more information visit the website.

Wednesday
The final global keynote on Wednesday was dominated by a fantastic session from Chief Operating Officer Kevin Turner. It was an inspirational session built around the vision of a continuous cloud service for every person, every device and every business. He also drew a roar of laughter from the crowd with the viral video of someone asking Siri "what's the best smart phone ever" - you can watch it again here.

It was fascinating to see that Microsoft's investment in R&D increased by £400M last year, eclipsing Google, VMware, Oracle and their other competitors. The number of new releases across so many products (System Center 2012, SQL Server 2012, Windows Server 2012, improved virtual machine capabilities for Windows Azure, Windows 8 and Windows Phone 8) is testament to this investment.

Thursday
The last day of the conference was made up of regional keynotes, with the UK session titled "The Year Ahead – A winning Opportunity". The event was compered by the fantastic Laura Atkinson (who we had the pleasure of spending some time with during the conference) and included sessions by Barry Ridgeway and Carl Noakes, plus an inspirational session on teamwork, working harder and working together from Canadian Olympic gold medallist Adam Kreek, who was also brave enough to pass around his gold medal!

Conclusion
This year’s WPC was one of the best we have attended. It is great to see Microsoft leading the way with innovative technology across the board, from enterprise-level platforms to small and medium business applications to home users and gamers. I came away very enthused, feeling that Microsoft is once again bringing technological advances to the world in consumable formats. I am confident that nFocus’ decision to specialise in the Microsoft platform is the right one when there is so much new and exciting technology coming out of what seems to be a rejuvenated and, once again, exciting technological thought leader.

Monday, 9 July 2012

Is it a Glitch or a Meltdown?

It depends on who you talk to as to whether the most recent problem faced by the RBS group is described as a mere “glitch” or a catastrophic “meltdown”. As you might expect, Stephen Hester, CEO of RBS, firmly sticks to the former.

We still don’t know the full details of what went wrong at RBS, NatWest and Ulster Bank. What we do know is that a change to an overnight batch caused a problem which meant millions of payments never got made and some people could not log in to online banking systems. The transactions then backed up, so by the time the problem was fixed there was far too much load for the system to cope with. Millions of people were affected for days on end, with some very serious consequences such as being unable to pay mortgages, bills and medical costs. The unfortunate customers of Ulster Bank were shoved to the back of the queue for a fix, and reports today say that an estimated 100,000 people are still unable to access their money.

I don’t understand how a change to a payment batch process would have any impact on the login functionality of a web interface. Does this mean there were two separate problems that occurred simultaneously, or perhaps that RBS purposely locked people out of their accounts to minimise the number of transactions they needed to process?

I don’t want this posting to be (only) about my conspiracy theory or just an easy swipe at the banks, so I want to suggest a solution to what I see as a deep-seated problem, based on my experience of working in banks and specifically RBS. The kind of problem that RBS is dealing with could have happened many times over in any number of systems - many of which I used to be responsible for the quality of. It was just a matter of time, and I think there are three root causes.

1. The risk of regression has changed and is not fully appreciated

All financial institutions make money out of taking risks, whether it be traders taking a risk on a share price rising, a mortgage manager taking a risk on whether someone will default on their loan, or an insurer taking the risk that a policy holder will make a claim on their car insurance. There are analysts who help determine the market risk of these things, and this in turn helps make money. The same attitude, that taking risk is a good thing, is applied to the management of their IT systems. Unfortunately, the same detailed risk analysis is not carried out when a change is made and, as RBS have just found out, things can go badly wrong.

The problem facing banks is that things have changed in the past few years. Since Twitter and Facebook gave everybody a soapbox and a global reach, it has become impossible to contain the fallout of a problem like this. Couple this with the zeal with which the media and general public have pounced on any opportunity to bash the bankers since the financial crisis, and the reputational and financial impact of problems increases exponentially. If we accept that:

Risk = Probability of Occurrence x Impact of Occurrence

then, as risk is proportional to the impact of occurrence, it becomes clear that something should be done to mitigate the exponentially increased risk. Unfortunately, in too many areas of a large organisation such as a bank, the decision makers have not recognised this change, have not changed their risk management strategy and are still managing IT with outdated beliefs and techniques.
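A back-of-the-envelope illustration makes the point; every figure below is invented purely to show the arithmetic, not taken from RBS or any real bank:

# Illustrative only: all numbers are assumptions chosen to show the calculation.
probability_of_failure = 0.02             # assumed chance a given change causes a production incident

impact_contained_quietly = 1_000_000      # hypothetical cost when fallout can be contained
impact_public_and_prolonged = 50_000_000  # hypothetical cost in the social media era

risk_before = probability_of_failure * impact_contained_quietly
risk_after = probability_of_failure * impact_public_and_prolonged

print(f"Expected cost per change, fallout contained: {risk_before:,.0f}")  # 20,000
print(f"Expected cost per change, fallout public:    {risk_after:,.0f}")   # 1,000,000
# The probability has not changed, but fifty times the impact means fifty times
# the risk - yet the risk management strategy often stays exactly the same.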

2. The cost benefit of automated regression testing and performance testing doesn’t stack up

When changes are made to a system, the best and easiest way to regression-test and performance-test it is to run a set of automated tests. This isn’t always easy. Maintaining the scripts takes time and money, there are high peaks and low troughs in the required effort, and in a fast-paced environment like a bank, system changes are rarely documented. Skills can be difficult to find and very difficult to mobilise. As a result, releases often go out having had fewer automated regression and performance tests than there were bankers in this year’s Honours List. However, it is a fallacy to say that the cost benefit does not stack up. As Stephen Hester has just found out, the benefit of running a regression test on the recent change, and a performance test to ensure that the batch payment system in question could handle a 24-hour backlog of transactions, would have far outweighed the costs he has just incurred.
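A crude comparison, again using wholly hypothetical figures, shows how the sums change once the expected cost of an incident is weighed against the cost of maintaining the automated packs:

# Illustrative only: all figures are assumptions chosen to show the calculation.
releases_per_year = 12
automation_cost_per_release = 25_000       # assumed cost of maintaining regression and performance packs

incident_probability_without_packs = 0.10  # assumed chance per release of a serious production incident
incident_probability_with_packs = 0.01
cost_of_major_incident = 100_000_000       # assumed compensation, remediation and reputational cost

def expected_annual_cost(automation_cost: float, incident_probability: float) -> float:
    # Automation spend plus the probability-weighted cost of incidents.
    return releases_per_year * (automation_cost + incident_probability * cost_of_major_incident)

print(f"Without automated packs: {expected_annual_cost(0, incident_probability_without_packs):,.0f}")
print(f"With automated packs:    {expected_annual_cost(automation_cost_per_release, incident_probability_with_packs):,.0f}")
# Even generous maintenance costs are dwarfed by the expected cost of the incidents they prevent.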

3. There is a disconnect between Run the Bank and Change the Bank

Many organisations have an operational expenditure (opex) budget and a capital expenditure (capex) budget, and the banks are no exception, often calling these Run the Bank and Change the Bank. In my experience this leads to a disconnect between the project delivery teams and the maintenance team once a release goes live. For example, it is not in the project manager's interests to create a regression pack as part of the project, as this absorbs resource and budget. However, once the project goes live, the knowledge and skills to create the regression pack are no longer available. In most large banks this problem is exacerbated as multi-vendor outsourcing has led to completely different companies, with completely different loyalties, doing the delivery and the maintenance. There is also a lack of accountability for poor quality code. Poorly written contracts and badly managed SLAs mean that development teams are not held to account for bugs or production incidents – or, even worse, get paid to fix their own poor quality code. When cost per head is the benchmark by which CTOs and their IT organisations are judged, they don’t have the responsibility, incentive or capability to build quality into a product.

What is the Answer?

So, if the question is “How can we ensure that system changes don’t cause huge embarrassment to organisations, especially banks, if and when they go wrong?” what is the answer?

I think the answer is managing an application from cradle to grave, not just managing the delivery lifecycle and throwing a problem over the fence. It’s understanding the Total Cost of Ownership of a system, including the cost of thorough testing, following good practices, maintenance and future releases – even if that means delivering fewer changes and less functionality. It’s accepting that things can and will go wrong if too many risks are taken. It’s seeing the bigger picture and understanding that upfront investment in better practices and quality controls can provide future cost savings and a return on that investment. It’s making use of tools to do more testing, more cost effectively. It’s creating efficiencies between each and every person involved in the lifetime of a system. It’s sharing information between relevant stakeholders and making sure that everyone involved is a stakeholder. It’s understanding that a larger proportion of the budget needs to be spent on quality assurance activity. It’s investing in the right tools that handle the interfaces between teams and help them collaborate. It’s being able to easily create and share relevant information. It’s Application Lifecycle Management.

As this round of bank bashing starts to run out of steam and Bob Diamond and Marcus Agius take the heat off Stephen Hester with their amusing and heart-warming “I’m Spartacus” routine following the Barclays Libor scandal, I urge IT professionals everywhere to learn from the mistakes made at RBS. None of us can claim that we have worked in organisations that don’t have some (if not all) of the same problems. Let’s improve the way we deliver software with a better understanding of the risk of getting it wrong and greater focus on quality and stability. Isn’t that what users want after all?