Wednesday, 7 November 2012

Visual Studio 2012 Update 1 – Final CTP

Okay, some interesting things are on the horizon for Visual Studio 2012 now that Update 1 has reached its final CTP. That means we will be getting the update very soon. But what does this mean for testing?
Chuck's post on the ALM blog shows the full list, but I thought I would run a series of posts on each of the bits relevant to testing and what they mean to us. As a starter for ten, here is what we can look forward to:
New Features in Visual Studio 2012 Update 1
(ordered by what we hear customers asking for regularly)  
  • Cross browser testing support for coded UI tests 
  • The ability to pause and resume manual test sessions in Microsoft Test Manager  
  • Edit Test Case properties directly from the test runner of Microsoft Test Manager  
  • The ability to create coded UI tests for SharePoint 2010 Applications  
  • Create an image action log from exploratory tests  
  • Populate Test Suites using hierarchical queries in Microsoft Test Manager
  • Command line functionality for copying entire test suites and all the test cases they contain (deep copy; see the sketch after this list)
  • Data collectors populate trait information for Visual Studio unit test grouping  
  • Usability improvements for coded UI tests  
  • The ability for the Test Explorer to group and filter tests 
  • Publish test results to TFS from the command-line  
  • Web and Load testing support for SharePoint applications  
  • Automatic updates for Microsoft Test Manager and Microsoft Feedback Client  
  • IntelliTrace collection for SharePoint Applications  
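
Since a couple of the items above are command-line features, here is a minimal sketch of how I would expect the new deep copy to be driven from a script once Update 1 lands. It assumes tcm.exe from a Visual Studio 2012 Update 1 installation is on the PATH; the collection URL, team project name and suite IDs shown are placeholders, not real values, and you should check `tcm suites /clone /?` for the exact switches in the shipped build.

```python
# Minimal sketch: drive the new "deep copy" of a test suite via tcm.exe.
# Assumes tcm.exe (Visual Studio 2012 Update 1) is on the PATH; the URL,
# project name and suite IDs below are placeholders, not real values.
import subprocess

def clone_suite(collection_url, team_project, source_suite_id, destination_suite_id):
    """Copy an entire test suite, and the test cases it contains, into another suite."""
    args = [
        "tcm.exe", "suites", "/clone",
        "/collection:{0}".format(collection_url),
        "/teamproject:{0}".format(team_project),
        "/suiteid:{0}".format(source_suite_id),
        "/destinationsuiteid:{0}".format(destination_suite_id),
    ]
    # tcm reports a clone operation id on stdout; surface it so progress can be checked.
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    print(result.stdout)

if __name__ == "__main__":
    # Placeholder values: substitute your own collection URL, project and suite ids.
    clone_suite("http://tfsserver:8080/tfs/DefaultCollection", "MyProject", 123, 456)
```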
 
There is also some cool stuff coming with regard to Test Case management on the web, which can be found on Brian's blog post. He later took this back in this post, which is disappointing, but it is very much something to look forward to in Update 2. I can tell you that we have been beta testing it and it is very exciting :)
 
Danny

Wednesday, 3 October 2012

Visual Studio 2012 Testing Launch Event

This month (October) nFocus will be hosting a Visual Studio 2012 Testing Launch Event in both Birmingham and London! This event will demo the valuable testing tools currently available within Visual Studio, as well as the new functionality available within the 2012 release. Following the sessions, we will be holding a drinks reception with canapés to celebrate the launch.

The event also features Giles Davies, Developer Tools Technology Specialist from Microsoft.

The agenda:
13:00 – 13:30   Registration and coffees
13:30 – 14:15   Manual Testing with Microsoft Test Manager 2012 – Danny Crone, Technical Director, nFocus Testing
14:15 – 15:00   Performance testing in Visual Studio 2012 – Scott Summers, Director, nFocus Testing
15:00 – 15:30   Coffee break
15:30 – 16:15   New exploratory testing functionality within Visual Studio 2012 – Giles Davies, Developer Tools Technology Specialist, Microsoft
16:15 – 17:00   It’s not just pie in the sky, it’s testing in the clouds – Danny Crone, Technical Director, nFocus Testing
17:00 – 17:30   nFocus and Visual Studio Quality Tools – Scott Summers, Director, nFocus Testing
17:30 – 19:00   VS2012 launch celebration drinks reception and canapés

If you would like to attend, please click the appropriate venue below:
 

Tuesday, 31 July 2012

Creating build and test labs in the sky

Thought I would share a great blog post by fellow ALM Ranger Mathias Olausson. In it, he talks about how to use the power of Azure and the hosted TFS service, Team Foundation Service, to execute Build-Deploy-Test scenarios.

The article is essentially about using Team Foundation Service to manage the build and deployment of a solution to an Azure test environment and then having an Azure test client run tests against that environment. The benefit is that the only on-premises requirement is a client workstation running Visual Studio 2012 to orchestrate the scenario; I suppose you could even put that in the cloud too...

You can read the full article here: Using Visual Studio 2012 Lab Manager to Create a Build Lab in the Sky

Friday, 27 July 2012

Exploratory testing in Microsoft Visual Studio 2012

What is exploratory testing?
Some people think exploratory testing is carried out in an uncontrolled manner: the tester sets out to determine whether the software actually works without giving it a great deal of thought or considering his or her goals.

For exploratory testing to be considered a success, the tester must have a certain level of knowledge about the application under test (AUT) as well as a creative approach. Unlike scripted testing, an exploratory testing session may not be linked to any form of requirement, user story or backlog item, and so it can become an exercise that fails to deliver an audit trail.

Given the lack of test steps and of the testing context that requirements, user stories and backlog items provide, exploratory testing can result in bugs being logged that are difficult for developers to reproduce: without test steps, how is a bug reproduced? And unless testers capture the exploratory test steps as they go, which becomes a burden as well as a time overhead, how do they demonstrate what has been done?

As testing is an activity that produces information about the AUT, in my opinion exploratory testing must be given the same level of control as any other testing approach. Exploratory testing sessions should be focused on product backlog items, and each should have a direction and specific goals. It is also useful, when performing an exploratory testing session, to capture information about which steps are being taken and which data is being used; this information can help the developer reproduce a bug when one is uncovered. Capturing this level of information along the way also provides an opportunity to quickly convert an exploratory test into a scripted test when a bug is found, which then enables the bug to be retested once it is fixed.

Introducing MS VS2012
Microsoft Visual Studio 2012 has now reached Release Candidate, and this release brings some exciting new features to its quality tools. One of the more notable is an exploratory testing component in Microsoft Test Manager (MTM).

With the exploratory testing component we can make our exploratory testing sessions more focused while also delivering full traceability. We still have the option to run a session in a completely maverick manner, or, from within the component, we are presented with a full list of pre-defined Work Items. If our Work Items have been created correctly they will have acceptance criteria, which give the exploratory testing session some focus.

The user can select a Work Item and begin the exploratory testing session. This ensures that every action carried out during the test is associated back to the Work Item, providing full traceability. The session stays in context, the focus remains on testing, and the information gathered during the session is automatically linked to the Work Item; for example, bugs and test steps are generated without any extra effort from the tester.

As with MTM in VS2010, where executing a manual test script presents a "Run options" dialog, a similar "Explore options" dialog is now presented when executing an exploratory test, allowing the tester to choose which data and diagnostics to capture during the session.

During execution, by default, the user is presented with a window similar to the test runner, docked on the left, which allows the application under test to be surfaced in the remaining desktop real estate. The exploratory test execution window gives the user many opportunities to capture information as they complete the exploratory test, again presenting a huge time-efficiency gain as well as an opportunity to maintain auditability.

When a bug has been identified, it is extremely simple to create it from the test execution window and capture all of the standard bug information. One of the most powerful aspects of this new component is that, as the tester completes the exploratory test, it records the actual test steps being executed, so when a bug is raised, all of the steps to reproduce are automatically populated along with other information such as screenshots and user notes.

The MTM exploratory testing feature can re-use the information captured during the exploratory test to quickly generate a test case with steps taken from the tester's actions recorded by MTM. Both the new scripted test case and the newly identified bug are associated back to the Work Item, providing a closed loop that ensures traceability and auditability of the exploratory testing process.

Another piece of captured information that is extremely valuable to the resolution of any new bug comes from IntelliTrace, which helps pinpoint the actual line of code where the bug occurred. This, coupled with test steps and screenshots, reduces the negotiation time between tester and developer for every bug - no more bug "ping pong"!

Microsoft Visual Studio – Microsoft Test Manager… Don’t do exploratory testing without it!

Tuesday, 17 July 2012

ALM- an opportunity for testing?

nFocus' very own Sam Clarke will be speaking at the Software Testing Club Meetup being held at Birmingham Science Park Aston on Wednesday 25 July 2012.

Sam's session is titled, “ALM- an opportunity for testing?”

Abstract
Application Lifecycle Management is increasingly seen as delivering significant benefits to a business by enabling governance throughout the life of a business application, from conception through to retirement. ALM is independent of development methodologies and scales according to the size of the project.

This short workshop delivered by Sam Clarke from nFocus will look at how the testing and quality assurance profession can step up to the challenges of working in this environment, by being an integral part of the governance process driving quality throughout the application lifecycle.

For more information visit the Meetup page here.

nFocus attends Microsoft’s Worldwide Partner Conference 2012

Following on from another brilliant Microsoft Worldwide Partner Conference (WPC), I thought it would be good to update you with the latest and greatest news that emerged during the week. This year, WPC had over 16,000 delegates attending from over 156 countries, making it the largest WPC yet!

Sunday
nFocus landed on Saturday afternoon (07/07) and stayed up rather late to get onto the local time zone as soon as possible. This was particularly tough as we knew we had an early start on Sunday: we wanted to attend the pre-conference Application Lifecycle Management day.

One of the key announcements from the Visual Studio ALM airlift session was around the changes to the ALM Competency. From later this year there will be more assessments that organisations have to pass to renew the competency, including new testing assessments (something that we at nFocus have been campaigning for). ALM Competency holders will need to be well rounded in this space to become or remain Gold Partners!

There were some informative sessions during the day, including Karthik Ravindran talking about modernising software development with ALM.

We also discovered on Sunday why Microsoft had picked Toronto and its convention centre this year. Check out this photo of Danny Crone and Scott Summers at a restaurant next door!

Monday
Steve Ballmer opened WPC with a motivating session stating “It’s not just talk. It’s real. It’s happening. It’s here.” He talked about a new era, and how Microsoft partners are at the very core of the story.

One of the first announcements of the conference proper was the launch of Office 365 Open.

Interesting stat alert: Office now has 1 billion users worldwide, and Windows 7 has reached 630 million users! Microsoft is not resting on its laurels, and Steve made it quite clear that there are another 6 billion potential customers out there that he wants to grab.

Another exciting announcement came on the Windows 8 front. Windows 8 is on track for release to manufacturers during the first week of August, with general availability expected by the end of October. Enterprise customers could get Windows 8 as early as August!

Day one finished with a number of short demonstrations of practical applications of Kinect, such as using it with a projector on a wall to recognise hand gestures – effectively turning the wall into a huge touch screen. However, the announcement of Microsoft’s acquisition of Perceptive Pixel really stole the show. Perceptive Pixel is a hardware supplier of large touch displays of up to 72 inches! A demo showed how seamlessly Windows 8 can be used on such a large touch-screen display, including practical applications of Bing Maps and interactive slideshows. Very cool indeed, and a must for any boardroom (but perhaps once they have dropped in price from the current price tag of $80k!).

Tuesday
The Switch to Hyper-V Program was announced, which aims to help customers transition from VMware infrastructure to Microsoft. It provides guidance, training and software tools, including the VM Migration Toolkit.

Tuesday also included a fascinating demo of the hardware capabilities coming to Windows Phone 8, including interesting features enabled by integrating Windows Phone 8 and Windows 8. Whilst a load of new apps and games were shown, one of the unique features we noticed was the ability to customise live tiles to receive varying levels of real-time information, providing a completely personalised phone interface. The phone really does get tailored to become a tool that suits each user's preferences and interests.

One of the last noteworthy announcements from day two was the launch of the “What’s your cause challenge”. Microsoft has challenged its partners to nominate 500 eligible non-profit organisations between July 10 and August 31 2012. For more information visit the website.

Wednesday
The final global keynote session on Wednesday was dominated by a fantastic session from Chief Operating Officer Kevin Turner. It was an inspirational session, with the vision of a continuous cloud service for every person, every device and every business. He also drew a roar of laughter from the crowd with the viral video of someone asking Siri “what’s the best smart phone ever”; you can watch it again here.

It was fascinating to see that Microsoft’s investment in R&D increased by £400M last year, eclipsing Google, VMware, Oracle and their other competitors. The number of new releases across so many products (System Center 2012, SQL Server 2012, Windows Server 2012, improved virtual machine capabilities for Windows Azure, Windows 8, Windows Phone 8) is testament to this investment.

Thursday
The last day of the conference was made up of regional keynotes, with the UK session titled “The Year Ahead – A Winning Opportunity”. The event was compèred by the fantastic Laura Atkinson (who we had the pleasure of spending some time with during the conference), and included sessions by Barry Ridgeway and Carl Noakes, plus an inspirational session on teamwork, working harder and working together by Canadian Olympic gold medallist Adam Kreek, who was also brave enough to pass around his gold medal!

Conclusion
This year’s WPC was one of the best we have attended. It is great to see Microsoft leading the way with innovative technology across the board, from enterprise-level platforms to small and medium business applications to home users and gamers. I came away very enthused and feeling that Microsoft is once again bringing technological advances to the world in consumable formats. I am confident that nFocus’ decision to specialise in the Microsoft platform is the right one when there is so much new and exciting technology coming out of what seems to be a rejuvenated and, once again, exciting technological thought leader.

Monday, 9 July 2012

Is it a Glitch or a Meltdown?

It depends on who you talk to as to whether the most recent problem faced by the RBS group is described as a mere “glitch” or a catastrophic “meltdown”. As you might expect, Stephen Hester, CEO of RBS, firmly sticks to the former.

We still don’t know the full details of what went wrong at RBS, NatWest and Ulster Bank. What we do know is that a change to an overnight batch caused a problem which meant millions of payments never got made and some people could not log in to online banking systems. The transactions then backed up, and so by the time the problem was fixed there was far too much load for the system to cope with. Millions of people were affected for days on end, with some very serious consequences such as being unable to pay mortgages, bills and medical costs. The unfortunate customers of Ulster Bank were shoved to the back of the queue for a fix, and reports today say that an estimated 100,000 people are still unable to access their money.

I don’t understand how a change to a payment batch process would have any impact on the login functionality of a web interface. Does this mean there were two separate problems that occurred simultaneously, or perhaps that RBS purposely locked people out of their accounts to minimise the number of transactions they needed to process?

I don’t want this posting to be (only) about my conspiracy theory or just an easy swipe at the banks, so I wanted to suggest a solution to what I see as a deep-seated problem, based on my experience of working in banks and specifically RBS. The kind of problem that RBS is dealing with could have happened many times over in any number of systems – many of which I used to be responsible for the quality of. It was just a matter of time, and I think there are three root causes.

1. The risk of regression has changed and is not fully appreciated

All financial institutions make money out of taking risks, whether it be traders taking a risk on a share price rising, a mortgage manager taking a risk on whether someone will default on their loan, or an insurer taking the risk that a policy holder will make a claim on their car insurance. There are analysts who help determine the market risk of these things, and this in turn helps make money. This attitude, that taking risk is a good thing, is applied to the management of their IT systems. Unfortunately, the same detailed risk analysis is not carried out when a change is made, and as RBS have just found out, things can go badly wrong.

The problem facing banks is that things have changed in the past few years. Since Twitter and Facebook gave everybody a soapbox and a global reach, it has become impossible to contain the fallout of a problem like this. Couple this with the zeal with which the media and general public have pounced on any opportunity to bash the bankers since the Financial Crisis, and the reputational and financial impact of problems increases exponentially. If we accept that:

Risk = Probability of Occurrence x Impact of Occurrence

then as risk is proportional to the Impact of occurrence, it becomes clear that something should be done to mitigate the exponentially increased risk. Unfortunately, in too many areas of a large organisation such as a bank, the decision makers have not recognised this change, have not changed their risk management strategy and are still managing IT with outdated beliefs and techniques.
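
To make that concrete, here is a tiny illustrative calculation using the formula above. The probability and impact figures are entirely made up for the sake of the example; the point is simply that if press and social-media coverage amplify the impact of an incident tenfold while the probability of a failed change stays the same, the risk carried by every release grows tenfold too.

```python
# Illustrative only: made-up figures showing that risk scales directly with impact.
def risk(probability_of_occurrence, impact_of_occurrence):
    """Risk = Probability of Occurrence x Impact of Occurrence."""
    return probability_of_occurrence * impact_of_occurrence

probability = 0.02                     # hypothetical chance a given change causes a serious incident
impact_pre_social_media = 1_000_000    # hypothetical cost of such an incident a few years ago
impact_post_social_media = 10_000_000  # the same incident amplified by press and social media

print(risk(probability, impact_pre_social_media))   # 20000.0
print(risk(probability, impact_post_social_media))  # 200000.0, ten times the risk for the same change
```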

2. The cost benefit of automated regression testing and performance testing doesn’t stack up

When changes are made to a system, the best way to regression and performance test it is to run a set of automated tests. This isn’t always easy, though: maintaining the scripts takes time and money, there are high peaks and low troughs in the required effort, and in a fast-paced environment like a bank, system changes are rarely documented. Skills can be difficult to find and very difficult to mobilise. As a result, releases often go out having had fewer automated regression tests and performance tests than there were bankers in this year’s Honours List. However, it is a fallacy to say that the cost benefit does not stack up. As Stephen Hester has just found out, the benefit of running a regression test on the recent change, and a performance test to ensure that the batch payment system in question could handle a 24-hour backlog of transactions, would have far outweighed the costs he has just incurred.

3. There is a disconnect between Run the Bank and Change the Bank

Many organisations have an operational expenditure (opex) budget and a capital expenditure (capex) budget, and the banks are no exception, often calling these Run the Bank and Change the Bank. In my experience this leads to a disconnect between the project delivery teams and the maintenance team once a release goes live. For example, it is not in the project manager's interest to create a regression pack as part of the project, as this absorbs resource and budget. However, once the project goes live, the knowledge and skills to create the regression pack are no longer available. In most large banks this problem is exacerbated because multi-vendor outsourcing has led to completely different companies, with completely different loyalties, doing the delivery and the maintenance. There is also a lack of accountability for poor quality code. Poorly written contracts and badly managed SLAs mean that development teams are not held to account for bugs or production incidents – or, even worse, get paid to fix their own poor quality code. When cost per head is the benchmark by which CTOs and their IT organisations are judged, they don’t have the responsibility, incentive or capability to build quality into a product.

What is the Answer?

So, if the question is “How can we ensure that system changes don’t cause huge embarrassment to organisations, especially banks, if and when they go wrong?” what is the answer?

I think the answer is managing an application from cradle to grave, not just managing the delivery lifecycle and throwing a problem over the fence. It’s understanding the Total Cost of Ownership of a system, including the cost of thorough testing, following good practices, maintenance and future releases – even if that means delivering fewer changes and less functionality. It’s accepting that things can and will go wrong if too many risks are taken. It’s seeing the bigger picture and understanding that upfront investment in better practices and quality controls can provide future cost savings and a return on that investment. It’s making use of tools to do more testing, more cost effectively. It’s creating efficiencies between each and every person involved in the lifetime of a system. It’s sharing information between relevant stakeholders and making sure that everyone involved is a stakeholder. It’s understanding that a larger proportion of the budget needs to be spent on quality assurance activity. It’s investing in the right tools that handle the interfaces between teams and help them collaborate. It’s being able to easily create and share relevant information. It’s Application Lifecycle Management.

As this round of bank bashing starts to run out of steam and Bob Diamond and Marcus Agius take the heat off Stephen Hester with their amusing and heart-warming “I’m Spartacus” routine following the Barclays Libor scandal, I urge IT professionals everywhere to learn from the mistakes made at RBS. None of us can claim that we have worked in organisations that don’t have some (if not all) of the same problems. Let’s improve the way we deliver software with a better understanding of the risk of getting it wrong and greater focus on quality and stability. Isn’t that what users want after all?

Friday, 11 May 2012

Testing and Finance Conference 16-17 May

nFocus is exhibiting and presenting at next week’s Testing and Finance conference being held in London.

We will also be offering an opportunity to try out the testing functionality within Visual Studio 2010. Come along to the stand either to see a demo or to try the tools for yourself.

We’re currently giving away a limited number of free tickets to the audience of our blog. If you would like to come along please contact me directly at ryan_james@nfocus.co.uk. These tickets will be given out on a first-come, first-served basis so please get in touch to avoid disappointment.

For more information about the conference please click here, and to read the abstract of our session please click here.

Monday, 16 April 2012

Full application lifecycle testing… are Application Lifecycle Management (ALM) tools the solution?

A colleague and I recently presented at SIGIST on behalf of Microsoft, and during that session we asked the question: how many of you do full lifecycle testing? We had explained what we meant by full lifecycle testing – testing of all the project deliverables, from requirements and design all the way through to code delivery and beyond. However, we were amazed by the response: from an audience of 80+ people, only a very small number put their hands up. I must admit this did surprise me, bearing in mind that the people who usually attend SIGIST events are progressive test professionals, keen on testing according to best practice and mainly professionally qualified. So I asked myself the question… why don’t they?

Why doesn’t everyone do full lifecycle testing?
From my own experience, the reason can be cultural – testers only get involved during the development phase because of a lack of understanding of the value of testing and QA. Another reason could very well be the absence of integration between BAs (business analysts), development teams and test teams. In some cases, however, it is the lack of cohesive processes and tools to support the process.

So are ALM tools the answer…? Anything that helps support the testing and QA process will help embed such good practice.
In my recent experience of using Microsoft Visual Studio 2010 and Microsoft Test Manager 2010, I have found that features such as having the requirements, code, builds and testware all in one repository, coupled with the ability to create “rich, actionable bugs”, make full lifecycle testing virtually a “no-brainer”, because they make collaboration between all the parties that help build the software extremely easy. Even where you have disparate development and test teams (especially in an industry with a proliferation of off-shore test and development functions), communication and collaboration are facilitated and even optimised by ALM tools. However, even with the availability of ALM tools such as Visual Studio, you can lead a horse to water, as they say… but you can’t make it drink!

How do you get full lifecycle testing embedded within an organisation?
After the presentation at SIGIST, I was asked by a delegate the following question: “I understand how ALM tools can help, but how do you convince my company of the benefits of full lifecycle testing?”

This is a situation I have faced on many occasions, at all levels of software testing, during my career. There was a time before it was widely acknowledged that the early prevention of defects is far cheaper than a cure. What I would do to help get this message across to the management and development teams was to manually track the defects found and, wherever possible, trace them back to the requirements. This proved to be a powerful argument for early prevention, and with tangible evidence I found it worked in every case.

In the fast-moving IT industry, the majority of people now understand the value of preventing defects as early as possible in the development lifecycle. But even with this understanding, they still require the knowledge and QA skills to help embed such principles. In my mind there is no doubt that ALM tools such as Microsoft Visual Studio 2010 can make it a much easier and less painful process.

Monday, 2 April 2012

The HM Government Cloudstore; Innovative technology… innovative approach!

Thought I’d jot down a bit about one of our recent engagements: the UK Gov Cloudstore App Store with SOLIDSOFT. This capability was launched in late February to much media fanfare and acclaim. Those of you who have followed this story will have seen that the first iteration was built and tested in around 4 weeks. Solidsoft asked us to work in partnership on this venture as they knew we could embed ourselves into their small agile project team and get deeply absorbed into the development lifecycle. We were excited to get involved in this project as it utilises “best of breed” technologies such as Windows Azure and bleeding-edge Microsoft design principles, namely the Metro-style UI.

The way we achieved the delivery of such a relatively big undertaking in such a short space of time was our collective commitment to agile/iterative development and testing. As soon as there was anything tangible to test, it was unit and system tested, and it was continuously regression tested throughout the project lifecycle. This meant that there was always a good-quality code base for us to work from, one that continuously increased in quality, iteration by iteration.

Another challenge brought about by the compressed timeline was securing a scalable test environment. We got around this by leveraging the application's Windows Azure-powered platform: we placed performance test agents within the Azure cloud layer, which allowed us to use its on-demand elasticity to scale capacity up and down as required. I feel really proud of our achievements on this short-burn, high-profile project and really enjoyed working so closely with our trusted partners such as SOLIDSOFT, and on cutting-edge, relevant technologies such as Windows Azure.



What is in the CloudStore?

As part of the Government's G-Cloud programme we have set up a framework agreement for the public sector to be able to procure a range of cloud-based services. You can find out more about the programme at the G-Cloud blog.

The services include:
  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)
  • Specialist services such as configuration, management and monitoring