Friday, 11 December 2009

Share and Share alike in VS2010 "MSTLM"

Hi there, I finally got round to writing another blog, and with this one I am really excited. Recently I have been spending a lot of time getting to know Visual Studio 2010 and what this new release means to us testers. Well, here's the thing: VS2010 is great for testers. Finally, Microsoft has produced a tool that I want to use as a tester. There are many great things in VS2010, like test planning, manual testing, action recordings, Coded UI automated tests, IntelliTrace, the Test Impact collector and rich "actionable" bugs, to name but a few. We also have a dedicated UI: Microsoft Test and Lab Manager (MSTLM)! I can see that I am going to be busy writing articles on all of these subjects, but for today I just want to talk about the feature called "shared steps".
All too often the lowly manual test is neglected, but this is not the case in VS2010: the Test Case work item now has the ability to record execution information against individual test steps, which is made all the more powerful by the implementation of shared steps. By way of an example, if your SUT is an online fashion and beauty retail system, and many of your tests transact all the way through to a purchase using a Visa credit card, then you may well have a candidate for a shared step: PayByCreditCard. In MSTLM (Testing Center) you would go to the "Organize" tab and then the "Shared Steps Manager".
Here we can see our shared step called "PayByCreditCard", so digging into the detail we can have a look at how the shared step is constructed:
We can see that the shared step is made up of a sequence of actions with their expected results. Of interest here is the ability to include attachments with steps: by way of an example, we can see the step that tells the user executing the shared step to "click on the radio button underneath the Visa image". It could be that we have a Visa image and a Visa Debit image, and a good way to distinguish between them would be to attach a snapshot of what the image should look like. Also visible here is the use of parameters and default parameter values: we have an @enddate parameter, and the calling test would be asked to use the value 09/12 for this step (unless of course it is overridden in the calling test).
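
As an aside, shared steps are just work items under the covers, so you can enumerate them programmatically through the TFS 2010 object model. Here is a minimal sketch of the idea; the server/collection URL is a placeholder and I am assuming the out-of-the-box "Shared Steps" work item type name:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class SharedStepLister
{
    static void Main()
    {
        // Placeholder server/collection URL - substitute your own.
        var tpc = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();

        // Shared steps are stored as work items of type "Shared Steps".
        WorkItemCollection sharedSteps = store.Query(
            "SELECT [System.Id], [System.Title] FROM WorkItems " +
            "WHERE [System.WorkItemType] = 'Shared Steps'");

        foreach (WorkItem step in sharedSteps)
            Console.WriteLine("{0}: {1}", step.Id, step.Title);
    }
}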

So now, all we need to do is reference this shared step in an actual test:

Whilst editing the test case we can use the "Insert shared steps" button to reference the "PayByCreditCard" shared step that we created earlier. See also how it brings with it all of the parameters from the shared steps. This allows the user to override the default parameter values in the shared steps, or to create multiple iterations; as you can see, here we have used a different credit card number, 8888888888888888, for this test. Now, as we use Test Runner to execute the test, we are prompted for the values that we specified at the test level:


But here is the great news: Test Runner supports "action recording", which is an elementary interface on top of the automation engine. If we use this to record the shared step, then for all subsequent tests that use the same shared step we can play it back in a fast-forward style. My experience of this feature is that, so long as you pay good attention to what the engine is recording, it will save you hours of manual testing. Pure genius!

Tuesday, 1 December 2009

Video Introduction to Google Wave

This is a really neat video which looks at 15 features of Google Wave...Enjoy.

Best viewed in full screen mode.


The Rosy translation robot is a really interesting and useful feature! More to follow.

Friday, 13 November 2009

Manual Intervention in an Automated Test

Supporting Files
- nFocusMessageBox.zip

You may be wondering why you would want to add manual intervention to an automated test run. I used to have the same thoughts, but I have since seen the benefits of the approach. Before we start, it is important to understand that manual intervention in an automated test run is normally a bad idea; for one thing, the tests will no longer run unattended.

First let’s explore where this approach is useful:

  • Manual verification: you might have to verify objects that cannot be mapped, or are difficult to test, e.g. Flash or Silverlight.
  • External processing: in order to automate an end-to-end process, a batch process might have to be executed. If you can use Axe to execute the batch process, great.
  • Getting around CAPTCHA.

I used manual verification successfully on a project a few years ago to verify the look and feel of a website, and I currently need to use external processing to test an ETL process.

For those of you who do not know, ETL stands for Extract, Transform and Load. Testers do this all the time when setting up test data.

The high level testing steps are as follows:

1. Stop the ETL process and clear down the system

2. Load all data into the source system using Axe

3. Start the ETL process

4. Verify the data in the target system using Axe

5. Stop the ETL process

6. Edit the source data

7. Restart the ETL process

8. Verify the data in the target system using Axe

9. Stop the ETL process

10. Delete the source test data

11. Restart the ETL process

12. Verify that the data has been marked as deleted on the target environment

I know that some of you are asking why we cannot use Axe to stop and start the ETL process. The answer is: "Yes, we can, but this is the first implementation of the ETL process and we have not yet ironed out the bugs". Changing the manual steps to automated steps using Axe is really easy and can be done later.

Enough rambling; let's get to how we implement the functionality. I am going to assume that you have a project library file and an ActionMap. As always, you can download a working zip file from here and unzip it to your C:\ drive.

We are going to implement parts of the .NET MessageBox class, specifically a standard dialog with an OK button, and a dialog with Yes and No buttons. The latter can be used for manual verification.

1. Add a reference to System.Windows.Forms in the __testInit section of the project ActionMap: using System.Windows.Forms;

2. Add the following method to your project library file:

public static int YesNoMessageBox(AxeMainAPI axe, string message, string title, DialogResult expected)
{
    // Pause the automated run and ask the tester a Yes/No question.
    DialogResult actual = MessageBox.Show(message, title, MessageBoxButtons.YesNo);

    // 0 indicates a pass to Axe, 1 a failure.
    int result;
    if (actual == expected)
    {
        result = 0;
    }
    else
    {
        result = 1;
    }

    // Record the expected and actual answers against the test step.
    return axe.StepValidate(expected.ToString(), actual.ToString(), result);
}
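
For completeness, here is a minimal sketch of the OK-only variant mentioned above; the method name OkMessageBox is my own, and I am assuming the same Axe API as in the method above:

public static void OkMessageBox(AxeMainAPI axe, string message, string title)
{
    // Nothing to validate here: just pause the run until the tester clicks OK,
    // then note the acknowledgement in the Axe log.
    MessageBox.Show(message, title, MessageBoxButtons.OK);
    axe.StepInfo("Dialog acknowledged: " + title);
}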

3. Hook up the MessageBox class and custom method to the ActionMap as follows:

Table 1

4. Add the following to the ObjectMap:

Table 2

5. Next, create a Subtest page for the mapped objects.

As you can see from the Actions, the MsgBoxOk class is overloaded with the option to have a title on your dialog box, e.g. set(MessageBox title). The MessageBox text is read from the data column in the test step.

The Expected parameter for the MsgBoxYesNo Action can be one of the following:

  • DialogResult.Yes, e.g. set(nFocus MessageBox Example, DialogResult.Yes)
  • DialogResult.No, e.g. set(nFocus MessageBox Example, DialogResult.No)

As in the MsgBoxOk class, the MessageBox text is read from the data column.

6. The last step is to download the example and try it out.

I hope that you found the blog interesting and that you can see the potential of manual intervention in an automated test.

Marc Maurhofer

Wednesday, 4 November 2009

ComputerWeekly.com IT Blog Awards- Please vote for us!

This is just a short message to let you know we have been shortlisted for the ComputerWeekly.com IT Blog Awards in the Company/corporate SMEs category.

If you have found our articles helpful or interesting over the past 6 months, please take the time to register your vote (no sign-up required!)

You can register your vote here:

http://www.computerweekly.com/Articles/2009/11/03/238190/vote-in-the-computer-weekly-it-blog-awards-2009.htm

We are in category 6: Company/corporate: SMEs.

Thanks for voting, and happy testing!

Wednesday, 28 October 2009

Using Non-Functional Test Tools with Axe and WatiN

As a functional automation tester have you ever been asked the following questions:

  • From Infrastructure - We have upgraded the network and are expecting an increase in throughput. Can you please run a quick test and let us know the improvement stats?
  • From a Developer – We have made some changes to the framework and are expecting some issues. Can you please run some tests and let us know the error codes?

I was faced with the second question, and after a bit of googling and discussions with the Dev team I found out about HttpWatch: http://www.httpwatch.com/

A quick read of their website gave me this:

“All web applications make extensive use of the HTTP protocol (or HTTPS for secure sites). Even simple web pages require the use of multiple HTTP requests to download HTML, graphics and javascript. The ability to view the HTTP interaction between the browser and web site is crucial to these areas of web development”

If you want to investigate more about the features that HttpWatch provides, go here: http://www.httpwatch.com/features.htm

A cool thing about HttpWatch is that it has a COM-based automation interface, which allows you to use it from C# as well as JavaScript and Ruby. This means that I could integrate it into Axe; a bit more googling turned up the article below, which formed the basis for the integration of HttpWatch and Axe:

Using HttpWatch with WatiN: http://blog.httpwatch.com/2008/10/30/using-httpwatch-with-watin/
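
To give a feel for the automation interface before wiring it into Axe, here is a minimal standalone sketch along the lines of the article above (the URL and output path are placeholders):

using System;
using HttpWatch;

class HttpWatchDemo
{
    static void Main()
    {
        // Start a new instance of IE with the HttpWatch plug-in attached.
        Controller controller = new Controller();
        Plugin plugin = controller.IE.New();

        plugin.Record();                          // start capturing HTTP traffic
        plugin.Container.GoToURL("http://www.example.com");
        controller.Wait(plugin, -1);              // wait for the page to load

        plugin.Stop();
        plugin.Log.Save(@"C:\temp\example.hwl");  // placeholder output path
    }
}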

The steps to integrate HttpWatch into Axe are as follows, or you can just download the sample Axe project, install HttpWatch and get testing:

1. Axe and WatiN installed and configured.
2. A WatiN project with a local Action Map and custom library file.
3. Download and install the free Basic Edition of HttpWatch.
4. Add two methods to the custom project library file.
5. Modify the local Action Map and Run config files.

HttpWatch outputs one .hwl file per Axe test, so if you have 100 Axe tests you will get 100 .hwl files. Not a good thing when you are pressed for time. To minimise the time spent poring over the output files, create a test run that exercises most pages, and add a switch to the Run config file to turn HttpWatch logging on or off.


Installing HttpWatch

I downloaded the free Basic Edition of HttpWatch (http://www.httpwatch.com/download/), version 6.2.10, and installed it using the default options. An HttpWatch folder was created in my Program Files folder on the C: drive.


Adding Methods to the Project Library File

If you do not have a custom project library file, see the Axe documentation for creating one.

The first step is to add a reference to HttpWatch to your custom library project.

We are going to add two methods to the Project Library file called:

  • InitialiseHttpWatch(), and
  • CloseHttpWatch()
public static HttpWatch.Plugin InitialiseHttpWatch(Browser ie, AxeMainAPI axe)
{
    // Only attach HttpWatch if the Run config switch is set to TRUE
    if (String.Compare(axe.GetRunCategoryOption("Option", "HttpWatch"), "TRUE", true) == 0)
    {
        // Attach HttpWatch to this instance of IE
        HttpWatch.Controller ct = new HttpWatch.Controller();
        HttpWatch.Plugin plugin = ct.IE.Attach((SHDocVw.IWebBrowser2)ie.IE.InternetExplorer);

        // Start recording a log file in HttpWatch
        plugin.Record();

        return plugin;
    }
    else
    {
        return null;
    }
}


public static void CloseHttpWatch(HttpWatch.Plugin plugin, string resultDir, string testID)
{
    if (plugin != null)
    {
        // Stop recording and save the log, named after the test, to the results directory
        plugin.Stop();
        plugin.Log.Save(resultDir + @"\" + testID + ".hwl");
        HttpWatch.Summary logSummary = plugin.Log.Entries.Summary;
    }
}

There are two variables in the CloseHttpWatch() method, testID and resultDir, that need to be declared and set. This is done in the __testPrefix section of the Local Action Map.


Modify the Run Config File

Add the following switch to the Run config file and set it to False:
Category    Option       Value
Option      HttpWatch    False

This is where you turn HttpWatch logging on or off. The code looks for a value of TRUE to start logging; any other value and logging will be turned off.


Modify the Local Action Map

Add the following to the __testPrefix section of the Local Action Map:
// Run settings
string testID = "%TESTID%";
string resultDir = @"%RESULTDIR%";
HttpWatch.Plugin plugin = null;

Modify the testEnd(id) section of the Local Action Map to look like the following:

}
catch (Exception ex)
{
    // Log the failure, abort the test, and make sure the HttpWatch log is saved
    axe.StepInfo(ex.Message);
    axe.TestAbort();
    NovaTest.NovaTestLib.CloseHttpWatch(plugin, resultDir, testID);
    if (ie.IE != null) ie.IE.Dispose();
    Environment.Exit(1);
}
axe.TestEnd();


You should now be able to Build and Run your Axe project and view the HttpWatch output in the Results directory.

Integrating HttpWatch into Axe does not make Axe a performance test tool, but it does give you some extra ammo in the hunt for bugs.

Happy testing

Marc Maurhofer

Senior Test Automation Analyst

Support Files
nFocusHttpWatch.zip

Note
Users must extract the zip file to c:\nfocushttpwatch because some paths are hardcoded for expediency.

Versions of software for reference:
Axe : 2.0.3
WatiN : 1.3 (.NET 2.0)
HttpWatch Basic : 6.2.10

Wednesday, 21 October 2009

Announcements, a catch up and watch this space!

Firstly, I must start this entry with an apology for being away from the blog during the last few weeks... It's been a busy period over here at nFocus, especially during the last few weeks, but I am glad to be back, and writing.

Secondly, I need to thank all of our visitors for taking the time to read our articles over the past few months. It makes me proud to announce that in the short time the blog has been online we have achieved over 1,000 visits, and our last article, 'Automated Functional Testing Tools, QTP vs Selenium', was our most popular article so far!

I am also proud to announce that the blog has been entered for this year's Computer Weekly IT Blog Awards in the Corporate SME category, so thank you for nominating us. I will do my best to keep you updated with how this goes.

So enough of the announcements; let's get to the main event. My entry today is to introduce an absolutely brilliant article written by a colleague of mine here at nFocus, Marc Maurhofer. Before the article is posted, I thought I would include Marc's short bio, in his own words:

"I have been a software tester for over 10 years, and learnt a long time ago that the only way not to drown in a sea of regression is to automate as much as possible, using a good test framework. I do not like the idea of creating automated regression packs after the product has gone live, but instead prefer to automate as you go along. When your overnight run takes longer than 12 hours, it is time to get another test rig."

The practical walkthrough is titled "Using Non-Functional Test Tools with Axe and WatiN", and we will publish it here within the next week.

So all that leaves me to say is: I hope you find his article useful, and please feel free to make Marc feel welcome with plenty of comments after his article : )

Monday, 21 September 2009

Automated Functional Testing tools, QTP vs Selenium

Hi all, I've been away for a bit, so apologies. At nFocus we've been using Selenium for a while now, and I've always been interested in what the uptake might be in the coming weeks/months/years. As with all these tools there are always pros and cons, and the real winner often depends on the individual requirements for an automated tool. It is difficult, therefore, to produce a generic, objective comparison between two tools, so when I came across this blog entry I was suitably impressed: "QTP Vs Selenium"

What is interesting about this is that QTP wins in some situations and Selenium in others. For example, Selenium supports a better range of OSes/platforms due to its Java nature, whereas QTP wins on the breadth of applications supported (Selenium only does web-browser-based apps).

As a personal preference, I would choose Selenium for any web-based application, and I love the fact that, with my breakthrough on the Axe/Selenium integration, some of the shortcomings of Selenium are overcome by using Axe. By way of an example:

"Selenium recognizes objects on the basis of the DOM structure of the HTML Page. The UI objects in selenium have vague descriptions and don't comply with WYSWYG policy."

Axe has an ObjectMap spreadsheet where you can abstract the logical object names away from the physical recognition strings, thus giving the user the ability to use friendly object names.
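
To illustrate the idea (this is just a sketch of the concept, not Axe's actual spreadsheet format), the mapping boils down to a lookup from a friendly logical name to the physical recognition string that Selenium understands:

using System.Collections.Generic;
using Selenium; // ThoughtWorks Selenium RC .NET client driver

class ObjectMapSketch
{
    static void Main()
    {
        // Friendly logical names mapped to physical recognition strings.
        var objectMap = new Dictionary<string, string>
        {
            { "SearchBox",    "//input[@name='q']" },
            { "SearchButton", "//input[@type='submit']" }
        };

        ISelenium selenium = new DefaultSelenium(
            "localhost", 4444, "*iexplore", "http://www.google.com");
        selenium.Start();
        selenium.Open("/");

        // The test now reads like the business action, not the DOM structure.
        selenium.Type(objectMap["SearchBox"], "software testing");
        selenium.Click(objectMap["SearchButton"]);

        selenium.Stop();
    }
}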

Thursday, 27 August 2009

Successful testing = Successful reputation

Following a number of fascinating comments made by nFocus's clients on how vitally important software testing was to maintaining their professional reputation, nFocus embarked upon a search of articles and white papers to see what other evidence it could find on the subject.

A number of articles emerged, none more relevant than one written by Jack Danahy, CTO of Ounce Labs, entitled “Your company’s reputation: Critical but fragile”, dated 7 April 2009. His article concentrates on a security breach at Heartland Payment Systems back in January 2009 and explores the implications for the company's reputation following negative coverage in the press.

We found that this article supported our own research and so we have written a brief commentary containing the fundamental take-away points.

First, how do you define reputation? And how tangible is it? Jack provides an interesting criterion for judging this whilst investigating the aftermath of the breach three months on. The criterion was a simple Google search that highlighted a great deal of negative media about the company, which most likely will have been read by clients, prospects and employees alike and so caused immeasurable damage to the business.

In Jack's own words, the Google search for Heartland Payment Systems is pretty illuminating and he says, “As one would expect, the first natural topic is the corporate website. Beyond this, it goes downhill pretty fast. Of the remaining nine items in the natural search list, with the exception of a pointer to a secondary company site and the company’s Hoovers listing, everything relates to the breach. That’s a pretty high percentage.”

He continues, “...querying for a vendor and having the second item have “breach” in the URL would likely be a warning flag to someone trying to learn about Heartland....[suggesting] that reputation is a critical, yet fragile thing. Building it and defending it are not small tasks, and a fall from favor can be swift and absolute.”

With that in mind, what is the cost of a damaged reputation? Jack’s view is that there is no simple or short-term solution. He says, “Rebuilding a tarnished reputation after a breach will require effort... and is always much more difficult than creating it in the first place, because breaches result in headlines that are free, interesting, popular media, while fixes and cleanup result in little beyond whitepapers, which are costly and unpopular media”

This dramatic security breach highlights the critical - but often underestimated - role that quality software testing plays on the day-to-day running of many businesses. Mistakes can be very costly indeed and can even put the future of some businesses in jeopardy.

If you would like to learn more about how high-calibre software testing could help to preserve the reputation of your company (and your own reputation too!) then please call us anytime.

You can click here to read Jack’s original article and to learn more about Jack Danahy’s insights into security, visit Jack's bio, suitablesecurity.blogspot.com or http://www.ouncelabs.com/.

Friday, 21 August 2009

Agile estimating- A practical quick start

Agile estimating techniques described by QSTC are based on the Wideband Delphi forecasting method, a refinement of the Delphi method developed by RAND during the Cold War to forecast the impact of technology on warfare. Existing scientific laws didn't really work too well, but there was a great deal of experience and expert opinion around; the challenge was to aggregate all of this expertise into a single forecast. A bit like trying to estimate how long the testing of software will take, I guess (lol).

The technique uses a facilitator to gather and consolidate information to get a broad consensus on the estimate. After all, a guess by the experts is better than a guess by the project manager (sorry, PMs out there, but you do tend to be a little optimistic at times!)

OK, how do we do it? You can follow a quite detailed process via the links at the bottom of the page, but let's fast-track it.

First we need to break down the system under test into manageable chunks; a good starting point is to discuss with the developers what the first "build" of the system will contain.

For each chunk, decide and list what will be tested. Consider the GUI, any hidden client-server functions, database access (stored procedures, SQL, etc.), infrastructure, performance checks, and stress checks, both user-driven and technical (e.g. incoming interface overload).

Define the major activities for testing, e.g. test design and scripting, test data preparation, test execution and logging, defect analysis and reporting, and retesting.

Remember, tests have to be designed and written. Data has to be identified and created (why do we always overlook the test data strategy?). Tests have to be executed and logged, and some tests will be repeated (software does sometimes go wrong). Problems have to be analysed, reported, fixed and retested.

This forms the background for the experts at the estimating workshop.

Up to five experts are selected for the estimating workshop for their expertise and knowledge (no bag-carriers or observers; this is a working session), covering:

• Experience of developing and testing software using the chosen technology
• Experience of testing systems
• Experienced business users who know how the system will be used
• Experienced service delivery staff (the guys who have to run and maintain the system when it is live)
• Project management

The facilitator describes a chunk of the system to be tested and the attendees vote on the effort required (person-hours, person-days, etc.). Use the Fibonacci sequence to estimate the effort, with hold-up cards or Post-its:

1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144

If there are estimates that are significantly different (someone votes 1 day when everyone else thinks it is 8 days), then those experts are asked what made them come to that conclusion. After discussion, another vote is taken; repeat until consensus is reached.
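
For the programmers among you, each voting round boils down to a simple outlier check. Here is a minimal sketch; the "within two card positions" consensus threshold is my own assumption, as teams pick their own rule:

using System;
using System.Linq;

class DelphiRound
{
    static readonly int[] Cards = { 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144 };

    // True if all votes sit close enough together on the card scale to call consensus.
    static bool HasConsensus(int[] votes)
    {
        int low = Array.IndexOf(Cards, votes.Min());
        int high = Array.IndexOf(Cards, votes.Max());
        return (high - low) <= 2; // assumed threshold: within two card positions
    }

    static void Main()
    {
        int[] round1 = { 3, 5, 5, 8, 34 };
        // 34 is an outlier: ask that expert to explain, discuss, then re-vote.
        Console.WriteLine(HasConsensus(round1) ? "Consensus" : "Discuss and re-vote");
    }
}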

Record any assumptions that the team have made when estimating.

Thank you for reading, and I hope you found it useful. If you would like further information please feel free to comment below, or visit the following pages:

http://en.wikipedia.org/wiki/Delphi_method
http://www.processimpact.com/articles/delphi.html

It is also worth mentioning that I will be presenting a number of software testing sessions for Intellect (London) during October and November 2009; if you would like further information, please click here.

Friday, 7 August 2009

Are you following nFocus_ltd on twitter?

Just a short blog entry today to mention that we are using twitter to point out useful articles, websites and so on throughout each day. For more info on this please visit http://twitter.com/nfocus_ltd.

It is also worth mentioning that I am currently working on a number of helpful blog articles, so please feel free to check back soon.

In the meantime, happy testing.

Wednesday, 29 July 2009

ICT Conference and Excellence Awards 2009

Just thought I would write a quick note about the ICT Conference and Excellence Awards being held at the National Motorcycle Museum, Bickenhill, Birmingham on 10 November 2009.

If you haven’t been before, the conference is the largest technology-focused conference in the West Midlands, and is currently in its 6th year. The conference seems to have built in popularity each year, culminating in 420 delegates attending last year.

In the organisers' own words, “The purpose of this exciting event is to provide SMEs with unrivalled access to technology partners, thought leaders, business innovators and support organisations. It’s also a fantastic opportunity to network with like-minded individuals in the region.”

nFocus have attended this conference for a few years running and, having previously won an award with our Virtual Test Service, will be going back this year. We hope to see some of you there.

The theme for this year’s event is “Competing in the New World – Innovating your way to success”, and if you would like to attend you can do so by sending an email to events@ncc.co.uk. Still not made up your mind? You can view the initial programme by clicking here.

And the ICT Excellence Award categories for this year include:

  • Best Innovative Product

  • Best Innovation Service

  • Best Added Value Product

  • Best Added Value Service

  • Best Added Value Project

  • Most Improved Business

  • Best Knowledge Transfer Project

Here are some key dates for the diary. If you would like to enter your organisation for these awards, you have until 28 September to put in an initial entry. If you are shortlisted you will be notified by 2 October, and then invited to give a presentation to the panel of judges on 14 or 15 October.

For more information on the event you can ask us about our experiences by leaving a comment below or visit:

http://www.wmictcluster.org/events/2009/awards2009

http://www.nccmembership.co.uk/POOLED/ARTICLES/BF_EVENTART/VIEW.ASP?Q=BF_EVENTART_310685

Friday, 17 July 2009

What's the deal with Coded UI tests in VS2010?

So I was intrigued as to what is in VS2010 from an automated UI testing perspective. I'll be looking into this in more detail, but some of the specs look pretty good. It’s great to hear that it is going to have some Silverlight support, but I think we are going to have to wait until closer to RTM to find out what support, for which types of apps, actually makes it in.

This video http://videos.visitmix.com/MIX09/T83M shows the proposed platform support, though there are a few “best efforts” statements. Nevertheless, Brian Keller demos the Coded UI testing functionality.

It looks like it is going to be really powerful with some great features like the “Video Recorder” which could change the face (and size) of execution logs forever!

We are going to be installing the Beta here at nFocus to see what the little nuances are and we will report back on our findings.

Monday, 6 July 2009

nFocus Axe's Selenium

I've been working on getting Selenium to work with Axe for a couple of weeks now, and I am pretty happy with the results so far, so I thought I would share with you this practical walkthrough of how to get the two talking to each other. Firstly, it is fair to note the versions I have been running with: Axe version 2.0.2.622, Selenium RC version 1.0.1, and Java version 6 update 14.

To start with, you are going to have to download and install all of these tools. In order to get Selenium RC up and running, you are going to need to download and install Java; like I said, I'm using version 6 update 14. Typing:

"java -version"

in a command prompt reveals:


Once you have Java and Selenium RC installed (you only need to unzip Selenium into your desired location), you need to fire up the Java server. I created a little batch file containing the command "java -jar selenium-server.jar" and saved it in my

"C:\Program Files\Selenium\selenium-remote-control-1.0.1\selenium-server-1.0.1"

directory, calling it "StartServer.bat". Finally, I created a shortcut to this on my desktop; a simple double-click will start the Selenium server running on the default port of 4444.
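
If you want to sanity-check that the server really is listening before pointing Axe at it, a quick sketch like this does the trick (my own little helper, nothing to do with Axe or Selenium as such):

using System;
using System.Net.Sockets;

class ServerCheck
{
    static void Main()
    {
        // Quick check that the Selenium server is listening on its default port.
        try
        {
            using (var client = new TcpClient("localhost", 4444))
                Console.WriteLine("Selenium server is up on port 4444");
        }
        catch (SocketException)
        {
            Console.WriteLine("Nothing listening on port 4444 - is the server running?");
        }
    }
}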

Now you will need to get Axe ready to deal with Selenium. Firstly, we need to edit the AxeConfig.xml file in the Axe program directory to add a "tool" entry for Selenium. In the IntegrationFiles.zip file you will find an XML snippet file, "AxeConfigSnippit.xml", which you can insert into your "AxeConfig.xml" file. The first thing to note about this tool entry is that it handles some of the actions performed while using the "New Project Wizard", like copying template files into the new project, and some of the actions responsible for building and running the tests in Selenium (the <build> and <run> sections). You will notice that in the build/postbuildarguments I have used a number of parameters like:

/reference:"%AXEDIR%/ThoughtWorks.Selenium.UnitTests.dll"

These ensure that the ThoughtWorks DLLs are referenced during compilation and copied over to the scripts directory where the final Axe tests are built. In order for this to work, you are going to need to copy the ThoughtWorks DLLs (their location will depend on where you installed/unzipped Selenium RC) into the Axe program files directory. For me that looks like:

copy "C:\Program Files\Selenium\selenium-remote-control-1.0.1\selenium-dotnet-client-driver-1.0.1\ThoughtWorks.Selenium.Core.dll" "C:\Program Files\Odin Technology\Axe"
copy "C:\Program Files\Selenium\selenium-remote-control-1.0.1\selenium-dotnet-client-driver-1.0.1\ThoughtWorks.Selenium.IntegrationTests.dll" "C:\Program Files\Odin Technology\Axe"
copy "C:\Program Files\Selenium\selenium-remote-control-1.0.1\selenium-dotnet-client-driver-1.0.1\ThoughtWorks.Selenium.UnitTests.dll" "C:\Program Files\Odin Technology\Axe"


I copied them here just to make the process of building and compiling easier.

The rest of the files that are included in the zip file are as follows :

C:\IntegrationFiles\AxeConfigSnippit.xml
C:\IntegrationFiles\ActionMap\Selenium.ActionMap.xml
C:\IntegrationFiles\samples\OdinPortal\ActionMap\OdinPortalSelenium.ActionMap.xml
C:\IntegrationFiles\samples\OdinPortal\Config\Selenium.BuildConfig.xml
C:\IntegrationFiles\samples\OdinPortal\Config\SeleniumChrome.RunConfig.xml
C:\IntegrationFiles\samples\OdinPortal\Config\SeleniumFireFox.RunConfig.xml
C:\IntegrationFiles\samples\OdinPortal\Config\SeleniumIE.RunConfig.xml
C:\IntegrationFiles\samples\OdinPortal\Config\SeleniumSafari.RunConfig.xml
C:\IntegrationFiles\samples\OdinPortal\ObjectMap\OdinPortalSelenium.ObjectMap.xml
C:\IntegrationFiles\templates\Selenium\Selenium.BuildConfig.xml
C:\IntegrationFiles\templates\Selenium\Selenium.objectmap.xml
C:\IntegrationFiles\templates\Selenium\SeleniumChrome.RunConfig.xml
C:\IntegrationFiles\templates\Selenium\SeleniumFireFox.RunConfig.xml
C:\IntegrationFiles\templates\Selenium\SeleniumIE.RunConfig.xml
C:\IntegrationFiles\templates\Selenium\SeleniumSafari.RunConfig.xml

With the exception of the AxeConfigSnippit.xml file, all of the other files should be copied into their counterpart Axe (Program Files) directories, e.g.:

copy "C:\IntegrationFiles\ActionMap\Selenium.ActionMap.xml" "C:\Program Files\Odin Technology\Axe\ActionMap"

Note also that you will have to create the "Selenium" directory, as this will not currently exist:

C:\Program Files\Odin Technology\Axe\Templates\Selenium

With all this in place, you should be able to fire up the OdinPortal sample and kick off a few tests in the different browsers (assuming you have all of them installed).

Next week we will be putting together a tutorial on how to create a new Axe-Selenium project to run some tests against Google.

The files to support this entry can be downloaded here....

Thursday, 2 July 2009

Google Test Automation Conference 2009

Google have announced that the 4th Annual Google Test Automation Conference will be taking place on the 21st and 22nd of October at Google's Zurich office, Switzerland.

In their own words, "Google Tech Talks are designed to disseminate a wide spectrum of views on topics including Current Affairs, Science, Medicine, Engineering, Business, Humanities, Law, Entertainment, and the Arts."

Boring stuff over... the basis of the conference is solving software engineering challenges through the use of tools and automation (just as an aside, this is one of our areas of specialism). This year's conference specialises in testing web applications, services and systems, and is also hoping to cover mobile device application testing. It all sounds very exciting and interesting to us, so we are going to send a few delegates.

I thought you may like to take a look at the keynote presentation from last year's GTAC, presented by (surely) one of Google's newest employees, James Whittaker. The presentation is titled, "The Future of Software Testing", and in my opinion, is definitely worth a watch.

If you plan to present at the conference, take note of 1 August: by this date you need to have submitted your proposal, with Google hoping to accept or decline proposals by 8 August.

If you're looking to attend you may like to take a look here.

And finally...If you have any further questions, feel free to email and ask me or the team at marketing@nfocus.co.uk, or contact Google directly at gtac-2009@google.com.

Tuesday, 30 June 2009

Does software testing suck?

I have recently read a blog article titled, “testing sucks” by the charismatic industry guru, James Whittaker. The article gives an interesting and unique insight into the life of a testing professional. I am keen to see whether other testing professionals agree that this is a reasonable explanation for miserable testers.

If you have read the article, (if you haven’t you can find it here) what is your opinion? Has James Whittaker’s article struck a chord with you? Do you agree or disagree? I welcome your comments below...

Tuesday, 9 June 2009

Testing applications on Windows 7 RC1

Testing in a box.............in a box?

Recently at nFocus we had to test some applications for one of our clients, to ensure that they would operate correctly on Windows 7. RC1 was out, and in order to expedite the task we decided to use virtualisation instead of applying the release candidate to some of our hardware. Virtual PC seemed like a good choice, and following Brian Keller's blog "Installing the Windows 7 Beta with Virtual PC 2007 SP1" (with a few tweaks for RC1 instead of the Beta) it was a breeze!

We managed to do our testing, and interestingly, as an aside, some of the testing tools we used worked just fine out of the box on Windows 7 RC1 too.

If you're looking at doing something similar you might want to hold off (if you believe some of the rumour mills): Wzor put out some info noting that Windows 7 Release Candidate 2 has been signed off and will be released on June 12th. It is expected that it will only be available to a few select groups, MSDN subscribers being one. Please feel free to apply appropriate levels of sodium to this :)

Thursday, 28 May 2009

TestComplete 7 - complete!

Okay, although not in detail, and in my spare time (at nFocus that's a rare thing), my review of TestComplete 7 and the new keyword functionality is complete.

So what do I think? Well, as a tool in its own right it is great; it covers a multitude of different situations and technologies (for a great price too), but for me it just doesn't cut it, for these reasons:

  • I found it complicated to navigate; this is a tool that needs a lot of learning

  • I couldn't let my non-technical testers loose on this, even with the keyword-driven tests

  • the mapping tool is a bit difficult to navigate

This wasn't really supposed to be a side by side comparison between Axe and TestComplete7 but I just found myself constantly comparing the two.

For my money, using something like Excel with, say, the Axe toolset is a much more attractive proposition. I like the fact that Axe uses Excel: I can show my non-technical testers how to fill in the spreadsheets and away they go! Object mapping can be a pain in Axe (depending on which automation toolset you are using), but at least it forces you to be methodical and to think carefully about the logical names you might want to use.

At the end of the day, picking a toolset to use when employing automated testing is always difficult, and a tool that wins in one situation will lose in another. There isn’t a “one size fits all” automation tool. I think that TestComplete 7 would get my vote if I were a solo tester in an organisation where I needed to test a number of different smaller apps built on different technologies (web app, Java client, Windows client, PDA app, WPF client, etc.).

Tuesday, 19 May 2009

AutomatedQA Introduces TestComplete 7

AutomatedQA has just released the latest version of its TestComplete offering - version 7!

So what's new?

...it seems that the biggest news is the addition of keyword tests, otherwise known as keyword-driven tests. It is clearly trying to compete with some of the test automation frameworks, such as Axe from Odin Technology. I'm going to be giving this version a trial over the next couple of days and will report back on my findings.

Also of note is that you can test applications that run on PDAs, Pocket PCs and smartphones - no iPhone support yet, but hey, it's a good start :)

Watch this space for an update on the trial!

Wednesday, 13 May 2009

iPhone apps - iTest...uTest, can anyone Test?

A couple of weeks ago saw the one billionth download from the Apple App Store; in just 9 short months this is quite some achievement, but what impact does it have on the testing world? Well, it seems that more and more business apps are being produced for the iPhone - apps that need testing! So how do we test these apps? With difficulty, it would appear! The answer really depends on what sort of app we are testing. If it is a browser app designed for the Safari browser on the iPhone, then there are a number of solutions:

iBBDemo - Blackbaud iPhone Browser Simulator:
http://labs.blackbaud.com/NetCommunity/article?artid=662

and TestiPhone.com - iPhone Simulator:
http://www.testiphone.com/

If, however, we are to test a native iPhone app, then things prove a little more tricky. The only way to really test it before it goes live is on an actual iPhone, or from within the development environment. There do not appear to be any emulators for the Windows platform right now, and in the absence of any sort of controller you are going to be limited to manual testing. If anyone knows of a good simulator or test automation tools for iPhone apps, then let me know!

Thursday, 30 April 2009

Hello everyone and welcome to our blog

I must admit we are all very excited to be launching our long-awaited blog… (yuk, cheese)

We at nFocus are an independent software and systems testing consultancy (zzzzzzzzzZZZZZZZZ); but more importantly, we are a cluster of people who are passionate, and perhaps obsessive, about software testing.

With this blog, we hope to inform and indeed make sense of the many perplexing issues we come across, and hopefully entertain you in the process.

Thanks for taking the time to read our blog, check back again soon!
In the meantime, feel free to take a look around our site.