Thursday, July 4, 2013

Another Selenium system that incorporates GRID for parallel distribution of tests

System Components

The system is made up primarily of open source components as well as custom libraries written in Perl. The Selenium Standalone Server is the browser automation portion of the system. Selenium has been around since 2004 (although it was called JavaScript TestRunner then). In our experience it is a very stable browser automation platform. It was chosen not only because of its stability and community support, but also because it supports ALL major browsers in use today; namely Internet Explorer, Firefox, Chrome and Safari.

The test scripts and libraries are written using (as much as possible) the Page Object design pattern. This model lets us abstract specific functions into a class and expose them as methods that the test scripts can access via wrapped function calls to the Selenium Perl Client Driver.

All HTML objects that the user (i.e. Test Script developer) will access are also abstracted and mapped to human readable names. This is a simple process that maps human readable names to Selenium element names and a locator strategy; for example, the human readable “menu” “database_setup” will map to “mnuMain_DXI5_T” “ID” (element name, locator strategy) respectively.
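As a sketch, such a mapping can live in a simple lookup table. The names below are hypothetical (except the “database_setup” example from above); the real system resolves them from its own store:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical object map: human readable (type, name) => element name + locator strategy.
my %object_map = (
    'menu:database_setup' => { target => 'mnuMain_DXI5_T', strategy => 'ID' },
    'button:save'         => { target => 'btnSave_DX7',    strategy => 'ID' },
);

# Resolve a human readable pair into what Selenium actually needs.
sub resolve_element {
    my ($type, $name) = @_;
    my $entry = $object_map{"$type:$name"}
        or die "No mapping for $type/$name";
    return ($entry->{target}, $entry->{strategy});
}

my ($target, $strategy) = resolve_element('menu', 'database_setup');
print "$target ($strategy)\n";  # mnuMain_DXI5_T (ID)
```

The test scripts only ever see the human readable names; when an element's ID changes, only the map is updated.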

Implementation

As shown on the below diagram, we are using Selenium Grid to distribute our tests to different environments. Grid by itself does not support choosing a specific platform (e.g. you can tell it to use VISTA, but you cannot specify the architecture, such as x64); we forked and modified the original code to support this functionality. Basically we are using a capability called applicationName to pinpoint the exact node we want to execute our test against. See examples section for more details.
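A minimal sketch of how the request might look from the Perl side. The node name and helper are hypothetical; applicationName is the custom capability our forked Grid understands:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build the desired capabilities sent to the Grid hub. applicationName is the
# custom capability our forked Grid uses to pinpoint an exact node.
sub grid_capabilities {
    my (%args) = @_;
    return {
        browserName     => $args{browser},
        platform        => $args{platform},
        applicationName => $args{node},   # e.g. 'vista-x64-node1' (hypothetical)
    };
}

my $caps = grid_capabilities(
    browser  => 'internet explorer',
    platform => 'VISTA',
    node     => 'vista-x64-node1',
);

# With Selenium::Remote::Driver these would typically be passed via
# extra_capabilities when constructing the driver (sketch, not run here):
# my $driver = Selenium::Remote::Driver->new(
#     remote_server_addr => 'hub.example.local',
#     port               => 4444,
#     browser_name       => $caps->{browserName},
#     extra_capabilities => { applicationName => $caps->{applicationName} },
# );

print "$caps->{applicationName}\n";  # vista-x64-node1
```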

System Diagram

Automatically Executing the Scripts

There are many options for executing the test scripts. Which one is chosen will be determined by the specific project needs. The important thing to note is that these are Perl scripts that can be executed by any system that can run a Perl interpreter. Below are some of the options:
  • Windows task scheduler (Windows platform)
  • Windows Service Manager (Windows platform)
  • Cron scheduler (Linux platform)
  • cron or Automator (Mac platforms)
To execute the scripts, use any of the above tools to schedule and run the automated test suite at specific times.
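For instance, on Linux a crontab entry to run the suite nightly at 2 AM might look like this (paths are hypothetical):

```
0 2 * * * perl /opt/automation/run_test_suite.pl >> /var/log/automation.log 2>&1
```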

Test Script Flow

When a test script is launched (either manually via command line or via one of the tools listed above) the following occurs:
  • Test Script requests a specific environment from the Grid’s hub (Browser, OS, Arch)
  • The hub checks for an available node that matches the desired capabilities
  • Once a matching node is found, that node executes the script
  • The script logic is designed to iterate through a list of browsers and platforms and execute the test against each
  • After each test run (i.e. all steps executed on all browsers and platforms), results are stored in the Automation Results Data Store. These results are then reflected in Quality Center’s Test Lab module
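The iteration described in the flow above can be sketched as follows. The environment list and run_test are hypothetical stand-ins for the real suite:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical list of environments the script iterates over. Each entry is a
# set of desired capabilities requested from the Grid hub.
my @environments = (
    { browser => 'firefox',           platform => 'WINDOWS', arch => 'x64' },
    { browser => 'internet explorer', platform => 'VISTA',   arch => 'x86' },
    { browser => 'chrome',            platform => 'MAC',     arch => 'x64' },
);

# Stand-in for the real test body; the actual suite drives the browser through
# Selenium and returns pass/fail results for the data store.
sub run_test {
    my ($env) = @_;
    return "ran on $env->{browser}/$env->{platform}/$env->{arch}";
}

my @results;
for my $env (@environments) {
    push @results, run_test($env);   # results later land in the data store
}

print scalar(@results), " environments exercised\n";  # 3 environments exercised
```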

Quality Center Integration Module

Integration with Quality Center is accomplished using the HP Quality Center Open Test Architecture. OTA is a COM API exposed by Quality Center to facilitate integration with third party (from QC’s perspective) tools. It also enables developers to code custom applications to interface with QC. As an example, the QCExplorer application uses the same API.

Let us know via email or comment any feedback you may have. As well, if you'd like for us to design a custom automated testing framework for Web or Client applications, please let us know.

Saturday, June 8, 2013

Which automation framework should I choose?

Lately, I've been hearing a lot of folks ask questions like "which one should I choose QTP or Selenium?" or "which one has a better future, QTP or Selenium?". In this post I will attempt to clear up the misconceptions regarding the comparisons of QTP to Selenium.

By far the most common argument I hear against QTP is that if you want to use anything other than just record and playback, you must learn VBScript (an easy enough task, however, especially if you are a novice programmer). But you must learn some type of language to get the most juice out of any framework you choose for your automation projects. Which leads me to one of the main advantages of QTP: it is a framework! That means you not only get to drive your Application Under Test (AUT), you also get reporting of test results, support for data driven tests, and an object repository. Basically, it is a framework you can use right out of the box.

Selenium, on the other hand, is just an API, and a good one at that! The reason it is so good is that all of the development effort goes into having a robust API for automation architects and programmers to use and incorporate into their own frameworks, rather than into supporting all of the features a ready-to-use framework provides. And therein lies the difference! With Selenium you don't get reporting, you don't get data driven capabilities, and you don't get an object repository, so it is not a framework. However, it IS flexible enough that you can incorporate it into your own framework that already has all these features. The caveat is that you must design and build these features yourself. A lot of folks prefer to have this type of control. I am one of them :)

A more realistic comparison would be SmartBear's TestComplete vs. Froglogic Squish vs. HP QTP; all three of these are frameworks that give you, right out of the box, reporting, object repositories, data driven capabilities, keyword (another type of data driven) capabilities, etc. Again, in order to realize the full potential of any of the frameworks just mentioned (Selenium, again, is not a framework), you have to learn some language (e.g. Perl, PHP, Python, Ruby, VBScript, TCL, Java).

That being said, let's not forget about the many general purpose frameworks that already exist and are ready for you, the automation architect, to "just add water" (or Selenium if you wish ;). Some examples:

STAF: http://staf.sourceforge.net/
Robot Framework: http://robotframework.org/

So there you have it, I hope this helps some folks clear up (at least in their minds) what to choose or not to choose when designing a framework to be used in any type of software verification. Contact me if you are interested in learning more or wish to secure my services to design your Automation framework.

Saturday, March 16, 2013

Add image to Quality Center using OTA and Perl

I thought I would share this little snippet of code that implements adding an image file to a specific test step in Quality Center. To accomplish this we will use HP's recommended way of interfacing with their system, Open Test Architecture (OTA), as opposed to going directly to the QC database.

From the Perl side there are three modules needed (all available from CPAN):
  1. Win32::OLE qw(in valof);
  2. Win32::OLE::Enum;
  3. Win32::OLE::Variant;
You will also need to have the OTA client installed on the computer being used to update the Test Lab. This usually gets downloaded when you first connect to QC. The file name is OTAClient90.dll.

First, the code:

1:  # ATTACH TEST STEP IMAGE IF IT EXISTS  
2:  my $step_up_count = 0;  
3:  if ($TEST_RUN_RESULTS{$test_timestamp}{step_results}->[$step_up_count]{image_link} ne '') {  
4:     my $file_name = "$TEST_RUN_RESULTS{$test_timestamp}{step_results}->[$step_up_count]{image_link}";  
5:          
6:     my $attachment_factory = $run_step->Attachments;  
7:     my $attachment = $attachment_factory->AddItem(Win32::OLE::Variant->new(VT_NULL)); # This returns a Win32::OLE::Variant object type NULL required by AddItem()  
8:     $attachment->{FileName} = "$file_name";  
9:     $attachment->{Type} = 1;  
10:     $attachment->Post;  
11:  }  
12:  $step_up_count++;  

We begin by keeping track of the steps we are adding to QC on line 2.
For attaching the images I implement an IF condition (Line 3) that checks whether an image exists for this step. If it does find one, we get an instance of the attachment factory for the step currently being updated (Line 6).

A very important step (as in, this will not work unless you do it) is to pass a Win32::OLE::Variant of type VT_NULL to AddItem. If you do not, the image will not be attached. That is, $attachment (Line 7) must first be NULL.

After we add the file_name and attachment type (Lines 8-9), we then Post the image and its associated attributes to Quality Center's Test Lab module.

That is all it takes to add an image to the Test Lab of Quality Center via a program that uses HP's COM interface to QC, Open Test Architecture. If you would like us to implement an automated solution for your company that includes QC integration please contact us; we are happy to provide a quote.

Sunday, March 10, 2013

Squishrunner error: Runner exited with value -1

I am currently involved in the implementation of a keyword based framework that will be used to verify an application currently being developed, against requirements initially, for regression detection later.

I encountered the issue below and thought I would share my findings in the hope of saving others valuable time.

1. When you launch your AUT (Application Under Test) you observe that it launches the app, then closes it, and the following Squish pop up is displayed:


2. As well the following entries are displayed in the "Runner/Server Log" window:
S: DLLPreload (89640001): FindEntryPoint: The parameter is incorrect.
S: DLLPreload (89640001): done
R: Runner exited with value -1
According to Froglogic, and verified by me, this can be caused by a 32 bit / 64 bit mixup. For example, attempting to automate (control) a 64 bit application with the 32 bit version of Squish will yield the above error. The converse is also true: attempting to control a 32 bit application with the 64 bit version of Squish yields the same error.

Saturday, March 2, 2013

Qt application Keyword driven verification

Recently I had the pleasure of designing a test framework for a Qt based application. It incorporated Squishserver, Squishrunner and the Squish IDE initially, together with a system of keywords developed in house, whose sole purpose is to execute actions that verify a requirement.

The below diagram illustrates the major components of the automated system design. Below it is a brief explanation of each part:
Figure 1 - System Layers (abstraction)

Test Execution Layer

This is the only layer that faces the user (i.e. the test script programmer). From here we make calls to the methods / services exposed by the functional layer. For example: create_report, load_data. Pass / Fail status is determined at this layer (based on 1 or 0 returned from functional layer).

Functional Layer

This is the ONLY layer that speaks AUT language and, as such, drives the AUT. It makes calls to the System Layer as needed to access services like the DB, capture screenshots, write to log files, etc.
All actions and functions that we need to perform on the AUT (application under test) are defined here and made available to the test execution layer. All functions should return 1 when successful or 0 when unsuccessful.
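A functional layer method following that 1 / 0 convention might look like this sketch. The method name is hypothetical and the AUT interactions are omitted:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical functional-layer action following the 1 / 0 convention.
# The real implementation would drive the AUT; those calls are omitted here.
sub create_report {
    my ($report_name) = @_;
    return 0 unless defined $report_name && length $report_name;
    # ... AUT interactions (e.g. Squish or Selenium calls) would go here ...
    return 1;  # success
}

print create_report('monthly_summary'), "\n";  # 1
print create_report(''), "\n";                 # 0
```

The test execution layer then only needs to check the returned value to set Pass / Fail status.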

System I/F Layer

The System Layer is responsible for interfacing with any component that is not the AUT. It should provide services for db access (reading and writing), log file creation and parsing, test results processing, and QC integration (post test run).


Keyword Testing Framework

Using the keywords approach the separation / abstraction of layers will be supported by having two sets of keywords. Both sets of keywords are exposed as methods in a Perl library file and are:

Functional keywords: These keywords interact with the AUT and define the actions to be followed when the keyword is invoked. 

System keywords:  Interface with the system and provide services for db access, log files, copying and moving files, launching the application, etc.

Keywords System Diagram


As shown above, the system includes 4 major parts: 

  1. keyword / actions definitions (actions.pl)
  2. data files (keywords+arguments), can be .tsv, .csv, .xls (keywords.csv)
  3. test case file (test.pl)
  4. driver (driver.pl)

Each part has its own responsibilities and functions to carry out (as explained below). In conjunction they drive the Application Under Test and capture test artifacts along the way, including execution PASS / FAIL determination and reporting.

Data files in the form of tsv, csv or xls contain “keyword” and “argument” data separated by a tab, comma, etc depending on file format used.

The keywords are the method names used in the function definition. For example, the start_application keyword is mapped to the start_application Perl sub. The “arguments” in the data file are the function's parameters, passed in as provided by the keyword data file.

Before the keywords and arguments in the keywords data file can be executed (called upon), they must be turned into the scripting language's specific format. This is the job of the driver.pl file. It parses the data file, constructing the method name as it goes, and eventually executes the function call using, in our case, Perl's eval function.

The function call defined in the action class is now called upon with the arguments in the data file passed in. The actions are executed on the AUT and success or failure is reported back as 1 or 0 respectively.
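A minimal driver sketch under those assumptions. The keyword names, the dispatch table, and the data format are illustrative; the real actions.pl drives the AUT:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative action definitions; in the real system these live in actions.pl,
# drive the AUT, and return 1 on success or 0 on failure.
my %actions = (
    start_application => sub { my ($app)  = @_; return $app  ? 1 : 0; },
    load_data         => sub { my ($file) = @_; return $file ? 1 : 0; },
);

# Parse one line of the keyword data file ("keyword,arg1,arg2,...") and execute
# the matching action, as driver.pl does via Perl's eval function.
sub run_keyword_line {
    my ($line) = @_;
    chomp $line;
    my ($keyword, @args) = split /,/, $line;
    my $call   = "\$actions{'$keyword'}->(" . join(',', map { "'$_'" } @args) . ")";
    my $status = eval $call;              # unknown keywords die inside eval
    return defined $status ? $status : 0; # 1 = PASS, 0 = FAIL
}

print run_keyword_line("start_application,calc.exe"), "\n";  # 1
```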

Contact us if you'd like to hear more or if you'd like to hire us to provide a similar service.

Wrap Selenium Webdriver Calls for your benefit

Wrapping Webdriver API calls seems like a good idea to me. Not only do you get a level of abstraction that you can more closely control, but you can speak the language of your API because you are defining it. As an example, let's examine the following code whose purpose is to wrap Selenium's selectFrame function call; in our case, since we are using Selenium::Remote::Driver, that is switch_to_frame.

sub select_frame {

  my $self = shift;
  my $element_type = shift;
  my $name = shift;

  if ($element_type) {

    my $query = "SELECT element_name, locator
                 FROM html_element_tbl
                 WHERE element_type = '$element_type'
                 AND name = '$name'
                 AND is_active = true;";

    # Get DB handler
    my $dbh = Custom::Wepa::db_get_handle();

    # Get Statement handler
    my $sth = Custom::Wepa::db_get_st_handle($dbh, $query);

    # Execute the statement
    Custom::Wepa::db_execute($sth);

    while (my ($target, $locator) = $sth->fetchrow_array()) {

      $self->{driver}->switch_to_frame($target);
    }
  }
  else {
    $self->{driver}->switch_to_frame(undef);
  }
  return;

}

In order to use this function you must first instantiate an object of the class this method is a member of; in this case that's Wepa.pm (our main system interface module). Then you can call it with an HTML element type and element name. These are human readable names that have been "mapped" to their HTML counterparts; in our example they are frame and payment_terms respectively, which map to the id attributes of those elements.

So what are the benefits gained by the above wrap? One thing it is doing is adding a layer of abstraction to the way the UI is interacted with, by introducing the concept of an object map where machine readable, often cryptic, names are mapped to human readable names. For example, some QButton might be mapped to Purchase Item, Button. This method supports two modes, the exact same modes the method it wraps supports:

  1. Switch to $target frame
  2. Switch to default frame (undef)
By wrapping calls like this, separating what we humans call the UI controls from what the application / computer calls them and creating a mapping between the two, we have effectively created a system that is much easier to maintain. And your tests will never have to change just because the GUI controls changed IDs or were moved around.

How much easier to maintain? Consider that you now have a catalog of elements and objects that your tests reference. This catalog can be automatically updated whenever a new checkin changes an object already contained in the map table, automatically propagating the new information to the object map. The next time the tests run, they won't fail simply because of object mis-identification, as is normally the case when verifying GUIs.

Creative Commons License
VGP-Miami Web and Mobile Automation Blog by Alfred Vega is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.