Sunday, July 8, 2012

Web Application Automated Test Template (Perl)

We specialize in designing custom automated web application testing harnesses and frameworks that integrate seamlessly into your current development process, independent of the methodology you currently use.

Consider the test template below, which can be used to test any web application or page. This example uses Selenium Remote Driver; however, it can easily be replaced with any other browser automation API we choose. The only limit to what can be tested with this template is your imagination.
Webapp Test Template by Alfred Vega is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Permissions beyond the scope of this license may be available at

If you would like for us to implement this, or a more customized, automated web test strategy, contact us!

NAME - Run tests in a web browser to validate a web application.


    use strict;
    use warnings;
    use Custom::WebApp;
    use Custom::AppSubs;
    use IO::Handle;    # enables the FH->autoflush method call below
    use Test::More 'no_plan';
    use constant { APPHOME => 'http://<webapproot>/' };
    my $res_file = 'C:\Automation\Results\Test_Logs\<test_name>_test_output.txt';
    my $err_file = 'C:\Automation\Results\Test_Logs\<test_name>_error_output.txt';
    open (FH, '>', $res_file) or die "couldn't open file: $!";
    FH->autoflush(1);  # Make FileHandle HOT. Set to 0 to turn autoflush off
    Test::More->builder->output (*FH{IO});  # Redirect test output to result log file
    Test::More->builder->failure_output ($err_file); # and test failures to error log file
    my ($cap_file, @cap_files, @error_caps);            # Screenshot collection init
    my $cap_list_ref = \@cap_files;          # Normal verification screen shots are stored in this reference
    my $error_cap_ref = \@error_caps;                 # Error screenshots are stored in this reference
    my @browser = ('firefox', 'internet explorer', 'safari');
    my $test_site = 'http://<webapproot>/Register.aspx';
    my $test_plan_name = '<test_name>';                 # Ex. registration, login, data_entry
    my $timestamp = Custom::AppSubs::get_timestamp();  # Test run's unique id
    my $query = 'SELECT user_id, password, first_name, last_name, dob
                 FROM test_data_tbl
                 WHERE is_active = true
                 LIMIT 1;';
    foreach my $browser (@browser) {
        # Get DB handler
        my $dbh = Custom::AppSubs::db_get_handle();

        # Get Statement handler
        my $sth = Custom::AppSubs::db_get_st_handle($dbh, $query);

        # Execute the statement
        $sth->execute();
        # Get data that will be used during the test
        while (my($user_id, $password, $first_name, $last_name, $dob) = $sth->fetchrow_array() ) {
            my $driver = Custom::WebApp::setup_selenium(undef, $browser);
            my $web_app = Custom::WebApp->new( driver => $driver,
                                                browser => $browser,
                                                user_id => $user_id,
                                                password => $password,
                                                test_site => $test_site,
                                                test_plan_name => $test_plan_name,
                                                test_timestamp => $timestamp,
                                                log_file => *FH{IO},
                                                cap_list_ref => $cap_list_ref,
                                                error_cap_ref => $error_cap_ref );
            $web_app->goto_page( APPHOME );
    #   TEST LOGIC GOES HERE!!!!   #
            undef $web_app;
        }    # end test data loop
        $sth->finish();
        $dbh->disconnect();
    }    # end browser loop

    # Strip forward slashes from the URL and replace them with a '.'
    $test_site =~ s/\//./g;
    # Save the test run files, after stamping them with the run's unique id
    my @return_vals = Custom::AppSubs::cleanup($res_file, $err_file, $test_site, $test_plan_name, $timestamp );
    # Cleanup() returns the newly formatted and stamped test run file
    my $test_result_file = $return_vals[0];
    # Add test results (from test logs) to the automation_db.test_results_tbl
    Custom::AppSubs::parse_results($test_plan_name, $test_result_file, $timestamp);
    # Update the test_run_tbl with this run's unique id
    Custom::AppSubs::db_insert_rows("INSERT INTO test_run_tbl (test_timestamp, test_plan_name)
                                      VALUES ('$timestamp', '$test_plan_name');");
    close FH;

Test Template Walkthrough

The test script can be summarized in, and broken up into, three parts:

    a) Test Initialization and Setup
    b) Test Execution
    c) Test Cleanup and Reporting

During 'Test Initialization and Setup' we declare all of the variables that will hold the data to be used during 'Test Execution'. This data must already have been preloaded into a database (recommended), or you can use any other method you like, as long as it yields rows of data you can iterate through.

There are two loops of interest in this test template: a foreach loop that iterates through the list of browsers we are testing against, AND a while loop that is used in conjunction with DBI's fetchrow_array method to iterate through each row of test data fetched from the database.
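The shape of those two loops can be sketched in isolation with stubbed data. Here the @rows array stands in for what successive $sth->fetchrow_array() calls would return; all names and values are illustrative:

```perl
use strict;
use warnings;

# Stub rows standing in for $sth->fetchrow_array() results,
# one array ref per row of test data (user_id, password, ...).
my @rows = (
    [ 'user1', 'secret1' ],
    [ 'user2', 'secret2' ],
);

my @browser = ('firefox', 'internet explorer');
my @executed;    # records each (browser, user) combination we "tested"

foreach my $browser (@browser) {          # outer loop: one pass per browser
    my @queue = @rows;                    # re-"fetch" the data for each browser
    while ( my $row = shift @queue ) {    # inner loop: one pass per data row
        my ($user_id, $password) = @$row;
        push @executed, "$browser/$user_id";
    }
}

print scalar(@executed), "\n";    # 2 browsers x 2 rows = 4 test passes
```

In the real template the inner loop re-executes the query per browser, so each browser sees the same set of data rows, exactly as the stub above does.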

'Test Cleanup and Reporting' takes care of tagging all test artifacts (e.g. test logs, screencaps) and inserting them into the automation database. This data is then mined, analyzed and displayed in one of the Eng Test Dashboards.

A more detailed description of the above Test Script follows below:
  • Required Modules
  • Three modules are required to run the script:
        Custom::WebApp: class module for the web application. Contains methods that give access to the services offered by the main app page (e.g. goto_page, click_on).
        Custom::AppSubs: utility and support functions used by test scripts and WebApp (e.g. write_log, parse_results).
        Test::More: a framework for writing test scripts (we use the ok and not ok methods only).
    The above files are located in the directory C:\Perl\lib\Custom; this preserves the files in case Perl is upgraded at a later date.
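The Custom modules themselves are not listed in the post. A minimal skeleton consistent with how the template calls Custom::WebApp (hash-based constructor, goto_page method) might look like this; everything here is an assumption inferred from the template's usage:

```perl
use strict;
use warnings;

package Custom::WebApp;

# Hypothetical skeleton; the real module wraps a Selenium::Remote::Driver
# instance and would delegate navigation to it.
sub new {
    my ($class, %args) = @_;
    return bless { %args }, $class;    # driver, browser, user_id, ...
}

sub goto_page {
    my ($self, $url) = @_;
    # In the real module this would call $self->{driver}->get($url);
    # here we just record the navigation so the sketch is self-contained.
    $self->{last_url} = $url;
    return $self;
}

package main;

my $web_app = Custom::WebApp->new( browser => 'firefox', user_id => 'user1' );
$web_app->goto_page('http://example.com/');
print $web_app->{last_url}, "\n";    # -> http://example.com/
```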
  • Test Results & Error Log Configuration
  • Here we can define where the system will send its test results (e.g. OK, NOT OK) as well as errors during test execution.
     $res_file: this holds the location of the test result (OK, NOT OK) for the currently running test
     $err_file: this is where the system will send any errors associated with the test runs
                (e.g. when a NOT OK occurs)
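Test::Builder's output and failure_output accept either a filehandle or a filename, which is what makes this redirection possible. A self-contained illustration using temp files as stand-ins for $res_file and $err_file:

```perl
use strict;
use warnings;
use Test::More 'no_plan';
use File::Temp qw(tempfile);

# Temp files standing in for $res_file / $err_file from the template.
my ($res_fh, $res_file) = tempfile();
my ($err_fh, $err_file) = tempfile();

Test::More->builder->output($res_file);          # ok / not ok lines go here
Test::More->builder->failure_output($err_file);  # failure diagnostics go here

ok(1, 'passing check');    # written to $res_file, not the console

Test::More->builder->reset_outputs();    # restore the default streams
```

After reset_outputs the result file holds the "ok 1 - passing check" line, and any diagnostics from failed checks would have landed in the error file.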

  • Screenshot Collection Initialization
  • Any screenshots taken by the system are stored in one of two arrays: one for "NORMAL" screenshots (i.e. the ones we planned to take) and one for "ERROR" screenshots (i.e. taken automatically by the system when an error occurs).

        $cap_list_ref is a reference to the array used to store normal screenshots, @cap_files
        $error_cap_ref is a reference to the array used to store error screenshots, @error_caps
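A small sketch of how the two references might be used to route a capture into the right collection; record_capture is a hypothetical helper, not part of the template:

```perl
use strict;
use warnings;

my (@cap_files, @error_caps);
my $cap_list_ref  = \@cap_files;   # normal verification screenshots
my $error_cap_ref = \@error_caps;  # error screenshots

# Hypothetical helper: push a screenshot path onto the proper collection.
sub record_capture {
    my ($path, $is_error) = @_;
    push @{ $is_error ? $error_cap_ref : $cap_list_ref }, $path;
}

record_capture('caps/login_page.png',   0);   # planned shot
record_capture('caps/login_failed.png', 1);   # taken on error

print scalar(@cap_files), " normal, ", scalar(@error_caps), " error\n";
# -> 1 normal, 1 error
```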

  • Test Environment & Data Initialization
  • Here we define what our test environment looks like.

        @browser is a list made up of browser names we wish to invoke during the test.
         Please note that each browser used must already be installed on the PC.
        $test_site is the full URL of the page we are driving (if known).
        $query is a string holding a SQL query that, when executed, returns rows of data for the test.
        $test_plan_name is the name of the test plan being run (e.g. registration, login, data_entry). Comes from automation_db.test_plan_tbl.
        $timestamp holds this test run's unique identifier (includes milliseconds).
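The internals of Custom::AppSubs::get_timestamp are not shown in the post; a minimal sketch that yields a sortable, millisecond-resolution id might look like the following. The exact format is an assumption:

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday);
use POSIX qw(strftime);

# Hypothetical sketch of get_timestamp(): a sortable run id with
# milliseconds, e.g. 20120708.143059.123. The real Custom::AppSubs
# format is not shown in the post.
sub get_timestamp {
    my ($sec, $usec) = gettimeofday();
    return strftime('%Y%m%d.%H%M%S', localtime $sec)
         . sprintf('.%03d', $usec / 1000);
}

print get_timestamp(), "\n";
```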

  • Browser Loop
  • The purpose of this loop is to iterate through our @browser list and execute the "test" once for each entry. Browser windows are automatically closed after test execution.

        foreach my $browser (@browser) {
            # test data loop for internet explorer...
            # test data loop for firefox...
            # test data loop for safari...
        }
    Possible values for $browser are: firefox | internet explorer | htmlunit | iphone | chrome
  • Test Data Loop
  • After making a connection to our automation_db and selecting the current run's test data, we iterate through each row of data retrieved using a while loop and DBI's fetchrow_array method.

        while ( my ($criteria1, $criteria2) = $sth->fetchrow_array() ) {
            # test for each set of criteria (i.e. row) returned...
        }

  • Test Cleanup
  • After every test run, all test artifacts are tagged with the test run's unique id and stored for later display in the dashboard, or for ad hoc SQL queries against the test_results_tbl.

    A few things happen during cleanup:
        1. Calls the cleanup function to rename all test artifacts to <filename>.$test_site.$timestamp
        2. Adds test results (from the test logs) to the automation_db.test_results_tbl
        3. Updates the automation_db.test_run_tbl with this run's unique id and test plan name
        4. Closes any open file handles
        5. Exits
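A sketch of step 1, the artifact-tagging part. The real Custom::AppSubs::cleanup is not shown in the post; the naming scheme below follows the description above, and tag_artifact plus its use of File::Copy are assumptions:

```perl
use strict;
use warnings;
use File::Copy qw(move);
use File::Temp qw(tempdir);

# Hypothetical sketch: rename one artifact to <filename>.<test_site>.<timestamp>,
# sanitizing the URL the same way the template does, and return the new path.
sub tag_artifact {
    my ($file, $test_site, $timestamp) = @_;
    (my $site = $test_site) =~ s{/}{.}g;    # strip forward slashes, as in the template
    my $tagged = "$file.$site.$timestamp";
    move($file, $tagged) or die "rename failed: $!";
    return $tagged;
}

# Usage with a scratch file in a temp directory:
my $dir  = tempdir(CLEANUP => 1);
my $file = "$dir/results.txt";
open my $fh, '>', $file or die $!;
print {$fh} "ok 1\n";
close $fh;

my $new = tag_artifact($file, 'http://example.com/Register.aspx',
                       '20120708.143059.123');
print "$new\n";
```

The real cleanup would apply the same rename to every artifact of the run (result log, error log, screenshots) before the results are parsed into the database.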
VGP-Miami Web and Mobile Automation Blog by Alfred Vega is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.