Sunday, October 7, 2012

Selenium - Click and tell me what page I landed on

In this example I show you how to write, using Perl and Selenium, a reusable function that will click on a given element and report back the page it landed on. This is useful when checking menus as well as process flows, for example at each step of a registration or order process. First, the code:
 sub click_and_verify {
 
     my $self         = shift;
     my $element_type = shift;
     my $name         = shift;
 
     # Look up the element's locator in the automation DB
     my $query = "SELECT element_name, locator
                  FROM html_element_tbl
                  WHERE element_type = '$element_type'
                  AND name = '$name'
                  AND is_active = true;";
 
     my $dbh = Custom::WepaSubs::db_get_handle();
 
     my @elem = $dbh->selectrow_array($query);
 
     my ($target, $locator) = ($elem[0], $elem[1]);
 
     # Click the element, give the page a moment to settle, then report where we landed
     $self->{driver}->find_element($target, $locator)->click() and Custom::WepaSubs::wait_for(1.5);
     my $landing_page = $self->{driver}->get_current_url();
     return $landing_page;
 }
Then, from your test script, you call this function and pass it the element type and name of the object on the page you want to click:
 my $landing_page = $web_app->click_and_verify('menu_item', 'send_mail');  
click_and_verify returns the URL of the page we landed on after the click occurred. You can then compare this URL ($landing_page) to the expected-result criteria URL ($ref_page):
 if ($landing_page eq $ref_page) {
     # we didn't navigate, so fail
     unlike($landing_page, qr/$ref_page/, 'Clicked on ' . $menu_item . ' and landed on ' . $landing_page);
     $web_app->get_screenshot('ERROR', 'ReqID_0380_sendmail_dbsetup_access_' . 'menu_' . $menu_item . '_');
 }
 else {
     cmp_ok($landing_page, '=~', APPHOME, 'Clicked on ' . $menu_item . ' and landed on ' . $landing_page);
     $web_app->get_screenshot('NORMAL', 'ReqID_0380_sendmail_dbsetup_access_' . 'menu_' . $menu_item . '_');
 }
If you're wondering about the cmp_ok and unlike functions, I urge you to check out Test::More on CPAN (you'll be glad you did).
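For reference, here is a minimal, self-contained sketch of those Test::More functions in action (the URL and test names are made up for illustration):

    use strict;
    use warnings;
    use Test::More tests => 3;

    my $landing_page = 'http://example.com/inbox';

    # Passes if the two strings are equal
    cmp_ok($landing_page, 'eq', 'http://example.com/inbox', 'landed on the inbox page');

    # Passes if the string matches the pattern
    like($landing_page, qr/inbox/, 'URL contains "inbox"');

    # Passes if the string does NOT match the pattern
    unlike($landing_page, qr/login/, 'we were not bounced back to the login page');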

That's it. Just another tool to add to your testing arsenal. If anything needs clarification, just ask (the code above is extracted from working code). I use page objects to model the web sites and web applications I develop test suites for, and the Selenium commands are abstracted as well, which is why you see references to $self and $web_app in the sample code.
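For context, here is a stripped-down sketch of what such a page-object class might look like. The package name Custom::WebApp matches the test template post further down this blog, but the body shown here is hypothetical and not the actual production module:

    package Custom::WebApp;

    use strict;
    use warnings;

    # Minimal constructor: wrap a Selenium::Remote::Driver instance
    sub new {
        my ($class, %args) = @_;
        my $self = {
            driver  => $args{driver},   # Selenium::Remote::Driver object
            browser => $args{browser},
        };
        return bless $self, $class;
    }

    # Navigate to a page and return the object for chaining
    sub goto_page {
        my ($self, $url) = @_;
        $self->{driver}->get($url);
        return $self;
    }

    1;

The test script then talks only to the page object, for example:

    my $web_app = Custom::WebApp->new( driver => $driver, browser => 'firefox' );
    $web_app->goto_page('http://<webapproot>/');
    my $landing_page = $web_app->click_and_verify('menu_item', 'send_mail');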

Monday, September 24, 2012

Extend Selenium::Remote::Driver to support applicationName capability

Why would you need applicationName support incorporated into your automated testing framework? Well, if for no other reason than to have more granularity when defining the environment in which your automated web application tests will execute. It is my opinion that we have to be in full control (to the largest extent possible) of the test environment, and that kind of control begins when we are defining the environment.

The module we will be making changes to is Driver.pm (available from CPAN). It is the Perl implementation of the driver that talks to the Selenium standalone server (which is required in order to use the Perl language bindings to Selenium).

NOTE: I recommend using your Perl package manager to download and install Selenium::Remote::Driver plus its dependencies, and familiarizing yourself with it, before going through the changes below.

So how do you add this support to the already great Perl language bindings to the Selenium driver? Below is how I did it:

Changes to Driver.pm to support applicationName capability:

1. Add application_name property to the new method (see code below)
2. Add applicationName (see code below) to the new_session desiredCapabilities hash

Basically, in order to support the applicationName property you need to update the new and new_session methods as outlined below (note: the code is a fragment, shown for reference). The code below is found around line 173 of the Driver.pm module; the newly inserted line is number 12, marked with >>>> <<<< in both fragments.
1:  sub new {  
2:    my ( $class, %args ) = @_;  
3:    my $ress = new Selenium::Remote::Commands;  
4:    
5:    # Set the defaults if user doesn't send any  
6:    my $self = {  
7:      remote_server_addr => delete $args{remote_server_addr} || 'localhost',  
8:      browser_name    => delete $args{browser_name}    || 'firefox',  
9:      platform      => delete $args{platform}      || 'ANY',  
10:      port        => delete $args{port}        || '4444',  
11:      version      => delete $args{version}      || '',  
12:   >>>>> application_name  => delete $args{application_name}  || undef, <<<<<<  
13:      session_id     => undef,  
14:      remote_conn    => undef,  
15:      commands      => $ress,  
16:      auto_close     => 1, # by default we will close remote session on DESTROY  
17:      pid        => $$,  
18:    };  

1:  sub new_session {  
2:    my ($self, $extra_capabilities) = @_;  
3:    $extra_capabilities ||= {};  
4:    my $args = {  
5:      'desiredCapabilities' => {  
6:        'browserName'    => $self->{browser_name},  
7:        'platform'     => $self->{platform},  
8:        'javascriptEnabled' => $self->{javascript},  
9:        'version'      => $self->{version},  
10:        'acceptSslCerts'  => $self->{accept_ssl_certs},  
11:        'proxy'       => $self->{proxy},  
12:    >>>>>> 'applicationName'  => $self->{application_name},  <<<<<<  
13:        %$extra_capabilities,  
14:      },  
15:    };  
After the above modifications you can, in your test scripts, add the applicationName property to your desired capabilities hash. Below is an example:
 my %desired_capabilities = ( remote_server_addr => $grid_server,
                              browser_name       => $browser,
                              platform           => $os_platform,
                              port               => $port,
                              application_name   => $app_name,
                              proxy              => {
                                  proxyType => 'system'
                              });
 
 my $driver = Selenium::Remote::Driver->new(%desired_capabilities);
Benefits of running with the above:
You can now define specifically which browser, OS, and platform to run your test against. For example, let's say you want to run a test against Firefox on Windows 7 x64; with the stock implementation there is no way to force the grid to select that specific platform.

With this change you can define an environment (application) when the node is being registered with the hub:
 java -jar selenium-server-standalone-2.25.0.jar -role node -hub http://localhost:4445/wd/hub -browser browserName=firefox,maxInstances=3,platform=VISTA,applicationName=WIN7x64FFX -port 5556  

The above command registers a node running Firefox under the applicationName WIN7x64FFX. When test scripts request this specific environment (via the applicationName property) they will be sent to this node for test execution.
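As a quick usage sketch (assuming the patched Driver.pm above), a test script can then target that node by passing the matching application_name; the hub host and port below are placeholders taken from the registration example:

    my $driver = Selenium::Remote::Driver->new(
        remote_server_addr => 'localhost',   # hub host (placeholder)
        port               => 4445,          # hub port from the registration command above
        browser_name       => 'firefox',
        application_name   => 'WIN7x64FFX',  # must match the node's applicationName
    );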

If you do not feel like doing all this yourself, feel free to download Driver.pm from my fork of the Selenium Perl bindings on GitHub, at the following link: https://github.com/freddyvega/Selenium-Remote-Driver/blob/support-application-name-capability/lib/Selenium/Remote/Driver.pm

Saturday, September 15, 2012

Another method for running Selenium tests in parallel using Perl

Here is another quick tutorial on how to implement a parallel test runner (for Selenium or any other type of test).

First the code:
1:  #!C:/Perl64/bin/perl  
2:  use strict;  
3:  use warnings;  
4:  use File::Find::Rule;  
5:    
6:  my $start_dir = shift || 'C:\Tests';  
7:  my $count = 0;  
8:    
9:  my @subdirs = File::Find::Rule->directory->in($start_dir);  
10:    
11:  my @files = File::Find::Rule->file()  
12:                ->name( '*.pl' ) # Only get perl files  
13:                ->in( $subdirs[0] );  
14:  foreach my $test (@files) {  
15:    system(1, $test);  
16:    print "Executing Test: " . $test . " Sequence #: " . $count . "\n";  
17:    print "My Father Is: " . $$ . "\n"; # process that started me  
18:    $count++;  
19:  }  
20:  exit(0);  
The magic here is being performed by line 15. Using Perl's system function in this special form, we tell it not to wait for the process it launches before continuing, effectively allowing us to spawn multiple tests at [almost] the same time.

From the Perl docs:
system(1, @args)
spawns an external process and immediately returns its process designator, without waiting for it to terminate.
The above script will output the following:
 Executing Test: C:\Temp\test_template.pl Sequence #: 0  
 My Father Is: 8692  
 Executing Test: C:\Temp\test_template2.pl Sequence #: 1  
 My Father Is: 8692  
 Executing Test: C:\Temp\test_template3.pl Sequence #: 2  
 My Father Is: 8692  
 Executing Test: C:\Temp\test_template4.pl Sequence #: 3  
 My Father Is: 8692  
The below image shows how, in my debugger, the test runner has stopped but the four spawned processes are still running (theoretically executing automated tests), as indicated by the perl.exe processes in my Windows Task Manager.



That's all it takes to launch multiple automated tests at the same time, using Perl's system function.
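If you need the runner itself to block until every spawned test has finished (instead of exiting right away), one option is to keep the process designators that system(1, ...) returns and wait on them. A minimal sketch, assuming the same @files list as above (on Windows these designators can be passed to waitpid):

    my @pids;
    foreach my $test (@files) {
        my $pid = system(1, $test);   # returns the process designator immediately
        push @pids, $pid;
        print "Spawned Test: $test as process $pid\n";
    }

    # Block until every spawned test has exited
    foreach my $pid (@pids) {
        waitpid($pid, 0);
        print "Process $pid finished with exit code " . ($? >> 8) . "\n";
    }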

Thursday, September 13, 2012

Running Selenium tests in parallel using Perl

Some are of the opinion that you should not start experimenting with parallelization of Selenium tests until you have a need for it. I do not share that opinion. I think you should dive into parallelization of your tests as soon as you have enough tests to run in parallel (e.g. TWO! :))

So what does it take to run tests in parallel? Well, all you need is a parallel test runner. Fortunately for us, Perl is the Swiss Army knife of the internet (and of automation as well). For our implementation we'll be using one module and Perl's fork().

File::Find::Rule is available on CPAN; fork() comes with your Perl distribution.

We'll be using File::Find::Rule to walk directories and build lists (arrays) of files, and fork() to manage our multiple tests running in parallel.

 #!C:/Perl64/bin/perl
 use strict;
 use warnings;
 use File::Find::Rule;
 
 my $start_dir = shift || 'C:\Tests';   # no trailing backslash: it would escape the closing quote
 my $count = 0;
 
 my @subdirs = File::Find::Rule->directory->in($start_dir);
 
 my @files = File::Find::Rule->file()
                             ->name( '*.pl' ) # Get only perl files
                             ->in( $subdirs[0] );
 
 foreach my $test (@files) {
     my $pid = fork;
     die "fork failed: $!" unless defined $pid;
 
     if ($pid == 0) { # We got a child
         print "Executing Test: " . $test . " Sequence #: " . $count . "\n";
         my $exit_code = system($test);
         print "Done Executing Test: " . $test . " Sequence #: " . $count . "\n";
         print "Exit Code: " . ($exit_code >> 8) . "\n";
         exit ($exit_code >> 8);
     }
     $count++;
 }
 exit(0);

If you run the above script with two .pl files in the C:\Temp\ directory you get the following output:

 Executing Test: C:\Temp\test1.pl Sequence #: 0  
 Executing Test: C:\Temp\test2.pl Sequence #: 1  
 Done Executing Test: C:\Temp\test2.pl Sequence #: 1  
 Exit Code: 2  
 Done Executing Test: C:\Temp\test1.pl Sequence #: 0  
 Exit Code: 1  

This is what the above script is doing:
1. You pass in a directory, or it uses C:\Tests as the default
2. It creates a list of subdirectories
3. It goes into each directory and gets a list of files
4. It forks a child for each of those files (making sure each child exits)

NOTE: Number 4 above is important! Unless you like zombies, that is!
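To keep the parent from leaving zombies behind, it can also reap each child before exiting. A minimal sketch, placed after the foreach loop above in place of the bare exit(0):

    # Reap every forked child and report its exit status
    while ((my $reaped = wait()) != -1) {
        print "Child $reaped exited with code " . ($? >> 8) . "\n";
    }
    exit(0);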

Sunday, July 8, 2012

Web Application Automated Test Template (Perl)


We specialize in designing custom automated web application testing harnesses and frameworks that seamlessly integrate into your current development process, independent of the methodology currently in use.

Consider the below test template that can be used to test any web application or page. This example uses Selenium Remote Driver; however, we can easily replace it with any other browser automation API we choose. The only limit you will have, in regards to what can be tested using this template, is your imagination.
    Email: dev@teamvgp.com
Creative Commons License
Webapp Test Template by Alfred Vega is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Permissions beyond the scope of this license may be available at http://www.teamvgp.com/.

If you would like for us to implement this, or a more customized, automated web test strategy, contact us!

NAME

my_test_script.pl - Run tests on a web browser for the purpose of validating a web application.
To read this document on our website (it's a lot cleaner in display than here on Blogger), you may click HERE!

SYNOPSIS

    use strict;
    use warnings;
    use Custom::WebApp;
    use Custom::AppSubs;
    use Test::More 'no_plan';
    
    use constant { APPHOME => 'http://<webapproot>/' };
    
    # CONFIGURE TEST REPORT AND LOGGING OPTIONS
    my $res_file = 'C:\Automation\Results\Test_Logs\<test_name>_test_output.txt';
    my $err_file = 'C:\Automation\Results\Test_Logs\<test_name>_error_output.txt';
    
    open (FH, ">$res_file") or die "couldn't open file: $!";
    FH->autoflush(1);  # Make FileHandle HOT. Set to 0 to turn autoflush off
    
    Test::More->builder->output (*FH{IO});  # Redirect test output to result log file
    Test::More->builder->failure_output ($err_file); # and test failures to error log file
    
    my ($cap_file, @cap_files, @error_caps);            # Screenshot collection init
    my $cap_list_ref = \@cap_files;          # Normal verification screen shots are stored in this reference
    my $error_cap_ref = \@error_caps;                 # Error screenshots are stored in this reference
    my @browser = ('firefox', 'internet explorer', 'safari');
    my $test_site = 'http://<webapproot>/Register.aspx';
    my $test_plan_name = '<test_name>';                 # Ex. registration, login, data_entry
    my $timestamp = Custom::AppSubs::get_timestamp();  # Test run's unique id
    
    my $query = 'SELECT user_id, password, first_name, last_name, dob
                 FROM test_data_tbl
                 WHERE is_active = true
                 LIMIT 1;';
        
    foreach my $browser (@browser) {
        
        # Get DB handler
        my $dbh = Custom::AppSubs::db_get_handle();

        # Get Statement handler
        my $sth = Custom::AppSubs::db_get_st_handle($dbh, $query);

        # Execute the statement
        Custom::AppSubs::db_execute($sth);

        # Get data that will be used during the test
        while (my($user_id, $password, $first_name, $last_name, $dob) = $sth->fetchrow_array() ) {
            
            my $driver = Custom::WebApp::setup_selenium(undef, $browser);
        
            my $web_app = Custom::WebApp->new( driver => $driver,
                                                browser => $browser,
                                                user_id => $user_id,
                                                password => $password,
                                                test_site => $test_site,
                                                test_plan_name => $test_plan_name,
                                                test_timestamp => $timestamp,
                                                log_file => *FH{IO},
                                                cap_list_ref => $cap_list_ref,
                                                error_cap_ref => $error_cap_ref );
            
            $web_app->goto_page( APPHOME );
    
    ################################
    #   TEST LOGIC GOES HERE!!!!   #
    ################################
    
            $driver->quit();
    
            undef $web_app;
        }
    
    }
    # Strip forward slashes from url and replace with a '.'
    $test_site =~ s/\//./g;
    
    # Save the test run files, after stamping them with the unique id for the test run
    my @return_vals = Custom::AppSubs::cleanup($res_file, $err_file, $test_site, $test_plan_name, $timestamp );
    
    # Cleanup() returns the newly formatted and stamped test run file
    my $test_result_file = $return_vals[0];
    
    # Add test results (from test logs) to the automation_db.test_results_tbl
    Custom::AppSubs::parse_results($test_plan_name, $test_result_file, $timestamp);
    
    # Update the test_run_tbl with this run's unique id
    Custom::AppSubs::db_insert_rows("INSERT INTO test_run_tbl (test_timestamp, test_plan_name)
                                      VALUES ('$timestamp', '$test_plan_name');");
    
    close FH;
    exit(0);


Test Template Walkthrough

The test script can be summarized as well as broken up into three parts:

    a) Test Initialization and Setup
    b) Test Execution
    c) Test Cleanup and Reporting

During 'Test Initialization and Setup' we declare all of the variables that will hold the data to be used during 'Test Execution'. This data must already have been preloaded into a database (recommended), or you can use whatever method you'd like (as long as you get rows of data that you can iterate through).

There are two loops of interest in this test template: a foreach loop that iterates through the list of browsers we are testing against, and a while loop that is used in conjunction with DBI's fetchrow_array method to iterate through each row of test data fetched from the database.

'Test Cleanup and Reporting' takes care of tagging all test artifacts (e.g. test logs, screencaps) and inserting them into the automation database. This data is then mined, analyzed and displayed in one of the Eng Test Dashboards.

A more detailed description of the above Test Script follows below:
  • Required Modules
  • There are three modules required to run the scripts as described below:

      WebApp.pm:  Class module for the web application. Contains methods that give
                  access to the services offered by the main app pages (e.g. goto_page, click_on).
      AppSubs.pm: Utility and support functions used by test scripts and WebApp
                  (e.g. write_log, parse_results).
      More.pm:    A framework for writing test scripts (we use its OK and NOT OK output only).
        
    
    The above files are located in the directory C:\Perl\lib\Custom; this is done to preserve the files in case Perl is updated at a later date.
       
  • Test Results & Error Log Configuration
  • Here we can define where the system will send its test results (e.g. OK, NOT OK) as well as errors during test execution.
     $res_file: this holds the location of the test result (OK, NOT OK) for the currently running test
        
     $err_file: this is where the system will send any errors associated with the test runs
                (e.g. when a NOT OK occurs)
      

  • Screenshot Collection Initialization
  • Any screenshots taken by the system are stored in arrays: one for "NORMAL" screenshots (i.e. the ones we planned to take) and one for "ERROR" screenshots (i.e. taken by the system when an error occurs).

        $cap_list_ref is a reference to the array used to store normal screenshots, @cap_files
        
        $error_cap_ref is a reference to the array used to store error screenshots, @error_caps
    
    

  • Test Environment & Data Initialization
  • Here we define what our test environment looks like.

        @browser is a list made up of browser names we wish to invoke during the test.
         Please note that each browser used must already be installed on the PC.
                 
        $test_site is the full URL of the page we are driving (if known).
        
        $query is a string with a sql query that when executed will return rows of data for the test.
        
        $test_plan_name is self-explanatory; it comes from automation_db.test_plan_tbl
        
        $timestamp holds this test run's unique identifier (includes milliseconds).
    
    

  • Browser Loop
  • The purpose of this loop is to iterate through our @browser list and execute the "test" for each one. Browser windows are automatically closed after test execution.

        foreach my $browser (@browser) {
            ...do test data loop for ie...
            ...do test data loop for ff...
            ...do test data loop for safari...
        }
    
    
    Possible values for browser are: firefox|internet explorer|htmlunit|iphone|chrome
       
  • Test Data Loop
  • After making a connection to our automation_db and selecting the current run's test data, we iterate through each row of data retrieved using a while loop and DBI's fetchrow_array method.

        while (my ($criteria1, $criteria2) = $sth->fetchrow_array()) {
            ...do test for each set of criteria (i.e. row) returned... 
        }
    
      

  • Test Cleanup
  • After every test run, all test artifacts are tagged (with the test run's unique id) and stored for later display in the dashboard or for retrieval through SQL queries of the test_results_tbl.

    A few things happen during cleanup:
        1. Calls the cleanup function to rename all test artifacts to <filename>.$test_site.$timestamp
        2. Adds test results (from test logs) to the automation_db.test_results_tbl
        3. Updates the automation_db.test_run_tbl with this run's unique id and test plan name
        4. Closes any open file handles
        5. Exits
    
    

Friday, March 16, 2012

Software Development Life Cycle

When it comes to software development, we all know there is no one-size-fits-all solution. Adopting a process that includes requirements analysis and some form of design, as well as unit tests, not only produces a better product; it also gives you an adequate (for your shop) level of documentation in the form of user stories, TDD unit tests, or a requirements doc, for example. You may ask, why do I need documentation? Consider the case of an employee who leaves unexpectedly. All this employee leaves behind is code (maybe even well documented code); however, a lot of knowledge and subject matter expertise leaves with that employee. That knowledge, in the context of your products or services, is imperative to your current business model, IMO, and we must try to retain as much of it as possible. Thinking through a design, filling the gap between what the customer wants and what we can offer, and documenting that in some way is what the requirements gathering and design stages are all about, and it fulfills the documentation requirement. The point is we can be as agile as we want, but we have to document our work.

Consider the following Software Development Life Cycle diagram, with which, I contend, TDD as well as any other software development methodology can be used.


Fig 1 - SDLC should accommodate any development methodology


As you can see from the diagram above, there are distinct phases that have been defined. The actual design and coding of the software, if following this process, can be executed using any method chosen by the developers. What matters is that we get unit tested code ready for integration.

Your thoughts and comments are welcome.

TDD is Unit Testing re-defined

Well developed and thorough unit tests will always reduce the number of bugs to be found downstream, where they are more expensive to fix. Some benefits of Test Driven Development (from the QA/SVT perspective):

(1) Testable code (i.e. all promised features at least "work")
(2) Less "careless/oversight/lazy" bugs
(3) More time to spend on finding "real" bugs
(4) More time testing and less time haggling

(5) Less time spent on test system maintenance

(insert yours here)

However, this can also be accomplished using a DCUT (Design, Code, Unit Test) approach regardless of the methodology used. Well-written unit tests not only provide code verification; they are also a key component of the developer's personal feedback loop. There is one huge benefit I see TDD has brought to the table: the developer is forced to write the unit test before he actually writes the code. This has always been the sticking point in every software house I have worked in: unit tests! Or the lack thereof.


There are exceptions to every rule. I have known many software engineers who write unit tests; they follow (no matter what methodology is being employed) a Design, Code, Unit Test approach. Their outputs were _ALWAYS_ free of "obvious" bugs. These engineers, in my opinion, look at software development as an engineering process and not an artistic one, although creativity is a must. In contrast, I have known many software developers who do not write unit tests and, further, feel they are an unnecessary use of time. I believe therein lies the difference: Software Engineer vs. Software Developer; Engineer vs. Artist.

I see TDD as one of the igniters of a "pattern training and modification" in the way developers think about unit testing and development in general, something the software industry very much needed.

This ties into another discussion: is software development a manufacturing process? If you are not writing unit tests, how will you measure the output of your function? You wouldn't, and that is the case where software development is considered an art. If, on the other hand, you're writing unit tests, then you'll be able to measure your output and hence be following a more manufacturing-like, engineering process.

While [TDD] has been a success in getting a lot of folks in the software development industry to embrace unit testing, DCUT has been employed by friends and colleagues of mine, and I'm sure by most of you, since the '80s and '90s; it just was not called TDD. These are the folks who, in my opinion, have always understood software development. Unit tests are like a personal feedback loop, one in which I as a developer can measure my output and decide success or failure. Any developer who does not unit test is employing what I call "faith development" practices and is the weakest link in the organization, in my opinion, regardless of the methodology / development process being used.




Consider the following Software Development Life Cycle diagram, with which, I contend, TDD as well as any other software development methodology can be used.


Fig 1 - SDLC should accommodate any development methodology



Your thoughts, comments and feedback are welcome.

Wednesday, March 14, 2012

Are bug reporting and counting useful activities?

There are some in our profession (SVT/QA) who feel that counting bugs is somehow a waste of time. They instead favor a more ad-hoc method of working through issues/bugs found during the PDLC and addressing those issues even before they get documented. The consensus among them seems to be: why document it if it already happened? Ha! This is the same camp that claims to follow the scientific method of designing experiments or tests!

As far as bug counts go, my view is that if bugs are going to be counted for a purpose, then that purpose has to be defined. I am NOT of the camp that believes we can do without bug reports, simply because we need these reports to do our work. Just like scientists document their work, we document our work. How else are we going to turn this "craft", as some folks in the aforementioned camp call it, into a profession? One step at a time, that's how. Bug reports are just one of those steps.

Yes, you can get all philosophical and start asking questions about why and how to write bug reports, or how and why to count them, but at the end of the day we're building a product. Even if you're knee-deep in software development "arts", that masterpiece is still going to be part of a greater system that makes it [the masterpiece] a "Product". This "product" was engineered, designed and planned ahead of time (not necessarily in that order), no matter what methodology was used to develop it [the product]. This implies research has to be performed and notes compared.

Yes, you can also argue that you do not need this information because it already happened; it's in the past and it's over and done with. Sure, if you want to think like that, I'm certain there is a place in the testing world for you. In my camp, there is no place for you. :) And IMO my product will be better than your product on the following criteria:


1. Customer Acceptance (no changes after ship)
2. Usability
3. Install, Setup, Configuration
4. Customer Returns
5. Much better 2.0 version of the product

This product does not have to be SW or HW, BTW; I'm talking about any product. Anybody from the "no bug reports/counts" camp up for the challenge?


Comment to challenge :)

Monday, March 5, 2012

The test team is dispensable

I have been following a couple of discussions on LinkedIn where the consensus is that the test team is dispensable because it is a department that does not produce revenue yet incurs a lot of overhead.

Well, I disagree with that. Not only is testing a necessity, it is also the only form of risk management available to stakeholders that actually places the product in (as much as possible) the same environment in which the end user would use it. Furthermore, testing subjects the product under test to the conditions a regular user would put it [the product] through, under a wide range of scenarios. These scenarios can vary depending on the testing method employed.

Below is my top 10 list [in no particular order] of why we test:
1. Risk assessment / management / mitigation
[ the rest are related to number 1 above in some way ]
2. Assess the end result against what we envisioned the end product to be
3. Verify implemented product features against feature requirements
4. Verify product integrated with the system in which it will run in production / customer
5. Verify the intended behavior of the product when known error conditions occur
6. Assess how the product performs under stress (at both ends of the scale)
7. Verify the intended behavior of the product when catastrophic / unknown error conditions occur
8. Provide feedback to the development team during design, development and SIT
9. Verify product features against user requirements
10. Identify deviations

How can anyone argue that the above list is not needed by any organization that has external customers, let alone an actual product that is sold? Yes, you may be able to get away without testing if all of your customers are internal; for example, the IT department in a company with 100 employees.

I think that the majority of the folks who opined in the LinkedIn discussions mentioned are confusing "testing" with "test team", or perhaps using the terms interchangeably. See, in my opinion a test team can be dispensable, but not testing. Just like accountants are dispensable, but not keeping track of the company finances. Just like marketing product managers are dispensable, as long as the product is marketed. Just like salesmen are dispensable, as long as the product gets sold.

My point is that some people try to create chaos in the testing world by raising these false alarms about the testing team somehow needing justification for its existence. The fact is that _ALL_ departments need justification for their existence, and no one is immune from the almighty axe when push comes to shove. This "justify your existence" notion is a fallacy; if you acknowledge it, you have already failed. There are far more constructive ways for an organization to manage its employees' performance, in regard to helping the revenue stream, than to ask individuals to justify themselves. I'll list some below.

How to avoid the "justify yourself" fallacy:
1. Set employee goals
2. Measure employee goals
3. Report / give feedback to employee regarding goals
4. Reward goals met (not necessarily with money)
5. Warn employee when goals are not met (consequences should be clear)
6. Retain only employees that meet goals (this consequence must be stated)
7. Hire only employees that will meet your goals (screening / interviewing process)

If you still feel, after reading all this, that testing teams need to justify themselves then you are a proud new member of the "Justify Yourself Fallacy Club".

Thoughts?

Wednesday, January 11, 2012

Handy query to find duplicate rows in a MySQL DB

My most recent project has me dealing with huge (we're talking hundreds of megabytes) lists of email addresses. These email addresses have been collected over years and I found out the hard way that there were not only duplicates and triplicates, but some email addresses were listed in there more than 6 TIMES!!!

My first order of business was to clean up the email lists, which reside in MySQL tables. Here is the query to grab all the duplicates. This query will also tell you how many times each email address is duplicated:

SELECT email_address,
COUNT(email_address) AS NumOccurrences
FROM email_marketing_tbl
GROUP BY email_address
HAVING COUNT(email_address) > 1;


The above query will produce output like this:


+----------------------------+----------------+
| email_address              | NumOccurrences |
+----------------------------+----------------+
| email1@comcast.net         |              2 |
| email2@ktis.net            |              2 |
| email3@hotmail.com         |              2 |
| email4@netzero.com         |              2 |



Once dupes are identified and reviewed, the next step is to delete the duplicate records. We do so using the following SQL:

CREATE TEMPORARY TABLE email_temp AS SELECT * FROM email_marketing_tbl
GROUP BY email_address;
DELETE FROM email_marketing_tbl;
INSERT INTO email_marketing_tbl SELECT * FROM email_temp;


Below are the results of the above three queries:

Query OK, 539 rows affected
Records: 539  Duplicates: 0  Warnings: 0
Query OK, 548 rows affected

Query OK, 539 rows affected
Records: 539  Duplicates: 0  Warnings: 0
To recap:
1. Identify which records are duplicates
2. Create a temporary table from a SELECT on email_marketing_tbl grouped by email_address (the table with the dupes in it)
3. Delete all records from email_marketing_tbl
4. Insert all (cleaned up) records from email_temp into email_marketing_tbl
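
If you prefer to drive the duplicate check from Perl (as in the earlier posts), a minimal DBI sketch along these lines would work; the connection settings and database name are placeholders:

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection settings
    my $dbh = DBI->connect('DBI:mysql:database=marketing_db;host=localhost',
                           'user', 'password', { RaiseError => 1 });

    my $sth = $dbh->prepare(q{
        SELECT email_address, COUNT(email_address) AS NumOccurrences
        FROM email_marketing_tbl
        GROUP BY email_address
        HAVING COUNT(email_address) > 1
    });
    $sth->execute();

    while (my ($email, $count) = $sth->fetchrow_array()) {
        print "$email appears $count times\n";
    }

    $dbh->disconnect();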

That's it!

Creative Commons License
VGP-Miami Web and Mobile Automation Blog by Alfred Vega is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.