Friday, December 4, 2015

Verifying PDF file contents

Sometimes the applications we are testing output reports, in a variety of formats, that require some level of verification. One of these formats is the ever-popular PDF.

An option for verifying PDFs might be the PDFlib Text Extraction Toolkit (TET) from PDFlib GmbH. This toolkit appears to be very powerful, supporting the extraction of text and images as well as the other objects that make up a PDF file.

The example below uses Squish's API to drive an application developed with the Qt framework. While the script examples here are in Perl, you can substitute the language of your choice, as long as it offers similar services to the programmer.

In the following example: 

The function get_pdf_objs conceptually uses the external application tet.exe (the Text Extraction Toolkit) in combination with Perl's system function to extract all text and images from a PDF file, storing them in a text file or image files respectively.
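
A minimal sketch of what get_pdf_objs might look like follows. The tet.exe command-line flags and the output file naming are assumptions on my part; check the TET manual for the options your version actually supports:

  use strict;
  use warnings;
  use File::Basename qw(basename);

  # Run PDFlib TET via Perl's system() to extract the text of a PDF
  # into a .txt file. The --outputdir flag is an assumption; consult
  # the TET documentation for the exact option names.
  sub get_pdf_objs {
      my ($pdf_file, $out_dir) = @_;

      my $rc = system( 'tet.exe', '--outputdir', $out_dir, $pdf_file );
      die "tet.exe failed on $pdf_file (status $?)" if $rc != 0;

      # Assumed: TET writes <name>.txt into the output directory
      ( my $txt = basename($pdf_file) ) =~ s/\.pdf$/.txt/i;
      return "$out_dir/$txt";
  }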

parse_pdf is not really a parser; in this example its only role is to produce an array of text strings extracted from the text file that get_pdf_objs generated.
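
A simple implementation might look like this (a sketch, assuming we want one string per line of the extracted text):

  # Slurp the text file produced by get_pdf_objs and return its
  # lines as an array of strings for later comparison.
  sub parse_pdf {
      my ($txt_file) = @_;

      open my $fh, '<', $txt_file
          or die "Cannot open $txt_file: $!";
      chomp( my @lines = <$fh> );
      close $fh;

      return @lines;
  }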

Using Perl, or any of the other supported scripting languages, we can easily script the handling of PDF files using an external tool like PDFlib TET and a few custom utility functions developed for this purpose. The above example illustrates this concept; the process is outlined below:
  1. Start the application 
  2. Navigate the UI until a report is generated
  3. Print that report 
  4. Pass the file to a PDFLib function to extract text, images, etc.
  5. Parse the extracted text, images, etc. and compare to known good values 

I'd like to know your thoughts via comments.

Saturday, October 24, 2015

Test Cases - Some Benefits Derived

On numerous occasions I've been involved in discussions about test cases with other folks involved in the software development life cycle, where the generally accepted sentiment is that when all test cases are "executed" we are done testing, or that no testing has been done unless test cases have been executed.

Unfortunately for our field, I've often found myself alone in trying to show that executing 100 test cases does not necessarily mean you did any testing at all. You could have, but that depends on the person executing the test cases and whether they understand what test cases are and how to use them in testing.

I gave the presentation below at a former employer that had a huge catalog of test cases with detailed steps and was still struggling with product quality. They were hesitant, however, to accept that they weren't really testing when purely executing the test cases, mainly because they feared that all the effort that went into the creation and maintenance of those test cases would be wasted.

In the session I explained how they could actually use these test cases to get the maximum return on their investment in creating these artifacts. Test cases, since they are designed by subject matter experts, contain all the information that most real-world users of the product will need to perform the very same functions; and since they contain detailed steps, they are suitable for many applications.

Your thoughts and comments are welcome.



Saturday, February 28, 2015

Virtual (Simulated) Users Rock!

I've been absent from the blogosphere for the last two months because I've been busy working on a new assignment. This assignment has me knee-deep in designing an automated web application framework for a leading SaaS company.

One of the first things I identified as being needed for this implementation of my framework (beyond the basics: reporting, logging, etc.) was a group of virtual users (little robots, if you will) that would mimic not only real users of the system, but their behaviors as well.

User State System
Enter the User State System. While the name may sound fancy, this is nothing more than a series of database tables whose sole responsibility is to store the state of each user, their interactions with the SaaS system, and any artifacts they create along the way. When running tests in parallel (or even sequentially), keeping track of user state lets you:
  •  Restore the entire user list to a known default state
  •  Restore a particular user to a known or default state
  •  Share users and their current states (e.g. logged in, not logged in) across test scripts
A user state system holds the application's user information (e.g. user name, password, email) as well as the current user state as it relates to the application under test (e.g. user is busy, paid user, free user, security answers).
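
As a rough sketch, such a table might look like the following (the table and column names are illustrative, not a prescription; SQLite is used here only for simplicity):

  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect( 'dbi:SQLite:dbname=user_state.db', '', '',
                          { RaiseError => 1 } );

  # Hypothetical user state table: credentials plus current state
  $dbh->do(q{
      CREATE TABLE IF NOT EXISTS user_state (
          user_id          INTEGER PRIMARY KEY,
          user_name        TEXT NOT NULL,
          password         TEXT NOT NULL,
          email            TEXT,
          account_type     TEXT,               -- e.g. 'paid' or 'free'
          can_add_invoices INTEGER DEFAULT 0,  -- rights flag
          is_busy          INTEGER DEFAULT 0,  -- claimed by a running test?
          logged_in        INTEGER DEFAULT 0,
          security_answers TEXT
      )
  });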

An Example
As an example, let's say you have a system that supports a SaaS for an accounting firm. You have SLAs in place that guarantee certain response times and load times, as well as limits on unscheduled downtime. In a user state table for this type of system, all users share a common starting state; however, this state is only guaranteed when the user record is first added or reset to a default state. From then on, each user's state is represented in this table. In other words, this is a dynamic table, constantly being updated by the running test scripts, or reset to a default state either globally (for all users) or for a particular group of users (i.e. one or more users).

With a user state system in place, test scripts request, as part of their setup, a specific user with a specific set of characteristics from the user state system; the set of characteristics is determined by the features and functions being tested. The test scripts then execute using the requested user. For example, if you wanted to test that invoices are generated properly, you would request a user that has the appropriate rights (i.e. is able to add invoices) and that has an account configured with a budget with enough money to satisfy the invoice; or, conversely, a user whose account is configured with a budget that does not have enough money to satisfy the invoice.
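
In code, the setup step might look something like this sketch (request_user and its criteria columns are hypothetical helpers of mine, matching the table sketched above, not part of any library):

  # Claim the first available user matching the given criteria and
  # mark it busy so parallel tests don't grab the same one.
  sub request_user {
      my ($dbh, %criteria) = @_;

      my $user = $dbh->selectrow_hashref(
          q{ SELECT * FROM user_state
              WHERE is_busy = 0
                AND account_type = ?
                AND can_add_invoices = ?
              LIMIT 1 },
          undef,
          @criteria{qw(account_type can_add_invoices)},
      );
      die 'No suitable user available' unless $user;

      # A real implementation would claim the user atomically (e.g.
      # inside a transaction) to avoid races between parallel tests.
      $dbh->do( q{ UPDATE user_state SET is_busy = 1 WHERE user_id = ? },
                undef, $user->{user_id} );
      return $user;
  }

  # Test setup: ask for a paid user who is allowed to add invoices
  my $user = request_user( $dbh, account_type     => 'paid',
                                 can_add_invoices => 1 );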

Conclusion
As you can see from the above example, you can effectively use a state system to pinpoint the specific functionality required by the test you are designing, without building that test data into the test itself. This level of abstraction gives the test designer, and the automated test system, the flexibility and speed gained from using shared virtual users with a known current state.

As well, although there is no one-size-fits-all solution when it comes to saving user state, there are some common features that can be shared by any user state system used for functional verification in QA, or even for TiP (testing in production). I will share some of these in a future post.

Wednesday, November 12, 2014

Testing with Perl - Test::More

The Test::More module, available via CPAN, is part of the Test::* series and provides a wide range of testing functions. In the previous post we learned that TAP is just a simple text-based interface between the testing modules that make up a test harness; Test::More is one of these modules. It utilizes TAP and builds on the protocol's simplicity by providing various ways to say "ok". It includes, among other things:
  • Better diagnostics (not just OK / NOT OK, but also, for example, WHY)
  • The capability to skip tests given a range of criteria
  • The capability to test future features (using TODO)
  • The capability to compare complicated data structures
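Here is a quick sketch of those features in action (the checks themselves are made up for illustration):

  use strict;
  use warnings;
  use Test::More tests => 4;

  # Diagnostics: on failure, is() reports both 'got' and 'expected'
  is( 2 + 2, 4, 'arithmetic still works' );

  # Skipping tests given a criterion
  SKIP: {
      skip 'no network available', 1 unless $ENV{HAVE_NET};
      ok( 1, 'network-dependent check' );
  }

  # Testing a future feature: failures inside TODO are not fatal
  TODO: {
      local $TODO = 'feature not written yet';
      ok( 0, 'future feature' );
  }

  # Comparing complicated data structures
  is_deeply( { a => [ 1, 2 ] }, { a => [ 1, 2 ] }, 'structures match' );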
Let's jump right in and use the above referenced module to test a web application available at myapp.com (a fictitious web site). As part of our test we will:
  1. Navigate to the login page
  2. Make sure we are at the correct starting point
  3. Enter our login credentials
  4. Log in to the site
  5. Verify our landing page after the login operation.
Implemented as a Perl test script that uses Selenium::Remote::Driver and Test::More we might write the above as:
1:  $driver->get($page);  
2:  my $loc1 = $driver->get_current_url();  
3:  is ( $loc1, LOGINHOME, 'Verify landing page' );  # THIS IS TEST 1
4:  login_as( $user_id, $password );  
5:  my $loc2 = $driver->get_current_url();  
6:  is ( $loc2, APPHOME, 'Verify landing page after login' );  # THIS IS TEST 2

In line 1 we navigate to our starting page (the login page) using Selenium's get method. In line 2 we declare a variable that will hold the URL we just navigated to, as returned by Selenium's get_current_url method. Then, in line 3, we use Test::More's is() function to assert that the page we landed on, $loc1, is in fact the expected one, LOGINHOME. Line 4 executes a page object whose sole purpose is to log in to the web application under test. After the login operation we once again get the URL of the page we landed on, $loc2, and compare it to the page we expect to be on after login, which is APPHOME.

NOTE: I used the term Selenium above for readability to refer to Selenium::Remote::Driver - the Perl binding to WebDriver. 
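
For completeness, the setup those six lines assume might look something like the following sketch. The constants' values are taken from the expected URLs in the failure output below; the browser choice and credentials are placeholders:

  use strict;
  use warnings;
  use Test::More tests => 2;
  use Selenium::Remote::Driver;

  use constant LOGINHOME => 'http://myapp.com/Account/Login';
  use constant APPHOME   => 'http://myapp.com/Home';

  my $page     = LOGINHOME;
  my $user_id  = 'some_user';      # placeholder credentials
  my $password = 'some_password';
  my $driver   = Selenium::Remote::Driver->new( browser_name => 'firefox' );

  # login_as() is the page-object helper described above; its
  # implementation is omitted here.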

Below is the output that would be sent to the harness in a PASS case:
 ok 1 - Verify landing page  
 ok 2 - Verify landing page after login  

And the output for a FAIL case:
 not ok 1 - Verify landing page  
 not ok 2 - Verify landing page after login  

One of the features of Test::More is that whenever a test fails it gives you meaningful information (the whys) that you can use when evaluating test failures. For example, for the above failure the following would be sent to the error output:
 #  Failed test 'Verify landing page'  
 #  at C:\Users\Freddy Vega\SsApp\create_decision_tree.pl line 107.  
 #        got: 'http://myapp.com/myloginpage'  
 #   expected: 'http://myapp.com/Account/Login'  
 #  Failed test 'Verify landing page after login'  
 #  at C:\Users\Freddy Vega\SsApp\create_decision_tree.pl line 113.  
 #        got: 'http://myapp.com/Apphome'  
 #   expected: 'http://myapp.com/Home'  
 # Looks like you failed 2 tests of 2.  

As you can probably see by now, testing with Perl means not having to re-invent the wheel every time a testing problem arises. In our solution we used Selenium::Remote::Driver to drive our application under test (a web app), Test::More to make our assertions while testing, and TAP::Harness to tie it all together and produce results that can later be mined, interpreted and presented to different audiences (management, users, developers, etc.).

In the next post in this series I'll tell you about test harnesses and how you can combine them with other tools to help you design a robust automation framework.

Monday, November 10, 2014

Testing with Perl - TAP (the Test Anything Protocol)

While there are many programming languages out there that folks seem to prefer when implementing automation for web and PC based applications (e.g. Python, Ruby), I've been inclined to choose Perl just about every time I've been asked to solve a testing problem. There are several reasons, I believe, why I have always chosen Perl over the other candidates; below are my current top three:
  1. Cross platform support for your tests (write once run everywhere)
  2. Powerful text parsing
  3. No need to re-invent the wheel (there are thousands of modules available on CPAN)
TAP (the Test Anything Protocol) is a "simple text-based interface between testing modules in a test harness. TAP started life as part of the test harness for Perl". It should be noted that, even though it was originally designed for and used in the development of Perl itself, TAP now has implementations in C, C++, Python, PHP, Perl, Java, JavaScript, and others. This should tell you that programmers in other languages have noticed the protocol and found it helpful enough to port it to their language of choice.

Our purpose for TAP is very simple: to say 'ok' or 'not ok' in a standard way, with the aim of facilitating communication between the tests and the services used to run and support them. That is, our tests use the Test Anything Protocol to report their successes or failures. We can then use other tools to aggregate the information produced by our tests and present it in a human-readable form. Below is a sample TAP stream so you can get an idea of what it looks like.

  1..4  
  ok 1 - Input file opened  
  not ok 2 - First line of the input valid  
  ok 3 - Read the rest of the file  
  not ok 4 - Summarized correctly # TODO Not written yet  

The above stream output states that we ran 4 tests (1..4). Two of them passed (1,3) while two of them failed (2,4). Simple, isn't it?

So what do you need to test with Perl? You need three things:
  1. The AUT (application under test)
  2. TAP producer (e.g. Test::More module)
  3. TAP consumer (e.g. TAP::Harness module)
I'll go into details on each of these in future posts, but for now just know that a TAP producer is a module that does "automation magic" (such as Test::More) and communicates its successes and failures to a TAP consumer (such as TAP::Harness) using the Test Anything Protocol.
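
To make the consumer side concrete, here is a minimal TAP::Harness runner (the test file names are made up):

  use strict;
  use warnings;
  use TAP::Harness;

  # Run two test scripts and aggregate the TAP they emit
  my $harness = TAP::Harness->new({ verbosity => 1, color => 1 });
  $harness->runtests( 't/login.t', 't/invoices.t' );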

There are many testing modules on CPAN that use TAP to report their successes or failures. In this blog series we will focus on Test::More. We'll also briefly cover the other modules in the Test::* family, mainly so you are aware that they are there if you need them. To write robust tests in Perl to verify a web application or web site, however, you'll find that Test::More is more than adequate (no pun intended).



Wednesday, October 22, 2014

#stop29119. Campaign? Or a classic example of the "We Have to Do Something" fallacy?

There is no denying that there has been a lot of activity regarding ISO 29119 since August of this year, and it doesn't seem like it's going to die down anytime soon. The standard has certainly created a rift in the training and consulting space that has aggravated a long-time rivalry between two schools of thought.

Now, in order to run a successful and, more importantly, profitable business, we need to be able to compete and use every tool at our disposal to reach our vision; this includes public debates and functions. One of the things we must keep in mind when talking about standards and certifications is that it's a business. Most folks know this already, but if you didn't, now you do. And it's a business whether you issue a certificate at the end of the training or not, by the way. It's a business with a bottom line, just like Sears. It's a business that needs to fight for its existence or fall prey to its competition.

Aside from a response from Stuart Reid, the WG convenor, the ISO camp has been fairly quiet throughout this debacle. This hasn't been the case for the stop-campaign side, however. From them we see statements like "where is your skin in the game" or "if the standard is approved all testers will be forced to succumb to and abide by it". You also read some folks say "you'll be forced to produce tons of wasteful documentation" or "before your every move you'll need to get a sign-off" when talking or interacting (via social media) with folks who either don't know of the petition's existence or have decided to abstain from signing it for a variety of reasons.

In following this debate on Twitter, LinkedIn and the web (via individuals' blogs), I have noticed a pattern in the rhetoric used by the stop-campaign folks, which I believe is an almost perfect implementation of the "Scare Tactic"[*1] argument, one that inevitably leads to a "We Have to Do Something"[*2] fallacy. In other words, the standard is going to be so bad that we should all unite and do something, no matter what that something is, even if the something is just to stop the darn thing. Sounds counterproductive, doesn't it? Why not offer a real solution rather than a call to arms? This is the part that has me, and a lot of others, a bit baffled.

Finally, and for the record once again, I want to be clear that I am not saying that the arguments raised by James Christie, based on his own experiences and knowledge, are in any way, shape or form invalid. I am saying, however, that the ensuing madness does appear to fall within the model of the "Scare Tactic" and "We Have to Do Something" fallacies.

I'm not advocating for just silently accepting the standard either; I'm advocating for doing your own research and coming to your own conclusions based on your own independent investigation.

Even if one believes the allegations expressed by the supporters of the stop campaign, it makes you wonder: why the scare tactics? As humans we are thinking creatures. We like to be presented with information, to analyze that information, and to come to our own conclusions. But when things are framed in a way that is meant to scare or force people into signing, it makes one wonder if there is anything more to this debate; anything more than business profit, business market share and, of course, the all-important human mind share.

What say you?



[*1] Scare Tactic (Also Paranoia): A variety of Playing on Emotions, a raw appeal to fear. A corrupted argument from Pathos. (E.g., "If you don't do what I say we're all gonna die! In this moment of crisis you can't afford the luxury of thinking or trying to second-guess my decisions. Our very lives are in peril! We need united action, now!")

[*2] We Have to Do Something: The dangerous contemporary fallacy that in moments of crisis one must do something, anything, at once, even if it is an overreaction, is totally ineffective or makes the situation worse, rather than "just sit there doing nothing." (E.g., "Banning air passengers from carrying ham sandwiches onto the plane probably does nothing to deter potential hijackers, but we have to do something to respond to this crisis!") This is a corrupted argument from pathos.

Thursday, October 16, 2014

Purpose, Mission and Vision keep testers self focused on things that matter

Purpose, Mission and Vision may sound like pointy-haired mumbo jumbo to you, but what if it isn't? In fact, I believe it applies to testing and, specifically, to Context Aware Testing.

To be context aware means to adapt according to the location, the collection of nearby people, and accessible resources, as well as to changes in such things over time. To be a context aware tester means that you have the capability to examine the computing and human environments and react to changes in those environments that may affect the product under test.

To help guide her in this journey, the Context Aware Tester always lays out her "Test Pact" at the outset. The pact includes the Purpose (the why), Mission (the how) and Vision (the what) for her testing. Let's review an example:

As a contractor, she bids for and wins an assignment to test Widget A (an address book application for Windows). During a meeting with the stakeholders she finds out the application is for internal use only; their main concern is stability, and they don't want the app to crash and cause loss of contact information.

From this bit of information we can begin to define our Test Pact:
  • We start with the purpose (the why): to make sure the contacts application is stable by testing it with real-world scenarios.
  • We then lay out what we anticipate when we're done, our vision (the what). This is our definition of done; in this case we anticipate application stability.
  • Next we state the mission (the how): how are we going to accomplish our goal of verifying application stability? Load testing, stress testing, volume testing.
By defining your purpose, mission and vision before starting a testing project (no matter how small), you'll have given yourself a road map as well as a set of constraints to wrap around your testing effort, helping to keep you focused on the things that matter most (i.e. what's important). Once you start working, this is also a great way to gauge whether what you are being asked to do now (an interruption) interferes with or contradicts any of the Test Pacts you are currently working on.

In a nutshell, Test Pacts encapsulate the definition of testing for a specific context in the form of purpose, vision and mission. This implies that for a context-aware tester the definition of testing not only depends on context, but is also possibly different each time.

To a context aware tester, purpose (the why) is her guide, while the mission (the how) is what drives her towards the vision (the what). This keeps us closely and tightly aligned with not only the technical aspects but also the vision of the stakeholders, as captured in the Test Pacts.



Creative Commons License
VGP-Miami Web and Mobile Automation Blog by Alfred Vega is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.