Friday, March 16, 2012

Software Development Life Cycle

When it comes to software development, we all know there is no one-size-fits-all solution. Adopting a process that includes requirements analysis, design of some sort, and unit tests not only produces a better product, it also gives you an adequate (for your shop) level of documentation in the form of, for example, user stories, TDD unit tests, or a requirements doc. You may ask, why do I need documentation? Consider the case of an employee who leaves unexpectedly. All this employee leaves behind is code (maybe even well-documented code); however, a lot of knowledge walks out the door in that employee's head. That knowledge and subject matter expertise, in the context of your products or services, is imperative to your current business model, IMO, and we must try to retain as much of it as possible. Thinking about a design, filling the gap between what the customer wants and what we can offer, and documenting that, some way, is what the requirements gathering and design stage is all about; doing so also fulfills the documentation requirement. The point is, we can be as agile as we want, but we have to document our work.

Consider the following Software Development Life Cycle diagram, which, I contend, can accommodate TDD as well as any other software development methodology.


Fig 1 - SDLC should accommodate any development methodology


As you can see from the diagram above, distinct phases have been defined. The actual design and coding of the software, if this process is followed, can be executed using any method the developers choose. What matters is that we get unit-tested code ready for integration.

Your thoughts and comments are welcomed.

TDD is Unit Testing re-defined

Well-developed and thorough unit tests will always reduce the number of bugs found downstream, where they are more expensive to fix. Some benefits of Test Driven Development (from the QA/SVT perspective) are listed below, with a small sketch after the list:

(1) Testable code (i.e. all promised features at least "work")
(2) Fewer "careless/oversight/lazy" bugs
(3) More time to spend on finding "real" bugs
(4) More time testing and less time haggling
(5) Less time spent on test system maintenance

(insert yours here)
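
To make benefit (1) concrete, here is a minimal sketch (in Python, using the standard unittest module; word_count is a hypothetical function invented for this example) of the kind of boundary-case unit test that catches "careless/oversight" bugs before they ever reach QA:

    import unittest

    def word_count(text):
        """Count whitespace-separated words in a string."""
        # A careless implementation might use len(text.split(" ")),
        # which returns 1 for an empty string; splitting on any run
        # of whitespace handles the edge cases the tests pin down.
        return len(text.split())

    class TestWordCount(unittest.TestCase):
        def test_typical_sentence(self):
            self.assertEqual(word_count("unit tests reduce downstream bugs"), 5)

        def test_empty_string(self):
            # The classic oversight bug: empty input is rarely hand-checked.
            self.assertEqual(word_count(""), 0)

        def test_extra_whitespace(self):
            self.assertEqual(word_count("  spaced   out  "), 2)

    if __name__ == "__main__":
        unittest.main()

Trivial as it looks, the empty-string case is exactly the sort of bug that otherwise surfaces downstream in SVT.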

However, this can also be accomplished using a DCUT (Design, Code, Unit Test) approach, regardless of the methodology used. Well-written unit tests not only provide code verification; they are also a key component of the developer's personal feedback loop. There is one huge benefit I see TDD has brought to the table: the developer is forced to write the unit test before actually writing the code. This has always been the sticky point in every software house I have worked in: unit tests! Or the lack thereof.
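
As an illustration of that forced ordering, here is a minimal test-first sketch (Python's unittest again; normalize_sku is a hypothetical function invented for this example). The test is written first and fails ("red"); only then is the code written to make it pass ("green"):

    import unittest

    # Step 1 (TDD): write the test first. Running it at this point
    # fails, because normalize_sku does not exist yet.
    class TestNormalizeSku(unittest.TestCase):
        def test_strips_and_uppercases(self):
            self.assertEqual(normalize_sku("  ab-123 "), "AB-123")

    # Step 2: write just enough code to make the test pass.
    def normalize_sku(sku):
        return sku.strip().upper()

    if __name__ == "__main__":
        unittest.main()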


There are exceptions to every rule. I have known many software engineers who write unit tests; they follow (no matter what methodology is being employed) a Design, Code, Unit Test approach. Their output was _ALWAYS_ free of "obvious" bugs. These folks, in my opinion, look at software development as an engineering process and not an artistic one, although creativity is a must. In contrast, I have known many software developers who do not write unit tests and, further, feel they are an unnecessary use of time. I believe therein lies the difference: Software Engineer vs. Software Developer; Engineer vs. Artist.

I see TDD as one of the igniters of a much-needed "pattern training and modification" in the way developers think about unit testing, and about developing in general, across the software industry.

This ties into another discussion: is software development a manufacturing process? If you are not writing unit tests, how will you measure the output of your function? You wouldn't, and in that case software development is being treated as an art. If, on the other hand, you are writing unit tests, then you can measure your output and hence follow a more manufacturing-like, engineering process.
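
One way to take "measure your output" literally (a sketch, again assuming Python's unittest; the module handle passed in is hypothetical) is to run the suite programmatically and report a pass rate, the kind of yield figure a manufacturing-like process can track build over build:

    import unittest

    def run_and_measure(test_module):
        """Run a module's tests and return the pass rate (0.0 to 1.0)."""
        suite = unittest.defaultTestLoader.loadTestsFromModule(test_module)
        result = unittest.TestResult()
        suite.run(result)
        failed = len(result.failures) + len(result.errors)
        passed = result.testsRun - failed
        return passed / result.testsRun if result.testsRun else 0.0

    # Hypothetical usage:
    #   rate = run_and_measure(my_tests)
    #   print(f"unit check yield: {rate:.0%}")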

While TDD has been a success in getting a lot of folks in the software development industry to embrace unit testing, DCUT has been employed by friends and colleagues of mine (and, I'm sure, most of you) since the '80s and '90s; it just was not called TDD. These are the folks who, in my opinion, have always understood software development. Unit tests are a personal feedback loop, one in which I as a developer can measure my output and decide success or failure. Any developer who does not unit test is employing what I call "faith development" practices and is the weakest link in the organization; in my opinion, and regardless of the methodology / development process being used.




Your thoughts, comments and feedback are welcome.

Wednesday, March 14, 2012

Are bug reporting and counting useful activities?

There are some in our profession (SVT/QA) who feel that counting bugs is somehow a waste of time. They instead favor a more ad hoc method of working through issues/bugs found during the PDLC, addressing those issues even before they get documented. The consensus among them seems to be: why document it if it already happened? Ha! This is the same camp that claims to follow the scientific method when designing experiments or tests!

As far as bug counts go, my view is that if bugs are going to be counted for a purpose, then that purpose has to be defined (see the sketch below). I am NOT of the camp that believes we can do without bug reports, simply because we need these reports to do our work. Just as scientists document their work, we document our work. How else are we going to turn this "craft", as some folks in the aforementioned camp call it, into a profession? One step at a time, that's how. Bug reports are just one of those steps.
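
As a sketch of counting for a defined purpose (plain Python; the fields and sample records are hypothetical, and in practice would come from your bug tracker): once each report carries a "found in" field, the counts answer a concrete question, namely where bugs are escaping to:

    from collections import Counter

    # Hypothetical bug records exported from a tracker.
    bugs = [
        {"id": 101, "severity": "high",   "found_in": "unit test"},
        {"id": 102, "severity": "low",    "found_in": "integration"},
        {"id": 103, "severity": "high",   "found_in": "customer"},
        {"id": 104, "severity": "medium", "found_in": "integration"},
    ]

    # Defined purpose: bugs that escape past unit test are the
    # expensive ones, so count by the phase in which they were found.
    by_phase = Counter(bug["found_in"] for bug in bugs)
    for phase, count in sorted(by_phase.items()):
        print(f"{phase}: {count}")

If the "customer" bucket grows release over release, that is a finding no ad hoc, undocumented workflow would ever surface.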

Yes, you can get all philosophical and start asking questions about why and how to write bug reports, or how and why to count them, but at the end of the day we're building a product. Even if you're knee deep in software development "arts", that masterpiece is still going to be part of a greater system that makes it a "product". This "product" was engineered, designed and planned ahead of time (not necessarily in that order), no matter what methodology was used to develop it. This implies research has to be performed and notes compared.

Yes, you can also argue that you do not need this information because it already happened; it's in the past and it's over and done with. Sure, if you want to think like that, I'm certain there is a place in the testing world for you. In my camp, there is no place for you. :) And IMO my product will be better than your product by the following criteria:


1. Customer Acceptance (no changes after ship)
2. Usability
3. Install, Setup, Configuration
4. Customer Returns
5. Much better 2.0 version of the product

This product does not have to be SW or HW, BTW; I'm talking about any product. Anybody from the "no bug reports/counts" camp up for the challenge?


Comment to challenge :)

Monday, March 5, 2012

The test team is dispensable

I have been following a couple of discussions on LinkedIn where the consensus is that the test team is dispensable because it is a department that does not produce revenue yet incurs a lot of overhead.

Well, I disagree with that. Not only is testing a necessity, it is also the only form of risk management available to stakeholders that actually places the product in (as much as possible) the same environment in which the end user would run it. Furthermore, testing subjects the product under test to the conditions a regular user would put it through, across a wide range of scenarios. These scenarios can vary depending on the testing method employed.

Below is my top 10 list [in no particular order] of why we test:
1. Risk assessment / management / mitigation
[ the rest relate to number 1 above in some way ]
2. Assess the end result against what we envisioned the end product to be
3. Verify implemented product features against feature requirements
4. Verify the product integrates with the system in which it will run in production / at the customer
5. Verify the intended behavior of the product when known error conditions occur (see the sketch after this list)
6. Assess how the product performs under stress (at both ends of the scale)
7. Verify the intended behavior of the product when catastrophic / unknown error conditions occur
8. Provide feedback to the development team during design, development and SIT
9. Verify product features against user requirements
10. Identify deviations
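
To make list item 5 concrete, here is a unit-level sketch (Python's unittest; parse_record is a hypothetical unit under test invented for this example) of verifying intended behavior when a known error condition occurs:

    import unittest

    def parse_record(line):
        """Parse a "name,qty,price" record; fail loudly on bad input."""
        fields = line.split(",")
        if len(fields) != 3:
            raise ValueError(f"expected 3 fields, got {len(fields)}")
        name, qty, price = fields
        return {"name": name, "qty": int(qty), "price": float(price)}

    class TestKnownErrorConditions(unittest.TestCase):
        def test_malformed_record_raises(self):
            # Intended behavior on a known error condition:
            # a clear error, not a silent partial result.
            with self.assertRaises(ValueError):
                parse_record("widget,2")

        def test_well_formed_record_parses(self):
            self.assertEqual(
                parse_record("widget,2,9.99"),
                {"name": "widget", "qty": 2, "price": 9.99},
            )

    if __name__ == "__main__":
        unittest.main()

The same idea scales up: at the system level the "unit" becomes the product, and the known error conditions become pulled cables, full disks and malformed input files.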

How can anyone argue that the above list is not needed by any organization that has external customers, let alone an actual product that is sold? Yes, you may be able to get away without testing if all of your customers are internal; for example, the IT department in a company with 100 employees.

I think that the majority of the folks who opined in the LinkedIn discussions mentioned are confusing "testing" with "test team", or perhaps using the terms interchangeably. See, in my opinion a test team can be dispensable, but not testing. Just like accountants are dispensable, but not keeping track of the company finances. Just like marketing product managers are dispensable, as long as the product is marketed. Just like salesmen are dispensable, as long as the product gets sold.

My point is that some people try to create chaos in the testing world by raising these false alarms about the testing team somehow needing to justify its existence. The fact is that _ALL_ departments need justification for their existence, and no one is immune from the almighty axe when push comes to shove. This "justify your existence" notion is a fallacy; if you acknowledge it, you have already failed. There are far more constructive ways for an organization to manage its employees' performance in support of the revenue stream than to ask individuals to justify themselves. I'll list some below.

How to avoid the "justify yourself" fallacy:
1. Set employee goals
2. Measure employee goals
3. Report / give feedback to employee regarding goals
4. Reward goals met (not necessarily with money)
5. Warn employee when goals are not met (consequences should be clear)
6. Retain only employees that meet goals (this consequence must be stated)
7. Hire only employees that will meet your goals (screening / interviewing process)

If you still feel, after reading all this, that testing teams need to justify themselves then you are a proud new member of the "Justify Yourself Fallacy Club".

Thoughts?
VGP-Miami Web and Mobile Automation Blog by Alfred Vega is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.