Wednesday, December 8, 2010

Common mistakes in test case writing

The seven most common mistakes in test case writing are as follows:

1-Making cases too long
2-Incomplete, incorrect, or incoherent setup
3-Leaving out a step
4-Naming fields that changed or no longer exist
5-Unclear whether tester or system does action
6-Unclear what is a pass or fail result
7-Failure to clean up

Tuesday, December 7, 2010

Cardinal axiom of all user interface design

A few days back I read an article by Joel Spolsky, and I found one of his statements quite interesting:

The cardinal axiom of all user interface design:
" A user interface is well-designed when the program behaves exactly how the user thought it would. "
You can read the detailed article from here.

Wednesday, December 1, 2010

What is a good test case?

A few important traits of a good test case are as follows:
Accurate - tests what it’s designed to test
Economical - no unnecessary steps
Repeatable, reusable - keeps on going
Traceable - to a requirement
Appropriate - for test environment, testers
Self standing - independent of the writer
Self cleaning - picks up after itself
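Several of these traits can be seen in even a tiny example. The following Python sketch (the `save_user` function is a hypothetical system under test) shows a self-standing, self-cleaning test with one unambiguous pass/fail check:

```python
import os
import tempfile

def save_user(path, name):
    """Hypothetical system under test: append a user name to a file."""
    with open(path, "a") as f:
        f.write(name + "\n")

def test_save_user():
    # Self-standing: the test creates everything it needs itself.
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        # Accurate and economical: one action, no unnecessary steps.
        save_user(path, "alice")
        with open(path) as f:
            # Unambiguous pass/fail check.
            assert f.read() == "alice\n"
    finally:
        # Self-cleaning: picks up after itself, so the test is repeatable.
        os.remove(path)

test_save_user()
```

Because the test owns its setup and cleanup, it can run in any order, any number of times, on any machine.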

Wednesday, November 3, 2010

When to Quit.....

In this video, Seth Godin summarizes the main idea of his book "The Dip".

Monday, November 1, 2010

Some Inspiring Quotes from Seth Godin

A few days back, I came across a short interview with Seth Godin, and I really liked a few of his statements:

1-Successful people reject mediocrity in all parts of their lives.
2-Exclusive focus and clarity of purpose lead to success.
3-Success is a journey, filled with endless opportunities and challenges.
4-When enthusiasm fades, chances of opportunity erode.
5-Difficulties are the signal to focus our energies.

Wednesday, October 27, 2010

Pros and Cons of Manual Testing

Pros of Manual Testing
•If a test case only runs twice per coding milestone, it most likely should be a manual test; that costs less than automating it.
•It allows the tester to perform more ad hoc (random) testing. In my experience, more bugs are found via ad hoc testing than via automation, and the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.
Cons of Manual Testing
•Running tests manually can be very time-consuming.
•Each time there is a new build, the tester must rerun all required tests, which after a while becomes very mundane and tiresome.

You can read a detailed article by Sara Ford from here

Monday, October 18, 2010

The Mindset of the Winners

Excellent talk by Seth Godin.

Monday, September 20, 2010

Pareto Analysis 80/20 Rule

Pareto Analysis is a statistical technique in decision making that is used to select a limited number of tasks that produce a significant overall effect. It uses the Pareto Principle (also known as the 80/20 rule): the idea that by doing 20% of the work you can generate 80% of the benefit of doing the whole job. In terms of quality improvement, a large majority of problems (80%) are produced by a few key causes (20%). This is also known as the vital few and the trivial many.

The 80/20 rule can be applied to almost anything:

•80% of customer complaints arise from 20% of your products or services.
•80% of delays in schedule arise from 20% of the possible causes of the delays.
•20% of your products or services account for 80% of your profit.
•20% of your sales-force produces 80% of your company revenues.
•20% of a system's defects cause 80% of its problems.
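As a sketch, a Pareto analysis of a defect log can be automated in a few lines of Python (the defect causes and counts below are made up):

```python
from collections import Counter

# Hypothetical defect log: each entry is the root cause of one logged defect.
defects = (["ui-layout"] * 50 + ["null-check"] * 35 +
           ["config"] * 8 + ["docs"] * 4 + ["build"] * 3)

counts = Counter(defects).most_common()   # causes sorted by defect count
total = sum(n for _, n in counts)

vital_few, cumulative = [], 0
for cause, n in counts:
    cumulative += n
    vital_few.append(cause)
    if cumulative / total >= 0.8:         # stop once ~80% is explained
        break

print(vital_few)  # → ['ui-layout', 'null-check']
```

Here two of the five causes (the vital few) account for 85% of the defects, so they are where the improvement effort should go first.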

Sounds quite interesting.........

Sunday, August 15, 2010

Test case Review Checklist

A high-level checklist for test case review is as follows:

1. All the requirements mentioned in the FRS are covered.
2. All negative test scenarios are covered.
3. Boundary value conditions are covered, i.e., tests covering the lower/upper bounds.
4. Data validity tests are covered.
5. All GUI-related test cases (if mentioned in the FRS) are covered.
6. Check whether there are any invalid test cases.
7. Check whether there is any redundancy in the test cases.
8. Check the test case priority.
9. Check the narration of each test case.
10. Check that no major scenarios are missing from the test cases.
11. Each test step is written completely and understandably.
12. A clear expected result is mentioned for each step.
13. Check for typos and grammatical errors.
14. The length of the test steps is appropriate.
15. Information about the test environment setup, prerequisites, and the success/failure end conditions is included.

The checklist can vary depending on the type of test cases: functional, regression, performance, etc.
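For the boundary value item in particular, a reviewer can check that values just below, on, and just above each bound are exercised. A minimal Python sketch, assuming a hypothetical rule that quantities from 1 to 100 are valid:

```python
def accepts_quantity(qty):
    """Hypothetical validation rule: order quantity must be 1..100 inclusive."""
    return 1 <= qty <= 100

# Boundary value cases: just below, on, and just above each bound.
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in boundary_cases.items():
    assert accepts_quantity(value) == expected, f"failed at {value}"
```

A review that finds only mid-range values like 50 in the test data has caught a real coverage gap.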

Monday, August 9, 2010

What is a vulnerability?

A vulnerability is a flaw or weakness in a system's design, implementation, or operation and management that could be exploited to violate the system's security policy.
A threat is a potential attack that, by exploiting a vulnerability, may harm the assets owned by an application (resources of value, such as the data in a database or in the file system). A test is an action that tends to show a vulnerability in the application.

What is Penetration Testing

A penetration test is a method of evaluating the security of a computer system or network by simulating an attack. A Web Application Penetration Test focuses only on evaluating the security of a web application.
The process involves an active analysis of the application for any weaknesses, technical flaws, or vulnerabilities. Any security issues that are found will be presented to the system owner together with an assessment of their impact and often with a proposal for mitigation or a technical solution.

Thursday, June 17, 2010

Project Management Sin: Lust

Lust is an intense or unrestrained craving for features; it is experienced when we try to put too many features into the product within the allocated time and treat every feature as critical.
This results in:
• Reduced quality
• Overtime
• Lots of surprises at the end.

Agile helps in three ways to avoid lust during product management.

Developing features in priority order: even if everything is required, the product team still needs to prioritize tasks and focus on the most important thing to do first.

Incremental gratification: see progress every 2 to 4 weeks, so that lust does not get a chance to accumulate.

Working at a sustainable pace: working at a sustainable pace avoids overtime.
You can read the last post regarding project management sins from here.

Monday, June 14, 2010

What is pair testing?

Pair Testing is a successful approach to exploratory testing. Inspired by pair programming, this practice groups two testers together for an exploratory testing session. One tester sits at the keyboard and exercises the feature or application while the other tester stands behind or sits next to the first tester and helps to guide the testing.
Both testers are performing exploratory testing, but while one concentrates on driving the functionality, one is thinking about the application from a high level. The testers switch roles at regular intervals.
In an experiment at Microsoft, it was observed that in a single 8-hour session, 15 pairs of testers found 166 bugs, including 40 classified as severity 1 (bugs that must be fixed as soon as possible). In feedback collected from a survey sent to the 30 participants, only 3 thought that pair testing was less fun than an individual approach, and 4 thought it was less effective.

Thursday, June 10, 2010

Why we need gray-box testing?

We often follow quite common approaches like black box and white box testing for SUT (System under test). Black box testing is an approach based on testing an application without any knowledge of the underlying code of the application. Black box approach to testing is a useful method of simulating and anticipating how the customer will use the product. On the other hand, pure black box approaches often end up over-testing certain parts of the application while under-testing other portions.
Conversely, white box testing is an approach that uses test design based on analysis of the underlying source code or schemas not visible to the end user. Test cases founded solely on a white box approach are typically thorough, but nearly always miss key end-user scenarios.
The answer to this dilemma is a gray box (sometimes called glass box) approach. Tests are designed from a customer-focused point of view first (that is, black box), but white box approaches are used to ensure efficiency and test case coverage of the application under test. Testers are responsible for both the customer viewpoint and for determining the correctness of an application. They cannot cover both areas effectively without considering both black box and white box approaches.
Reference: "How We Test Software at Microsoft".

Tuesday, June 8, 2010

Code Debugging while exploratory testing

Yesterday, I was reading the book “How We Test Software at Microsoft” by Alan Page and his colleagues. I liked his approach of debugging the code before designing test cases or doing exploratory testing. I am quoting his words below:

“Frequently, when I am testing a component or feature for the first time and I have source code available, I use the debugger to test. Before I even write test cases, I write a few basic tests. These might be automated tests, or they might be just a few ideas that I have scribbled onto a pad. I set a breakpoint somewhere in the component initialization, and then use the debugger to understand how every code path is reached and executed. I note where boundary conditions need testing and where external data is used. I typically spend hours (and sometimes days) poking and prodding (and learning and executing) until I feel I have a good understanding of the component. At that point, I usually have a good idea of how to create an efficient and effective suite of tests for the component that can be used for the lifetime of the product.”

Friday, May 28, 2010

Project Management Sin: Gluttony

A few days back, I posted a figure describing the seven sins of project management. It is a great presentation by Mike Cohn; you can see the video at the software testing club.
Gluttony means fixing all dimensions (scope, resources, cost, and quality) of the project at the start. Such situations arise when we want to achieve more than the expected goal, and as a result quality is affected. It leads to impossible schedules and death marches in the end. Given the project iron triangle between cost, resources, and schedule, we need to let one of them vary. You can read more from here.
Time boxing is quite a good technique for avoiding gluttony. In agile mode, each iteration is time boxed and we always focus on project velocity and what we can achieve next. After a few iterations, we can estimate the project velocity and bug rhythm, and knowing this prevents the temptation to over-commit.

Friday, May 21, 2010

Question of the day

Q: How many test cases were written for Microsoft Office 2007?
A: More than a million

Reference: Taken from the book "How we test software at Microsoft"

Thursday, May 20, 2010

Famous: Iron Triangle

The iron triangle must be respected. The iron triangle refers to the concept that of the three critical factors – scope, cost, and time – at least one must vary, otherwise the quality of the work suffers. Nobody wants a poor quality system, otherwise why build it? Therefore the implication is that at least one of the three vertexes must be allowed to vary. The problem is that when you try to define the exact level of quality, the exact cost, the exact schedule, and the exact scope to be delivered, you virtually guarantee failure because there is no room for the project team to maneuver.
It’s better to think of it as an “elastic triangle” and vary the cost, schedule, and/or scope as required. It is critical to understand how flexible you are with respect to each vertex. Perhaps your resources are limited due to financial cutbacks but you're willing to develop less functionality as the result of lower expectations due to the cutback. Perhaps the schedule is critical because you have a legislated deadline (e.g. for Sarbox or Basel-2) to meet, and due to the potential repercussions senior management is willing to spend whatever it takes to get the job done. Once you understand your situation, you can choose one of the following strategies for elasticizing the iron triangle:
Vary the scope. You can do this by timeboxing, which enables you to fix resources and schedule by dropping low-priority features out of an iteration when you run out of time. It is interesting to note that many agile development processes, such as Scrum or Extreme Programming (XP), take this sort of approach with an agile strategy for change management.
Vary the schedule. You can set the scope and the resources and then let the schedule vary by varying the number and type of people on the team which enables you to deliver the required functionality at the desired cost. If you're tight for budget, a small team may deliver the same functionality that a large team would but take longer calendar time to do so. The more people you have on a team, the greater the amount of money you need to spend on coordination and therefore your overall costs increase. Furthermore, a handful of highly productive people may produce better work and do so for far less money than a larger team of not-so-productive people.
Vary the resources. If you set the schedule and scope you may need to hire more and/or better people to deliver the system. However, remember that there are limits to this approach: nine women can't deliver a baby in one month.
You can read more about the iron triangle from here.

Seven Project Management Sins

Mike Cohn describes the following as project management sins:

Friday, May 14, 2010

Seven Testing Principles

In this video, the basic testing principles are summarized; the video is embedded from the software testing club.

Thursday, May 13, 2010

What is Forced Browsing?

Forced browsing is a technique used by attackers to gain access to resources that are not referenced, but are nevertheless accessible. One technique is to manipulate the URL in the browser by deleting sections from the end until an unprotected directory is found.
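The URL-trimming technique can be sketched as code. The following Python snippet only generates candidate URLs; actually requesting them should be done solely against systems you are authorized to test (the example URL is made up):

```python
from urllib.parse import urlsplit, urlunsplit

def parent_paths(url):
    """Yield the URL with path sections deleted from the end, one at a time.

    This mimics the manual technique described above; it produces candidates
    only and performs no requests.
    """
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s]
    while segments:
        segments.pop()  # delete the last section of the path
        path = "/" + "/".join(segments) + ("/" if segments else "")
        yield urlunsplit((parts.scheme, parts.netloc, path, "", ""))

for candidate in parent_paths("https://example.com/app/reports/2010/q4.pdf"):
    print(candidate)
```

A tester would then check whether any of these parent directories serves a listing or content that should have been protected.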

Thursday, April 8, 2010


The Pesticide Paradox

In 1990, Boris Beizer, in his book Software Testing Techniques, Second Edition, coined the term "pesticide paradox" to describe the phenomenon that the more you test software, the more immune it becomes to your tests. The same thing happens to insects with pesticides.
If you keep applying the same pesticide, the insects eventually build up resistance and the pesticide no longer works.
Software undergoing the same repetitive tests eventually builds up resistance to them.
In the spiral model of software development, the test process repeats each time around the loop. With each iteration, the software testers receive the software for testing and run their tests. Eventually, after several passes, all the bugs that those tests would find are exposed. Continuing to run them won't reveal anything new.
To overcome the pesticide paradox, software testers must continually write new and different tests to exercise different parts of the program and find more bugs.
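One practical way to keep tests from going stale is to vary the inputs on every cycle while checking general properties of the output. A minimal property-based sketch in Python (the `clamp` function and the chosen properties are hypothetical illustrations):

```python
import random

def clamp(value, low, high):
    """Hypothetical system under test: clip value into the range [low, high]."""
    return max(low, min(high, value))

# A fixed regression test: after its first run it will never find a new bug.
assert clamp(15, 0, 10) == 10

# Varying the inputs each run exercises fresh paths; general properties act
# as the oracle, and seeding keeps any failure reproducible.
rng = random.Random(2010)
for _ in range(1000):
    low, high = sorted(rng.sample(range(-100, 100), 2))
    v = rng.randint(-200, 200)
    result = clamp(v, low, high)
    assert low <= result <= high      # property: result always in range
    if low <= v <= high:
        assert result == v            # property: in-range values pass through
```

The fixed assertion is the pesticide the bugs have adapted to; the randomized loop is the new formula.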

Wednesday, March 24, 2010

What is a Startup?

It's a great presentation about startups by Eric Ries. Soon I will try to share the key points of this presentation.

Exploratory Testing Vs Scripted Testing

Frequent themes in the management of an effective exploratory test cycle are:
  • Tester
  • Test strategy
  • Test reporting

The scripted approach to testing attempts to mechanize the test process by taking test ideas out of a test designer's head and putting them on paper. There's a lot of value in that way of testing. But exploratory testers take the view that writing down test scripts and following them tends to disrupt the intellectual processes that make testers able to find important problems quickly.

The more we can make testing intellectually rich and fluid, the more likely we will hit upon the right tests at the right time. That's where the power of exploratory testing comes in: The richness of this process is only limited by the breadth and depth of our imagination and our emerging insights into the nature of the product under test.
On the other hand, Scripting has its place. We can imagine testing situations where efficiency and repeatability are so important that we should script or automate them. For example, in the case where a test platform is only intermittently available, such as a client-server project where there are only a few configured servers available and they must be shared by testing and development. The logistics of such a situation may dictate that we script tests carefully in advance to get the most out of every second of limited test execution time.
Exploratory testing is especially useful in complex testing situations, when little is known about the product, or as part of preparing a set of scripted tests. The basic rule is this:
“Exploratory testing is called for any time the next test you should perform is not obvious, or when you want to go beyond the obvious. In my experience, that's most of the time.”
You can read James Bach's detailed article from here.

Tuesday, March 2, 2010

Try New Things

“Anyone who has never made a mistake has never tried anything new.”

– Albert Einstein

Friday, February 26, 2010

What are the Quality Criteria for Software Applications?

James Bach defines categories for quality criteria, which are as follows:
Can it perform the required functions?
Will it work well and resist failure in all required situations?
How easy is it for a real user to use the product?
How well is the product protected against unauthorized use or intrusion?
How well does the deployment of the product scale up or down?
How speedy and responsive is it?
How easily can it be installed onto its target platform?
How well does it work with external components & configurations?
How economical will it be to provide support to users of the product?
How effectively can the product be tested?
How economical is it to build, fix or enhance the product?
How economical will it be to port or reuse the technology elsewhere?
How economical will it be to publish the product in another language?

Tuesday, February 23, 2010

Testing Principles

In software testing there are a few guidelines, or testing principles, that appear quite obvious but are often overlooked. Let's discuss these testing principles.
A necessary part of a test case is a definition of the expected output or result
While designing test cases, it is quite important to describe the expected output or result for a given input; along with the input data, the execution environment also matters when analyzing the output. As is often said, testing is a destructive process and testers try to break the code by giving expected or unexpected input to the system, yet human psychology still desires to see a correct result. So if the proper output is not defined, one can mistake undesired output for the desired output of the system.
A programmer should avoid testing his own program
Software design and development is a constructive job; software testing, on the other hand, breaks the code. It is not possible for a software developer to change his mindset overnight from a constructive activity to a destructive testing activity, because he cannot bring himself into the frame of mind of exposing errors.
Secondly, along with this psychological problem, it is possible that the programmer is not fully aware of the software specification or has misunderstood it. If that is the case, the programmer will carry the same misunderstanding into testing his own program.
This does not imply that a programmer cannot test his own program; rather, the quality of the software will be better if the programmer tests his code by executing test cases. But testing is more effective and successful if performed by some other party.
Test cases should be written for valid and expected as well as invalid and unexpected input conditions
It is common practice that while testing, testers concentrate on valid and expected input conditions and neglect the invalid and unexpected ones. But it is observed that most errors in production come from unusual and unexpected data input, and the error yield is greater when testing with invalid and unexpected input conditions.
Examining a program
Examining a program to see if it does what it is supposed to do is only half of the battle; we also have to see if the program does what it is not supposed to do. For example, if a program generates tickets in a ticket reservation system, we should make sure that it does not generate duplicate tickets or tickets without charging the fare amount.
Avoid throwing away test cases
Do not throw away test cases until the program's life has ended. When we create test cases to test a specific functionality of a module, effort goes into covering all scenarios. If the test cases are thrown away after testing, it creates difficulty when we have to re-test the program, and the effort we put into creating the test cases is lost. Maintaining a repository of test cases is therefore a good practice.
I read these testing principles in "The Art of Software Testing" by Glenford Myers.

Monday, February 15, 2010

Comparison between Severity and Priority

I came across a nice short description of severity and priority by Dave Whalen. The summary is as follows:

Severity defines the impact that a given defect has on the system. A severe defect may cause the system to crash or lead it to an undefined state.
Priority, on the other hand, defines the order in which we should resolve defects, depending upon:
  1. Should we fix it now, or can it wait?
  2. How difficult is it to resolve?
  3. How many resources will be tied up by the resolution?

For example, issue 1 (a system crash) is definitely severe, may be difficult to resolve, but happens only rarely. When should we fix it? Contrast that with the second issue (a spelling error): not severe, it just makes you look bad, and it should be a really easy fix. One developer, maybe 10 minutes to fix, another 10 to validate (if that). Which should get the higher priority? Which should we fix now?

I'm going to recommend fixing the typo immediately and, if there is sufficient time, fixing and resolving the blue screen before the next build. Many commercial defect tracking systems have either Severity or Priority; some have both (personally, I know of only one that does). Others allow you to modify existing fields or add additional fields, though of course those are severely lacking in other areas.

Bottom line: you need to define both severity and priority for your application, based on your users' needs.
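Whalen's distinction can be sketched as data: severity records impact, while priority alone drives the fix order. A small Python illustration (the records and the numeric scales are hypothetical):

```python
# Hypothetical defect records: severity captures impact (1 = most severe),
# priority captures fix order (1 = fix first). Both scales are made up.
defects = [
    {"id": 1, "summary": "Blue screen on save", "severity": 1, "priority": 2},
    {"id": 2, "summary": "Typo on login page",  "severity": 4, "priority": 1},
]

# Priority, not severity, decides what gets fixed first.
fix_order = sorted(defects, key=lambda d: d["priority"])
print([d["summary"] for d in fix_order])
# → ['Typo on login page', 'Blue screen on save']
```

Keeping the two fields separate is exactly what lets the easy typo jump ahead of the rare, severe crash.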

Tuesday, February 9, 2010

Important factors involved in test automation

Test automation depends on several factors. Some of the important factors that play a role in deciding whether a given test should be automated are as follows:
  1. How often will we execute the test?
  2. Do you really need to automate this test case? Is it cost-effective?
  3. What mechanism do you have for executing manual and automated testing in parallel, and how would you compare the results?
  4. Is the test quite complicated and time-consuming to execute manually?
  5. How accurate do the results of this test need to be?
  6. Do you have skilled resources available for its automation?
  7. Are you automating it for regression testing, where automation plays a key role?
  8. Are the automation scripts scalable enough to cope with functionality added in the future?

There are obviously other factors involved in test automation as well; I have just mentioned a few of them.

Friday, January 29, 2010

First job post by Microsoft for a software tester

I read the first two chapters of the interesting book “How We Test Software at Microsoft” and found the historical job ad posted by Microsoft in the Seattle Times in 1985 for the position of software tester.

Wednesday, January 27, 2010

Static Testing Vs Dynamic Testing

Yesterday I read an interesting comparison between static and dynamic testing, so I decided to share it on my blog.
Software testing is a process of analyzing or operating software for the purpose of finding bugs. According to this definition, testing can involve either analyzing or operating software. Test activities associated with analyzing the products of software development are called static testing; static testing includes code inspections, walkthroughs, and desk checks. In contrast, test activities that involve operating the software are called dynamic testing.
  1. Static testing is about prevention, dynamic testing is about cure.
  2. Static tools offer greater marginal benefits.
  3. Static testing is many times more cost-effective than dynamic testing.
  4. Static testing beats dynamic testing by a wide margin.
  5. Static testing is more effective.
  6. Static testing gives you comprehensive diagnostics for your code.
  7. Static testing achieves 100% statement coverage in a relatively short time, while dynamic testing often achieves less than 50% statement coverage, because dynamic testing finds bugs only in parts of the code that are actually executed.
  8. Dynamic testing usually takes longer than static testing. Dynamic testing may involve running several test cases, each of which may take longer than compilation.
  9. Dynamic testing finds fewer bugs than static testing. Static testing can be done before compilation, while dynamic testing can take place only after compilation and linking.
  10. Static testing can find all of the following, which dynamic testing cannot: syntax errors, code that is hard to maintain, code that is hard to test, code that does not conform to coding standards, and ANSI violations.
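The contrast can be demonstrated in Python: a static check inspects the source text without ever running it, while a dynamic test must execute the code before its behavior is visible. (The `area` snippet below is a made-up example.)

```python
# Two versions of a made-up snippet: one valid, one with a syntax error.
snippet = "def area(r):\n    return 3.14159 * r ** 2\n"
broken  = "def area(r)\n    return 3.14159 * r ** 2\n"

def compiles(src):
    """Static check: analyze the source text without executing it."""
    try:
        compile(src, "<snippet>", "exec")
        return True
    except SyntaxError:
        return False

assert compiles(snippet)       # the static check passes the valid version
assert not compiles(broken)    # and flags the broken one before any run

# Dynamic check: the code must actually execute for behavior to be observed.
namespace = {}
exec(compile(snippet, "<snippet>", "exec"), namespace)
assert abs(namespace["area"](2) - 12.56636) < 1e-6
```

The syntax error is caught with zero executions (static), while the wrong-result class of bug only shows up once `area` is actually called (dynamic).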

You can read more about static testing from here.

Wednesday, January 13, 2010

Cowboy Coders

I came across the interesting term “cowboy coding”, so I thought I would put some brief information about it on my blog. The term is used to describe software development where the developers have autonomy over the development process. This includes control of the project's schedule, algorithms, tools, and coding style.
A cowboy coder can be a lone developer or part of a group of developers with either no external management or management that controls only non-development aspects of the project, such as its nature, scope, and feature set (the "what", but not the "how").

Are there any cowboys around you? :)

Wednesday, January 6, 2010

How to create a good PowerPoint presentation

I read an interesting e-book by Seth Godin about good presentations; here I am summarizing a few remarkable points about how to prepare attractive PowerPoint presentations.

First, make yourself cue cards. You should be able to see your cue cards on your laptop’s screen while your audience sees your slides on the wall. Now, you can use the cue cards you made to make sure you’re saying what you came to say.
Second, make slides that reinforce your words, not repeat them. Create slides that demonstrate, with emotional proof, that what you’re saying is true not just accurate.
Third, create a written document. A leave-behind. Put in as many footnotes or details as you like. Then, when you start your presentation, tell the audience that you’re going to give them all the details of your presentation after it’s over, and they don’t have to write down everything you say.

Here are the five rules you need to remember to create amazing PowerPoint Presentations:

  1. No more than six words on a slide. EVER.
  2. No cheesy images. Use professional images instead.
  3. No dissolves, spins, or other transitions. None.
  4. Sound effects can be used a few times per presentation, but never (ever) use the sound effects that are built in to the program. Instead, rip sounds and music from CDs and leverage the Proustian effect this can have.
  5. Don’t hand out print-outs of your slides. They’re emotional, and they won’t work without you there.

Friday, January 1, 2010

How Google works

Millions of people use Google or another search engine to find relevant material. Some of these people might be potential clients looking for your particular offering. There are two ways for you to show up on the results page when users are doing a search. The first is paid advertising and the second is “organic” (or “natural”) search. The natural search results are based on Google's search algorithm.

Google’s algorithm is quite complicated and tricky; in essence it is:
Search Ranking = Relevance * PageRank

Relevance is basically the measure of how well your website (or, more accurately, one of your web pages) matches the search phrase the user has entered. Measuring relevance is a relatively sophisticated process, but it boils down to some fundamentals like the title of the page, the words on the page, and how frequently they occur.

PageRank is an independent measure of Google’s perception of the quality/authority/credibility of an individual web page. It does not depend on any particular search phrase. So, assuming that two web pages have the same relevance – then whoever has the higher PageRank gets the better ranking – and shows up at the top of the results page.

How PageRank Is Calculated

PageRank is primarily determined by how many other web pages link to you. Google considers this kind of inbound link a vote of confidence.
But, here’s the trick: Not all inbound links are created equal. Web pages with more credibility that link to you have more “value” to your PageRank than those with less credibility.

That is, if your web page has almost 20 inbound links from website ABC (which has a PR of 3), that may be roughly equivalent to having one inbound link from a website with a PR of 8.
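The published 1998 PageRank formulation (a simplification of whatever Google actually runs in production) can be computed by power iteration. A small Python sketch over a made-up three-page link graph:

```python
# Hypothetical three-page web: each key links to every page in its list.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}

def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank in its simplified published form."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outs in links.items():
            share = rank[page] / len(outs)   # each outlink passes equal value
            for out in outs:
                new[out] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
# C is linked to by both A and B, so it earns the highest rank.
assert ranks["C"] > ranks["A"] > ranks["B"]
```

Note how B's single link to C passes along all of B's value, while A splits its value between two outlinks: the "vote of confidence" is weighted by the voter's own rank.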
I have summarized my understanding in this article; you can read more details on inbound marketing from here. I have already written about the need for inbound marketing on my blog.