Friday, February 26, 2010
Can it perform the required functions?
Will it work well and resist failure in all required situations?
How easy is it for a real user to use the product?
How well is the product protected against unauthorized use or intrusion?
How well does the deployment of the product scale up or down?
How speedy and responsive is it?
How easily can it be installed onto its target platform?
How well does it work with external components & configurations?
How economical will it be to provide support to users of the product?
How effectively can the product be tested?
How economical is it to build, fix or enhance the product?
How economical will it be to port or reuse the technology elsewhere?
How economical will it be to publish the product in another language?
Tuesday, February 23, 2010
Software design and development is a constructive job; software testing, on the other hand, tries to break the code. It is therefore hard for a software developer to switch overnight from a constructive mindset to the destructive mindset of testing, because he cannot bring himself into the frame of mind of exposing errors in his own work.
Secondly, beyond this psychological problem, it is possible that the programmer is not fully aware of the software specification, or has misunderstood it. If that is the case, the programmer will carry the same misunderstanding into testing his own program.
This does not imply that a programmer cannot test his own program; in fact, software quality improves when the programmer tests his code by executing test cases. Testing is nevertheless more effective and successful when performed by an independent party.
It is common practice that while testing, testers concentrate on valid and expected input conditions and neglect the invalid and unexpected ones. But it is observed that most errors in production come from unusual and unexpected data input, and that testing with invalid and unexpected input conditions uncovers a large share of the defects.
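As a small sketch of this idea, consider a hypothetical `parse_age` function (the function, its range limits, and the test names are invented here for illustration, not from the original post). The tests deliberately feed it invalid and unexpected input alongside the valid case:

```python
# A sketch of negative testing; parse_age and its limits are hypothetical,
# invented for illustration only.
def parse_age(value):
    """Parse a user-supplied age string; reject invalid or out-of-range input."""
    age = int(value)  # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range: %d" % age)
    return age


def test_valid_input():
    assert parse_age("42") == 42


def test_invalid_inputs():
    # The unusual and unexpected inputs are where most production errors hide.
    for bad in ["", "abc", "-1", "999", "12.5"]:
        try:
            parse_age(bad)
            assert False, "expected ValueError for %r" % bad
        except ValueError:
            pass
```

Notice that the invalid-input test is longer than the valid-input one; that is usually how the effort should be distributed.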
Examining a program to see whether it does what it is supposed to do is only half of the battle; we also have to check that it does not do what it is not supposed to do. For example, if a program generates tickets in a ticket reservation system, we should make sure it does not generate duplicate tickets, or tickets without charging the fare amount.
Do not throw test cases away until the program's life has ended. When we create test cases to test a specific functionality of a module, effort goes into covering all the scenarios. If the test cases are thrown away after testing, re-testing the program later becomes difficult, and the effort we put into creating them is lost. Maintaining a repository of test cases is therefore good practice.
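One lightweight way to keep such a repository is to store each case as data (inputs plus expected result) so any future run can simply replay the table. The `apply_discount` function and its cases below are hypothetical, purely to illustrate the pattern:

```python
# A hypothetical function under test; names and values are illustrative only.
def apply_discount(price, percent):
    return round(price * (100 - percent) / 100.0, 2)


# The repository: each saved case is (inputs, expected result).
# Re-testing later is just replaying this table, so the effort is not lost.
TEST_CASES = [
    ((100.0, 10), 90.0),
    ((50.0, 0), 50.0),
    ((200.0, 25), 150.0),
    ((0.0, 25), 0.0),
]


def run_repository():
    """Replay every saved case; return how many passed (all must)."""
    for (price, percent), expected in TEST_CASES:
        actual = apply_discount(price, percent)
        assert actual == expected, "apply_discount%r -> %r, expected %r" % (
            (price, percent), actual, expected)
    return len(TEST_CASES)
```

New scenarios found later are appended to the table rather than written as throwaway checks.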
Monday, February 15, 2010
Severity defines the impact that a given defect has on the system. A severe defect may cause the system to crash or leave it in an undefined state.
- Should we fix it now, or can it wait?
- How difficult is it to resolve?
- How many resources will be tied up by the resolution?
Example: Issue 1 (system crash) is definitely severe and may be difficult to resolve, but it only happens rarely. When should we fix it? Contrast that with the second issue (the spelling error): not severe, it just makes you look bad, and it should be a really easy fix. One developer, maybe 10 minutes to fix, another 10 to validate (if that). Which should get the higher priority? Which should we fix now?
I'm going to recommend fixing the typo immediately, and if there is sufficient time, fixing the blue screen before the next build. Many commercial defect tracking systems have either Severity or Priority; some may have both (personally I know of only one). Others allow you to modify existing fields or add additional fields, though those tend to be severely lacking in other areas.
Bottom line: you need to define both Severity and Priority for your application, and base them on your users' needs.
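A defect record carrying both fields might look like the sketch below (the field encodings and sample defects are invented for illustration). Sorting the work queue by priority, not severity, is what makes the typo land on top:

```python
from dataclasses import dataclass


# Illustrative only: a minimal defect record carrying BOTH fields,
# since severity alone does not tell you what to fix first.
@dataclass
class Defect:
    title: str
    severity: int   # 1 = cosmetic ... 5 = crash
    priority: int   # 1 = can wait ... 5 = fix now


defects = [
    Defect("Blue screen on rare input", severity=5, priority=3),
    Defect("Typo on login page", severity=1, priority=5),
]

# The work queue is ordered by priority, not severity,
# so the low-severity typo is fixed first.
queue = sorted(defects, key=lambda d: d.priority, reverse=True)
```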
Tuesday, February 9, 2010
- How often will we execute the test?
- Do you really need to automate this test case? Is it cost effective?
- What mechanism do you have to execute manual and automated testing in parallel, and how would you compare the results?
- Is it quite complicated and time-consuming to execute it manually?
- How accurate do the results need to be after executing this test?
- Do you have skilled resources available for its automation?
- Are you automating it for regression testing? Automation plays a key role in regression testing.
- Are the scripts written for automation scalable enough to cope with functionality added in the future?
There are obviously other factors involved in test automation as well; I have just mentioned a few of them.
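The cost-effectiveness question above can be roughed out with a simple break-even estimate. This is a toy model under stated assumptions (build cost, per-run maintenance, and manual time are the only costs), with purely illustrative numbers:

```python
def automation_break_even(manual_minutes, automation_cost_minutes,
                          maintenance_minutes_per_run):
    """First run count at which automating is cheaper than manual execution.

    Toy model, assumption only: automation pays off once
    runs * manual_minutes > automation_cost_minutes + runs * maintenance.
    Returns None if automation never pays off under this model.
    """
    saved_per_run = manual_minutes - maintenance_minutes_per_run
    if saved_per_run <= 0:
        return None  # upkeep eats the savings; never pays off here
    runs = automation_cost_minutes / saved_per_run
    return int(runs) + 1  # first whole run past the break-even point
```

For example, a 30-minute manual test that costs 600 minutes to automate with 5 minutes of upkeep per run breaks even at run 24 and is cheaper from run 25 onward, under these assumptions.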