Wednesday, December 8, 2010
1-Making cases too long
2-Incomplete, incorrect, or incoherent setup
3-Leaving out a step
4-Naming fields that changed or no longer exist
5-Unclear whether tester or system does action
6-Unclear what is a pass or fail result
7-Failure to clean up
Tuesday, December 7, 2010
The cardinal axiom of all user interface design:
"A user interface is well-designed when the program behaves exactly how the user thought it would."
Wednesday, December 1, 2010
Accurate - tests what it’s designed to test
Economical - no unnecessary steps
Repeatable, reusable - keeps on going
Traceable - to a requirement
Appropriate - for test environment, testers
Self standing - independent of the writer
Self cleaning - picks up after itself
Wednesday, November 3, 2010
Monday, November 1, 2010
1-Successful people reject mediocrity in all parts of their lives.
2-Exclusive Focus and clarity of purpose leads to success.
3-Success is a journey, filled with endless opportunities and challenges.
4-When enthusiasm fades, chances of opportunity erode.
5-Difficulties are the signal to focus our energies.
Wednesday, October 27, 2010
•If the test case only runs a couple of times per coding milestone, it should most likely be a manual test; that costs less than automating it.
•It allows the tester to perform more ad hoc (exploratory) testing. In my experience, more bugs are found via ad hoc testing than via automation, and the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.
Cons of Manual
•Running tests manually can be very time consuming
•Each time there is a new build, the tester must rerun all required tests – which after a while would become very mundane and tiresome.
You can read a detailed article by Sara Ford here.
Monday, October 18, 2010
Monday, September 20, 2010
Pareto Analysis is a statistical technique in decision making used to select the limited number of tasks that produce the most significant overall effect. It uses the Pareto Principle (also known as the 80/20 rule): the idea that by doing 20% of the work you can generate 80% of the benefit of doing the whole job. In terms of quality improvement, a large majority of problems (80%) are produced by a few key causes (20%). This is also known as the vital few and the trivial many.
The 80/20 rule can be applied to almost anything:
•80% of customer complaints arise from 20% of your products or services.
•80% of delays in schedule arise from 20% of the possible causes of the delays.
•20% of your products or services account for 80% of your profit.
•20% of your sales-force produces 80% of your company revenues.
•20% of a system's defects cause 80% of its problems.
Sounds quite interesting.........
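The vital-few analysis can be sketched in code: given hypothetical defect counts per cause, find the smallest set of causes responsible for 80% of all defects.

```python
def pareto_vital_few(defects_by_cause, threshold=0.8):
    """Return the smallest set of causes covering `threshold` of all defects."""
    total = sum(defects_by_cause.values())
    vital, covered = [], 0
    # Walk causes from most to least defects, stopping once 80% is covered.
    for cause, count in sorted(defects_by_cause.items(), key=lambda kv: -kv[1]):
        vital.append(cause)
        covered += count
        if covered / total >= threshold:
            break
    return vital

# Hypothetical defect counts per module
counts = {"login": 45, "reports": 30, "search": 10, "export": 8, "help": 7}
print(pareto_vital_few(counts))  # ['login', 'reports', 'search'] covers 85%
```

Here 3 of the 5 causes (60% in this small example, but closer to 20% on real defect data) account for 85% of the defects, so they are where the fixing effort should go first.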
Sunday, August 15, 2010
1. All the requirements mentioned in FRS are covered.
2. All negative scenario tests are covered.
3. Boundary value conditions are covered, i.e., tests covering lower/upper bounds are included.
4. Data validity tests are covered.
5. All the GUI-related test cases (if mentioned in the FRS) are covered.
6. Check whether there are any invalid test cases.
7. Check whether there is any redundancy in the test cases.
8. Check the test case priority.
9. Check the narration of each test case.
10. Check that no major scenario is missing from the test cases.
11. Each test step is written completely and understandably.
12. A clear expected result is mentioned for each step.
13. Check for typographical/grammatical errors.
14. The length of the test steps is appropriate.
15. Information related to test environment setup, prerequisites, and success/failure end conditions is included.
The checklist can vary depending on the type of test cases, i.e., functional, regression, performance, etc.
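Item 3 above (boundary value conditions) can be made concrete. A minimal sketch, assuming a hypothetical field that accepts integers from 1 to 100: classic boundary-value analysis exercises each bound, just inside it, and just outside it.

```python
def boundary_values(lower, upper):
    """Classic boundary-value analysis: each bound, just inside, just outside."""
    return sorted({lower - 1, lower, lower + 1, upper - 1, upper, upper + 1})

def is_valid_quantity(n, lower=1, upper=100):
    # Hypothetical validator under test, accepting 1..100 inclusive
    return lower <= n <= upper

for n in boundary_values(1, 100):
    # Out-of-range values (0 and 101) are expected to be rejected
    print(n, "accepted" if is_valid_quantity(n) else "rejected")
```

For the 1..100 field this yields the six test inputs 0, 1, 2, 99, 100, 101, which is exactly the "lower/upper bounds" coverage the checklist asks a reviewer to confirm.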
Monday, August 9, 2010
The process involves an active analysis of the application for any weaknesses, technical flaws, or vulnerabilities. Any security issues that are found will be presented to the system owner together with an assessment of their impact and often with a proposal for mitigation or a technical solution.
Thursday, June 17, 2010
This results in
• Reduced quality
• Lots of surprises in the end.
Agile helps in three ways to avoid lust in product management.
Developing features in priority order: even if everything is required, the product team still needs to prioritize tasks and focus on the most important thing to do first.
Incremental gratification: see progress every 2 to 4 weeks, so that lust does not get a chance to accumulate.
Working at a sustainable pace: working at a sustainable pace avoids overtime.
You can read the last post regarding project management sins here.
Monday, June 14, 2010
Both testers are performing exploratory testing, but while one concentrates on driving the functionality, one is thinking about the application from a high level. The testers switch roles at regular intervals.
In an experiment at Microsoft, it was observed that in a single 8-hour session, 15 pairs of testers found 166 bugs, including 40 classified as severity 1 (bugs that must be fixed as soon as possible). In feedback collected from a survey sent to the 30 participants, only 3 thought that pair testing was less fun than an individual approach, and 4 thought it was less effective.
Thursday, June 10, 2010
Conversely, white box testing is an approach that uses test design based on analysis of the underlying source code or schemas not visible to the end user. Test cases founded solely on a white box approach are typically thorough, but nearly always miss key end-user scenarios.
The answer to this dilemma is a gray box (sometimes called glass box) approach. Tests are designed from a customer-focused point of view first (that is, black box), but white box approaches are used to ensure efficiency and test case coverage of the application under test. Testers are responsible for both the customer viewpoint and for determining the correctness of an application. They cannot cover both areas effectively without considering both black box and white box approaches.
Tuesday, June 8, 2010
“Frequently, when I am testing a component or feature for the first time and I have source code available, I use the debugger to test. Before I even write test cases, I write a few basic tests. These might be automated tests, or they might be just a few ideas that I have scribbled onto a pad. I set a breakpoint somewhere in the component initialization, and then use the debugger to understand how every code path is reached and executed. I note where boundary conditions need testing and where external data is used. I typically spend hours (and sometimes days) poking and prodding (and learning and executing) until I feel I have a good understanding of the component. At that point, I usually have a good idea of how to create an efficient and effective suite of tests for the component that can be used for the lifetime of the product.”
Friday, May 28, 2010
Gluttony means fixing all dimensions (scope, resources, cost, and quality) of the project at the start. Such situations arise when we want to achieve more than the expected goal, and as a result quality is affected. It leads to impossible schedules and death marches in the end. As the project iron triangle between cost, resources, and schedule tells us, we need to let one thing vary. You can read more here.
Time boxing is a good technique to avoid gluttony. In agile mode, each iteration is time-boxed and we always focus on project velocity and what we can achieve next. After a few iterations, we can estimate the project velocity and bug rhythm; knowing this prevents the temptation to over-commit.
Friday, May 21, 2010
Thursday, May 20, 2010
It’s better to think of it as an “elastic triangle” and vary the cost, schedule, and/or scope as required. It is critical to understand how flexible you are with respect to each vertex. Perhaps your resources are limited due to financial cutbacks but you're willing to develop less functionality as the result of lower expectations due to the cutback. Perhaps the schedule is critical because you have a legislated deadline (e.g. for Sarbox or Basel-2) to meet, and due to the potential repercussions senior management is willing to spend whatever it takes to get the job done. Once you understand your situation, you can choose one of the following strategies for elasticizing the iron triangle:
Vary the scope. You can do this by timeboxing, which enables you to fix resources and schedule by dropping low-priority features out of an iteration when you've run out of time. It is interesting to note that many agile development processes, such as Scrum and Extreme Programming (XP), take this sort of approach with an agile strategy for change management.
Vary the schedule. You can set the scope and the resources and then let the schedule vary by varying the number and type of people on the team which enables you to deliver the required functionality at the desired cost. If you're tight for budget, a small team may deliver the same functionality that a large team would but take longer calendar time to do so. The more people you have on a team, the greater the amount of money you need to spend on coordination and therefore your overall costs increase. Furthermore, a handful of highly productive people may produce better work and do so for far less money than a larger team of not-so-productive people.
Vary the resources. If you set the schedule and scope you may need to hire more and/or better people to deliver the system. However, remember that there are limits to this approach: nine women can't deliver a baby in one month.
Friday, May 14, 2010
Thursday, May 13, 2010
Forced browsing is a technique used by attackers to gain access to resources that are not referenced, but are nevertheless accessible. One technique is to manipulate the URL in the browser by deleting sections from the end until an unprotected directory is found.
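That URL-trimming technique can be sketched as follows; the target URL is hypothetical, and a real probe would issue HTTP requests and inspect status codes rather than just printing candidates:

```python
from urllib.parse import urlparse, urlunparse

def candidate_urls(url):
    """Yield ancestor URLs by deleting path sections from the end,
    the way an attacker probing for unprotected directories would."""
    parts = urlparse(url)
    segments = [s for s in parts.path.split("/") if s]
    for i in range(len(segments) - 1, -1, -1):
        path = "/" + "/".join(segments[:i]) + ("/" if i else "")
        yield urlunparse((parts.scheme, parts.netloc, path, "", "", ""))

# Hypothetical starting URL; each candidate is a directory to test for access
for u in candidate_urls("https://example.com/app/reports/2010/q3.pdf"):
    print(u)
```

Defensively, the same list is useful to testers: every candidate directory should return an access-denied or not-found response, never a directory listing.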
Thursday, April 8, 2010
If you keep applying the same pesticide, the insects eventually build up resistance and the pesticide no longer works.
Software undergoing the same repetitive tests eventually builds up resistance to them.
In the spiral model of software development, the test process repeats each time around the loop. With each iteration, the software testers receive the software for testing and run their tests. Eventually, after several passes, all the bugs that those tests would find are exposed. Continuing to run them won't reveal anything new.
To overcome the pesticide paradox, software testers must continually write new and different tests to exercise different parts of the program and find more bugs.
Wednesday, March 24, 2010
- Test strategy
- Test reporting
The scripted approach to testing attempts to mechanize the test process by taking test ideas out of a test designer's head and putting them on paper. There's a lot of value in that way of testing. But exploratory testers take the view that writing down test scripts and following them tends to disrupt the intellectual processes that make testers able to find important problems quickly.
Tuesday, March 2, 2010
Friday, February 26, 2010
Can it perform the required functions?
Will it work well and resist failure in all required situations?
How easy is it for a real user to use the product?
How well is the product protected against unauthorized use or intrusion?
How well does the deployment of the product scale up or down?
How speedy and responsive is it?
How easily can it be installed onto its target platform?
How well does it work with external components & configurations?
How economical will it be to provide support to users of the product?
How effectively can the product be tested?
How economical is it to build, fix or enhance the product?
How economical will it be to port or reuse the technology elsewhere?
How economical will it be to publish the product in another language?
Tuesday, February 23, 2010
Software design and development is a constructive job; software testing, on the other hand, breaks the code. It is not possible for a software developer to change his mindset overnight from a constructive activity to a destructive testing activity, because he cannot bring himself into the frame of mind of exposing errors.
Secondly, along with this psychological problem, it is possible that the programmer is not fully aware of the software specification or has misunderstood it. If that is the case, the programmer will carry the same misunderstanding into testing his own program.
This does not imply that a programmer cannot test his own program; rather, software quality improves when the programmer tests his code by executing test cases, but testing is more effective and successful when performed by another party.
It is common practice that, while testing, testers concentrate on valid and expected input conditions and neglect the invalid and unexpected ones. But it is observed that most errors in production come from unusual and unexpected data input, and the frequency of errors is greater when testing with invalid and unexpected input conditions.
Examining a program to see whether it does what it is supposed to do is only half the battle; we also have to see whether the program does what it is not supposed to do. For example, if a program generates tickets in a ticket reservation system, we should make sure that it does not generate a duplicate ticket or a ticket without charging the fare amount.
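The ticket example can be turned into executable checks. A minimal sketch (the `ReservationSystem` class is hypothetical, invented for illustration): the first check confirms the expected behaviour, and the loop confirms the program does not do what it is not supposed to do.

```python
class ReservationSystem:
    """Toy reservation system used to illustrate negative testing."""
    def __init__(self):
        self.tickets = {}

    def reserve_ticket(self, seat, fare_paid):
        if not fare_paid:
            raise ValueError("fare not charged")
        if seat in self.tickets:
            raise ValueError("duplicate ticket")
        self.tickets[seat] = True
        return f"ticket-{seat}"

system = ReservationSystem()
# Positive test: the expected behaviour works
assert system.reserve_ticket("12A", fare_paid=True) == "ticket-12A"

# Negative tests: things the system must NOT do
for seat, paid in [("12A", True), ("14C", False)]:  # duplicate seat, then unpaid fare
    try:
        system.reserve_ticket(seat, fare_paid=paid)
        raise AssertionError("system did something it is not supposed to do")
    except ValueError:
        pass  # correctly rejected
```

Both negative cases are rejected, so only the legitimately reserved seat remains in the system.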
Do not throw test cases away until the program's life has ended. When we create test cases to test a specific functionality of a module, effort goes into creating test cases that cover all scenarios. If the test cases are thrown away after testing, it creates difficulty when we have to re-test the program: the effort we put into creating the test cases is lost. Maintaining a repository of test cases is therefore good practice.
Monday, February 15, 2010
Severity defines the impact that a given defect has on the system. A severe defect may cause the system to crash or leave it in an undefined state.
- Should we fix it now, or can it wait?
- How difficult is it to resolve?
- How many resources will be tied up by the resolution?
Example: Issue 1 (system crash) is definitely severe and may be difficult to resolve, but only happens rarely. When should we fix it? Contrast that with the second issue (a spelling error): not severe, it just makes you look bad, and it should be a really easy fix; one developer, maybe 10 minutes to fix, another 10 to validate (if that). Which should get the higher priority? Which should we fix now?
I'm going to recommend fixing the typo immediately and, if there is sufficient time, fixing and resolving the blue screen before the next build. Many commercial defect tracking systems have either Severity or Priority; some may have both (personally, I know of only one). Others allow you to modify existing fields or add additional fields, though of course those are severely lacking in other areas.
Bottom line: you need to define both Severity and Priority for your application, based on your users' needs.
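As a sketch of that bottom line, here is a toy defect record carrying both fields, with triage ordered by priority first (the numeric scales and example defects are illustrative, not taken from any particular tracking tool):

```python
from dataclasses import dataclass

@dataclass
class Defect:
    title: str
    severity: int  # 1 = system crash ... 4 = cosmetic
    priority: int  # 1 = fix now ... 4 = can wait

defects = [
    Defect("Blue screen on rare input", severity=1, priority=2),
    Defect("Typo on splash screen", severity=4, priority=1),
]

# Triage: fix-now items first, severity breaks ties
for d in sorted(defects, key=lambda d: (d.priority, d.severity)):
    print(d.title)
```

The typo (priority 1) sorts ahead of the rare crash (priority 2), matching the recommendation above even though the crash is far more severe.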
Tuesday, February 9, 2010
- How often do we execute the test?
- Do you really need to automate this test case? Is it cost effective?
- What is the mechanism you have to execute manual and automated testing in parallel, how you would compare the results?
- Is it quite complicated and time-consuming to execute it manually?
- How accurate do you want the results to be after executing this test?
- Do you have skilled resources available for its automation?
- Are you automating it for regression testing? Automation plays a key role in regression testing.
- Are the scripts written for automation scalable enough to cope with functionality added in the future?
It is quite obvious that there are other factors involved in test automation as well; I have just mentioned a few of them.
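The cost-effectiveness question on the list can be estimated with a simple break-even calculation; the figures below are hypothetical placeholders (times in hours):

```python
def automation_pays_off(manual_cost_per_run, automation_cost,
                        maintenance_per_run, expected_runs):
    """Automation is worthwhile once total manual effort exceeds
    the cost of building and maintaining the automated test."""
    manual_total = manual_cost_per_run * expected_runs
    automated_total = automation_cost + maintenance_per_run * expected_runs
    return manual_total > automated_total

# Hypothetical: 30-minute manual run, 8 hours to automate, 2 minutes upkeep per run
print(automation_pays_off(0.5, 8, 2 / 60, 10))  # False: too few runs to recoup the effort
print(automation_pays_off(0.5, 8, 2 / 60, 30))  # True: pays off over 30 runs
```

This ties together two of the questions above: how often the test runs and whether automating it is cost effective.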
Friday, January 29, 2010
Wednesday, January 27, 2010
- Static testing is about prevention, dynamic testing is about cure.
- Static tools offer greater marginal benefits.
- Static testing is many times more cost-effective than dynamic testing.
- Static testing beats dynamic testing by a wide margin.
- Static testing is more effective.
- Static testing gives you comprehensive diagnostics for your code.
- Static testing achieves 100% statement coverage in a relatively short time, while dynamic testing often achieves less than 50% statement coverage, because dynamic testing finds bugs only in parts of the code that are actually executed.
- Dynamic testing usually takes longer than static testing. Dynamic testing may involve running several test cases, each of which may take longer than compilation.
- Dynamic testing finds fewer bugs than static testing. Static testing can be done before compilation, while dynamic testing can take place only after compilation and linking.
- Static testing can find all of the following, which dynamic testing cannot: syntax errors, code that is hard to maintain, code that is hard to test, code that does not conform to coding standards, and ANSI violations.
You can read more about static testing here.
Wednesday, January 13, 2010
A cowboy coder can be a lone developer or part of a group of developers with either no external management or management that controls only non-development aspects of the project, such as its nature, scope, and feature set (the "what", but not the "how").
Are there any cowboys around you? :)
Wednesday, January 6, 2010
I read an interesting e-book by Seth Godin about good presentations; here I am summarizing a few remarkable points about how to prepare attractive PowerPoint presentations.
First, make yourself cue cards. You should be able to see your cue cards on your laptop’s screen while your audience sees your slides on the wall. Now, you can use the cue cards you made to make sure you’re saying what you came to say.
Second, make slides that reinforce your words, not repeat them. Create slides that demonstrate, with emotional proof, that what you're saying is true, not just accurate.
Third, create a written document. A leave-behind. Put in as many footnotes or details as you like. Then, when you start your presentation, tell the audience that you’re going to give them all the details of your presentation after it’s over, and they don’t have to write down everything you say.
Here are the five rules you need to remember to create amazing PowerPoint Presentations:
- No more than six words on a slide. EVER.
- No cheesy images. Use professional images from corbis.com instead
- No dissolves, spins or other transitions. None
- Sound effects can be used a few times per presentation, but never (ever) use the sound effects that are built in to the program. Instead, rip sounds and music from CDs and leverage the Proustian effect this can have.
- Don’t hand out print-outs of your slides. They’re emotional, and they won’t work without you there.
Friday, January 1, 2010
Google's algorithm is quite complicated and tricky. Two measures drive a page's ranking: relevance and PageRank.
Relevance is basically the measure of how well your website (or, more accurately, one of your web pages) matches the search phrase the user has entered. Measuring relevance is a relatively sophisticated process, but it boils down to some fundamentals like the title of the page, the words on the page, and how frequently they occur.
PageRank is an independent measure of Google’s perception of the quality/authority/credibility of an individual web page. It does not depend on any particular search phrase. So, assuming that two web pages have the same relevance – then whoever has the higher PageRank gets the better ranking – and shows up at the top of the results page.
How PageRank Is Calculated
PageRank is primarily determined by how many other web pages are linking to you. Google considers this kind of inbound link a vote of confidence.
That is, roughly 20 inbound links from a website with a PageRank of 3 may carry about as much weight as a single inbound link from a website with a PageRank of 8.
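A minimal power-iteration sketch of this "vote of confidence" idea over a tiny hypothetical link graph (the real algorithm has many refinements that Google does not publish):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: {page: [pages it links to]} -> {page: rank score}."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:  # each outbound link passes on a share of p's rank
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Hypothetical graph: both other pages link to 'hub', so it earns the top rank
graph = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # 'hub'
```

Each page's score is a weighted sum of the scores of the pages linking to it, which is exactly why one link from a high-PageRank site can outweigh many links from low-PageRank sites.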
I have summarized my understanding in this article; you can read more details on inbound marketing here. I have already written about the need for inbound marketing on my blog.