Wednesday, June 30, 2010

Browser Bug Analyzing Tips

Check if the client operating system (OS) version and patches meet system requirements.
Check if the correct version of the browser is installed on the client machine.
Check if the browser is properly installed on the machine.
Check the browser settings.
Check with different browsers (e.g., Netscape Navigator versus Internet Explorer).
Check with different supported versions of the same browser (e.g., 3.1, 3.2, 4.2, 4.3, etc.).

A Few Questions and Answers on Software Bugs

What are the different types of bugs we normally see in a project? Include their severity as well.
The life cycle of a bug, in a general context, is:
Bugs are usually logged by the development team (while unit testing) and also by testers (while doing system or other types of testing). So let me explain from a tester's perspective:

A tester finds a new defect/bug and logs it using a defect tracking tool.
1. Its status is 'NEW', and it is assigned to the respective dev team (team lead or manager).
2. The team lead assigns it to a team member, so the status is 'ASSIGNED TO'.
3. The developer works on the bug, fixes it, and re-assigns it to the tester for testing. Now the status is 'RE-ASSIGNED'.
4. The tester checks if the defect is fixed; if it is fixed, he changes the status to 'VERIFIED'.
5. If the tester has the authority (depends on the company), he can, after verifying, change the status to 'FIXED'. If not, the test lead can verify it and change the status to 'FIXED'.
6. If the defect is not fixed, he re-assigns the defect back to the dev team for re-fixing.
This is the life cycle of a bug.
1. User Interface Defects - Low
2. Boundary Related Defects - Medium
3. Error Handling Defects - Medium
4. Calculation Defects - High
5. Improper Service Levels (Control flow defects) - High
6. Interpreting Data Defects - High
7. Race Conditions (Compatibility and Intersystem defects) - High
8. Load Conditions (Memory Leakages under load) - High
9. Hardware Failures - High

Why are there Bugs?
Since humans design and program hardware and software, mistakes are inevitable. That's what computer and software vendors tell us, and it's partly true. What they don't say is that software is buggier than it has to be. Why? Because time is money, especially in the software industry. This is how bugs are born: a software or hardware company sees a business opportunity and starts building a product to take advantage of that. Long before development is finished, the company announces that the product is on the way. Because the public is (the company hopes) now anxiously awaiting this product, the marketing department fights to get the goods out the door before that deadline, all the while pressuring the software engineers to add more and more features. Shareholders and venture capitalists clamor for quick delivery because that's when the company will see the biggest surge in sales. Meanwhile, the quality-assurance division has to battle for sufficient bug-testing time.

What is the difference between a bug, a defect, and an error?
Error:
A human action that produces an incorrect result. A programming mistake leads to an error.
Bug:
An informal word describing any of the above; a deviation from the expected result.
A software bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working as intended, or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program's source code or its design. It is said that there are bugs in all useful computer programs, but well-written programs contain relatively few bugs, and these bugs typically do not prevent the program from performing its task. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports about bugs in a program are referred to as bug reports, also called PRs (problem reports), trouble reports, CRs (change requests), and so forth.

Defect:
A problem in an algorithm leads to a failure. A defect is something that normally works but has something out-of-spec.

What Is Error Rate?
The mean time between errors. This can be a statistical value between any errors, or it can be broken down into the rate of occurrence between similar errors. Error rate also has a perception component: it matters when identifying the "good-enough" balance, in other words, whether the mean time between errors is greater than what the end user will accept.

What Is Issue log?
A log kept of all issues raised during the development process. This could contain problems uncovered, the impact of changes to specifications, or the loss of a key individual. It is anything that must be tracked and monitored.

On-Line Testing Tools for Developers and Testers

Domain Name Format Validator As a software developer or tester of Web applications, you may be required to know what a valid domain name is and what an invalid one is. A valid domain name must satisfy a number of requirements. This page allows you to learn and test domain name formats against those requirements.


IP (Version 4) Address Format Validator If you are a software developer or tester of communication applications, you need to know what a valid IP address is and what an invalid one is. This page allows you to learn and test valid version 4 IP addresses.
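
If you would rather script this check than use an online page, the rule is simple: four decimal octets, each 0-255. A minimal sketch in Python (an illustrative language choice, not part of the original tools):

    import re

    # One decimal octet in the range 0-255.
    _OCTET = r"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])"
    IPV4_RE = re.compile(r"^" + r"\.".join([_OCTET] * 4) + r"$")

    def is_valid_ipv4(address: str) -> bool:
        """Return True if the string is a valid dotted-quad IPv4 address."""
        return IPV4_RE.match(address) is not None

    assert is_valid_ipv4("192.168.0.1")
    assert not is_valid_ipv4("256.1.1.1")   # octet out of range
    assert not is_valid_ipv4("1.2.3")       # too few octets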

MAC (IEEE 802) Address Format Validator If you are a software developer or tester of communication applications, you need to know what a valid MAC (Media Access Control) address is and what an invalid one is. This page allows you to learn and test valid MAC addresses as defined in the IEEE 802 standards.

Email Address Format Validator As a software developer or tester of Web applications, you are required to know what a valid email address is and what an invalid one is. According to the Internet specifications, there are a number of valid email address formats. This page allows you to learn and test email addresses of all formats.

Show Me My Browser's Identification Information Every Web browser sends an identification string to the Web server you are visiting. The Web server can dynamically customize Web pages based on the browser identification. This simple test page allows you to find out what the identification string of your browser is.

Show Me My IP Address and Host Name Whenever you are visiting a Website, your activities will be recorded against a client IP address and host name that represent your computer at that time. This simple test page allows you to find out what your IP address and host name are.

Regular expressions...

Testing Regular Expression Match Patterns Regular expressions were popularized by the Perl language, and are now used everywhere: text editors, programming languages, and software testing tools. Regular expressions are powerful but hard to learn. This page allows you to learn and test regular expressions with your own strings, match patterns, sub-patterns, and modifiers.

Testing Regular Expression Match Pattern of Multiple Occurrences Regular expression pattern matching with multiple occurrences is a very powerful tool in text editing, software programming, and software testing. It can be used to parse desired values of multiple occurrences out of text files. This page allows you to learn and test regular expression pattern matching with multiple occurrences.

Testing Regular Expression Match and Replace Regular expressions are commonly used in pattern match and replacement processes. Many text editors, programming languages, and software testing tools offer pattern match and replacement functions. This page allows you to learn and test regular expressions used in match and replacement processes.
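
To make these three pages concrete, here is a small Python sketch covering a single match with sub-patterns, multiple occurrences, and match-and-replace (Python's re module is used purely for illustration):

    import re

    text = "Error 404 at 10:15, error 500 at 10:42"

    # Single match with sub-patterns (capture groups).
    m = re.search(r"[Ee]rror (\d+) at (\d+:\d+)", text)
    print(m.group(1), m.group(2))             # 404 10:15

    # Multiple occurrences: findall returns every (code, time) pair.
    print(re.findall(r"[Ee]rror (\d+) at (\d+:\d+)", text))

    # Match and replace: mask every three-digit error code.
    print(re.sub(r"\b\d{3}\b", "XXX", text))  # Error XXX at 10:15, ...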

PHP functions...

Testing date() PHP Function Date and time values are used in almost every Web application, but formatting a date and time value into a string presentation requires some programming skill. This page allows you to learn and test the date() function, which gives you many ways to format a date and time value.

Testing htmlspecialchars() PHP Function When data needs to be presented in Web pages, HTML special characters must be properly escaped. Otherwise, the data will not be presented properly, or the Web page may become corrupted. This page allows you to learn and test the htmlspecialchars() function to protect embedded data in Web pages.

Testing htmlentities() PHP Function When data needs to be presented in Web pages, HTML special characters must be properly escaped. Otherwise, the data will not be presented properly, or the Web page may become corrupted. This page allows you to learn and test the htmlentities() function to protect embedded data in Web pages.

Testing html_entity_decode() PHP Function When data is embedded in HTML documents, special characters are represented as HTML entities to protect them from being interpreted by browsers. If you need them back, you can use html_entity_decode() to convert them. This page allows you to learn and test the html_entity_decode() function to convert HTML entities.

Testing urlencode() PHP Function When non-alphanumeric characters are present in user data to be included as part of a Web page address, they need to be encoded into the percent (%) sign format required by the application/x-www-form-urlencoded standard. This page allows you to learn and test the urlencode() function to perform URL encoding.

Testing urldecode() PHP Function When user data is received by a Web server, it is presented in URL-encoded format. If you retrieve this data from $_SERVER['QUERY_STRING'], you need to decode it using the urldecode() function. This page allows you to learn and test the urldecode() function to perform URL decoding.
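
If you want to experiment offline, the behaviour of several of these PHP functions can be approximated in other languages. A rough Python sketch, with html.escape standing in for htmlspecialchars() and quote_plus/unquote_plus for urlencode()/urldecode():

    import html
    from urllib.parse import quote_plus, unquote_plus

    data = 'Tom & Jerry <"cartoon">'

    # Rough equivalent of PHP htmlspecialchars(): escape &, <, > and quotes.
    print(html.escape(data))      # Tom &amp; Jerry &lt;&quot;cartoon&quot;&gt;

    # Rough equivalents of PHP urlencode()/urldecode()
    # (application/x-www-form-urlencoded: spaces become '+').
    encoded = quote_plus(data)
    print(encoded)                # Tom+%26+Jerry+%3C%22cartoon%22%3E
    print(unquote_plus(encoded))  # round-trips to the original string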

Data converters...
Base64 Encode Converter When data is communicated over the Internet, it needs to be encoded into printable characters so it can be safely transmitted through communication applications. This page allows you to learn and test the Base64 encoding process.

Base64 Decode Converter When you receive email attachments, they are usually presented in Base64 encoded format. You can decode them yourself without using any email client tools. This page allows you to learn and test the Base64 decoding process.

UUEncode Converter On Unix systems, when data is communicated from one system to another, it needs to be encoded into printable characters using the UUEncode algorithm for safety reasons. This page allows you to learn and test the UUEncode process.

UUDecode Converter If you receive files in UUEncode format for transfer protection reasons, you can decode the files yourself without using any Unix tools. This page allows you to learn and test the UUDecode process.

Binary to HEX Converter Binary to HEX conversion is needed if you want to open a non-text file in a text editor to inspect issues like hidden characters, corrupted UTF-8 sequences, or corrupted file identifiers. This page allows you to learn and test the binary to HEX conversion process.

HEX to Binary Converter HEX to binary conversion is to reverse the process of binary to HEX conversion. This page allows you to learn and test the HEX to binary conversion process.
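
A quick offline sketch of the Base64 and HEX conversions above, using Python's standard library for illustration:

    import base64
    import binascii

    raw = b"Hello, world!"

    # Base64 encode/decode round trip.
    b64 = base64.b64encode(raw)
    print(b64)                        # b'SGVsbG8sIHdvcmxkIQ=='
    print(base64.b64decode(b64))      # b'Hello, world!'

    # Binary to HEX and back, e.g. to inspect a file's leading bytes.
    hexed = binascii.hexlify(raw)
    print(hexed)                      # b'48656c6c6f2c20776f726c6421'
    print(binascii.unhexlify(hexed))  # b'Hello, world!'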

Random data generators...

Random Integer Generator If you are a software tester, you may need some random integers to be used as part of application input, or as test data in a database. This page allows you to generate a large number of random integers for your test data needs.

Random Real Number/Float Value Generator If you are a software tester, you may need some random real numbers or float values to be used as part of application input, or as test data in a database. This page allows you to generate a large number of random real numbers for your test data needs.

Random Price/Currency Value Generator If you are a software tester, you may need some random prices or currency values to be used as part of application input, or as test data in a database. This page allows you to generate a large number of random currency values for your test data needs.

Random Date and Time Value Generator If you are a software tester, you may need some random date and time values to be used as part of application input, or as test data in a database. This page allows you to generate a large number of random date and time values for your test data needs.

Random String Generator If you are a software tester, you may need some random strings to be used as part of application input, or as test data in a database. This page allows you to generate a large number of random strings for your test data needs.
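
The same kinds of random test data are easy to generate locally when the online pages are not at hand. A small Python sketch (the ranges and lengths are arbitrary examples):

    import random
    import string

    rng = random.Random(42)  # fixed seed so the test data is reproducible

    # Random integers, floats, and price-like values in chosen ranges.
    ints   = [rng.randint(0, 9999) for _ in range(5)]
    floats = [rng.uniform(-1.0, 1.0) for _ in range(5)]
    prices = [round(rng.uniform(0.01, 999.99), 2) for _ in range(5)]

    # Random fixed-length alphanumeric strings.
    def random_string(length: int = 10) -> str:
        return "".join(rng.choices(string.ascii_letters + string.digits, k=length))

    print(ints, floats, prices, random_string(), sep="\n")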

Test data generators...

Test User Full Name Generator If you are a software tester, you may need some user full names to be used as part of application input. This page helps you to generate random first names and last names for your test data needs.

Test User Birthday Date Generator If you are a software tester, you may need some user birthday data to be used as part of application input. This page helps you to generate random birthdays for your test data needs.

Test Phone Number Generator If you are a software tester, you may need some phone numbers to be used as part of application input. This page helps you to generate random phone numbers for your test data needs.

Test Email Address Generator If you are a software tester, you may need some email addresses to be used as part of application input. Email addresses have special formatting rules. This page helps you to generate email addresses in valid format for your test data needs.

Test IP Address Generator If you are a software tester, you may need some IP addresses to be used as part of application input. This page helps you to generate random IP addresses for your test data needs.

Test MAC Address Generator If you are a software tester, you may need some MAC addresses to be used as part of application input. This page helps you to generate random MAC addresses for your test data needs.

Test Credit Card Number Generator If you are a software tester, you may need some credit card numbers to be used as part of application input. Credit card numbers have special prefixes and MOD 10 checks (see the sketch after this list). This page helps you to generate credit card numbers in valid format for your test data needs.

Test Password Generator If you are a software tester, you may need some password strings to be used as part of application input. A good password must be at least 6 characters long and must contain an upper-case letter, a lower-case letter, a number, and a symbol. This page helps you to generate password strings for your test data needs.
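
As promised above, the MOD 10 (Luhn) check that valid credit card numbers must pass is short enough to sketch directly (Python used for illustration):

    def luhn_checksum_ok(number: str) -> bool:
        """MOD 10 (Luhn) check used to validate credit card test numbers."""
        digits = [int(d) for d in number]
        total = 0
        # From the rightmost digit, double every second digit;
        # subtract 9 whenever the doubled value exceeds 9.
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    assert luhn_checksum_ok("4111111111111111")      # classic Visa test number
    assert not luhn_checksum_ok("4111111111111112")  # checksum broken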

Load Testing Process (LoadRunner Version 7.2) Explained

Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives.


Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.

Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us.

Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.

Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.

Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner’s graphs and reports to analyze the application’s performance.
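
LoadRunner's Vuser scripting aside, the essence of steps 2 through 4 (many virtual users running the same transaction concurrently while response times are recorded) can be sketched in a few lines. This is a conceptual Python illustration, not LoadRunner code; the URL and user count are made-up examples:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"   # hypothetical application under test
    VUSERS = 25                      # number of emulated virtual users

    def vuser_task(user_id: int) -> float:
        """One 'transaction': fetch the page and return its response time."""
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=VUSERS) as pool:
        timings = list(pool.map(vuser_task, range(VUSERS)))

    print(f"avg {sum(timings)/len(timings):.3f}s, worst {max(timings):.3f}s")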

Defect Tracking Objectives

  1. Provide the ability to track defects/problems
  2. Provide a defect tracking database
  3. Provide project-level data entry support
  4. Provide defect tracking/problem reporting workflow management
  5. Provide standardized and custom query/reporting capabilities
  6. Provide integration to software Version Management system
  7. Provide integration to Help Desk system
  8. Provide management information (cost of quality) and operational information (support project level testing process)
  9. Facilitate communication among testers/developers, the help desk, and management

Effective methods of writing Defect description

Testing is commonly used to execute software and find defects. A defect describes a variance or discrepancy between actual and expected results in application software. Defects should be documented in such a way that any developer can understand the defect description and reproduce it in their own environment.


Defects can be logged using tools (e.g., Siebel, Track, PVCS, etc.), or by documenting the defects and maintaining the document in a repository. Testers should write defect descriptions efficiently so they are useful to others within a project, and the documentation should be transparent.

Best practices for writing defect descriptions:
· Pre-requisite of a defect document
· Abstract of a defect
· Description and observation of a defect
· Screenshot of a defect

Pre-Requisite of a Defect Document
The document should contain a few standard details:
- Author name or submitter name
- Type of defect (e.g., enhancement, issue, or defect)
- Submitted date
- Fixed date
- Status of the defect
- Project phase (e.g., version 1.1, 2.0, etc.)
- Version found (daily builds)
- Severity (e.g., Critical, Major, Minor, Cosmetic)

Abstract of a Defect
Testers should specify only a brief description of the defect.

Description and Observation of Defect
In the Description column, the first few lines should specify the exact problem in the application. The following paragraph should give the details, such as steps to reproduce (e.g., starting from the application logon through to where the defect was found in the application).

Follow with an observation (e.g., the system displays an error message such as "Unable to update the record", but according to the functionality the system should update the record). It is also helpful if the tester specifies a few more observation points, such as:

- The defect occurs only in a particular version (e.g., certain Adobe versions for a report)
- The defect is also found in other modules
- Inconsistency of the application while reproducing the defect (e.g., sometimes reproducible and sometimes not)

Screen Shot of a defect
Providing a screenshot along with the defect document makes it much easier for developers to identify the defect and its cause, and helps testers verify that particular defect in the future.

Tips for screenshots:
- The screenshot should be self-explanatory
- A figure such as an arrow, box, or circle can highlight where exactly the problem occurred (this type of highlighting is helpful for GUI/cosmetic defects)
- Use different colors for specific descriptions

Conclusion

A brief, clear defect description:

· Makes it easy to analyze the cause of the defect
· Makes the defect easier to fix
· Avoids re-work
· Saves testers' time
· Avoids defect duplication
· Helps keep track of defects

Bug Report Components

Report number:
Unique number given to a bug.

Program / module being tested:
The name of the program or module that is being tested.

Version & release number:
The version of the product that you are testing.

Problem Summary:
A one-line data entry field; be precise about what the problem is.

Report Type:
Describes the type of problem found; for example, it could be a software or hardware bug.

Severity:
Normally, how severe you consider the bug to be.
Various levels of severity: Low - Medium - High - Urgent

Environment:
Environment in which the bug is found.

Detailed Description:
Detailed description of the bug that is found

How to reproduce:
Detailed description of how to reproduce the bug.

Reported by:
The name of the person who wrote the report.

Assigned to developer:
The name of the developer assigned to fix the bug.

Status:

Open:
The status of the bug when it is entered.

Fixed / feedback:
The status of the bug when it is fixed.

Closed:
The status of the bug when it is verified.

(A bug can only be closed by a QA person. Usually, the problem is closed by the QA manager.)

Deferred:
The status of the bug when it is postponed.

User error:
The status of the bug when the user made an error.

Not a bug:
The status of the bug when it is not a bug.

Priority:
Assigned by the project manager who asks the programmers to fix bugs in priority order.

Resolution:
Defines the current status of the problem. There are four types of resolution such as deferred, not a problem, will not fix, and as designed.

EXAMPLES OF EQUIVALENCE CLASSES

Ranges of numbers (such as all numbers between 10 and 99, which are of the same two-digit equivalence class)
Membership in groups (dates, times, country names, etc.)
Invalid inputs (placing symbols into text-only fields, etc.)
Equivalent output events (variations of inputs that produce the same output)
Equivalent operating environments
Repetition of activities
Number of records in a database (or other equivalent objects)
Equivalent sums or other arithmetic results
Equivalent numbers of items entered (such as the number of characters entered into a field)
Equivalent space (on a page or on a screen)
Equivalent amounts of memory, disk space, or other resources available to a program.

Boundary values mark the transition points between equivalence classes. They can be limit values that define the line between supported inputs and unsupported inputs, or they can define the line between supported system requirements and unsupported system requirements. Applications are more susceptible to errors at the boundaries of equivalence classes, so boundary condition tests can be quite effective at uncovering errors.

Generally, each equivalence class is partitioned by its boundary values. Nevertheless, not all equivalence classes have boundaries. For example, given the following browser equivalence classes (Netscape 4.6.1, and Microsoft Internet Explorer 4.0 and 5.0), there is no boundary defined between the classes.

Each equivalence class represents potential risk. Under the equivalence class approach to developing test cases, at most nine test cases should be executed against each partition.
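
For example, for the two-digit equivalence class (10 to 99) mentioned earlier, the boundary values plus their immediate neighbours give a small, high-value test set. A Python sketch, assuming a hypothetical accepts_two_digit() validator:

    def accepts_two_digit(n: int) -> bool:
        """Hypothetical validator for the 10-99 equivalence class."""
        return 10 <= n <= 99

    # Boundary condition tests: the limits plus the values just outside them.
    cases = {9: False, 10: True, 11: True, 98: True, 99: True, 100: False}
    for value, expected in cases.items():
        assert accepts_two_digit(value) == expected, f"failed at {value}"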

How to do Debugging ?

Finding and fixing bugs, or "debugging", has always been a major part of computer programming. As computer programs grow more complex, bugs become more common and difficult to fix. Often programmers spend more time and effort finding and fixing bugs than writing new code.


Usually, the most difficult part of debugging is locating the erroneous part of the source code. Once the mistake is found, correcting it is usually easy. Programs known as debuggers exist to help programmers locate bugs. However, even with the aid of a debugger, locating bugs is something of an art.

Typically, the first step in locating a bug is finding a way to reproduce it easily. Once the bug is reproduced, the programmer can use a debugger or some other tool to monitor the execution of the program in the faulty region and (eventually) find the problem. However, it is not always easy to reproduce bugs. Some bugs are triggered by inputs to the program which may be difficult for the programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may disappear when the program is run with a debugger; these are heisenbugs (humorously named after the Heisenberg uncertainty principle).

Debugging is still a tedious task requiring considerable manpower. Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, there has been a renewed interest in the development of effective automated aids to debugging. For instance, methods of static analysis by abstract interpretation have already made significant achievements, while still remaining much of a work in progress.
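
As a minimal illustration of monitoring execution in the faulty region, here is how a reproducible failure might be examined with Python's built-in pdb debugger (the buggy average() function is a made-up example):

    import pdb

    def average(values):
        total = sum(values)
        return total / len(values)   # bug: crashes when values is empty

    # Reproduce the failure, then drop into the debugger just before it
    # to inspect locals and step through the faulty region.
    data = []
    pdb.set_trace()       # at the prompt, try 'p data', 'step', 'next'
    print(average(data))  # ZeroDivisionError once you continue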

Build Automated QA into Your Code

Obviously, to build defect-free code, you have to be able to test your code. In addition to including a testing plan/strategy into the implementation, you should design specific code to provide for full, automated testability.


The most effective testing we use is fully automated testing, also called regression testing. This is a series of fully automated tests that are run after each build of a program. The tests are designed to exercise every part of the program and produce a success/failure report. The idea is to use the power of the computer to make sure that the program hasn't been adversely affected by a change.

If the design is well structured, most changes should not have side effects. The purpose of these automated tests is to provide insurance that the coding assumptions are valid, and that everything else still works. By making the tests completely automated, they can be run frequently and provide prompt feedback to the engineer.
If tests are run by manually testing the program, there is a chance that human error will let a problem slip through. Manual testing is also very expensive, usually too expensive to run after every change to a program.

There are a number of commercial testing tools available which are designed to help you automate your testing, particularly in GUI environments such as Windows. Although they are no doubt better than manual testing, we have not found them to be effective, for a number of reasons.

By building support for automated testing into your program, you can approach 100% automated testing. Without this customized, built-in testability, you will be lucky to achieve 35% automated testing, even with the best commercial QA testing tool. We recommend that you budget five percent of total engineering time to creating support for automated QA testing.

Of course, each new piece of code should have a corresponding set of tests, added at the same time as the code is added, for the automated QA suite.

In order for fully automated execution of testing to be of value, the tests that are automatically executed and checked must cover the software fully. To the extent that they don't, running the tests doesn't tell you anything about the part of your software that wasn't exercised by the testing. (This is true for all testing, whether automated or manual.)
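
In practice, "a corresponding set of tests, added at the same time as the code" often takes the form of a unit-test module run after every build. A minimal sketch using Python's unittest, with a made-up discount() function standing in for production code:

    import unittest

    def discount(price: float, percent: float) -> float:
        """Production code under test (illustrative)."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class DiscountRegressionTests(unittest.TestCase):
        def test_typical_value(self):
            self.assertEqual(discount(200.0, 25), 150.0)

        def test_boundaries(self):
            self.assertEqual(discount(100.0, 0), 100.0)
            self.assertEqual(discount(100.0, 100), 0.0)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                discount(100.0, 101)

    if __name__ == "__main__":
        unittest.main()  # run after every build; exits non-zero on failure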

Common types of computer bugs 2

Anti-patterns, also referred to as pitfalls, are classes of commonly-reinvented bad solutions to problems. They are studied, as a category, in order that they may be avoided in the future, and that instances of them may be recognized when investigating non-working systems.


The term comes from computer science, inspired by the Gang of Four's Design Patterns book, which laid out examples of good programming practice termed "design patterns"; anti-patterns are their opposite. Part of good programming practice is the avoidance of anti-patterns.

The concept is readily applied to engineering in general, and also applies outside engineering, in any human endeavour. Although the term is not commonly used outside engineering, the concept is quite universal.

Some recognised computer programming anti-patterns
* Abstraction inversion: Creating simple constructs on top of complex ones (controversial)
* Accidental complexity: Introducing unnecessary complexity into a solution
* Action at a distance: Unexpected interaction between widely separated parts of a system
* Accumulate and fire: Setting parameters for subroutines in a collection of global variables
* Ambiguous viewpoint: Presenting a model (usually OOAD) without specifying its viewpoint
* BaseBean: Inheriting functionality from a utility class rather than delegating to it
* Big ball of mud: A system with no recognisable structure
* Blind faith: Lack of checking of (a) the correctness of a bug fix or (b) the result of a subroutine
* Blob: see God object
* Boat anchor: Retaining a part of a system that no longer has any use
* Busy spin: Consuming CPU while waiting for something to happen, usually by repeated checking instead of proper messaging
* Caching failure: Forgetting to reset an error flag when an error has been corrected
* Checking type instead of interface: Checking that an object has a specific type when only a certain contract is required
* Code momentum: Over-constraining part of a system by repeatedly assuming things about it in other parts
* Coding by exception: Adding new code to handle each special case as it is recognised
* Copy and paste programming: Copying (and modifying) existing code without creating generic solutions
* De-Factoring: The process of removing functionality and replacing it with documentation
* DLL hell: Problems with versions, availability and multiplication of DLLs
* Double-checked locking: Checking, before locking, whether locking is necessary, in a way that may fail with e.g. modern hardware or compilers (see the sketch after this list)
* Empty subclass failure: Creating a (Perl) class that fails the "Empty Subclass Test" by behaving differently from a class derived from it without modifications
* Gas factory: An unnecessarily complex design
* God object: Concentrating too many functions in a single part of the design (class)
* Golden hammer: Assuming that a favorite solution is universally applicable
* Improbability factor: Assuming that it is improbable that a known error becomes effective
* Input kludge: Failing to specify and implement handling of possibly invalid input
* Interface bloat: Making an interface so powerful that it is too hard to implement
* Hard code: Embedding assumptions about the environment of a system at many points in its implementation
* Lava flow: Retaining undesirable (redundant or low-quality) code because removing it is too expensive or has unpredictable consequences
* Magic numbers: Including unexplained numbers in algorithms
* Magic pushbutton: Implementing the results of user actions in terms of an inappropriate (insufficiently abstract) interface
* Object cesspool: Reusing objects whose state does not conform to the (possibly implicit) contract for re-use
* Premature optimization: Optimization on the basis of insufficient information
* Poltergeists: Objects whose sole purpose is to pass information to another object
* Procedural code (when another paradigm is more appropriate)
* Race hazard: Failing to see the consequence of different orders of events
* Re-Coupling: The process of introducing unnecessary object dependency
* Reinventing the wheel: Failing to adopt an existing solution
* Reinventing the square wheel: Creating a poor solution when a good one exists
* Smoke and mirrors: Demonstrating how unimplemented functions will appear
* Software bloat: Allowing successive versions of a system to demand ever more resources
* Spaghetti code: Systems whose structure is barely comprehensible, especially because of misuse of code structures
* Stovepipe system: A barely maintainable assemblage of ill-related components
* Yo-yo problem: A structure (e.g. of inheritance) that is hard to understand due to excessive fragmentation
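
To make one of these concrete, here is the double-checked locking item from the list above, sketched in Python: an unlocked fast path followed by a second check under the lock. (In languages with weaker memory models than CPython's, the unlocked read is exactly what makes this pattern notorious.)

    import threading

    _lock = threading.Lock()
    _instance = None

    def get_instance():
        """Double-checked locking: unlocked fast path, re-checked under the lock."""
        global _instance
        if _instance is None:             # first check, without the lock
            with _lock:
                if _instance is None:     # second check, holding the lock
                    _instance = object()  # expensive initialization goes here
        return _instance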

Some Organisational Anti-patterns
* Analysis paralysis: Devoting disproportionate effort to the analysis phase of a project
* Continuous obsolescence: Devoting disproportionate effort to porting a system to new environments
* Creeping featurism: Adding new features to the detriment of the quality of a system
* Design by committee: The result of having many contributors to a design, but no unifying vision
* Escalation of commitment: Failing to revoke a decision when it proves wrong
* I told you so: When the ignored warning of an expert proves justified
* Management by numbers: Paying excessive attention to quantitative management criteria, when these are inessential or cost too much to acquire
* Mushroom management: Keeping employees uninformed and abused
* Scope creep: Allowing the scope of a project to grow without proper control
* Vendor lock-in: Making a system excessively dependent on an externally supplied component
* Warm body: A person whose contribution to a project is in doubt, especially if taken on in panic

Some social anti-patterns
The status of some of these is likely to be controversial.
* Censorship: Suppressing discussion prevents political, social, and scientific progress
* Concentrated power: Individuals abuse power, even if initially well-meaning
* Dictatorship: No individual has all the skills necessary to govern; also power corrupts
* Discrimination: Discrimination on irrelevant features yields economic inefficiency and social resentment
* Dogmatic religion: Dogma suppresses individual thought and prevents progress
* Intolerance: Insisting on changing undesirable-but-harmless features of other people causes resentment and is an endless task
* Monopoly: Without competition most of the effects of a free market don't occur, and a private company has no incentive to do business fairly
* Plurality voting system: Politics under plurality voting degenerates into two highly-polarised parties, with all other political views suppressed
* Popularity contest: Popularity becomes a self-reinforcing quality, and is unrelated to any useful measure of merit
* Segregation: Separate but equal is rarely, if ever, equal; causes resentment
* Single-party system: Without electoral competition the party has no incentive to govern fairly
* Totalitarianism: Suppressing individuality causes resentment, and the approved way of life is never even remotely suitable for everyone
* Victimless crime: Suppressing harmless behaviour creates a subculture of otherwise-law-abiding people for whom the legal system is an enemy
* Witch hunt: Scapegoats are easy to find, but if the problem is never actually solved then more scapegoats will always be required
* Year Zero: Social change is an inherently slow process; rushing it yields disaster

Common types of computer bugs

A bugtracker is a ticket tracking system that is designed especially to manage problems (software bugs) with computer programs.

Typically bug tracking software allows the user to quickly enter bugs and search on them. In addition some allow users to specify a workflow for a bug that automates a bug's lifecycle.
Most bug tracking software allows the administrator of the system to configure what fields are included on a bug.
Having a bug tracking solution is critical for most systems. Without a good bug tracking solution bugs will eventually get lost or poorly prioritized.
Bugzilla is a general-purpose bug-tracking tool originally developed and used by the Mozilla Foundation. Since Bugzilla is web-based and is free software / open-source software, it is also the bug tracking tool of choice for many projects, both open source and proprietary.
Bugzilla relies on an installed web server (such as Apache) and a database management system (such as MySQL or PostgreSQL) to perform its work. Bugs can be submitted by anybody, and will be assigned to a particular developer. Various status updates for each bug are allowed, together with user notes and bug examples.
Bugzilla's notion of a bug is very general; for instance, mozilla.org uses it to track feature requests as well.

Requirements
Release notes such as those for Bugzilla 2.20 indicate the exact set of dependencies, which include:
* A compatible database server (often a version of MySQL)
* A suitable release of Perl 5
* An assortment of Perl modules
* A compatible web server such as Apache (though any web server that supports CGI can work)
* A suitable mail transfer agent such as Sendmail, qmail, Postfix, or Exim

Bug Impacts levels

Low impact
This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way.

Medium impact
This is a problem that a) affects a more isolated piece of functionality, b) occurs only at certain boundary conditions, c) has a workaround (where "don't do that" might be an acceptable answer to the user), d) occurs at only one or two customer sites, or e) is very intermittent.

High impact
This should be used for only serious problems, affecting many sites, with no workaround. Frequent or reproducible crashes/core dumps/GPFs would fall in this category, as would major functionality not working.

Urgent impact
This should be reserved for only the most catastrophic of problems. Data corruption, complete inability to use the product at almost any site, etc. For released products, an urgent bug would imply that shipping of the product should stop immediately, until the problem is resolved.

Common types of computer bugs (1)

* Divide by zero
* Infinite loops
* Arithmetic overflow or underflow
* Exceeding array bounds
* Using an uninitialized variable
* Accessing memory not owned (Access violation)
* Memory leak or Handle leak
* Stack overflow or underflow
* Buffer overflow
* Deadlock
* Off by one error
* Race hazard
* Loss of precision in type conversion
* ISO 9126, which classifies a bug as either a defect or a nonconformity
ISO 9126 is an international standard for the evaluation of software. It is being superseded by the SQuaRE project, ISO 25000:2005, which follows the same general concepts.

The standard is divided into four parts which address, respectively, the following subjects: quality model; external metrics; internal metrics; and quality in use metrics.

The quality model established in the first part of the standard, ISO 9126-1, classifies software quality in a structured set of factors as follows:

* Functionality - A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.
o Suitability
o Accuracy
o Interoperability
o Compliance
o Security
* Reliability - A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
o Maturity
o Recoverability
o Fault Tolerance
* Usability - A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
o Learnability
o Understandability
o Operability
* Efficiency - A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
o Time Behaviour
o Resource Behaviour
* Maintainability - A set of attributes that bear on the effort needed to make specified modifications.
o Stability
o Analysability
o Changeability
o Testability
* Portability - A set of attributes that bear on the ability of software to be transferred from one environment to another.
o Installability
o Conformance
o Replaceability
o Adaptability

The sub-characteristic Conformance is not listed above and applies to all characteristics. Examples are conformance to legislation concerning Usability or Reliability.

Each quality sub-characteristic (as adaptability) is further divided into attributes. An attribute is an entity which can be verified or measured in the software product. Attributes are not defined in the standard, as they vary between different software products.

Software product is defined in a broad sense: it encompasses executables, source code, architecture descriptions, and so on. As a result, the notion of user extends to operators as well as to programmers, who are users of components such as software libraries.

The standard provides a framework for organizations to define a quality model for a software product. In doing so, however, it leaves up to each organization the task of specifying precisely its own model. This may be done, for example, by specifying target values for quality metrics which evaluate the degree of presence of quality attributes.

Internal metrics are those which do not rely on software execution (static measures).

External metrics are applicable to running software.
Quality in use metrics are only available when the final product is used in real conditions.
Ideally, the internal quality determines the external quality, which in turn determines the results of quality in use.

This standard stems from the model established in 1977 by McCall and his colleagues, who proposed a model to specify software quality. The McCall quality model is organized around three types of Quality Characteristics:

* Factors (To specify): They describe the external view of the software, as viewed by the users.
* Criteria (To build): They describe the internal view of the software, as seen by the developer.
* Metrics (To control): They are defined and used to provide a scale and method for measurement.

ISO 9126 distinguishes between a defect and a nonconformity: a defect is "the nonfulfilment of intended usage requirements," whereas a nonconformity is "the nonfulfilment of specified requirements." A similar distinction is made between validation and verification, known as V&V in the testing trade.

Top Ten Tips for Bug Tracking

1. A good tester will always try to reduce the repro steps to the minimal steps to reproduce; this is extremely helpful for the programmer who has to find the bug.
2. Remember that the only person who can close a bug is the person who opened it in the first place. Anyone can resolve it, but only the person who saw the bug can really be sure that what they saw is fixed.

3. There are many ways to resolve a bug. FogBUGZ allows you to resolve a bug as fixed, won't fix, postponed, not repro, duplicate, or by design.

4. Not Repro means that nobody could ever reproduce the bug. Programmers often use this when the bug report is missing the repro steps.

5. You'll want to keep careful track of versions. Every build of the software that you give to testers should have a build ID number so that the poor tester doesn't have to retest the bug on a version of the software where it wasn't even supposed to be fixed.

6. If you're a programmer, and you're having trouble getting testers to use the bug database, just don't accept bug reports by any other method. If your testers are used to sending you email with bug reports, just bounce the emails back to them with a brief message: "please put this in the bug database. I can't keep track of emails."

7. If you're a tester, and you're having trouble getting programmers to use the bug database, just don't tell them about bugs - put them in the database and let the database email them.

8. If you're a programmer, and only some of your colleagues use the bug database, just start assigning them bugs in the database. Eventually they'll get the hint.

9. If you're a manager, and nobody seems to be using the bug database that you installed at great expense, start assigning new features to people using bugs. A bug database is a great "unimplemented feature" database, too.

10. Avoid the temptation to add new fields to the bug database. Every month or so, somebody will come up with a great idea for a new field to put in the database. You get all kinds of clever ideas, for example, keeping track of the file where the bug was found; keeping track of what % of the time the bug is reproducible; keeping track of how many times the bug occurred; keeping track of which exact versions of which DLLs were installed on the machine where the bug happened. It's very important not to give in to these ideas. If you do, your new bug entry screen will end up with a thousand fields that you need to supply, and nobody will want to input bug reports any more. For the bug database to work, everybody needs to use it, and if entering bugs "formally" is too much work, people will go around the bug database.

Tuesday, June 29, 2010

Screen Validation Checklist - Usability Conditions

  • Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
  • Is all date entry required in the correct format?
  • Have all pushbuttons on the screen been given appropriate Shortcut keys?
  • Do the Shortcut keys work correctly?
  • Do the menu options that apply to your screen have fast keys associated with them, and should they?
  • Does the Tab Order specified on the screen go in sequence from Top Left to bottom right? This is the default unless otherwise specified.
  • Are all read-only fields avoided in the TAB sequence?
  • Are all disabled fields avoided in the TAB sequence?
  • Can the cursor be placed in the microhelp text box by clicking on the text box with the mouse?
  • Can the cursor be placed in read-only fields by clicking in the field with the mouse?
  • Is the cursor positioned in the first input field or control when the screen is opened?
  • Is there a default button specified on the screen?
  • Does the default button work correctly?
  • When an error message occurs does the focus return to the field in error when the user cancels it?
  • When the user Alt+Tabs to another application, does this have any impact on the screen upon return to the application?
  • Do all field edit boxes indicate the number of characters they will hold by their length? E.g., a 30-character field should be visibly longer than a shorter field.

Web Testing Checklist about Security - Firewalls

1. Was the software installed correctly?
2. Are firewalls installed at adequate levels in the organization and architecture? (e.g., corporate data, human resources data, customer transaction files, etc.)
3. Have firewalls been tested? (e.g., to allow & deny access).
4. Is the security administrator aware of known firewall defects?
5. Is there a link to access control?
6. Are firewalls installed in effective locations in the architecture? (e.g., proxy servers, data servers, etc.)

Web Testing Checklist about Performance - Architecture

1. Has the database capacity been identified?
2. Has anticipated growth data been obtained?
3. Is the database self-contained?
4. Is the system architecture defined?
" Tiers
" Servers
" Network
5. Has the anticipated volume for initial test been defined - with allowance for future growth?
6. Has plan for vertical growth been identified?
7. Have the various environments been created?
8. Has historical experience with the databases and equipment been documented?
9. Has the current system diagram been developed?
10. Is load balancing available?
11. Have the types of programming languages been identified?
12. Can back end processes be accessed?

Web Testing Checklist about Security - Security Administration

1. Have security administration procedures been defined?
2. Is there a way to verify that security administration procedures are followed?
3. Are security audits performed?
4. Is there a person or team responsible for security administration?
5. Are checks & balances in place?
6. Is there an adequate backup for the security administrator?

Screen Validation Checklist - Data Integrity Conditions

  • Is the data saved when the window is closed by double clicking on the close box?
  • Check the maximum field lengths to ensure that there are no truncated characters.
  • Where the database requires a value (other than null), this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
  • Check maximum and minimum field values for numeric fields.
  • If numeric fields accept negative values, can these be stored correctly in the database, and does it make sense for the field to accept negative numbers?
  • If a set of radio buttons represents a fixed set of values such as A, B and C, then what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions which are not screen based, and thus the required initial values can be incorrect.)
  • If a particular set of data is saved to the database, check that each value gets saved fully to the database, i.e. beware of truncation (of strings) and rounding of numeric values.

Web Testing Checklist about Security - Viruses

1. Are virus detection tools in place?
2. Have the virus data files been updated on a current basis?
3. Are virus updates scheduled?
4. Is a response procedure for virus attacks in place?
5. Are notifications of updates to virus files obtained from the anti-virus software vendor?
6. Does the security administrator maintain an informational partnership with the anti-virus software vendor?
7. Does the security administrator subscribe to early warning e-mail services? (e.g., www.fooorg or http://www.bar.net/)
8. Has a key contact been defined for the notification of a virus presence?
9. Has an automated response been developed to respond to a virus presence?
10. Is the communication & training of virus prevention and response procedures to users adequate?

Web Testing Checklist about Performance - Test Environment

1. Does the test environment exist?
2. Is the environment self-contained?
3. Can one iteration of testing be performed in production?
4. Is a copy of production data available for testing?
5. Are end-users available for testing and analysis?
6. Will the test use virtual users?
7. Does the test environment mirror production?
8. Have the differences been documented? (constraints)
9. Is the test environment available after production?
10. Have version control processes been used to ensure the correct versions of applications and data in the test environment?
11. Have the time frames been identified for when you will receive the test data (globally)?
12. Are there considerations for fail-over recovery? Disaster recovery?
13. Are replacement servers available?
14. Have back-up procedures been written?

Web Testing Checklist about Security - Disaster Recovery

1. Have service levels been defined? (e.g., how long should recovery take?)
2. Are fail-over solutions needed?
3. Is there a way to reroute to another server in the event of a site crash?
4. Are executables, data, and content backed up on a defined interval appropriate for the level of risk?
5. Are disaster recovery process & procedures defined in writing? If so, are they current?
6. Have recovery procedures been tested?
7. Are site assets adequately insured?
8. Is a third-party "hot site" available for emergency recovery?
9. Has a Business Contingency Plan been developed to maintain the business while the site is being restored?
10. Have all levels in the organization gone through the needed training & drills?
11. Do support notification procedures exist & are they followed?
12. Do support notification procedures support a 24/7 operation?
13. Have criteria been defined to evaluate recovery completion / correctness?

GUI Testing Checklist - Specific Field Tests - Date Field Checks

  1. Assure that leap years are validated correctly & do not cause errors/miscalculations
  2. Assure that month code 00 and 13 are validated correctly & do not cause errors/miscalculations
  3. Assure that 00 and 13 are reported as errors
  4. Assure that day values 00 and 32 are validated correctly & do not cause errors/miscalculations
  5. Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/ miscalculations
  6. Assure that Feb. 30 is reported as an error
  7. Assure that century change is validated correctly & does not cause errors/ miscalculations
  8. Assure that out of cycle dates are validated correctly & do not cause errors/miscalculations
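
Several of these checks can be automated directly against the platform's date library, which rejects impossible dates. A sketch using Python's datetime for illustration:

    from datetime import date

    def is_valid_date(year: int, month: int, day: int) -> bool:
        """True if (year, month, day) is a real calendar date."""
        try:
            date(year, month, day)
            return True
        except ValueError:
            return False

    assert is_valid_date(2000, 2, 29)      # leap year (divisible by 400)
    assert not is_valid_date(1900, 2, 29)  # century non-leap year
    assert not is_valid_date(2010, 2, 30)  # Feb. 30 must be reported as an error
    assert not is_valid_date(2010, 13, 1)  # month code 13
    assert not is_valid_date(2010, 0, 1)   # month code 00
    assert not is_valid_date(2010, 6, 32)  # day value 32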

Web Testing Checklist about Security - Encryption

1. Are encryption systems/levels defined?
2. Is there a standard of what is to be encrypted?
3. Are customers compatible in terms of encryption levels and protocols?
4. Are encryption techniques for transactions being used for secured transactions?
- Secure socket layer (SSL)
- Virtual Private Networks (VPNs)
5. Have the encryption processes and standards been documented?

Web Testing Checklist about Security - Data Security

1. Are data inputs adequately filtered?
2. Are data access privileges identified? (e.g., read, write, update and query)
3. Are data access privileges enforced?
4. Have data backup and restore processes been defined?
5. Have data backup and restore processes been tested?
6. Have file permissions been established?
7. Have file permissions been tested?
8. Have sensitive and critical data been allocated to secure locations?
9. Have data archival and retrieval procedures been defined?
10. Have data archival and retrieval procedures been tested?

Checklist: Developing Windows Application

Modal Windows - Oftentimes, modal windows which must be acted upon end up hidden behind standard windows. This gives the user the impression that the system has locked up.


Special Characters - Special characters may not be used on some Windows entry screens; there may also be some conflicts with converting data or using data from other systems.

Printer Configuration - Although Windows is designed to handle the printer setup for most applications, there are formatting differences between printers and printer types. LaserJet printers do not behave the same as inkjets, nor do 300, 600, or 1200 DPI laser printers behave the same across platforms.

Date Formats - The varying date formats sometimes cause troubles when they are being displayed in windows entry screens. This situation could occur when programs are designed to handle a YY/MM/DD format and the date format being used is YYYY/MMM/DD.
Screen Savers - Some screen savers such as After Dark are memory or resource ‘hogs’ and have been known to cause troubles when running other applications.

Speed Keys - Verify that there are no conflicting speed keys on the various screens. This is especially important on screens where the buttons change.

Virus Protection Software - Some virus protection software can be configured too strictly. This may cause applications to run slowly or incorrectly.

Disk Compression Tools - Some disk compression software may cause our applications to run slowly or incorrectly.

Multiple Open Windows - How does the system handle having multiple open windows? Are there any resource errors?

Test Multiple Environments - Programs need to be tested under multiple configurations. Different configurations can produce different results.

Test Multiple Operating Systems - Programs running under Win 95, Win NT, and Windows 3.11 do not behave the same in all environments.

DLL's - Corrupted DLL's will sometimes cause applications not to execute or, more damaging, to run sporadically.

Incorrect DLL Versions - Incorrect versions of DLL's will sometimes cause our applications not to execute or, more damaging, to run sporadically.

Missing DLL's - Missing DLL's will usually cause our applications not to execute.

Standard Program Look & Feel - The basic windows look & feel should be consistent across all windows and the entire application. Windows buttons, windows and controls should follow the same standards for sizes.

Tab Order - When pressing the TAB key to change focus from object to object, the progression should be logical.

Completion of Edits - The program should force the completion of edits for any screen before users have a chance to exit the program.

Saving Screen Sizes - Does the user have an opportunity to save the current screen sizes and position?

Operational Speed - Make sure that the system operates at a functional speed, databases, retrieval, and external references.

Testing Under Loaded Environments - Testing system functions when running various software programs "RESOURCE HOGS" (MS Word, MS Excel, WP, etc.).

Resource Monitors - Resource monitors help track Windows resources which when expended will cause GPF’s.

Video Settings - Programmers tend to program at 800 x 600 or higher resolution; when you run these programs at a default 640 x 480, they tend to overfill the screen. Make sure the application is designed for the resolution used by customers.

Clicking on Objects Multiple Times - Will you get multiple instances of the same object or window with multiple clicks?

Saving Column Orders - Can the user save the orders of columns of the display windows?

Displaying Messages saying that the system is processing - When doing system processing do we display some information stating what the system is doing?

Clicking on Other Objects While the System is Processing - Is processing interrupted? Do unexpected events occur after processing finishes?

Large Fonts / Small Fonts - Mixed results occur when a program is designed under one Windows font-size setting and executed under another.

Maximizing / Minimizing all windows - Do the actual screen elements resize? Is all of the available screen space used when the screen is maximized?

Setup Program - Does your setup program function correctly across multiple OS’s? Does the program prompt the user before overwriting existing files?

Consistency in Operation - Consistent behavior of the program in all screens and the overall application.

Multiple Copies of the same Window - Can the program handle multiple copies of the same window? Can all of these windows be edited concurrently?

Confirmation of Deletes - All deletes should require confirmations of the process before execution.

Selecting alternative language options - Will your program handle the use of other languages (FRENCH, SPANISH, ITALIAN, etc.)

Web application security testing checklist

Set everyone's expectations
The Golden Rule of performing security assessments is to make sure that everyone affected by your testing is on the same page. Start by working with your project sponsor (i.e., CIO, VP of audit, IT director or compliance manager) and determine the business goals for what you're doing.

Gather good tools
As with all things security-related, your tools will make or break your assessments. In fact, the number of legitimate vulnerabilities discovered is directly proportional to the quality of your security tools. There are several open source Web application testing tools that I depend on in my work -- most of which are available in the BackTrack suite of tools.
Look at your application from every perspective
Perform a reconnaissance on your Web application and see what the world can see using Google and Google-hacking tools such as Foundstone's SiteDigger. Odds are you won't find a lot of stuff, but you'll never know until you check. Next, run a Web vulnerability scanner such as the ones I mentioned above. Where you can, be sure to run your scans as both an unauthenticated and untrusted outsider as well as an authenticated and trusted user (via basic HTTP, NTLM or form authentication).

Test for underlying weaknesses
One of the most commonly overlooked areas of Web application testing is scanning the underlying operating system and installed applications.

Go back and verify your scanner findings
As much as the marketing machine wants us to think that security testing tools are void of any shortcomings, they aren't. Don't believe what you see and hear. Get in and validate that the security weaknesses they discovered are legitimate. Validating and reporting on genuine security vulnerabilities in the proper context will save everyone time and effort in the long run. It will also instill confidence in others and make them want to take you seriously.

Manually check for weaknesses
Don't stop now. Your security testing tools may have uncovered a lot of weaknesses in your Web application, but there are likely several more things left to exploit. This is where your human context and Web usage expertise come into play. Get in and poke around in the application a bit more to see what else can be done from a malicious point of view.
Test your source code
Until you look at your Web application's source code, you won't be able to say with conviction that everything's been tested.

Check List For Web Site

1. Are fonts consistent within functionality?
2. Are the company display standards followed?
- Logos
- Font size
- Colors
- Scrolling
- Object use
3. Are legal requirements met?
4. Is content sequenced properly?
5. Are web-based colors used?
6. Is there appropriate use of white space?
7. Are tools provided (as needed) in order to access the information?
8. Are attachments provided in a static format?
9. Is spelling and grammar correct?
10. Are alternative presentation options available (for limited browsers or performance issues)?

Web Testing Checklist about Compatibility and Portability - Audience

1. Has the audience been defined?
2. Is there a process for identifying the audience?
3. Is the process for identifying the audience current?
4. Is the process reviewed periodically?
5. Is there appropriate use of audience segmentation?
6. Is the application compatible with the audience experience level?
7. Where possible, has the audience readiness been ensured?
8. Are text version and/or upgrade links present?

How to Interpret and Use Information

1. Is terminology appropriate to the intended audience?
2. Are clear instructions provided?
3. Are there help facilities?
4. Are there appropriate external links?
5. Is expanded information provided on services and products? (why and how)
6. Are multiple views/layouts available?

Web Testing Checklist about Usability

Navigation
1. Is terminology consistent?
2. Are navigation buttons consistently located?
3. Is navigation to the correct/intended destination?
4. Is the flow to destination (page to page) logical?
5. Is the flow to destination the page top-bottom left to right?
6. Is there a logical way to return?
7. Are the business steps within the process clear or mapped?
8. Are navigation standards followed?
Ease of Use
1. Are help facilities provided as appropriate?
2. Are selection options clear?
3. Are ADA standards followed?
4. Is the terminology appropriate to the intended audience?
5. Is there minimal scrolling and resizeable screens?
6. Do menus load first?
7. Do graphics have reasonable load times?
8. Are there multiple paths through site (search options) that are user chosen?
9. Are messages understandable?
10. Are confirmation messages available as appropriate?

Win Runner Basics

For new users, how do you use WinRunner to test software applications automatically?

A: The following steps may be of help to you when automating tests

MOST IMPORTANT - write a set of manual tests to test your application - you cannot just jump in with WR and expect to produce a set of meaningful tests. Also as you will see from the steps below this set of manual tests will form your plan to tackle automation of your application.
Once you have a set of manual tests look at them and decide which ones you can automate using your current level of expertise. NOTE that there will be tests that are not suitable for automation, either because you can't automate them, or they are just not worth the effort.
Automate the tests selected in step 2 - initially you will use capture/replay using the steps in the manual test, but you will soon see that to produce meaningful and informative tests you need to add additional code to your test, e.g. use tl_step() to give test results. As this process continues you will soon see that there are operations that you repeatedly do in multiple tests - these are then candidates for user-defined functions and compiled modules.
Once you have completed step 3 go back to step 2 and you will find that the knowledge you have gained in step 3 will now allow you to select some more tests that you can do.
If you continue going through this loop you will gradually become more familiar with WR and TSL, in fact you will probably find that eventually you do very little capture/replay and more straight TSL coding.

How do you use WinRunner to check whether a record was updated, deleted, or inserted?
Using WinRunner's checkpoint features: Create -> DB Checkpoint -> Runtime Record Check.
How to use WinRunner to test the login screen ?
A: When you enter a wrong ID or password, you will get a dialog box.
1. Record this dialog box.
2. Use win_exists to check whether the dialog box exists.
3. Playback: enter a wrong ID or password; if win_exists returns true, the application is working correctly. Enter a valid ID and password; if win_exists returns false, the application is working correctly.

After clicking on "login" button, they opens other windows of the web application, how to check that page is opened or not
When your expecting "Window1" to come up after clicking on Login...
Capture the window in the GUI Map. No two windows in an web based application can have the same
html_name property. Hence, this would be the property to check.
First try a simple win_exists("window1",

QTP Basics

Full form of QTP ?
Quick Test Professional

What's the QTP ?
QTP is Mercury Interactive's functional testing tool.

Which scripting language used by QTP ?
QTP uses VBScript.

What's the basic concept of QTP ?
QTP is based on two concepts-
* Recording
* Playback

How many types of recording facility are available in QTP ?
QTP provides three types of recording methods-
* Context Recording (Normal)
* Analog Recording
* Low Level Recording

How many types of Parameters are available in QTP ?
QTP provides three types of Parameters-
* Method Argument
* Data Driven
* Dynamic

What's the QTP testing process ?
QTP testing process consists of seven steps-
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Run
* Analyze
* Report Defects

What's the Active Screen ?
It provides snapshots of your application as it appeared when you performed each step during the recording session.

What's the Test Pane ?
Test Pane contains Tree View and Expert View tabs.

What's Data Table ?
It assists you in parameterizing the test.

What's the Test Tree ?
It provides a graphical representation of the operations you have performed on your application.

Which environments does QTP support ?
ERP/ CRM
Java/ J2EE
VB, .NET
Multimedia, XML
Web Objects, ActiveX controls
SAP, Oracle, Siebel, PeopleSoft
Web Services, Terminal Emulator
IE, NN, AOL

How can you view the Test Tree ?
The Test Tree is displayed through Tree View tab.

What's the Expert View ?
Expert View displays the test script.

Which key is used for Normal Recording ?
F3

Which key is used to run the test script ?
F5

Which key is used to stop the recording ?
F4

Which key is used for Analog Recording ?
Ctrl+Shift+F4

Which key is used for Low Level Recording ?
Ctrl+Shift+F3

Which key is used to switch between Tree View and Expert View ?
Ctrl+Tab

What's the Transaction ?
You can measure how long it takes to run a section of your test by defining transactions.

Where you can view the results of the checkpoint ?
You can view the results of the checkpoints in the Test Result Window.

What's the Standard Checkpoint ?
A Standard Checkpoint checks the property values of an object in your application or web page.

Which environment are supported by Standard Checkpoint ?
Standard Checkpoints are supported in all add-in environments.

What's the Image Checkpoint ?
An Image Checkpoint checks the value of an image in your application or web page.

Which environments are supported by Image Checkpoint ?
Image Checkpoints are supported only in the Web environment.

What's the Bitmap Checkpoint ?
Bitmap Checkpoint checks the bitmap images in your web page or application.

Which environments are supported by Bitmap Checkpoints ?
Bitmap Checkpoints are supported in all add-in environments.

What's the Table Checkpoint ?
A Table Checkpoint checks the information within a table.

Which environments are supported by Table Checkpoint ?
Table Checkpoints are supported only in the ActiveX environment.

What's the Text Checkpoint ?
A Text Checkpoint checks that a text string is displayed in the appropriate place in your application or web page.

Which environments are supported by Text Checkpoint ?
Text Checkpoints are supported in all add-in environments.

Note:
* QTP records each step you perform and generates a test tree and test script.
* By default, QTP records in normal recording mode.
* If you are creating a test on web object, you can record your test on one browser and run it on another browser.
* Analog Recording and Low Level Recording require more disk space than normal recording mode.

Monday, June 28, 2010

Software Testing Basics

What's Ad Hoc Testing ?

A testing where the tester tries to break the software by randomly trying functionality of software.
What's the Accessibility Testing ?
Testing that determines if software will be usable by people with disabilities.
What's the Alpha Testing ?
Alpha Testing is conducted at the developer's site, in a controlled environment, by the end user of the software.
What's the Beta Testing ?
Testing the application after installation at the client's site.
What is Component Testing ?
Testing of individual software components (Unit Testing).
What's Compatibility Testing ?
In compatibility testing we test that the software is compatible with the other elements of the system.
What is Concurrency Testing ?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
What is Conformance Testing ?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
What is Context Driven Testing ?
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
What is Data Driven Testing ?
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
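As an illustrative sketch only (the framework choice and all names are mine, not from the original): in C# with NUnit, the same test logic can be driven by multiple data rows, with inline attributes standing in for rows from an external file or spreadsheet.

using NUnit.Framework;

[TestFixture]
public class DiscountTests
{
    // Each TestCase row stands in for a row of externally maintained data;
    // the test body is parameterized by it.
    [TestCase(100.0, 0.10, 90.0)]
    [TestCase(200.0, 0.25, 150.0)]
    public void ApplyDiscount_ReturnsExpectedTotal(double price, double rate, double expected)
    {
        double actual = price * (1 - rate);
        Assert.AreEqual(expected, actual, 0.001);
    }
}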
What is Conversion Testing ?
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
What is Dependency Testing ?
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
What is Depth Testing ?
A test that exercises a feature of a product in full detail.
What is Dynamic Testing ?
Testing software through executing it. See also Static Testing.
What is Endurance Testing ?
Checks for memory leaks or other problems that may occur with prolonged execution.
What is End-to-End testing ?
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
What is Exhaustive Testing ?
Testing which covers all combinations of input values and preconditions for an element of the software under test.
What is Gorilla Testing ?
Testing one particular module, functionality heavily.
What is Installation Testing ?
Confirms that the application under test installs, configures, and uninstalls correctly across the supported environments and is operational afterward.
What is Localization Testing ?
This term refers to adapting software to a specific locality.
What is Loop Testing ?
A white box testing technique that exercises program loops.
What is Mutation Testing ?
Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
What is Monkey Testing ?
Testing a system or an application on the fly, i.e., just a few tests here and there to ensure the system or application does not crash.
What is Positive Testing ?
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
What is Negative Testing ?
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.
What is Path Testing ?
Testing in which all paths in the program source code are tested at least once.
What is Performance Testing ?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
What is Ramp Testing ?
Continuously raising an input signal until the system breaks down.
What is Recovery Testing ?
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
What is the Re-testing testing ?
Retesting - testing the functionality of the application again to confirm that a reported defect has been fixed.
What is the Regression testing ?
Regression - checking that changes in the code have not affected existing, working functionality.
What is Sanity Testing ?
Brief test of major functional elements of a piece of software to determine if it is basically operational.
What is Scalability Testing ?
Performance testing focused on ensuring the application under test gracefully handles increases in work load.
What is Security Testing ?
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
What is Stress Testing ?
Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
What is Smoke Testing ?
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
What is Soak Testing ?
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
What's the Usability testing ?
Usability testing is for user friendliness.
What's the User acceptance testing ?
User acceptance testing is determining if software is satisfactory to an end-user or customer.
What's the Volume Testing ?
Volume testing subjects the system to a large volume of data.

Software testing Questions and Answers

Q1. What is Software Testing?
A1. Software testing is the process used to measure the quality of developed computer software. Testing is a process of technical investigation that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors.

Q2. What is the purpose of Testing?
A2. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality.

Q3. What types of testing do testers perform?
A3. Regression Testing is carried out both manually and through automation. It is mostly done using automation tools, as repeated testing of the same application is required. It is done to verify the application after new functionality is added, after previous bugs are fixed, and after design changes. Whenever any change takes place in the application, we should make sure the previous functionality is still available.

Q4. What is the difference between load testing and intersystem testing?
A4. Inter System Testing: This type of testing is also called end-to-end testing. In this testing, the test engineer validates that the application build co-exists with other software applications. Load Testing: Also called scalability testing. In this testing, we run the application under the customer-expected load and configuration to estimate the load performance.
That is, how many users can simultaneously access the application, be it a popular website or a software application.

Q5. What is the difference between load and stress testing?

A5. In Load testing, we test the behavior/performance of the software by varying the load on it from the minimum to the maximum specified in the 'Technical Specifications' document.
In Stress Testing, our main intention is to test the software's behavior once it reaches the point where it actually breaks. Here we gradually increase the load while denying resources (e.g., RAM) needed to run it, and test how the application behaves under stressed conditions.
In short, load testing stays within the minimum and maximum limits specified in the Technical Requirements document, while stress testing deliberately pushes beyond them.

Q6. What do we analyze in load testing?
A6. We analyze a number of things in load testing, to name a few:
1. Response times - Do they appear consistent? Is there any degradation over a period of time, or are they higher than expected?
2. Performance of the hardware components - the mid-tier or application server, the HTTP server, and the database server. CPU utilization, the application server's JVM memory heap, and the database server's CPU are important for assessing performance.

Q7. What is the difference between Load testing and Performance Testing?
A7. Performance testing verifies loads, volume and response time as defined by requirements while load testing is testing an application under heavy loads to determine at what point the system response time degrades.

Q8. What are the uses of load testing? What are deferred graphs? And what are the different components in Loadrunner?

A8. Load testing is used to find the minimum and maximum load the application can handle. It ensures that a fixed number of users can use the application frequently or concurrently. This exercise is done and verified using a load testing tool.
Deferred graphs give information about CPU time usage, throughput, response time, and load against the varying number of users. The components are the Generator, the Controller, and LoadRunner Analysis.

Q9. What is the difference between Scalability testing & Load Testing?
A9. Load Testing - Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.
Scalability Testing - Scalability testing focuses on the performance of your Web site, hardware and software products, and internal applications at all the stages from minimum to maximum load.

Friday, June 25, 2010

C# Interview Questions with Answers

What’s the advantage of using System.Text.StringBuilder over System.String?
StringBuilder is more efficient in cases where a lot of manipulation is done to the text. Strings are immutable, so each time a string is operated on, a new instance is created.
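
A minimal C# sketch of the difference (class name hypothetical):

using System;
using System.Text;

class StringBuilderDemo
{
    static void Main()
    {
        // Concatenating strings in a loop creates a new string each pass;
        // StringBuilder mutates one internal buffer instead.
        var sb = new StringBuilder();
        for (int i = 0; i < 1000; i++)
        {
            sb.Append(i).Append(',');
        }
        string result = sb.ToString();   // single final allocation
        Console.WriteLine(result.Length);
    }
}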


Can you store multiple data types in System.Array?
No.

What’s the difference between the System.Array.CopyTo() and System.Array.Clone()?
Both produce shallow copies when the elements are reference types. Clone() allocates and returns a new array containing the same elements, while CopyTo() copies the elements into an existing array starting at a specified index.
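
A small sketch of the two calls (array contents are arbitrary):

using System;

class CopyDemo
{
    static void Main()
    {
        int[] source = { 1, 2, 3 };

        // Clone() allocates a brand-new array with the same elements.
        int[] cloned = (int[])source.Clone();

        // CopyTo() fills an existing array, starting at the given index.
        int[] target = new int[5];
        source.CopyTo(target, 1);   // target is now { 0, 1, 2, 3, 0 }

        Console.WriteLine(cloned.Length + " " + target[1]);
    }
}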

How can you sort the elements of the array in descending order?
By calling the Sort() and then the Reverse() methods.
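
For example:

using System;

class SortDemo
{
    static void Main()
    {
        int[] values = { 3, 1, 4, 1, 5 };
        Array.Sort(values);      // ascending: 1 1 3 4 5
        Array.Reverse(values);   // descending: 5 4 3 1 1
        foreach (int v in values)
            Console.Write(v + " ");
    }
}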

What’s the .NET datatype that allows the retrieval of data by a unique key?
Hashtable.

What’s class SortedList underneath?
A sorted Hashtable.

Will finally block get executed if the exception had not occurred?
Yes.

What’s the C# equivalent of C++ catch (…), which was a catch-all statement for any possible exception?
A catch block that catches the exception of type System.Exception. You can also omit the parameter data type in this case and just write catch {}.

Can multiple catch blocks be executed?
No. Once the proper catch block fires, control is transferred to the finally block (if there is one), and then to whatever follows the finally block.
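
A sketch tying the last two answers together (the exception type is chosen for illustration): only the first matching catch block runs, then the finally block.

using System;

class CatchDemo
{
    static void Main()
    {
        try
        {
            throw new InvalidOperationException("boom");
        }
        catch (InvalidOperationException e)   // first matching catch wins
        {
            Console.WriteLine("specific handler: " + e.Message);
        }
        catch (Exception)                     // skipped for the exception above
        {
            Console.WriteLine("catch-all handler");
        }
        finally
        {
            Console.WriteLine("finally always runs");
        }
    }
}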

Why is it a bad idea to throw your own exceptions?
Well, if at that point you know that an error has occurred, then why not write the proper code to handle that error instead of passing a new Exception object to the catch block? Throwing your own exceptions signifies some design flaws in the project.

What’s a delegate?
A delegate object encapsulates a reference to a method. In C++ they were referred to as function pointers.

What’s a multicast delegate?
 It’s a delegate that points to and eventually fires off several methods.
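
A minimal sketch (delegate and method names hypothetical):

using System;

class DelegateDemo
{
    delegate void Notify(string message);   // encapsulates a method reference

    static void ToConsole(string m) { Console.WriteLine("console: " + m); }
    static void ToLog(string m)     { Console.WriteLine("log: " + m); }

    static void Main()
    {
        Notify notify = ToConsole;
        notify += ToLog;            // multicast: now references both methods
        notify("build finished");   // invokes ToConsole, then ToLog, in order
    }
}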

How’s the DLL Hell problem solved in .NET?
 Assembly versioning allows the application to specify not only the library it needs to run (which was available under Win32), but also the version of the assembly.

What are the ways to deploy an assembly?
An MSI installer, a CAB archive, and the XCOPY command.

What’s a satellite assembly?
When you write a multilingual or multi-cultural application in .NET, and want to distribute the core application separately from the localized modules, the localized assemblies that modify the core application are called satellite assemblies.

What namespaces are necessary to create a localized application?
System.Globalization, System.Resources.

What’s the difference between // comments, /* */ comments and /// comments?
 Single-line, multi-line and XML documentation comments.
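
Side by side:

/// <summary>
/// XML documentation comment; extracted when compiling with /doc.
/// </summary>
public class CommentDemo
{
    // single-line comment

    /* multi-line
       comment */
}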

How do you generate documentation from the C# file commented properly with a command-line compiler?
Compile it with a /doc switch.

What’s the difference between the <c> and <code> XML documentation tags?
<c> marks a single-line code example; <code> marks a multiple-line code example.

Is XML case-sensitive?
Yes, so <Student> and <student> are different elements.

What debugging tools come with the .NET SDK?
CorDBG – command-line debugger, and DbgCLR – graphic debugger. Visual Studio .NET uses the DbgCLR. To use CorDbg, you must compile the original C# file using the /debug switch.

What does the This window show in the debugger?
 It points to the object that’s pointed to by this reference. Object’s instance data is shown.

What does assert() do?
In debug compilation, assert takes in a Boolean condition as a parameter, and shows the error dialog if the condition is false. The program proceeds without any interruption if the condition is true.
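
A short sketch (method hypothetical):

using System;
using System.Diagnostics;

class AssertDemo
{
    static int Divide(int a, int b)
    {
        // In a debug build, a false condition raises the assertion dialog;
        // in a release build the call is compiled away.
        Debug.Assert(b != 0, "divisor must not be zero");
        return a / b;
    }

    static void Main()
    {
        Console.WriteLine(Divide(10, 2));
    }
}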

What’s the difference between the Debug class and Trace class?
Documentation looks the same. Use Debug class for debug builds, use Trace class for both debug and release builds.

Why are there five tracing levels in System.Diagnostics.TraceSwitch?
The tracing dumps can be quite verbose, and for some applications that are constantly running you run the risk of overloading the machine and the hard drive. Five levels range from None to Verbose, allowing you to fine-tune the tracing activities.

Where is the output of TextWriterTraceListener redirected?
To the Console or a text file depending on the parameter passed to the constructor.

How do you debug an ASP.NET Web application?
Attach the aspnet_wp.exe process to the DbgClr debugger.

What are three test cases you should go through in unit testing?
Positive test cases (correct data, correct output), negative test cases (broken or missing data, proper handling), exception test cases (exceptions are thrown and caught properly).
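
An illustrative NUnit sketch of the three cases (the framework choice and names are my assumption, not from the original):

using System;
using NUnit.Framework;

[TestFixture]
public class ParserTests
{
    [Test]   // positive: correct data, correct output
    public void Parse_ValidNumber_ReturnsValue()
    {
        Assert.AreEqual(42, int.Parse("42"));
    }

    [Test]   // negative: broken data is handled properly
    public void Parse_Empty_FailsGracefully()
    {
        int result;
        Assert.IsFalse(int.TryParse("", out result));
    }

    [Test]   // exception: thrown and caught as expected
    public void Parse_Garbage_Throws()
    {
        Assert.Throws<FormatException>(() => int.Parse("abc"));
    }
}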

Can you change the value of a variable while debugging a C# application?
Yes, if you are debugging via Visual Studio.NET, just go to Immediate window.
What are advantages and disadvantages of Microsoft-provided data provider classes in ADO.NET?
SQLServer.NET data provider is high-speed and robust, but requires SQL Server license purchased from Microsoft. OLE-DB.NET is universal for accessing other sources, like Oracle, DB2, Microsoft Access and Informix, but it’s a .NET layer on top of OLE layer, so not the fastest thing in the world. ODBC.NET is a deprecated layer provided for backward compatibility to ODBC engines.

What’s the role of the DataReader class in ADO.NET connections?
It provides a forward-only, read-only stream of rows from the data source when the command is executed.

What is the wildcard character in SQL?
Let’s say you want to query the database with LIKE for all employees whose name starts with La. The wildcard character is %; the proper query with LIKE would involve ‘La%’.
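
A sketch combining the two previous answers; the connection string, table, and column names are hypothetical:

using System;
using System.Data.SqlClient;

class ReaderDemo
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=Sample;Integrated Security=true"))
        using (var cmd = new SqlCommand("SELECT Name FROM Employees WHERE Name LIKE 'La%'", conn))
        {
            conn.Open();
            // ExecuteReader returns a forward-only, read-only stream of rows.
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}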

Explain ACID rule of thumb for transactions.
Transactions must be Atomic (each is one unit of work and does not depend on previous or following transactions), Consistent (data is either committed or rolled back; there is no “in-between” state where something has been updated and something hasn’t), Isolated (no transaction sees the intermediate results of the current transaction), and Durable (committed values persist even if the system crashes right after).
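
A sketch of atomicity in ADO.NET (connection string and SQL hypothetical): both updates commit together or neither does.

using System.Data.SqlClient;

class TransactionDemo
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=Bank;Integrated Security=true"))
        {
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();
            try
            {
                new SqlCommand("UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1", conn, tx).ExecuteNonQuery();
                new SqlCommand("UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2", conn, tx).ExecuteNonQuery();
                tx.Commit();     // both changes become durable together
            }
            catch
            {
                tx.Rollback();   // no "in-between" state survives
                throw;
            }
        }
    }
}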

What connections does Microsoft SQL Server support?
Windows Authentication (via Active Directory) and SQL Server authentication (via Microsoft SQL Server username and passwords).

Which one is trusted and which one is untrusted?
 Windows Authentication is trusted because the username and password are checked with the Active Directory, the SQL Server authentication is untrusted, since SQL Server is the only verifier participating in the transaction.

Why would you use untrusted verification?
Web Services might use it, as well as non-Windows applications.

What does the parameter Initial Catalog define inside Connection String?
The database name to connect to.

What’s the data provider name to connect to an Access database?
The Microsoft Jet OLE DB provider (Microsoft.Jet.OLEDB.4.0), used through the OLE DB .NET data provider.

What does the Dispose method do with the connection object?
It closes the connection and releases the resources it holds; the memory itself is reclaimed later by the garbage collector.

What is a pre-requisite for connection pooling?
Multiple processes must agree that they will share the same connection, where every parameter is the same, including the security settings.

EJB CONTINUES ... 7

QUESTION: What are the enhancements in EJB 2.0 specification with respect to Asynchronous communication?
ANSWER: EJB 2.0 mandates integration between JMS and EJB.
We have specified the integration of Enterprise JavaBeans with the Java Message Service, and have introduced message-driven beans. A message-driven bean is a stateless component that is invoked by the container as a result of the arrival of a JMS message. The goal of the message-driven bean model is to make developing an enterprise bean that is asynchronously invoked to handle the processing of incoming JMS messages as simple as developing the same functionality in any other JMS MessageListener.
QUESTION: What are the enhancements in EJB 2.0 with respect to CMP?
ANSWER: EJB 2.0 extends CMP to include far more robust modeling capability, with support for declarative management of relationships between entity EJBs. Developers no longer need to re-establish relationships between the various beans that make up their application -- the container will restore the connections automatically as beans are loaded, allowing bean developers to navigate between beans much as they would between any standard Java objects.
EJB 2.0 also introduces for the first time a portable query language, based on the abstract schema, not on the more complex database schema. This provides a database and vendor-independent way to find entity beans at run time, based on a wide variety of search criteria.

QUESTION: Can you briefly describe about local interfaces?
ANSWER: EJB was originally designed around remote invocation using the Java Remote Method Invocation (RMI) mechanism, and later extended to support the standard CORBA transport for these calls using RMI/IIOP. This design allowed for maximum flexibility in developing applications without consideration for the deployment scenario, and was a strong feature in support of the goal of component reuse in J2EE.
Many developers are using EJBs locally -- that is, some or all of their EJB calls are between beans in a single container.
With this feedback in mind, the EJB 2.0 expert group has created a local interface mechanism. The local interface may be defined for a bean during development, to allow streamlined calls to the bean if a caller is in the same container. This avoids the RMI overhead, such as marshalling. This facility will thus improve the performance of applications in which co-location is planned.
Local interfaces also provide the foundation for container-managed relationships among entity beans with container-managed persistence.

QUESTION: What are the special design care that must be taken when you work with local interfaces?
ANSWER: It is important to understand that the calling semantics of local interfaces are different from those of remote interfaces. For example, remote interfaces pass parameters using call-by-value semantics, while local interfaces use call-by-reference.
This means that in order to use local interfaces safely, application developers need to carefully consider potential deployment scenarios up front, then decide which interfaces can be local and which remote, and finally, develop the application code with these choices in mind.
While EJB 2.0 local interfaces are extremely useful in some situations, the long-term costs of these choices, especially when changing requirements and component reuse are taken into account, need to be factored into the design decision.

QUESTION: What happens if remove( ) is never invoked on a session bean?
ANSWER: In the case of a stateless session bean it may not matter whether we call it or not, as in both cases nothing is done. The number of beans in the cache is managed by the container.
In the case of a stateful session bean, the bean may be kept in the cache until either the session times out, in which case the bean is removed, or there is a requirement for memory, in which case the data is cached and the bean is sent to the free pool.

QUESTION: What is the difference between creating a distributed application using RMI and using a EJB architecture?
ANSWER: It is possible to create the same application using RMI and EJB. But in the case of EJB, the container provides the requisite services to the component if we use the proper syntax. It thus makes development easier and less error-prone, and encourages the use of proven code and methodology. The investment in an application server is mandatory in that case, but this investment is warranted because it results in less complex and more maintainable code for the client, which is what the end client wants. Almost all the leading application servers provide load balancing and performance tuning techniques. In the case of RMI, we have to code the services ourselves and include in the program the way to invoke them.

QUESTION: Why would a client application use JTA transactions? (JTA)
ANSWER: One possible example would be a scenario in which a client needs to employ two (or more) session beans, where each session bean is deployed on a different EJB server and each bean performs operations against external resources (for example, a database) and/or is managing one or more entity beans. In this scenario, the client's logic could require an all-or-nothing guarantee for the operations performed by the session beans; hence, the session bean usage could be bundled together with a JTA UserTransaction object.
In the previous scenario, however, the client application developer should address the question of whether or not it would be better to encapsulate these operations in yet another session bean, and allow the session bean to handle the transactions via the EJB container. In general, lightweight clients are easier to maintain than heavyweight clients. Also, EJB environments are ideally suited for transaction management.

...
Context c = new InitialContext();
UserTransaction ut = (UserTransaction) c.lookup("java:comp/UserTransaction");
ut.begin();
// perform multiple operations...
ut.commit();
...