Wednesday, 16 March 2011

Web Testing

What is Web Testing?

Web testing, in simple terms, is checking your web application for potential bugs before it is made live, i.e. before the code is moved into the production environment.

During this stage, issues such as web application security, the functioning of the site, its accessibility to disabled as well as regular users, and its ability to handle traffic are checked.

Web Application Testing Checklist:

Some or all of the following testing types may be performed depending on your web testing requirements.


1. Functionality Testing:

This is used to check whether your product works as per the specifications you intended for it, as well as the functional requirements you charted out for it in your development documentation. Testing activities included:

Test that all links in your webpages are working correctly and make sure there are no broken links (a minimal automated check is sketched after this list). Links to be checked will include -
Outgoing links
Internal links
Anchor Links
MailTo Links
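For outgoing and internal links, a basic automated check can be scripted. The sketch below is only illustrative: it assumes the `requests` and `beautifulsoup4` packages are installed, the base URL is a placeholder for your own application, and mailto/anchor links are skipped because they need a different kind of check.

```python
# Minimal sketch of an automated broken-link check (placeholder URL).
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

BASE_URL = "https://example.com"  # hypothetical application under test

def find_broken_links(page_url):
    """Return (url, status) pairs for links that do not respond with 2xx/3xx."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    broken = []
    for anchor in soup.find_all("a", href=True):
        href = anchor["href"]
        if href.startswith(("mailto:", "#")):    # mailto/anchor links checked separately
            continue
        target = urljoin(page_url, href)         # resolves internal (relative) links
        try:
            status = requests.head(target, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            broken.append((target, status))
    return broken

if __name__ == "__main__":
    for url, status in find_broken_links(BASE_URL):
        print(f"BROKEN: {url} -> {status}")
```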



Test that forms are working as expected (see the Selenium sketch after this list). This will include -
Scripting checks on the form are working as expected. For example, if a user does not fill a mandatory field in a form, an error message is shown.
Check default values are being populated
Once submitted, the data in the form is saved to a live database or linked to a working email address
Forms are optimally formatted for better readability
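The mandatory-field example above can be automated with Selenium. The sketch below is purely illustrative: the URL, field IDs and error-element ID are hypothetical, and it assumes Selenium 4 with a Chrome driver installed.

```python
# Hedged Selenium sketch: submit a form with a mandatory field left empty and
# assert that a validation error appears; also check a default value.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/register")           # hypothetical form page
    driver.find_element(By.ID, "email").clear()           # leave mandatory field empty
    driver.find_element(By.ID, "submit").click()
    error = driver.find_element(By.ID, "email-error")     # hypothetical error element
    assert error.is_displayed(), "Expected validation error was not shown"

    # Default-value check: a pre-populated field should not be blank.
    country = driver.find_element(By.ID, "country")
    assert country.get_attribute("value") != ""
finally:
    driver.quit()
```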

Test that cookies are working as expected. Cookies are small files used by websites primarily to remember active user sessions so you do not need to log in every time you visit a website. Cookie testing will include (a short sketch follows this list)
Testing cookies (sessions) are deleted either when cache is cleared or when they reach their expiry.
Delete cookies (sessions) and test that login credentials are asked for when you next visit the site.
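The second check above (delete the session and expect to be asked for credentials) can be scripted as below. The dashboard URL and the assumption that the login page contains "login" in its URL are placeholders, not part of any real site.

```python
# Hedged sketch of the cookie check: delete session cookies and verify the
# site asks for credentials again.
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/dashboard")   # hypothetical logged-in page
    driver.delete_all_cookies()                    # simulate expired/cleared session
    driver.refresh()
    assert "login" in driver.current_url.lower(), "Site did not ask to log in again"
finally:
    driver.quit()
```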



Test HTML and CSS to ensure that search engines can crawl your site easily. This will include
Checking for Syntax Errors
Readable Color Schemas
Standards compliance. Ensure standards such as W3C, OASIS, IETF, ISO, ECMA, or WS-I are followed.



Test business workflow - This will include
Testing your end-to-end workflow/business scenarios that take the user through a series of webpages to complete.
Test negative scenarios as well, so that when a user executes an unexpected step, an appropriate error message or help is shown in your web application.



Tools that can be used: QTP, IBM Rational, Selenium

2. Usability testing:

Usability testing has now become a vital part of any web-based project. It can be carried out by testers like you or by a small focus group similar to the target audience of the web application.

Test the site Navigation:
Menus, buttons or links to different pages on your site should be easily visible and consistent on all webpages



Test the Content:
Content should be legible with no spelling or grammatical errors.
Images, if present, should contain an "alt" text (a quick automated check is sketched below)
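A quick way to automate the "alt" text check, assuming the page can be fetched with `requests` and parsed with BeautifulSoup (the URL is a placeholder):

```python
# Small sketch: list all images on a page that are missing alt text.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com", timeout=10).text  # hypothetical page
soup = BeautifulSoup(html, "html.parser")
missing_alt = [img.get("src") for img in soup.find_all("img") if not img.get("alt")]
print("Images missing alt text:", missing_alt or "none")
```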

Tools that can be used: Chalkmark, Clicktale, Clixpy and Feedback Army


3. Interface Testing:

The three areas to be tested here are the Application, Web and Database servers.
Application: Test that requests are sent correctly to the database and that output at the client side is displayed correctly. Errors, if any, must be caught by the application and shown only to the administrator, not to the end user.
Web Server: Test that the web server is handling all application requests without any service denial.
Database Server: Make sure queries sent to the database give expected results.

Test the system response when the connection between the three layers (Application, Web and Database) cannot be established, and check that an appropriate message is shown to the end user (one such check is sketched below).
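One interface-level check, that server-side errors are not leaked to the end user, can be approximated as below. The failing URL and the "leak marker" strings are assumptions chosen for illustration, not part of any real application.

```python
# Hedged sketch: when the server fails, the end user should see a friendly
# error page rather than a stack trace or raw database error.
import requests

resp = requests.get("https://example.com/report?id=INVALID", timeout=10)  # hypothetical failing request
leak_markers = ("Traceback", "SQLSTATE", "ORA-", "Exception in")
assert not any(marker in resp.text for marker in leak_markers), \
    "Server error details were exposed to the end user"
```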

Tools that can be used: AlertFox, Ranorex


4. Database Testing:

The database is a critical component of your web application and emphasis must be laid on testing it thoroughly (a small sketch follows this list). Testing activities will include -
Test if any errors are shown while executing queries
Data integrity is maintained while creating, updating or deleting data in the database.
Check response time of queries and fine tune them if necessary.
Test data retrieved from your database is shown accurately in your web application
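As a small illustration of the integrity and response-time checks above, here is a self-contained sketch using Python's built-in sqlite3 module; the table, columns and threshold are arbitrary stand-ins for your real database and queries.

```python
# Illustrative database checks: a UNIQUE constraint must reject duplicates,
# and a query should finish within a chosen response-time threshold.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL UNIQUE)")
conn.execute("INSERT INTO users (email) VALUES ('tester@example.com')")

# Data-integrity check: a duplicate email must be rejected.
try:
    conn.execute("INSERT INTO users (email) VALUES ('tester@example.com')")
    raise AssertionError("Duplicate row was accepted - integrity constraint failed")
except sqlite3.IntegrityError:
    pass  # expected

# Response-time check: flag queries that exceed the threshold.
start = time.perf_counter()
rows = conn.execute("SELECT id, email FROM users").fetchall()
elapsed = time.perf_counter() - start
assert elapsed < 0.5, f"Query too slow: {elapsed:.3f}s"
print(rows)
```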

Tools that can be used: QTP


5. Compatibility Testing:

Compatibility tests ensure that your web application displays correctly across different devices. This would include -

Browser Compatibility Test: The same website will display differently in different browsers. You need to test whether your web application is displayed correctly across browsers and whether JavaScript, AJAX and authentication are working fine. You may also check for mobile browser compatibility.

The rendering of web elements like buttons, text fields etc. changes with a change in operating system. Make sure your website works fine for various combinations of operating systems such as Windows, Linux, Mac and browsers such as Firefox, Internet Explorer, Safari etc. A cross-browser smoke-check sketch follows below.
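For browser compatibility, the same Selenium check can be pointed at several drivers. This is only a rough sketch: it assumes Chrome and Firefox drivers are installed, and the URL and expected title fragment are placeholders.

```python
# Run the same smoke check in more than one browser.
from selenium import webdriver

def smoke_check(driver):
    try:
        driver.get("https://example.com")      # hypothetical application URL
        assert "Example" in driver.title        # placeholder expectation
    finally:
        driver.quit()

for make_driver in (webdriver.Chrome, webdriver.Firefox):
    smoke_check(make_driver())
```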

Tools that can be used: NetMechanic


6. Performance Testing:

This will ensure your site works under all loads. Testing activities will include, but are not limited to, the following (a rough sketch follows this list) -
Website application response times at different connection speeds
Load test your web application to determine its behavior under normal and peak loads
Stress test your web site to determine its break point when pushed beyond normal loads at peak time.
Test that, if a crash occurs due to peak load, the site recovers gracefully from such an event.
Make sure optimization techniques like gzip compression and browser- and server-side caching are enabled to reduce load times.
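Real load and stress tests are best run with JMeter or LoadRunner, but the idea can be sketched with a handful of concurrent requests plus a quick check that gzip compression is on. The URL, request count and thread-pool size below are placeholder assumptions.

```python
# Rough load-test sketch: fire a batch of concurrent requests, report response
# times, and check that the server returns gzip-compressed responses.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com"   # hypothetical application under test
N_REQUESTS = 50

def timed_get(_):
    start = time.perf_counter()
    resp = requests.get(URL, headers={"Accept-Encoding": "gzip"}, timeout=30)
    return time.perf_counter() - start, resp

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(timed_get, range(N_REQUESTS)))

times = [t for t, _ in results]
print(f"avg={sum(times) / len(times):.3f}s  max={max(times):.3f}s")

# The server should report a compressed encoding for text responses.
if results[0][1].headers.get("Content-Encoding") != "gzip":
    print("Warning: response was not gzip-compressed")
```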

Tools that can be used: Loadrunner, JMeter


7. Security testing:

Security testing is vital for e-commerce websites that store sensitive customer information like credit cards (two simple checks are sketched after this list). Testing activities will include -
Test that unauthorized access to secure pages is not permitted
Restricted files should not be downloadable without appropriate access
Check that sessions are automatically killed after prolonged user inactivity
If SSL certificates are in use, the website should redirect to encrypted SSL (HTTPS) pages.
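Two of the checks above lend themselves to a quick script: an unauthenticated request to a protected page should be rejected or redirected, and plain HTTP should redirect to HTTPS. The URLs below are hypothetical, and since a rejected request may legitimately return any of several status codes, the assertion is deliberately loose.

```python
# Hedged sketch of two basic security checks.
import requests

PROTECTED_URL = "https://example.com/admin"       # hypothetical secured page
resp = requests.get(PROTECTED_URL, allow_redirects=False, timeout=10)
assert resp.status_code in (301, 302, 401, 403), \
    f"Protected page returned {resp.status_code} to an unauthenticated client"

resp = requests.get("http://example.com/login", allow_redirects=True, timeout=10)
assert resp.url.startswith("https://"), "Login page did not redirect to HTTPS"
print("Basic security checks passed")
```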

Tools that can be used: Babel Enterprise, BFBTester and CROSS


8. Crowd Testing:

Crowdsourced testing is an interesting and upcoming concept that helps uncover many otherwise unnoticed defects.

Tools that can be used: People like you and me. And yes, loads of them!

This concludes almost all testing types applicable to your web application.

As a web tester, it's important to note that web testing is quite an arduous process and you are bound to come across many obstacles. One of the major problems you will face is, of course, deadline pressure. Everything is always needed yesterday! The number of times the code will need changing is also taxing. Make sure you plan your work and know clearly what is expected of you. It's best to define all the tasks involved in your web testing and then create a work chart for accurate estimates and planning.

Thursday, 10 March 2011

Load Testing

The application is tested against heavy loads or inputs, such as the testing of websites, in order to find out at what point the website/application fails or at what point its performance degrades. Load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly.

Note that load testing does not aim to break the system by overwhelming it, but instead tries to keep the system constantly humming like a well-oiled machine. In the context of load testing, extreme importance should be given to having large datasets available for testing. Bugs simply do not surface unless you deal with very large entities such as thousands of users in repositories such as LDAP/NIS/Active Directory, thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies on file systems, etc. Testers obviously need automated tools to generate these large datasets, but fortunately any good scripting language worth its salt will do the job (a small data-generation sketch follows).
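As the paragraph notes, any scripting language can generate such datasets. Here is a minimal Python sketch that writes a CSV of synthetic users; the field names, file name and row count are arbitrary choices, not a prescribed format.

```python
# Generate a large synthetic-user dataset for load testing.
import csv
import random
import string

def random_user(i):
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {"id": i, "username": name, "email": f"{name}@example.test"}

with open("test_users.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["id", "username", "email"])
    writer.writeheader()
    for i in range(100_000):           # scale up as needed for the load profile
        writer.writerow(random_user(i))
```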

What is software 'quality'?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

5 common problems in the software development process


  • Poor Requirements - if requirements are unclear, incomplete, too general, and not testable, there may be problems.
  • Unrealistic Schedule - if too much work is crammed in too little time, problems are inevitable.
  • Inadequate Testing - no one will know whether or not the software is any good until customers complain or systems crash.
  • Featurisms - requests to add on new features after development goals are agreed on.
  • Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems can be expected.

What kinds of testing should be considered?

  • Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
  • White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
  • Unit Testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
  • Incremental Integration Testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
  • Integration Testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
  • Functional Testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)
  • System Testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
  • End-to-End Testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
  • Sanity Testing or Smoke Testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
  • Regression Testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing approaches can be especially useful for this type of testing.
  • Acceptance Testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
  • Load Testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
  • Stress Testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
  • Performance Testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
  • Usability Testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
  • Install/Uninstall Testing - testing of full, partial, or upgrade install/uninstall processes.
  • Recovery Testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
  • Failover Testing - typically used interchangeably with 'recovery testing'.
  • Security Testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
  • Compatibility Testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
  • Exploratory Testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
  • Ad-Hoc Testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
  • Context-Driven Testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
  • User Acceptance Testing - determining if software is satisfactory to an end-user or customer.
  • Comparison Testing - comparing software weaknesses and strengths to competing products.
  • Alpha Testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
  • Beta Testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
  • Mutation Testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.

What's an 'inspection'?

An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for organizations to get serious about quality assurance?'. Their skill may have low visibility but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.

What is verification? validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

How will you understand that Software has a bug?

Well, most of the time, if you find any of the following situations then you can safely say the software has a bug and can log a defect against it:
1. The software doesn't do something that the product specification says it should do.
2. The software does something that the product specification says it shouldn't do.
3. The software does something that the product specification doesn't mention.
4. The software doesn't do something that the product specification doesn't mention but should.
5. The software is difficult to understand or hard to use.
Some common testing terms:
1. Static vs. dynamic testing
Static testing is performed using the software documentation; the code is not executed during static testing. So here you will find defects related to requirements (documentation-related defects), whereas in dynamic testing you will run the application and find the actual bugs in the code.
2. Software verification and validation
Verification and validation are often used interchangeably but have different definitions. These differences are important to software testing.
Verification is the process of confirming that the software meets its specification. Validation is the process of confirming that it meets the user's requirements. These may sound very similar, but the classic Hubble space telescope example helps show the difference: the flawed primary mirror passed its verification checks against a miscalibrated test instrument, yet the telescope failed validation because it could not deliver the sharp images its users required.
Software Quality Assurance (SQA) (wiki)
Though software testing may be viewed as an important part of the software quality assurance (SQA) process, in SQA software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate.
What constitutes an "acceptable defect rate" depends on the nature of the software. For example, an arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than mission critical software such as that used to control the functions of an airliner that really is flying!
Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.
Software Testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (Quality Assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

What is the Goal of a Software Tester?

In simple words the goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed. A fundamental trait of software testers is that they simply like to break things. They live to find those elusive system crashes. They take great satisfaction in laying to waste the most complex programs. They're often seen jumping up and down in glee, giving each other high-fives, and doing a little dance when they bring a system to its knees. It's the simple joys of life that matter most to them.
When you are assigned to a software testing project, you will need to do the following things:
1. Requirement Analysis: Go through the user specification of the application you are going to test.
2. Test Architect: Write down the test conditions or test objectives from the user specs. This is equivalent to defining the architecture of the application. Do not skip this step. Say, for a calculator application, the requirement specifies addition functionality. Here the test objectives will be:
a. Validate that addition of two positive numbers gives the correct result.
b. Validate that addition of one positive and one negative number gives the correct result.
and so on.
3. Test case writing: Then write down the test cases from the above objectives. Here you will mention the detailed steps, e.g.
Step 1: Open the calculator application.
Step 2: Click on the number 2.
Step 3: Click on +.
Step 4: Click on 3.
Step 5: Press = and check the result.
4. Test execution: At this stage you need to run the application following the test case steps and record the result.
5. Defect reporting: If step 4 gives the correct result, you mark the test case as passed. If it gives a wrong result, say 2+3 = 6, then you need to log a defect. This defect will go to the developer who developed this calculator application. He or she will fix the defect and send you the fixed application.
6. Retesting the defect: Here you need to retest the defect and, if it is actually fixed, you will close it and mark the test case as passed. (A minimal automated sketch of the calculator addition objectives is shown below.)
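For a calculator whose logic can be called programmatically, the addition objectives above can even be automated directly. A minimal unittest sketch, where the `add` function is only a stand-in for the real application under test:

```python
# Automated version of the two addition test objectives.
import unittest

def add(a, b):          # placeholder for the real calculator's addition
    return a + b

class TestCalculatorAddition(unittest.TestCase):
    def test_two_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)      # objective (a)

    def test_positive_and_negative_number(self):
        self.assertEqual(add(2, -3), -1)    # objective (b)

if __name__ == "__main__":
    unittest.main()
```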

Friday, 4 March 2011

Quality Center Interview Questions -- part 1


1. What is Quality Center used for? Or What are the benefits and features of Quality Center ?

Quality Center is a comprehensive test management tool. It is web-based and supports a high level of communication and collaboration among various stakeholders (business analysts, developers, testers etc.), driving a more effective and efficient global application-testing process. Automation tools like QTP, WinRunner and LoadRunner can be integrated with Quality Center. One can also create reports and graphs for analysis and tracking of test processes.

2. What is the difference between TestDirector and Quality Center?
Quality Center is an upgraded version of TestDirector, built by the same vendor, Mercury (now acquired by HP). TestDirector version 8.2 onwards is known as Quality Center. Quality Center has enhanced security, test management and defect management features when compared to TestDirector.

3. What is the difference between Quality Center and Bugzilla?

Quality Center is a test management tool which, among other features, can also manage defects.
Bugzilla is a defect management tool only.

4. What is the purpose of creating child requirements in TD/QC?
By creating child requirements under the main requirement, you can evaluate the sub-requirements related to the main requirement.
You can link test sets and defects to the sub-requirements. This helps in achieving 100% test coverage and its analysis.

4. What is Test Lab?
In order to execute a test case (developed in the Test Plan module), whether manual or automated, it needs to be imported into the Test Lab module. In sum, test cases are created in the Test Plan module while they are executed in the Test Lab module.

5. What is meant by Instance?
A Test Case imported from Test Plan module to Test Lab module is called an Instance of that test case. It is possible to have multiple instances of the same Test Case in the Test Lab Module.

6. Is it possible to maintain test data in Quality Center?
Yes. One can attach the test data to the corresponding test cases or create a separate folder in test plan to store them.

7. How to ensure that there is no duplication of bugs in Quality Center?
In the defect tracking window of QC, there is a “find similar defect” icon. When this icon is clicked after writing the defect, it points out whether anybody else has already entered the same defect.

8. What will be the status in Quality Center if you give "Suggestion" to the Developer?
This is a trick question.
You can give a "Suggestion" to the developer using the Comments section provided in QC. This will not change the current status of the defect in QC. In sum, the status of the defect remains the same as it was before giving the suggestion to the developer.

9. How will you generate the defect ID in Quality Center?
The Defect ID is automatically generated after clicking the Submit button.


10. Are the 'Not Covered' and 'Not Run' statuses the same?
No. The Not Covered status applies to all those requirements for which test cases have not been written, whereas the Not Run status applies to all those requirements for which test cases have been written but not yet run.
11. How to import test cases from Excel/Word to Quality Center?
1. Install and configure the Microsoft Excel/Word add-in for Quality Center.
2. Map the columns in the Word/Excel document with the columns available in Quality Center.
3. Export the data from Word/Excel to Quality Center using the Tools > Export to Quality Center option in Word/Excel.
4. Rectify errors, if any.

12. Can we export files from Quality Center to Excel/Word? If yes, then how?
Yes.

Requirements tab: Right-click on the main requirement, click on Export and save as a Word, Excel or other template. This will save all the child requirements as well.

Test Plan tab: Only individual tests can be exported; no parent-child export is possible. Select a test script, click on the Design Steps tab, right-click anywhere on the open window, then click on Export and save as.

Test Lab tab: Select a child group, click on the Execution Grid if it is not already selected, and right-click anywhere. The default save option is Excel, but it can be saved as a document and in other formats.

Defects tab: Right-click anywhere on the window, export all or selected defects and save as an Excel sheet or document.
13. What is a Business Component?
Quality Center provides Business Components for Business Process Testing (BPT).
Many enterprise applications are a) complex and b) require extensive test scripts/cases.
A test/automation engineer cannot handle both the complexity of the application under test and extensive test script/test case creation.
Using Business Components, subject matter experts (who are experts on the application under test) can create tests in a script-free environment without getting involved in the nitty-gritty of test case/script design. This helps increase test coverage and creates reusable business components used for testing essential business processes.
Development of the test scripts/cases is then done by the automation/test engineer.

14. How can we save the tests executed in the Test Lab?
The tests executed are automatically saved when the user clicks on "END RUN" in the Test Lab.

15. How to export test cases from QTP into QC?
To export test cases from QTP to QC, you first need to establish a QTP-QC connection:
1) In QTP, go to File > Quality Center Connection.
2) Enter the QC URL, project name/domain/username/password and click on Login. QTP is now connected to QC.
Next, you can save the QTP script in QC.
3) In QTP, select File > Save As > Save in QC.
4) Select the folder in QC where you want to save the QTP script.
5) Click OK to save.

16. How to use QTP as an automation tool in Quality Center ?
You need to install the QTP add-in in Quality Center (usually done by the Quality Center administrator). You then create and store QTP scripts in QC.

17. How to switch between two projects in Quality Center?
In QC 9.0 and above, you can switch between two projects by selecting Tools > Change Projects > Select Project.
In other versions, you will need to log off and log in again.

18. What is the main purpose of storing requirements in Quality Center?
You store requirements in Quality Center for the following reasons:
a) To ensure 100% coverage: You can create and track test plans/sets for the requirements stored in Quality Center to ensure all the requirements are tested.
b) Easy change management: If any requirement changes during the course of test case creation, the underlying test case is automatically highlighted and the test engineer can change the test case to suit the new requirement.
c) Ease of tracking: Using the advanced reporting and graphs provided by QC, managers can determine various metrics useful in project tracking and monitoring.

19. What is Coverage status and what does it do?
Coverage status is the percentage of testing covered at a given time.
For example, if you have 100 test cases in a project and you have executed 35 test cases, then the coverage status of the project is 35%.
Coverage status helps keep track of project deadlines.