Monday, July 28, 2008

What is V&V? Difference between Verification and Validation

Verification ensures the product is designed to deliver all functionality to the customer. It involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications, and it can be done with checklists, issues lists, walkthroughs, and inspection meetings.

Validation ensures that the functionality, as defined in the requirements, is the intended behavior of the product. Validation typically involves actual testing and takes place after verification is completed.

Verification
It takes place before validation, not vice versa.
It evaluates documents, plans, code, requirements, and specifications.
Its inputs are checklists, issues lists, walkthroughs, inspection meetings, and reviews.
Its output is a nearly perfect set of documents, plans, specifications, and requirements.

Validation
It evaluates the product itself.
Its input is the actual testing of the actual product.
Its output is a nearly perfect, actual product.
In short, verification comes first and evaluates the work products (documents, plans, code, requirements, and specifications), while validation comes afterwards and evaluates the actual product through actual testing.

SQC Vs. SQA

Software Quality Control is perceived as a professional role; Software Quality Assurance is perceived as management's eyes and ears.
Software Quality Control has well-defined technical goals; Software Quality Assurance has no history of saving anyone yet.
Software Quality Control has leverage to stop the show; Software Quality Assurance has fuzzy alignment with the project's goals.
Software Quality Control has authority with the developers; Software Quality Assurance has no leverage to stop anything.
Software Quality Control is aligned with the project goals; Software Quality Assurance has few opportunities to show proficiency before the problem.

What is the difference among Testing, QA, and QC?

Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives. E.g., are requirements being defined at the proper level of detail?

Quality Control: A set of activities designed to evaluate a developed work product. E.g., are the defined requirements the right requirements?

Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)

Friday, July 25, 2008

Most Frequently Asked Questions in Unix

1. Which command is used to run an interface?
sh

2. How will you see hidden files?
ls -a

3. What is the command used to display or set the date and time?
date

4. What are some basic commands to copy, move, and delete files?
cp, mv, rm

5. Which command is used to go back to the home directory?
cd (with no arguments) or cd ~; note that cd .. only moves up one directory.

6. Which command is used to view the current directory?
pwd

7. How do you extract zipped or archived files in Unix?
unzip for .zip files, or tar -xvf for .tar archives (tar -cvf creates an archive rather than extracting one).

8. How do you find help for a specific command?
man <command>

9. How do you change a password?
passwd

10. How do you search for a string in a file?
grep

11. How do you change permissions on a file?
chmod

12. What is the command for a calculator?
bc

What should be done after writing test cases?

After writing a test case, we should review it. The review can be done by another test engineer, a peer, or the test lead, and once the review comments come back we fix them. After all the test cases are written, review them for completeness and correctness to check whether every functionality is covered; then wait for the build and execute the reviewed test cases.

Difference between client server testing and web server testing.

Web systems are one type of client/server. The client is the browser, the server is whatever is on the back end (database, proxy, mirror, etc). This differs from so-called “traditional” client/server in a few ways but both systems are a type of client/server. There is a certain client that connects via some protocol with a server (or set of servers).

Also understand that in a strict difference based on how the question is worded, “testing a Web server” specifically is simply testing the functionality and performance of the Web server itself. (For example, I might test if HTTP Keep-Alives are enabled and if that works. Or I might test if the logging feature is working. Or I might test certain filters, like ISAPI. Or I might test some general characteristics such as the load the server can take.) In the case of “client server testing”, as you have worded it, you might be doing the same general things to some other type of server, such as a database server. Also note that you can be testing the server directly, in some cases, and other times you can be testing it via the interaction of a client.

You can also test connectivity in both. (Anytime you have a client and a server there has to be connectivity between them or the system would be less than useful so far as I can see.) In the Web you are looking at HTTP protocols and perhaps FTP depending upon your site and if your server is configured for FTP connections as well as general TCP/IP concerns. In a “traditional” client/server you may be looking at sockets, Telnet, NNTP, etc.
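
To make the web-server-testing side concrete, here is a small sketch (my illustration, not part of the original answer) that sends a plain HTTP request and inspects the response status and Connection header to see whether keep-alives appear to be honoured. The host name is a placeholder for whatever server is under test.

```python
# Minimal sketch of probing a Web server directly (placeholder host).
import http.client

conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)
conn.request("GET", "/", headers={"Connection": "keep-alive"})
response = conn.getresponse()
print("status:", response.status)
print("Connection header:", response.getheader("Connection"))
response.read()   # drain the body so the connection could be reused
conn.close()
```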

Thursday, July 24, 2008

What are the types of Testing?

Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.


Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
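
For illustration (not part of the original list), a minimal unit test in Python might look like the sketch below; the add function is a made-up stand-in for whatever module the programmer is testing.

```python
import unittest

def add(a, b):
    """Hypothetical function under test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()
```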

Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)

System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
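
As a rough illustration (not part of the original list), the sketch below fires batches of concurrent requests at a placeholder URL and reports the worst latency in each batch, which is one crude way to see at what load response time starts to degrade.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/"  # placeholder for the system under test

def timed_request(_):
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

for users in (1, 5, 10, 25, 50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = sorted(pool.map(timed_request, range(users)))
    # The slowest request in the batch is a crude degradation signal.
    print(f"{users:3d} concurrent users -> worst latency {latencies[-1]:.3f}s")
```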

Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

User acceptance testing - determining if software is satisfactory to an end-user or customer.

Comparison testing - comparing software weaknesses and strengths to competing products.

Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
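
To show the idea (my sketch, not from the original list): deliberately plant a bug in a copy of the code and check whether the existing test data notices. Real mutation-testing tools generate and run many such mutants automatically, which is where the large computational cost comes from. The is_adult function and its test data are made up for the example.

```python
def is_adult(age):          # hypothetical original code under test
    return age >= 18

def is_adult_mutant(age):   # deliberately injected bug: '>=' changed to '>'
    return age > 18

# The same test data is replayed against the mutant; if any case fails,
# the mutant is "killed", which suggests the test data is doing useful work.
test_data = [(17, False), (18, True), (30, True)]

mutant_killed = any(is_adult_mutant(age) != expected for age, expected in test_data)
print("mutant killed" if mutant_killed else "mutant survived - tests may be too weak")
```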


A
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well.

ASQ: Automated Software Quality. The use of software tools, such as automated testing tools, to improve software quality.

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

Alpha Testing: Early testing of a software product conducted by selected customers.

Automated Testing: Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.

B

Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.

Basic Block: A sequence of one or more consecutive, executable statements containing no branches.

Beta Testing: Testing of a pre-release of a software product conducted by customers.

Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Boundary Value Analysis: BVA is similar to Equivalence Partitioning but focuses on "corner cases" or values that are usually out of range as defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
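
Using the range from the definition above, a boundary-value test would exercise the values just inside and just outside negative 100 and positive 1000. The accept function below is a hypothetical stand-in for the code under test; this is only an illustrative sketch.

```python
import unittest

def accept(value):
    """Hypothetical function that should accept values in [-100, 1000]."""
    return -100 <= value <= 1000

class TestBoundaries(unittest.TestCase):
    def test_values_just_inside_the_range(self):
        self.assertTrue(accept(-100))
        self.assertTrue(accept(1000))

    def test_values_just_outside_the_range(self):
        self.assertFalse(accept(-101))
        self.assertFalse(accept(1001))

if __name__ == "__main__":
    unittest.main()
```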

Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

C

CAST: Computer Aided Software Testing.

Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
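
As a toy illustration of the idea (not from the original glossary, and not how you would do it on a real project, where a dedicated coverage tool is the normal choice), the sketch below records which lines of a hypothetical function execute while it runs, showing that one branch was never covered.

```python
import sys

executed_lines = set()

def tracer(frame, event, arg):
    # Record every line executed in this file while tracing is active.
    if event == "line" and frame.f_code.co_filename == __file__:
        executed_lines.add(frame.f_lineno)
    return tracer

def classify(n):            # hypothetical code under test
    if n < 0:
        return "negative"   # this branch is never exercised below
    return "non-negative"

sys.settrace(tracer)
classify(5)                 # only the non-negative path runs
sys.settrace(None)

print("lines executed:", sorted(executed_lines))
```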

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Component: A minimal software item for which a separate specification is available.

Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
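
A crude sketch of the idea (mine, not from the glossary): hammer a shared object from several threads and compare the result with what a single-threaded run would give. The unsynchronised Counter below is a hypothetical stand-in for application code; a shortfall in the final count points to lost updates from the race.

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        current = self.value       # read-modify-write without a lock:
        self.value = current + 1   # two threads can interleave here

def worker(counter, n):
    for _ in range(n):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=worker, args=(counter, 100_000)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

expected = 8 * 100_000
print("expected", expected, "got", counter.value)  # a shortfall means updates were lost
```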

Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

D

Debugging: The process of finding and removing the causes of software failures.

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Dynamic Testing: Testing software through executing it. Contrast with Static Testing.

E

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
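
As an illustrative sketch (not from the original glossary): rather than trying every possible age, pick one representative value from each class the specification implies. The validate_age function and its 18-65 rule are assumptions made up for the example.

```python
import unittest

def validate_age(age):
    """Hypothetical rule: accept ages 18-65 inclusive, reject everything else."""
    return 18 <= age <= 65

class TestAgePartitions(unittest.TestCase):
    def test_representative_below_valid_range(self):
        self.assertFalse(validate_age(10))   # class: too young

    def test_representative_inside_valid_range(self):
        self.assertTrue(validate_age(40))    # class: valid

    def test_representative_above_valid_range(self):
        self.assertFalse(validate_age(80))   # class: too old

if __name__ == "__main__":
    unittest.main()
```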

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.


F

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.

Functional Testing: Testing the features and operational behavior of a product to ensure they correspond to its specifications.

G

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

I

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Installation Testing: Confirms that the application under test installs correctly and recovers from expected or unexpected events during installation without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.

L

Localization Testing: Testing that verifies software which has been adapted for a specific locale (language, formats, and other conventions) works correctly in that locale.

M

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

Monkey Testing: Testing a system or application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.

N

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".

P

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

S

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.

Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational.

Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Static Analysis: Analysis of a program carried out without executing the program.

Static Testing: Analysis of a program carried out without executing the program.

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. Often this is performance testing using a very high level of simulated load.

System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.


T

Testing:
1) The process of exercising software to verify that it satisfies specified requirements and to detect errors.
2) The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Case: Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.

Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results, and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may, for example, be several Test Suites for a particular product. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

U

Usability Testing: Testing the ease with which users can learn and use a product.

Unit Testing: Testing of individual software components.


V

Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

W

Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing: Testing based on an analysis of internal workings of a piece of software.

Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

About Testing

Software testing is the process of checking software to verify that it satisfies its requirements and to detect errors.

Software testing is an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test, with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding software bugs.

Testing can never completely establish the correctness of computer software. Instead, it furnishes a criticism or comparison of the state and behaviour of the product against a specification. Software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.

Over its existence, computer software has continued to grow in complexity and size. Every software product has a target audience. For example, the audience for video game software is completely different from banking software. Therefore, when an organization develops or otherwise invests in a software product, it presumably must assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment.