
Tuesday, June 30, 2009

Test-Design Specification

1. Purpose.

To specify refinements of the test approach and to identify the features to be tested by this design and its associated tests.


2. Outline.
A test-design specification shall have the following structure:

(1) Test-design-specification identifier
(2) Features to be tested
(3) Approach refinements
(4) Test identification
(5) Feature pass/fail criteria

The sections shall be ordered in the specified sequence. Additional sections may be included at the end. If some or all of the content of a section is in another document, then a reference to that material may be listed in place of the corresponding content. The referenced material must be attached to the test-design specification or available to users of the design specification.


Details on the content of each section are contained in the following sections.



2.1 Test-Design-Specification Identifier.

Specify the unique identifier assigned to this test-design specification. Supply a reference to the associated test plan, if it exists.


2.2 Features to be Tested.

Identify the test items and describe the features and combinations of features which are the object of this design specification. Other features may be exercised, but need not be identified.
For each feature or feature combination, a reference to its associated requirements in the
item requirement specification or design description should be included.


2.3 Approach Refinements.


Specify refinements to the approach described in the test plan. Include specific test techniques to be used. The method of analyzing test results should be identified (for example, comparator
programs or visual inspection).

Specify the results of any analysis which provides a rationale for test-case selection. For example, one might specify conditions which permit a determination of error tolerance (for example, those conditions which distinguish valid inputs from invalid inputs).

Summarize the common attributes of any test cases. These may include input constraints that must be true for every input in the set of associated test cases, shared environmental needs, shared special procedural requirements, and shared case dependencies.



2.4 Test Identification.


List the identifier and a brief description of each test case associated with this design. A particular test case may be identified in more than one test-design specification. List the identifier and a brief description of each procedure associated with this test-design specification.



2.5 Feature Pass/Fail Criteria.


Specify the criteria to be used to determine whether the feature or feature combination has passed or failed.

Friday, June 19, 2009

Guidelines for Metrics Calculations

Metrics Used In Testing
In this tutorial you will learn about the metrics used in testing. The product quality measures are:


1. Customer satisfaction index,
2. Delivered defect quantities,
3. Responsiveness (turnaround time) to users,
4. Product volatility,
5. Defect ratios,
6. Defect removal efficiency,
7. Complexity of delivered product,
8. Test coverage,
9. Cost of defects,
10. Costs of quality activities,
11. Re-work,
12. Reliability and Metrics for Evaluating Application System Testing.


The Product Quality Measures:

1. Customer satisfaction index
This index is surveyed before product delivery and after product delivery (and ongoing on a periodic basis, using standard questionnaires). The following are analyzed:
Number of system enhancement requests per year
Number of maintenance fix requests per year
User friendliness: call volume to customer service hotline
User friendliness: training time per new user
Number of product recalls or fix releases (software vendors)
Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities
These are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation), or ongoing (per year of operation), by level of severity, and by category or cause, e.g.: requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users
Turnaround time for defect fixes, by level of severity
Time for minor vs. major enhancements; actual vs. planned elapsed time


4. Product volatility
Ratio of maintenance fixes (to repair the system & bring it into compliance with specifications), vs. enhancement requests (requests by users to enhance or change functionality)

5. Defect ratios
Defects found after product delivery per function point.
Defects found after product delivery per LOC
Pre-delivery defects: annual post-delivery defects
Defects per function point of the system modifications

6. Defect removal efficiency
Number of post-release defects (found by clients in field operation), categorized by level of severity
Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects. All defects include those found internally plus those found externally (by customers) in the first year after product delivery.
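The defect-removal-efficiency ratio above can be sketched as a small calculation. This is a minimal illustration; the function name and sample counts are made up for the example, not taken from any tool.

```python
def defect_removal_efficiency(internal_defects, external_defects):
    """DRE: defects found internally before release, as a percentage of
    all defects (internal plus those customers find after delivery)."""
    total = internal_defects + external_defects
    if total == 0:
        return 100.0  # nothing found anywhere; treat as fully efficient
    return internal_defects / total * 100

# Example: 90 defects caught in inspections/testing, 10 reported by customers
print(defect_removal_efficiency(90, 10))  # 90.0
```

A DRE of 90% means nine out of every ten defects were caught before the product reached customers.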

7. Complexity of delivered product
McCabe's cyclomatic complexity counts across the system
Halstead’s measure
Card's design complexity measures
Predicted defects and maintenance costs, based on complexity measures

8. Test coverage
Breadth of functional coverage
Percentage of paths, branches or conditions that were actually tested
Percentage by criticality level: perceived level of risk of paths
The ratio of the number of detected faults to the number of predicted faults.

9. Cost of defects
Business losses per defect that occurs during operation
Business interruption costs; costs of work-arounds
Lost sales and lost goodwill
Litigation costs resulting from defects
Annual maintenance cost (per function point)
Annual operating cost (per function point)
Measurable damage to your boss's career

10. Costs of quality activities
Costs of reviews, inspections and preventive measures
Costs of test planning and preparation
Costs of test execution, defect tracking, version and change control
Costs of diagnostics, debugging and fixing
Costs of tools and tool support
Costs of test case library maintenance
Costs of testing & QA education associated with the product
Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work
Re-work effort (hours, as a percentage of the original coding hours)
Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
Re-worked software components (as a percentage of the total delivered components)

12. Reliability
Availability (percentage of time a system is available, versus the time the system is needed to be available)
Mean time between failure (MTBF).
Mean time to repair (MTTR)
Reliability ratio (MTBF / MTTR)
Number of product recalls or fix releases
Number of production re-runs as a ratio of production runs
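The availability and reliability-ratio measures above reduce to simple arithmetic. As a hedged sketch (function names and the sample hours are illustrative only):

```python
def availability(uptime_hours, required_hours):
    """Percentage of time the system is available, versus the time
    the system is needed to be available."""
    return uptime_hours / required_hours * 100

def reliability_ratio(mtbf_hours, mttr_hours):
    """MTBF / MTTR: higher means failures are rare relative to repair time."""
    return mtbf_hours / mttr_hours

# Example: system was up 990 of the 1000 hours it was needed;
# failures occur every 500 hours on average and take 2 hours to repair
print(availability(990.0, 1000.0))    # 99.0
print(reliability_ratio(500.0, 2.0))  # 250.0
```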



Metrics for Evaluating Application System Testing:

Metric = Formula

Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC represents thousands of lines of code; FP represents function points)

Number of tests per unit size = Number of test cases per KLOC/FP

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No of defects found during testing / (No of defects found during testing + No of acceptance defects found after delivery) * 100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed

Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity = Test Planning Productivity = No of Test cases designed / Actual Effort for Design and Documentation

Test Execution Productivity = No of Test cycles executed / Actual Effort for testing

Thursday, June 18, 2009

Software Metrics

Software metrics are an integral part of the state-of-the-practice in software engineering. More and more customers are specifying software and/or quality metrics reporting as part of their contractual requirements. Industry standards like ISO 9000 and industry models like the Software Engineering Institute’s (SEI) Capability Maturity Model Integrated (CMMI®) include measurement. Companies are using metrics to better understand, track, control and predict software projects, processes and products.

The term software metrics means different things to different people. When we buy a book or pick up an article on software metrics, the topic can vary from project cost and effort prediction and modeling, to defect tracking and root cause analysis, to a specific test coverage metric, to computer performance modeling. These are all examples of metrics when the word is used as a noun. I prefer the activity based view taken by Goodman. He defines software metrics as, "The continuous application of measurement-based techniques to the software development process and its products to supply meaningful and timely management information, together with the use of those techniques to improve that process and its products." [Goodman-93] Figure 1, illustrates an expansion of this definition to include software-related services such as installation and responding to customer issues. Software metrics can provide the information needed by engineers for technical decisions as well as information required by management.

If a metric is to provide useful information, everyone involved in selecting, designing, implementing, collecting, and utilizing it must understand its definition and purpose.

Monday, June 15, 2009

Function Point Analysis

Introduction To Function Point Analysis
Software systems, unless they are thoroughly understood, can be like an iceberg. They are becoming more and more difficult to understand. Improvement of coding tools allows software developers to produce large amounts of software to meet an ever-expanding need from users. As systems grow, a method to understand and communicate size needs to be used. Function Point Analysis is a structured technique of problem solving. It is a method to break systems into smaller components, so they can be better understood and analyzed.

Function points are a unit measure for software, much as an hour is for measuring time, miles are for measuring distance, or Celsius is for measuring temperature. Function Points are an ordinal measure much like other measures such as kilometers, Fahrenheit, or hours.


Human beings solve problems by breaking them into smaller understandable pieces. Problems that may appear to be difficult are simple once they are broken into smaller parts -- dissected into classes. Classifying things, placing them in this or that category, is a familiar process. Everyone does it at one time or another -- shopkeepers when they take stock of what is on their shelves, librarians when they catalog books, secretaries when they file letters or documents. When objects to be classified are the contents of systems, a set of definitions and rules must be used to place these objects into the appropriate category, a scheme of classification. Function Point Analysis is a structured technique of classifying components of a system. It is a method to break systems into smaller components, so they can be better understood and analyzed. It provides a structured technique for problem solving.

In the world of Function Point Analysis, systems are divided into five large classes and general system characteristics. The first three classes or components are External Inputs, External Outputs and External Inquiries. Each of these components transacts against files; therefore they are called transactions. The next two, Internal Logical Files and External Interface Files, are where data is stored that is combined to form logical information. The general system characteristics assess the general functionality of the system.

Brief History
Function Point Analysis was first developed by Allan J. Albrecht in the mid 1970s. It was an attempt to overcome difficulties associated with lines of code as a measure of software size, and to assist in developing a mechanism to predict effort associated with software development. The method was first published in 1979, then later in 1983. In 1984 Albrecht refined the method, and since 1986, when the International Function Point User Group (IFPUG) was set up, several versions of the Function Point Counting Practices Manual have been published by IFPUG. The current version of the IFPUG Manual is 4.1. A full function point training manual can be downloaded from this website.


Objectives of Function Point Analysis
Frequently the term end user or user is used without specifying what is meant. In this case, the user is a sophisticated user: someone who understands the system from a functional perspective, more than likely someone who provides requirements or does acceptance testing.

Since Function Points measure systems from a functional perspective, they are independent of technology. Regardless of language, development method, or hardware platform used, the number of function points for a system will remain constant. The only variable is the amount of effort needed to deliver a given set of function points; therefore, Function Point Analysis can be used to determine whether a tool, an environment, or a language is more productive compared with others within an organization or among organizations. This is a critical point and one of the greatest values of Function Point Analysis.

Function Point Analysis can provide a mechanism to track and monitor scope creep. Function Point Counts at the end of requirements, analysis, design, code, testing and implementation can be compared. The function point count at the end of requirements and/or designs can be compared to function points actually delivered. If the project has grown, there has been scope creep. The amount of growth is an indication of how well requirements were gathered by and/or communicated to the project team. If the amount of growth of projects declines over time it is a natural assumption that communication with the user has improved.
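The scope-creep comparison described above is just the percentage growth between two counts. A minimal sketch, with made-up counts for illustration:

```python
def scope_creep_pct(fp_at_requirements, fp_delivered):
    """Growth in function points between the count taken at the end of
    requirements and the count actually delivered, as a percentage of
    the original count."""
    return (fp_delivered - fp_at_requirements) / fp_at_requirements * 100

# Example: 400 FP counted at requirements, 460 FP actually delivered
print(scope_creep_pct(400, 460))  # 15.0
```

A declining growth percentage across successive projects suggests communication with the user is improving, as the text notes.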

Characteristic of Quality Function Point Analysis
Function Point Analysis should be performed by trained and experienced personnel. If Function Point Analysis is conducted by untrained personnel, it is reasonable to assume the analysis will be done incorrectly. The personnel counting function points should utilize the most current version of the Function Point Counting Practices Manual.

Current application documentation should be utilized to complete a function point count. For example, screen formats, report layouts, listing of interfaces with other systems and between systems, logical and/or preliminary physical data models will all assist in Function Points Analysis.

The task of counting function points should be included as part of the overall project plan. That is, counting function points should be scheduled and planned. The first function point count should be developed to provide sizing used for estimating.


The Five Major Components
Since it is common for computer systems to interact with other computer systems, a boundary must be drawn around each system to be measured prior to classifying components. This boundary must be drawn according to the user’s point of view. In short, the boundary indicates the border between the project or application being measured and the external applications or user domain. Once the border has been established, components can be classified, ranked and tallied.

External Inputs (EI) - an elementary process in which data crosses the boundary from outside to inside. This data may come from a data input screen or another application. The data may be used to maintain one or more internal logical files. The data can be either control information or business information. If the data is control information it does not have to update an internal logical file. The graphic represents a simple EI that updates 2 ILF's (FTR's).

External Outputs (EO) - an elementary process in which derived data passes across the boundary from inside to outside. Additionally, an EO may update an ILF. The data creates reports or output files sent to other applications. These reports and files are created from one or more internal logical files and external interface files. The following graphic represents an EO with 2 FTR's; the derived information (green) has been derived from the ILF's.

External Inquiry (EQ) - an elementary process with both input and output components that result in data retrieval from one or more internal logical files and external interface files. The input process does not update any Internal Logical Files, and the output side does not contain derived data. The graphic below represents an EQ with two ILF's and no derived data.

Internal Logical Files (ILF's) - a user-identifiable group of logically related data that resides entirely within the application's boundary and is maintained through external inputs.

External Interface Files (EIF’s) - a user identifiable group of logically related data that is used for reference purposes only. The data resides entirely outside the application and is maintained by another application. The external interface file is an internal logical file for another application.
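Once the five component types have been classified and ranked by complexity, the unadjusted function point count is a weighted sum. The sketch below uses the standard IFPUG low/average/high weights; the function names, data layout, and sample counts are illustrative assumptions, not part of the IFPUG manual.

```python
# IFPUG complexity weights per component type: (low, average, high)
WEIGHTS = {
    "EI":  (3, 4, 6),    # External Inputs
    "EO":  (4, 5, 7),    # External Outputs
    "EQ":  (3, 4, 6),    # External Inquiries
    "ILF": (7, 10, 15),  # Internal Logical Files
    "EIF": (5, 7, 10),   # External Interface Files
}
LEVEL = {"low": 0, "average": 1, "high": 2}

def unadjusted_fp(counts):
    """counts maps (component_type, complexity) -> how many were found,
    e.g. {("EI", "low"): 5}. Returns the unadjusted function point total."""
    return sum(WEIGHTS[ctype][LEVEL[level]] * n
               for (ctype, level), n in counts.items())

total = unadjusted_fp({
    ("EI", "low"): 5,       # 5 * 3  = 15
    ("EO", "average"): 4,   # 4 * 5  = 20
    ("EQ", "high"): 2,      # 2 * 6  = 12
    ("ILF", "average"): 3,  # 3 * 10 = 30
    ("EIF", "low"): 1,      # 1 * 5  = 5
})
print(total)  # 82
```

The general system characteristics mentioned earlier would then adjust this unadjusted total; that adjustment step is not shown here.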

Summary of benefits of Function Point Analysis

Function Points can be used to size software applications accurately. Sizing is an important component in determining productivity (outputs/inputs).
They can be counted by different people, at different times, to obtain the same measure within a reasonable margin of error.
Function Points are easily understood by the non technical user. This helps communicate sizing information to a user or customer.
Function Points can be used to determine whether a tool, a language, an environment, is more productive when compared with others.

Wednesday, June 10, 2009

Introduction to Rational

Introduction To Rational

1) Requisite Pro
2) Rose
3) Purify, Quantify, Pure Coverage
4) Robot
5) ClearCase
6) ClearQuest
7) SoDA

1) Requisite Pro

- RequisitePro lets you organize, prioritize, trace relationships, and easily track changes to your requirements.
- RequisitePro combines both document-centric and database-centric approaches by deeply integrating Microsoft Word with a multi-user database.
- The program's unique architecture and dynamic links make it possible for you to move easily between the requirements in the database and their presentation in Word documents.

Why Use RequisitePro?

1.1 Team Collaboration and User Satisfaction

- A product development team typically includes a large number of individuals with diverse roles, such as business analysts, project leaders, product marketing managers, development managers, QA managers, developers, and testers.
- When each person on your team has access to critical requirements information and is able to manage those requirements, team efficiency and effectiveness are promoted and project risk is reduced.

1.2 Flexibility Through the Web Component

- RequisiteWeb offers a Web-based client for RequisitePro.
- RequisiteWeb allows users to access RequisitePro requirements information across an intranet. By using browsers (Netscape Navigator or Microsoft Internet Explorer), RequisiteWeb provides a thin-client solution to access project documents and data. No Rational application-specific files need to be installed on the user's machine.
- Using RequisiteWeb, you can modify the name, text, and attributes of requirements directly in the database and from within documents, and you can create, delete, and query requirements and assign new parents to them.
- The following are the main features of RequisiteWeb:
  - Viewing documents
  - Modifying requirements (in documents or the database)
  - Creating requirements in the database
  - Setting your own user password
  - Viewing, modifying, and creating hierarchical relationships
  - Creating traceability links within and across projects
  - Filtering and sorting requirements
  - Creating and replying to discussions

1.3 Change Management

- Change occurs in practically every development project, but it does not have to consume project resources or throw the project off course.
- RequisitePro enables you to establish and maintain dependencies between different requirements. As change occurs, these traceability relationships are flagged as suspect, so you can understand how change affects the entire project.
- For each requirement a change history is maintained, capturing the who, what, when, and why of the change.

1.4 Comprehensive Process Support

- RequisitePro can help you meet your objectives of delivering precise, quality software. RequisitePro provides industry-standard project templates and attributes.
- It can also import existing documents and be customized to support existing projects.
- Whether your team follows a rigorous requirements management process, such as IEEE, SEI CMM, or Unified Modeling Language-driven use-case approaches, or is just beginning to define a formal process, RequisitePro can help you meet your objectives.

A Quick Tour of Key Concepts in RequisitePro

This section gives an overview of RequisitePro concepts and defines some terms that will help you get started:

- Requirements
- Requirement Type
- Requirement Attributes
- Project
- Project Database
- Project Version Control
- Project List
- Explorer
- Views
- Documents
- Document Type
- Hierarchical Relationships
- Traceability Relationships
- Suspect Relationships

2) Rose

- Rational Rose provides support for two essential elements of modern software engineering: component-based development and controlled iterative development.
- Rational Rose's model-diagram architecture facilitates use of the Unified Modeling Language (UML), Component Object Modeling (COM), Object Modeling Technique (OMT), and the Booch '93 method for visual modeling.
- Visual modeling is the mapping of real-world processes of a system to a graphical representation. Models are useful for understanding problems, communicating with everyone involved with the project (customers, domain experts, analysts, designers, etc.), and modeling complex systems.
- As software systems become more complex, we cannot understand them in their entirety. To effectively build a complex system, the developer begins by looking at the big picture without getting caught up in the details. A model is an ideal way to portray the abstractions of a complex problem by filtering out nonessential details.
- Diagrams:
  - Use-Case Diagrams
  - Sequence Diagrams
  - Class Diagrams
  - State Transition Diagrams
  - Component Diagrams
  - Deployment Diagrams

- Rational Rose is the visual modeling software solution that lets you create, analyze, design, view, modify, and manipulate components.
- Rational Rose provides the collaboration diagram as an alternative to a sequence diagram.
- Rational Rose provides the following features to facilitate the analysis, design, and iterative construction of your applications:
  - Use-Case Analysis
  - Object-Oriented Modeling
  - User-Configurable Support for UML, COM, OMT, and Booch '93
  - Semantic Checking
  - Support for Controlled Iterative Development
  - Round-Trip Engineering
  - Parallel Multiuser Development Through Repository and Private Support
  - Integration with Data Modeling Tools
  - Documentation Generation
  - Rational Rose Scripting for Integration and Extensibility
  - OLE Linking
  - OLE Automation
  - Multiple Platform Availability

3) Purify, Quantify, Pure Coverage

- These are the Rational diagnostic tools for collecting diagnostic information during playback. Use them to:
  - Perform runtime error checking
  - Profile application performance
  - Analyze code coverage during playback

3.1 Rational Purify

- Automatically pinpoints runtime errors and memory leaks in all components of an application and ensures that code is reliable.

3.2 Rational Quantify

- Performance profiler
- Provides application performance analysis

3.3 Rational PureCoverage

- Code coverage analysis tool
- Provides detailed application analysis
- Prevents untested code from reaching the end user

4) Robot

- Rational Robot is a complete set of components for automating the testing of Microsoft Windows client/server and Internet applications running under Windows NT 4.0, Windows XP, Windows 2000, Windows 98, and Windows Me.
- The main component of Robot lets you start recording tests in as few as two mouse clicks. After recording, Robot plays back the tests in a fraction of the time it would take to repeat the actions manually.
- Use Robot to:
  - Perform full functional testing
  - Perform full performance testing
  - Create and edit scripts using the SQABasic and VU scripting environments
  - Test applications developed with IDEs
  - Collect diagnostic information about an application during script playback
- The Object-Oriented Recording technology in Robot lets you generate scripts by simply running and using the application-under-test.

5) ClearCase

- Central repository for storing all artifacts
- Maintains software version control
- Allows developers to work in parallel by giving them individual workspaces
- Integrates all changed code into baselines
- Ability to track changes
- Manages the supporting files for each script

6) ClearQuest

- A change-request management tool
- Tracks and manages defects and change requests throughout the development process
- Manages every type of change activity associated with software development
- Submits defects directly from the Test Manager log or Site Check
- Modifies and tracks defects and change requests
- Analyzes project progress by running queries, charts, and reports

7) SoDA

- Use SoDA to create reports that extract information from one or more tools in the Rational Suite. For example, you can use SoDA to retrieve information from different information sources, such as Test Manager, to create documents or reports.
- Reports:
  - Analysis Documents
  - Design Documents
  - Test Documents
  - Status Reports

Tuesday, June 9, 2009

WinRunner FAQs - II

70) What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax?
a. You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.
b. During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.
Syntax:
obj_wait_bitmap ( object, image, time );
win_wait_bitmap ( window, image, time );


71) What do you verify with the sync point for screen area and what command it generates, explain syntax?
a. For screen area verification we actually capture the screen area into a bitmap and verify the application screen area with the bitmap file during execution
Syntax: obj_wait_bitmap(object, image, time, x, y, width, height);

72) How do you edit checklist file and when do you need to edit the checklist file?
a. WinRunner has an edit checklist file option under the create menu. Select the “Edit GUI Checklist” to modify GUI checklist file and “Edit Database Checklist” to edit database checklist file. This brings up a dialog box that gives you option to select the checklist file to modify. There is also an option to select the scope of the checklist file, whether it is Test specific or a shared one. Select the checklist file, click OK which opens up the window to edit the properties of the objects.

73) How do you edit the expected value of an object?
a. We can modify the expected value of the object by executing the script in the Update mode. We can also manually edit the gui*.chk file which contains the expected values which come under the exp folder to change the values.

74) How do you modify the expected results of a GUI checkpoint?
a. We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in the update mode.

75) How do you handle ActiveX and Visual basic objects?
a. WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; they provide a set of functions to work on ActiveX and VB objects.

76) How do you create ODBC query?
a. We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the database. The SQL file will contain the connection string and the SQL statement.

77) How do you record a data driven test?
a. We can create a data-driven testing using data from a flat file, data table or a database.
i. Using a flat file: we store the data to be used in a required format in the file. We access the file using the file-manipulation commands, read data from the file, and assign the data to variables.
ii. Data Table: It is an excel file. We can store test data in these files and manipulate them. We use the ‘ddt_*’ functions to manipulate data in the data table.
iii. Database: we store test data in the database and access these data using ‘db_*’ functions.


78) How do you convert a database file to a text file?
a. You can use Data Junction to create a conversion file which converts a database to a target text file.

79) How do you parameterize database check points?
a. When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes.

80) How do you create parameterize SQL commands?
a. A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application:

i. SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)

SELECT defines the columns to include in the query.
FROM specifies the path of the database.
WHERE (optional) specifies the conditions, or filters to use in the query.
Departure is the parameter that represents the departure point of a flight.
Day_Of_Week is the parameter that represents the day of the week of a flight.

b. When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the db_check function has a fourth, optional, argument: the parameter_array argument. A statement similar to the following is inserted into your test script:

db_check("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);

The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint.


81) Explain the following commands:
a. db_connect
i. to connect to a database
db_connect ( session_name, connection_string );

b. db_execute_query
i. to execute a query
db_execute_query ( session_name, SQL, record_number );

record_number is an output parameter that receives the number of records retrieved by the query.

c. db_get_field_value
i. returns the value of a single field in the specified row_index and column in the session_name database session.

db_get_field_value ( session_name, row_index, column );

d. db_get_headers
i. returns the number of column headers in a query and the content of the column headers, concatenated and delimited by tabs.

db_get_headers ( session_name, header_count, header_content );

e. db_get_row
i. returns the content of the row, concatenated and delimited by tabs.

db_get_row ( session_name, row_index, row_content );
f. db_write_records
i. writes the record set into a text file delimited by tabs.

db_write_records ( session_name, output_file [ , headers [ , record_limit ] ] );

g. db_get_last_error
i. returns the last error message of the last ODBC or Data Junction operation in the session_name database session.

db_get_last_error ( session_name, error );

h. db_disconnect
i. disconnects from the database and ends the database session.

db_disconnect ( session_name );

i. db_dj_convert
i. runs the djs_file Data Junction export file. When you run this file, the Data Junction Engine converts data from one spoke (source) to another (target). The optional parameters enable you to override the settings in the Data Junction export file.

db_dj_convert ( djs_file [ , output_file [ , headers [ , record_limit ] ] ] );
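
Taken together, a minimal database session might look like the following sketch. The session name, DSN, and query are placeholders, not WinRunner defaults:

# connect, run a query, read each row, then disconnect (names are illustrative)
db_connect("session1", "DSN=flight32");
db_execute_query("session1", "SELECT * FROM Flights", rec_num);
for (i = 1; i <= rec_num; i++)     # assuming rows are numbered from 1
{
    db_get_row("session1", i, row);    # row content, tab-delimited
    report_msg(row);
}
db_disconnect("session1");
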
82) Which checkpoints will you use to read and check text on the GUI, and what is their syntax?
a. You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text.

b. You can use a text checkpoint to:
i. Read text from a GUI object or window in your application, using obj_get_text and win_get_text
ii. Search for text in an object or window, using win_find_text and obj_find_text
iii. Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text
iv. Click on text in an object or window, using obj_click_on_text and win_click_on_text

83) Explain Get Text checkpoint from object/window with syntax?
a. We use the obj_get_text ( object, out_text [, x1, y1, x2, y2] ) function to get the text from an object.
b. We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.
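
As a sketch, reading and verifying the text of a hypothetical status field could look like this ("Status:" is an illustrative logical name):

obj_get_text("Status:", text);
if (text != "Ready")
    report_msg("Unexpected status: " & text);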

84) Explain Get Text checkpoint from screen area with syntax?
a. We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.

85) Explain Get Text checkpoint from selection (web only) with syntax?
a. Returns a text string from an object.

web_obj_get_text (object, table_row, table_column, out_text [, text_before, text_after, index]);

i. object The logical name of the object.
ii. table_row If the object is a table, it specifies the location of the row within a table. The string is preceded by the # character.
iii. table_column If the object is a table, it specifies the location of the column within a table. The string is preceded by the # character.
iv. out_text The output variable that stores the text string.
v. text_before Defines the start of the search area for a particular text string.
vi. text_after Defines the end of the search area for a particular text string.
vii. index The occurrence number to locate. (The default is 1.)

86) Explain Get Text checkpoint web text checkpoint with syntax?
a. We use web_obj_text_exists function for web text checkpoints.

web_obj_text_exists ( object, table_row, table_column, text_to_find [, text_before, text_after] );

a. object The logical name of the object to search.
b. table_row If the object is a table, it specifies the location of the row within a table. The string is preceded by the character #.
c. table_column If the object is a table, it specifies the location of the column within a table. The string is preceded by the character #.
d. text_to_find The string that is searched for.
e. text_before Defines the start of the search area for a particular text string.
f. text_after Defines the end of the search area for a particular text string.
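
For example, to verify that a hypothetical flights table shows a destination in row 2, column 3 (the object name and text are illustrative; check the return value against E_OK in practice):

rc = web_obj_text_exists("FlightsTable", "#2", "#3", "New York");
if (rc != E_OK)
    report_msg("Expected text not found in the table.");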

87) Which TSL functions will you use for
a. Searching text on the window
i. find_text ( string, out_coord_array, search_area [, string_def ] );

string The string that is searched for. The string must be complete, contain no spaces, and it must be preceded and followed by a space outside the quotation marks. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. In this case, the string variable can include a regular expression.

out_coord_array The name of the array that stores the screen coordinates of the text (see explanation below).

search_area The area to search, specified as coordinates x1,y1,x2,y2. These define any two diagonal corners of a rectangle. The interpreter searches for the text in the area defined by the rectangle.

string_def Defines the type of search to perform. If no value is specified, (0 or FALSE, the default), the search is for a single complete word only. When 1, or TRUE, is specified, the search is not restricted to a single, complete word.

b. getting the location of the text string
i. win_find_text ( window, string, result_array [, search_area [, string_def ] ] );


window The logical name of the window to search.

string The text to locate. To specify a literal, case sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression. The regular expression should not include an exclamation mark (!), however, which is treated as a literal character. For more information regarding Regular Expressions, refer to the "Using Regular Expressions" chapter in your User's Guide.

result_array The name of the output variable that stores the location of the string as a four-element array.

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1,y1,x2,y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window is considered the search area.

string_def Defines how the text search is performed. If no string_def is specified, (0 or FALSE, the default parameter), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.

c. Moving the pointer to that text string
i. win_move_locator_text (window, string [ ,search_area [ ,string_def ] ] );

window The logical name of the window.

string The text to locate. To specify a literal, case sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression (the regular expression need not begin with an exclamation mark).

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1, y1, x2, y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window specified is considered the search area.
string_def Defines how the text search is performed. If no string_def is specified, (0 or FALSE, the default parameter), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.

d. Comparing the text
i. compare_text (str1, str2 [, chars1, chars2]);

str1, str2 The two strings to be compared.

chars1 One or more characters in the first string.

chars2 One or more characters in the second string. These characters are substituted for those in chars1.
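
A sketch combining these functions; the window, strings, and substitution characters are illustrative:

# locate a string inside the Open window, then compare two spellings
win_find_text("Open", "UI_TEST", coords);
compare_text("file-name", "file_name", "-", "_");   # substitutes "_" for "-" before comparing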

88) What are the steps of creating a data driven test?
a. The steps involved in data driven testing are:
i. Creating a test
ii. Converting to a data-driven test and preparing a database
iii. Running the test
iv. Analyzing the test results.

89) How do you record a data-driven test script using the DataDriver Wizard?
a. You can use the DataDriver Wizard to convert your entire script or a part of your script into a data-driven test. For example, your test script may include recorded operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data. You need to parameterize only the portion of your test script that you want to run in a loop with multiple sets of data.

To create a data-driven test:
i. If you want to turn only part of your test script into a data-driven test, first select those lines in the test script.
ii. Choose Tools > DataDriver Wizard.
iii. If you want to turn only part of the test into a data-driven test, click Cancel. Select those lines in the test script and reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven test, click Next.
iv. The Use a new or existing Excel table box displays the name of the Excel file that WinRunner creates to store the data for the data-driven test. Accept the default data table for this test, enter a different name for the data table, or use the browse button to locate the path of an existing data table. By default, the data table is stored in the test folder.
v. In the Assign a name to the variable box, enter a variable name with which to refer to the data table, or accept the default name, “table.”
vi. At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name is used. This makes it easy for you to assign a different data table to the script at a later time without making changes throughout the script.
vii. Choose from among the following options:
1. Add statements to create a data-driven test: Automatically adds statements to run your test in a loop: sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to your test script selection to run it in a loop while it reads from the data table; and adds ddt_open and ddt_close statements to your test script to open and close the data table, which are necessary in order to iterate over the rows in the table. Note that you can also add these statements to your test script manually. If you do not choose this option, you will receive a warning that your data-driven test must contain a loop and statements to open and close your data table.
2. Import data from a database: Imports data from a database. This option adds ddt_update_from_db and ddt_save statements to your test script after the ddt_open statement. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Data Junction is not automatically included in your WinRunner package; to purchase it, contact your Mercury Interactive representative. For detailed information on working with Data Junction, refer to the documentation in the Data Junction package.
3. Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and adds columns with variable values for the parameters to the data table. Line by line: Opens a wizard screen for each line of the selected test script, which enables you to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterizing data.
4. Automatically: Replaces all data with ddt_val statements and adds new columns to the data table. The first argument of the function is the name of the column in the data table. The replaced data is inserted into the table.

viii. The Test script line to parameterize box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The Argument to be replaced box displays the argument (value) that you can replace with a parameter. You can use the arrows to select a different argument to replace. Choose whether and how to replace the selected data:
1. Do not replace this data: Does not parameterize this data.
2. An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.
3. A new column: Creates a new column for this parameter in the data table for this test, and adds the selected data to this column. The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or assign a new name.

ix. The final screen of the wizard opens.
1. If you want the data table to open after you close the wizard, select Show data table now.
2. To perform the tasks specified in previous screens and close the wizard, click Finish.
3. To close the wizard without making any changes to the test script, click Cancel.
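
The script the wizard generates typically resembles the following loop; the table name and column name are illustrative:

table = "default.xls";
rc = ddt_open(table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open the data table.");
ddt_get_row_count(table, row_count);
for (row = 1; row <= row_count; row++)
{
    ddt_set_row(table, row);
    edit_set("Name:", ddt_val(table, "Name"));   # "Name" is an illustrative column
}
ddt_close(table);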

90) What are the three modes of running the scripts?
a. WinRunner provides three modes in which to run tests—Verify, Debug, and Update. You use each mode during a different phase of the testing process.
i. Verify
1. Use the Verify mode to check your application.
ii. Debug
1. Use the Debug mode to help you identify bugs in a test script.
iii. Update
1. Use the Update mode to update the expected results of a test or to create a new expected results folder.
91) Explain the following TSL functions:
a. Ddt_open
i. Creates or opens a datatable file so that WinRunner can access it.
Syntax: ddt_open ( data_table_name, mode );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

mode The mode for opening the data table: DDT_MODE_READ (read-only) or DDT_MODE_READWRITE (read or write).

b. Ddt_save
i. Saves the information into a data file.
Syntax: ddt_save ( data_table_name );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table.

c. Ddt_close
i. Closes a data table file
Syntax: ddt_close ( data_table_name );

data_table_name The name of the data table. The data table is a Microsoft Excel file or a tabbed text file. The first row in the file contains the names of the parameters.

d. Ddt_export
i. Exports the information of one data table file into a different data table file.
Syntax: ddt_export ( data_table_name1, data_table_name2 );

data_table_name1 The source data table filename.
data_table_name2 The destination data table filename.

e. Ddt_show
i. Shows or hides the table editor of a specified data table.
Syntax: ddt_show (data_table_name [, show_flag]);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table.

show_flag The value indicating whether the editor should be shown (default=1) or hidden (0).

f. Ddt_get_row_count
i. Retrieves the number of rows in a data table.
Syntax: ddt_get_row_count (data_table_name, out_rows_count);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

out_rows_count The output variable that stores the total number of rows in the data table.

g. ddt_next_row
i. Changes the active row in a data table to the next row.
Syntax: ddt_next_row (data_table_name);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

h. ddt_set_row
i. Sets the active row in a data table.
Syntax: ddt_set_row (data_table_name, row);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row The new active row in the data table.

i. ddt_set_val
i. Sets a value in the current row of the data table
Syntax: ddt_set_val (data_table_name, parameter, value);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.
parameter The name of the column into which the value will be inserted.
value The value to be written into the table.


j. ddt_set_val_by_row
i. Sets a value in a specified row of the data table.
Syntax: ddt_set_val_by_row (data_table_name, row, parameter, value);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row The row number in the table. It can be any existing row or the current row number plus 1, which will add a new row to the data table.

parameter The name of the column into which the value will be inserted.

value The value to be written into the table.

k. ddt_get_current_row
i. Retrieves the active row of a data table.
Syntax: ddt_get_current_row ( data_table_name, out_row );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

out_row The output variable that stores the active row in the data table.

l. ddt_is_parameter
i. Returns whether a parameter in a data table is valid.
Syntax: ddt_is_parameter (data_table_name, parameter);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

parameter The parameter name to check in the data table.

m. ddt_get_parameters
i. Returns a list of all parameters in a data table.
Syntax: ddt_get_parameters ( table, params_list, params_num );

table The pathname of the data table.
params_list This out parameter returns the list of all parameters in the data table, separated by tabs.
params_num This out parameter returns the number of parameters in params_list.

n. ddt_val
i. Returns the value of a parameter in the active row in a data table.
Syntax: ddt_val (data_table_name, parameter);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

parameter The name of the parameter in the data table.

o. ddt_val_by_row
i. Returns the value of a parameter in the specified row in a data table.
Syntax: ddt_val_by_row ( data_table_name, row_number, parameter );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row_number The number of the row in the data table.

parameter The name of the parameter in the data table.

p. ddt_report_row
i. Reports the active row in a data table to the test results
Syntax: ddt_report_row (data_table_name);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

q. ddt_update_from_db
i. imports data from a database into a data table. It is inserted into your test script when you select the Import data from a database option in the DataDriver Wizard. When you run your test, this function updates the data table with data from the database.


92) How do you handle unexpected events and errors?
a. WinRunner uses exception handling to detect an unexpected event when it occurs and act to recover the test run.








WinRunner enables you to handle the following types of exceptions:

Pop-up exceptions: Instruct WinRunner to detect and handle the appearance of a specific window.

TSL exceptions: Instruct WinRunner to detect and handle TSL functions that return a specific error code.

Object exceptions: Instruct WinRunner to detect and handle a change in a property for a specific GUI object.

Web exceptions: When the WebTest add-in is loaded, you can instruct WinRunner to handle unexpected events and errors that occur in your Web site during a test run.

93) How do you handle pop-up exceptions?
a. A pop-up exception handler handles the pop-up messages that come up during the execution of the script in the AUT. To handle this type of exception, we make WinRunner learn the window and also specify a handler for the exception. The handler can be either:
i. Default actions: WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box.
ii. User-defined handler: If you prefer, specify the name of your own handler. Click User Defined Function Name and type in a name in the User Defined Function Name box.
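
A user-defined handler is an ordinary TSL function that receives the window name. This sketch, with an illustrative function name, simply dismisses the pop-up:

public function close_error_popup(in win)
{
    set_window(win, 1);     # bring the pop-up into focus
    button_press("OK");     # dismiss it
}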

94) How do you handle TSL exceptions?
a. A TSL exception enables you to detect and respond to a specific error code returned during test execution.
b. Suppose you are running a batch test on an unstable version of your application. If your application crashes, you want WinRunner to recover test execution. A TSL exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch.
c. The handler function is responsible for recovering test execution. When WinRunner detects a specific error code, it calls the handler function. You implement this function to respond to the unexpected error in the way that meets your specific testing needs.
d. Once you have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box. Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file.

95) How do you handle object exceptions?
a. During testing, unexpected changes can occur to GUI objects in the application you are testing. These changes are often subtle but they can disrupt the test run and distort results.
b. You could use exception handling to detect a change in property of the GUI object during the test run, and to recover test execution by calling a handler function and continue with the test execution

96) How do you comment your script?
a. We comment a script or line of script by inserting a ‘#’ at the beginning of the line.

97) What is a compile module?
a. A compiled module is a script containing a library of user-defined functions that you want to call frequently from other tests. When you load a compiled module, its functions are automatically compiled and remain in memory. You can call them directly from within any test.
b. Compiled modules can improve the organization and performance of your tests. Since you debug compiled modules before using them, your tests will require less error-checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.

98) What is the difference between script and compile module?
a. A test script is executable in WinRunner, while a compiled module is used to store reusable functions. Compiled modules are not executable on their own.
b. WinRunner performs a pre-compilation automatically when it saves a module assigned a property value of “Compiled Module”.
c. By default, modules containing TSL code have a property value of "main". Main modules are called for execution from within other modules. Main modules are dynamically compiled into machine code only when WinRunner recognizes a "call" statement. Example of a call for the "app_init" script:

call cso_init();
call( "C:\\MyAppFolder\\" & "app_init" );
d. Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement:

reload (“C:\\MyAppFolder\\" & "flt_lib");
or
load ("C:\\MyAppFolder\\" & "flt_lib");

99) Write and explain the various loop commands.
a. A for loop instructs WinRunner to execute one or more statements a specified number of times.

It has the following syntax:

for ( [ expression1 ]; [ expression2 ]; [ expression3 ] )
statement

i. First, expression1 is executed. Next, expression2 is evaluated. If expression2 is true, statement is executed and expression3 is executed. The cycle is repeated as long as expression2 remains true. If expression2 is false, the for statement terminates and execution passes to the first statement immediately following.
ii. For example, the for loop below selects the file UI_TEST from the File Name list in the Open window. It selects this file five times and then stops.

set_window ("Open");
for (i = 0; i < 5; i++)
    list_select_item ("File Name:", "UI_TEST");

109) How do you define the search path for a called test?
a. Choose Settings > General Options. The General Options dialog box opens. Click the Folders tab and choose a search path in the Search Path for Called Tests box. WinRunner searches the directories in the order in which they are listed in the box. Note that the search paths you define remain active in future testing sessions.

110) How do you create user-defined functions? Explain the syntax.
a. A user-defined function has the following structure:

[class] function name ([mode] parameter...)
{
declarations;
statements;
}

b. The class of a function can be either static or public. A static function is available only to the test or module within which the function was defined.
c. Parameters need not be explicitly declared. They can be of mode in, out, or inout. For all non-array parameters, the default mode is in. For array parameters, the default is inout. The significance of each of these parameter types is as follows:

in: A parameter that is assigned a value from outside the function.
out: A parameter that is assigned a value from inside the function.
inout: A parameter that can be assigned a value from outside or inside the function.
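
A minimal sketch of a user-defined function using these modes; the function name and error code are illustrative:

public function safe_div(in a, in b, out result)
{
    if (b == 0)
        return -1;      # illustrative failure code
    result = a / b;
    return 0;           # success
}

For example, rc = safe_div(10, 2, q); sets rc to 0 and q to 5.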

111) What do the static and public classes of a function mean?
a. The class of a function can be either static or public.
b. A static function is available only to the test or module within which the function was defined.
c. Once you execute a public function, it is available to all tests, for as long as the test containing the function remains open. This is convenient when you want the function to be accessible from called tests. However, if you want to create a function that will be available to many tests, you should place it in a compiled module. The functions in a compiled module are available for the duration of the testing session.
d. If no class is explicitly declared, the function is assigned the default class, public.

112) What do in, out, and inout parameters mean?
a. in: A parameter that is assigned a value from outside the function.
b. out: A parameter that is assigned a value from inside the function.
c. inout: A parameter that can be assigned a value from outside or inside the function.
113) What is the purpose of the return statement?
a. This statement passes control back to the calling function or test. It also returns the value of the evaluated expression to the calling function or test. If no expression is assigned to the return statement, an empty string is returned.
Syntax: return [( expression )];

114) What do auto, static, public, and extern variables mean?
a. auto: An auto variable can be declared only within a function and is local to that function. It exists only for as long as the function is running. A new copy of the variable is created each time the function is called.
b. static: A static variable is local to the function, test, or compiled module in which it is declared. The variable retains its value until the test is terminated by an Abort command. This variable is initialized each time the definition of the function is executed.
c. public: A public variable can be declared only within a test or module, and is available for all functions, tests, and compiled modules.
d. extern: An extern declaration indicates a reference to a public variable declared outside of the current test or module.
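
A short illustration of the four classes; the module layout and names are hypothetical:

# in a compiled module:
public g_app_path = "C:\\MyApp";   # visible to all tests once the module is loaded
static call_count = 0;             # private to this module, retains its value

function count_calls()
{
    auto tmp;                      # recreated on every call, discarded on return
    call_count++;
}

# in a test that uses the module:
extern g_app_path;                 # reference the public variable declared above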

115) How do you declare constants?
a. The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public or static. If no class is explicitly declared, the constant is assigned the default class public. Once a constant is defined, it remains in existence until you exit WinRunner.
b. The syntax of this declaration is:
[class] const name [= expression];
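
For example (the names are illustrative):

const TIMEOUT = 10;                           # public by default
static const APP_NAME = "Flight Reservation"; # visible only in this test or module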

116) How do you declare arrays?
a. The following syntax is used to define the class and the initial expression of an array. Array size need not be defined in TSL.
b. class array_name [ ] [=init_expression]
c. The array class may be any of the classes used for variable declarations (auto, static, public, extern).
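
For example (the names are illustrative; TSL arrays are associative, so no size is given):

static colors[] = {"red", "green", "blue"};  # initializer assigns sequential indices
public cells[];                              # elements such as cells[1,2] can be assigned later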

117) How do you load and unload a compile module?
a. In order to access the functions in a compiled module you need to load the module. You can load it from within any test script using the load command; all tests will then be able to access the function until you quit WinRunner or unload the compiled module.
b. You can load a module either as a system module or as a user module. A system module is generally a closed module that is “invisible” to the tester. It is not displayed when it is loaded, cannot be stepped into, and is not stopped by a pause command. A system module is not unloaded when you execute an unload statement with no parameters (global unload).

load ( module_name [, 1|0 ] [, 1|0 ] );

The module_name is the name of an existing compiled module.

Two additional, optional parameters indicate the type of module. The first parameter indicates whether the function module is a system module or a user module: 1 indicates a system module; 0 indicates a user module.

(Default = 0)

The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain open.
(Default = 0)
c. The unload function removes a loaded module or selected functions from memory.
d. It has the following syntax:
unload ( [ module_name | test_name [ , "function_name" ] ] );
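
For example, with an illustrative module path:

load("C:\\MyLib\\str_utils", 0, 1);   # user module (0), close its window after loading (1)
# ... call functions defined in str_utils ...
unload("C:\\MyLib\\str_utils");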

118) Why you use reload function?
a. If you make changes in a module, you should reload it. The reload function removes a loaded module from memory and reloads it (combining the functions of unload and load).
The syntax of the reload function is:
reload ( module_name [, 1|0 ] [, 1|0 ] );

The module_name is the name of an existing compiled module.

Two additional optional parameters indicate the type of module. The first parameter indicates whether the module is a system module or a user module: 1 indicates a system module; 0 indicates a user module.
(Default = 0)
The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded. 1 indicates that the module will close automatically. 0 indicates that the module will remain open.
(Default = 0)

119) Write and explain a compiled module.
120) How do you call a function from external libraries (dll).
121) What is the purpose of load_dll?
122) How do you load and unload external libraries?
123) How do you declare external functions in TSL?
124) How do you call windows APIs, explain with an example?
125) Write TSL functions for the following interactive modes:
i. Creating a dialog box with any message you specify, and an edit field.
ii. Create dialog box with list of items and message.
iii. Create dialog box with edit field, check box, and execute button, and a cancel button.
iv. Creating a browse dialog box from which user selects a file.
v. Create a dialog box with two edit fields, one for login and another for password input.
126) What is the purpose of step, step into, step out, step to cursor commands for debugging your script?
127) How do you update your expected results?
128) How do you run your script with multiple sets of expected results?
129) How do you view and evaluate test results for various check points?
130) How do you view the results of file comparison?
131) What is the purpose of Wdiff utility?
132) What are batch tests, and how do you create and run them?
133) How do you store and view batch test results?
134) How do you execute your tests from windows run command?
135) Explain different command line options?
136) What TSL function you will use to pause your script?
137) What is the purpose of setting a break point?
138) What is a watch list?
139) During debugging how do you monitor the value of the variables?