Wednesday, April 29, 2009

Aptitude Questions Answers Vol - 1

Q. If 2x-y=4 then 6x-3y=?
(a)15
(b)12
(c)18
(d)10
Ans. (b)
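Working: 6x - 3y = 3(2x - y) = 3 × 4 = 12, which is option (b).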


Q. If x=y=2z and xyz=256 then what is the value of x?
(a)12
(b)8
(c)16
(d)6
Ans. (b)
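Working: with x = y and z = x/2, xyz = x · x · x/2 = x³/2 = 256, so x³ = 512 and x = 8, which is option (b).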


Q. (1/10)^18 - (1/10)^20 = ?
(a) 99/10^20
(b) 99/10
(c) 0.9
(d) none of these
Ans. (a)
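Working: (1/10)^18 - (1/10)^20 = (1/10)^18 × (1 - 1/100) = 99/10^20, which is option (a).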


Q. Pipe A can fill a tank in 20 minutes and Pipe B in 30 minutes, and Pipe C can empty the same tank in 40 minutes. If all of them work together, find the time taken to fill the tank.
(a) 17 1/7 mins
(b) 20 mins
(c) 8 mins
(d) none of these
Ans. (a)
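Working: in one minute the pipes fill 1/20 + 1/30 - 1/40 = (6 + 4 - 3)/120 = 7/120 of the tank, so the tank fills in 120/7 = 17 1/7 minutes, which is option (a).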


Q. Thirty men take 20 days to complete a job working 9 hours a day. How many hours a day should 40 men work to complete the job?
(a) 8 hrs
(b) 7 1/2 hrs
(c) 7 hrs
(d) 9 hrs
Ans. (b)


Q. Find the smallest number in a GP whose sum is 38 and product 1728
(a) 12
(b) 20
(c) 8
(d) none of these
Ans. (c)
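Working: for three terms a/r, a, ar the product is a³ = 1728, so a = 12; the sum 12(1/r + 1 + r) = 38 gives r = 3/2 (or 2/3), so the terms are 8, 12 and 18 and the smallest is 8, which is option (c).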


Q. A boat travels 20 kms upstream in 6 hrs and 18 kms downstream in 4 hrs. Find the speed of the boat in still water and the speed of the water current?
(a) 1/2 kmph
(b) 7/12 kmph
(c) 5 kmph
(d) none of these
Ans. (b)
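Working: upstream speed = 20/6 = 10/3 kmph and downstream speed = 18/4 = 9/2 kmph, so the speed of the current = (9/2 - 10/3)/2 = 7/12 kmph, which is option (b). (The speed of the boat in still water is (9/2 + 10/3)/2 = 47/12 kmph.)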


Q. A goat is tied to one corner of a square plot of side 12m by a rope 7m long. Find the area it can graze?
(a) 38.5 sq.m
(b) 155 sq.m
(c) 144 sq.m
(d) 19.25 sq.m
Ans. (a)
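Working: the goat grazes a quarter circle of radius 7 m, so the area = (1/4) × (22/7) × 7² = 38.5 sq.m, which is option (a).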


Q. Mr. Shah decided to walk down the escalator of a tube station. He found that if he walks down 26 steps, he requires 30 seconds to reach the bottom. However, if he steps down 34 stairs he would only require 18 seconds to get to the bottom. If the time is measured from the moment the top step begins to descend to the time he steps off the last step at the bottom, find out the height of the stairway in steps.
Ans. 46 steps.
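Working: let the escalator move e steps per second and the stairway have N steps; then N = 26 + 30e = 34 + 18e, so 12e = 8, e = 2/3 and N = 26 + 30 × 2/3 = 46.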


Q. The average age of 10 members of a committee is the same as it was 4 years ago, because an old member has been replaced by a young member. Find how much younger the new member is.
Ans. 40 years.
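Working: the total age of the 10 members is unchanged even though 4 years have passed, so the replacement removed 10 × 4 = 40 years; the new member is 40 years younger than the old one.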

Monday, April 27, 2009

Organisational Approaches for Unit Testing

Introduction

Unit testing is the testing of individual components (units) of the software. Unit testing is
usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases. The basic units of design and code in Ada, C and C++ programs are individual subprograms (procedures, functions, member functions). Ada and C++ provide capabilities for grouping basic units together into packages (Ada) and classes (C++).
Unit testing for Ada and C++ usually tests units in the context of the containing package or class.
When developing a strategy for unit testing, there are three basic organisational approaches that can be taken. These are top down, bottom up and isolation. These three approaches are described and their advantages and disadvantages discussed in sections 2, 3, and 4 of this paper.
The concepts of test drivers and stubs are used throughout this paper. A test driver is software which executes software in order to test it, providing a framework for setting input parameters, executing the unit, and reading the output parameters. A stub is an imitation of a unit, used in place of the real unit to facilitate testing.
An AdaTEST or Cantata test script comprises a test driver and an (optional) collection of stubs. Using AdaTEST or Cantata to implement the organisational approaches to unit testing presented in this paper is discussed in section 5.
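To make the idea concrete, here is a minimal sketch in C (hypothetical unit names and signatures, and a plain assert-based driver rather than an AdaTEST or Cantata script): the unit calculate_fee() calls a lower level unit get_rate_percent(), which is replaced by a stub so that calculate_fee() can be tested in isolation.

#include <assert.h>
#include <stdio.h>

/* Lower level unit, declared here and replaced below by a stub. */
int get_rate_percent(int customer_type);

/* Unit under test (normally compiled from its own source file). */
int calculate_fee(int customer_type, int amount)
{
    return (amount * get_rate_percent(customer_type)) / 100;
}

/* Stub: an imitation of the real get_rate_percent(), returning
   canned values so calculate_fee() can be tested in isolation. */
int get_rate_percent(int customer_type)
{
    return (customer_type == 1) ? 10 : 20;
}

/* Test driver: sets input parameters, executes the unit under test,
   and checks the outputs. */
int main(void)
{
    assert(calculate_fee(1, 200) == 20);
    assert(calculate_fee(2, 200) == 40);
    printf("calculate_fee: tests passed\n");
    return 0;
}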

2. Top Down Testing

2.1. Description
In top down unit testing, individual units are tested by using them from the units which call them, but in isolation from the units called. The unit at the top of a hierarchy is tested first, with all called units replaced by stubs. Testing continues by replacing the stubs with the actual called units, with lower level units being stubbed. This process is repeated until the lowest level units have been tested.
Top down testing requires test stubs, but not test drivers.
Figure 2.1 illustrates the test stubs and tested units needed to test unit D, assuming that units A, B and C have already been tested in a top down approach. A unit test plan for the program shown in figure 2.1, using a strategy based on the top down organisational approach, could read as follows:
Step (1)
Test unit A, using stubs for units B, C and D.
Step (2)
Test unit B, by calling it from tested unit A, using stubs for units C and D.
Step (3)
Test unit C, by calling it from tested unit A, using tested units B and a stub for unit D.
Step (4)
Test unit D, by calling it from tested unit A, using tested units B and C, and stubs for units E, F and G. (Shown in figure 2.1).
Step (5)
Test unit E, by calling it from tested unit D, which is called from tested unit A, using tested units B and C, and stubs for units F, G, H, I and J.
Step (6)
Test unit F, by calling it from tested unit D, which is called from tested unit A, using tested units B, C and E, and stubs for units G, H, I and J.
Step (7)
Test unit G, by calling it from tested unit D, which is called from tested unit A, using tested units B, C, E and F, and stubs for units H, I and J.
Step (8)
Test unit H, by calling it from tested unit E, which is called from tested unit D, which is called from tested unit A, using tested units B, C, E, F and G, and stubs for units I and J.
Step (9)
Test unit I, by calling it from tested unit E, which is called from tested unit D, which is called from tested unit A, using tested units B, C, E, F, G and H, and a stub for unit J.
Step (10)
Test unit J, by calling it from tested unit E, which is called from tested unit D, which is called from tested unit A, using tested units B, C, E, F, G, H and I.
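A minimal sketch of step (4) in C (hypothetical unit names, with the units reduced to simple integer functions): units B and C stand in for already tested code, units E, F and G are stubs returning canned values, and unit D is exercised by calling it through the already tested unit A.

#include <assert.h>
#include <stdio.h>

/* Stubs for the units below D, returning canned values. */
int unit_e(int x) { (void)x; return 100; }
int unit_f(int x) { (void)x; return 10;  }
int unit_g(int x) { (void)x; return 1;   }

/* Already tested units. */
int unit_b(int x) { return x + 1; }
int unit_c(int x) { return x * 2; }

/* Unit under test: unit D calls units E, F and G. */
int unit_d(int x)
{
    return unit_e(x) + unit_f(x) + unit_g(x);
}

/* Already tested unit A calls B, C and D; the test for unit D is
   driven through A, as described in step (4). */
int unit_a(int x)
{
    return unit_d(unit_b(x)) + unit_c(x);
}

int main(void)
{
    /* With the canned stub values unit_d() returns 111,
       so unit_a(3) should return 111 + 2 * 3 = 117. */
    assert(unit_a(3) == 117);
    printf("unit D tested top down: ok\n");
    return 0;
}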
(Figure 2.1: unit A calls units B, C and D; unit D calls units E, F and G; unit E calls units H, I and J. The figure shows the test for unit D, using tested units A, B and C and stubs for units E, F and G.)

2.2 Advantages
Top down unit testing provides an early integration of units before the software integration phase. In fact, top down unit testing is really a combined unit test and software integration strategy.

The detailed design of units is top down, and top down unit testing implements tests in the sequence units are designed, so development time can be shortened by overlapping unit testing with the detailed design and code phases of the software lifecycle. In a conventionally structured design, where units at the top of the hierarchy provide high level functions, with units at the bottom of the hierarchy implementing details, top down unit testing will provide an early integration of 'visible' functionality. This gives a very requirements oriented approach to unit testing. Redundant functionality in lower level units will be identified by top down unit testing, because there will be no route to test it. (However, there can be some difficulty in distinguishing between redundant functionality and untested functionality).

2.3. Disadvantages
Top down unit testing is controlled by stubs, with test cases often spread across many stubs. With each unit tested, testing becomes more complicated, and consequently more expensive to develop and maintain. As testing progresses down the unit hierarchy, it also becomes more difficult to achieve the good structural coverage which is essential for high integrity and safety critical applications, and which is required by many standards. Difficulty in achieving structural coverage can also lead to confusion between genuinely redundant functionality and untested functionality. Testing some low level functionality, especially error handling code, can be totally impractical. Changes to a unit often impact the testing of sibling units and units below it in the hierarchy. For example, consider a change to unit D. Obviously, the unit test for unit D would have to change and be repeated. In addition, unit tests for units E, F, G, H, I and J, which use the tested unit D, would also have to be repeated. These tests may also have to change themselves, as a consequence of the change to unit D, even though units E, F, G, H, I and J had not actually changed. This leads to a high cost associated with retesting when changes are made, and a high maintenance and overall lifecycle cost.
The design of test cases for top down unit testing requires structural knowledge of which other units the unit under test calls. The sequence in which units can be tested is constrained by the hierarchy of units, with lower units having to wait for higher units to be tested, forcing a 'long and thin' unit test phase. (However, this can overlap substantially with the detailed design and code phases of the software lifecycle).
The relationships between units in the example program in figure 2.1 are much simpler than would be encountered in a real program, where units could be referenced from more than one other unit in the hierarchy. All of the disadvantages of a top down approach to unit testing are compounded by a unit being referenced from more than one other unit.


2.4. Overall
A top down strategy will cost more than an isolation based strategy, due to the complexity of testing units below the top of the unit hierarchy, and the high impact of changes. The top down organisational approach is not a good choice for unit testing. However, a top down approach to the integration of units, where the units have already been tested in isolation, can be viable.


Friday, April 24, 2009

Why Bother to Unit Test?

1. Introduction
The quality and reliability of software is often seen as the weak link in industry's attempts to develop new products and services. The last decade has seen the issue of software quality and reliability addressed through a growing adoption of design methodologies and supporting CASE tools, to the extent that most software designers have had some training and experience in the use of
formalised software design methods. Unfortunately, the same cannot be said of software testing. Many developments applying such design methodologies are still failing to bring the quality and reliability of software under control. It is not unusual for 50% of software maintenance costs to be
attributed to fixing bugs left by the initial software development; bugs which should have been eliminated by thorough and effective software testing.
This paper addresses a question often posed by developers who are new to the concept of thorough testing: Why bother to unit test? The question is answered by adopting the position of devil's advocate, presenting some of the common arguments made against unit testing, then proceeding to show how these arguments are worthless. The case for unit testing is supported by published data.

2. What is Unit Testing?
The unit test is the lowest level of testing performed during software development, where individual units of software are tested in isolation from other parts of a program. In a conventional structured programming language, such as C, the unit to be tested is traditionally the function or sub-routine. In object oriented languages such as C++, the basic unit to be tested is the class. With Ada, developers have the choice of unit testing individual procedures and functions, or unit testing at the Ada package level. The principle of unit testing also extends to 4GL development, where the basic unit would typically be a menu or display.
Unit level testing is not just intended for one-off development use, to aid bug free coding. Unit tests have to be repeated whenever software is modified or used in a new environment. Consequently, all tests have to be maintained throughout the life of a software system. Other activities which are often associated with unit testing are code reviews, static analysis and dynamic analysis. Static analysis investigates the textual source of software, looking for problems and gathering metrics without actually compiling or executing it. Dynamic analysis looks at the behaviour of software while it is executing, to provide information such as execution traces, timing profiles, and test coverage information.
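As a minimal sketch (a hypothetical C function, chosen only for illustration), a repeatable unit test for a single function might look like this:

#include <assert.h>
#include <stdio.h>

/* Unit under test: a single C function, tested in isolation from
   the rest of the program. */
int is_leap_year(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

/* A repeatable unit test: it can be re-run unchanged whenever the
   unit is modified or used in a new environment. */
int main(void)
{
    assert(is_leap_year(2008) == 1);
    assert(is_leap_year(1900) == 0);   /* century, not divisible by 400 */
    assert(is_leap_year(2000) == 1);   /* divisible by 400 */
    assert(is_leap_year(2009) == 0);
    printf("is_leap_year: all tests passed\n");
    return 0;
}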

3. Some Popular Misconceptions
Having established what unit testing is, we can now proceed to play the devil's advocate. In the following subsections, some of the common arguments made against unit testing are presented, together with reasoned cases showing how these arguments are worthless.


3.1. It Consumes Too Much Time
Once code has been written, developers are often keen to get on with integrating the software, so that they can see the actual system starting to work. Activities such as unit testing may be seen to get in the way of this apparent progress, delaying the time when the real fun of debugging the overall system can start. What really happens with this approach to development is that real progress is traded for apparent progress. There is little point in having a system which “sort of” works, but
happens to be full of bugs. In practice, such an approach to development will often result
in software which will not even run. The net result is that a lot of time will be spent tracking down relatively simple bugs which are wholly contained within particular units.
Individually, such bugs may be trivial, but collectively they result in an excessive period of time integrating the software to produce a system which is unlikely to be reliable when it enters use.
In practice, properly planned unit tests consume approximately as much effort as writing the actual code. Once completed, many bugs will have been corrected and developers can proceed to a much more efficient integration, knowing that they have reliable components to begin with. Real progress has been made, so properly planned unit testing is a much more efficient use of time. Uncontrolled rambling with a debugger consumes a lot more time for less benefit. Tool support using tools such as AdaTEST and Cantata can make unit testing more efficient and effective, but is not essential. Unit testing is a worthwhile activity even without tool support.

3.2. It Only Proves That the Code Does What the Code Does
This is a common complaint of developers who jump straight into writing code, without first writing a specification for the unit. Having written the code and confronted with the task of testing it, they read the code to find out what it actually does and base their tests upon the code they have written. Of course they will prove nothing. All that such a test will show is that the compiler works. Yes, they will catch the (hopefully) rare compiler bug; but they could be achieving so much more.
If they had first written a specification, then tests could be based upon the specification.
The code could then be tested against its specification, not against itself. Such a test will
continue to catch compiler bugs. It will also find a lot more coding errors and even some
errors in the specification. Better specifications enable better testing, and the corollary is
that better testing requires better specifications.
In practice, there will be situations where a developer is faced with the thankless task of testing a unit given only the code for the unit and no specification. How can you do more than just find compiler bugs? The first step is to understand what the unit is supposed to do - not what it actually does. In effect, reverse engineer an outline specification. The main input to this process is to read the code and comments for the unit, and for the units which call it or which it calls. This can be supported by drawing flowgraphs, either by hand or using a tool. The outline specification can then be reviewed, to make sure that there are no fundamental flaws in the unit, and then used to design unit tests, with minimal further reference to the code.
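For example (a hypothetical C unit, not taken from any real project), given the specification "return the index of the first occurrence of a value in an array, or -1 if it is not present", the test cases below are derived from that specification rather than from the code, so they include boundary and failure cases the author might otherwise overlook:

#include <assert.h>
#include <stdio.h>

/* Unit under test. Specification: return the index of the first
   occurrence of value in array[0..length-1], or -1 if not present. */
int find_index(const int *array, int length, int value)
{
    for (int i = 0; i < length; i++) {
        if (array[i] == value) {
            return i;
        }
    }
    return -1;
}

/* Test cases derived from the specification, not from the code:
   normal case, first/last element, value absent, and empty array. */
int main(void)
{
    int data[] = { 4, 8, 15, 16, 23, 42 };

    assert(find_index(data, 6, 15) == 2);   /* normal case        */
    assert(find_index(data, 6, 4)  == 0);   /* boundary: first    */
    assert(find_index(data, 6, 42) == 5);   /* boundary: last     */
    assert(find_index(data, 6, 7)  == -1);  /* value not present  */
    assert(find_index(data, 0, 4)  == -1);  /* boundary: empty    */
    printf("find_index: all specification based tests passed\n");
    return 0;
}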

3.3. “I'm too Good a Programmer to Need Unit Tests”
There is at least one developer in every organisation who is so good at programming that their software always works first time and consequently does not need to be tested. How often have you heard this excuse? In the real world, everyone makes mistakes. Even if a developer can muddle through with this attitude for a few simple programs, real software systems are much more
complex. Real software systems do not have a hope of working without extensive testing and consequent bug fixing.
Coding is not a one pass process. In the real world software has to be maintained to reflect changes in operational requirements and fix bugs left by the original development. Do you want to be dependent upon the original author to make these changes? The chances are that the “expert” programmer who hacked out the original code without testing it will have moved on to hacking out code elsewhere. With a repeatable unit test the developer making changes will be able to check that there are no undesirable side effects.

3.4. Integration Tests will Catch all the Bugs Anyway
We have already addressed this argument in part as a side issue from some of the preceding discussion. The reason why this will not work is that larger integrations of code are more complex. If units have not been tested first, a developer could easily spend a lot of time just getting the software to run, without actually executing any test cases.
Once the software is running, the developer is then faced with the problem of thoroughly testing each unit within the overall complexity of the software. It can be quite difficult to even create a situation where a unit is called, let alone thoroughly exercised once it is called. Thorough testing of unit level functionality during integration is much more complex than testing units in isolation.
The consequence is that testing will not be as thorough as it should be. Gaps will be left and bugs will slip through.
To create an analogy, try cleaning a fully assembled food processor! No matter how much water and detergent is sprayed around, little scraps of food will remain stuck in awkward corners, only to go rotten and surface in a later recipe. On the other hand, if it is disassembled, the awkward corners either disappear or become much more accessible, and each part can be cleaned without too much trouble.

3.5. It is not Cost Effective

The level of testing appropriate to a particular organisation and software application depends on the potential consequences of undetected bugs. Such consequences can range from the minor inconvenience of having to find a work-around for a bug, to multiple deaths. Often overlooked by software developers (but not by customers) is the long term damage to the credibility of an organisation which delivers software to users with bugs in it, and the resulting negative impact on future business. Conversely, a reputation for reliable software will help an organisation to obtain future business.
Many studies have shown that efficiency and quality are best served by testing software as early in the life cycle as practical, with full regression testing whenever changes are made. The later a bug is found, the higher the cost of fixing it, so it is sound economics to identify and fix bugs as early as possible. Unit testing is an opportunity to catch bugs early, before the cost of correction escalates too far.
Unit tests are simpler to create, easier to maintain and more convenient to repeat than later stages of testing. When all costs are considered, unit tests are cheap compared to the alternative of complex and drawn out integration testing, or unreliable software.

4. Some Figures

Figures from “Applied Software Measurement”, (Capers Jones, McGraw-Hill 1991), for the time taken to prepare tests, execute tests, and fix defects (normalised to one function point), show that unit testing is about twice as cost effective as integration testing and more than three times as cost effective as system testing.
This does not mean that developers should not perform the latter stages of testing, they are still necessary. What it does mean is that the expense of later stages of testing can be reduced by eliminating as many bugs as possible as early as possible. Other figures show that up to 50% of maintenance effort is spent fixing bugs which have always been there. This effort could be saved if the bugs were eliminated during development. When it is considered that software maintenance costs can be many times the initial development cost, a potential saving of 50% on software maintenance can make a sizeable impact on overall lifecycle costs.

5. Conclusion
Experience has shown that a conscientious approach to unit testing will detect many bugs at a stage of the software development where they can be corrected economically. In later stages of software development, detection and correction of bugs is much more difficult, time consuming and costly. Efficiency and quality are best served by testing software as early in the lifecycle as practical, with full regression testing whenever changes are made. Given units which have been tested, the integration process is greatly simplified. Developers will be able to concentrate upon the interactions between units and the overall functionality without being swamped by lots of little bugs within the units. The effectiveness of testing effort can be maximised by selection of a testing strategy
which includes thorough unit testing, good management of the testing process, and appropriate use of tools such as AdaTEST or Cantata to support the testing process. The result will be more reliable software at a lower development cost, and there will be further benefits in simplified maintenance and reduced lifecycle costs. Effective unit testing is all part of developing an overall “quality” culture, which can only be beneficial to a software developer's business.

Thursday, April 23, 2009

What is Traceability Matrix from Software Testing perspective?


The concept of a Traceability Matrix is very important from the testing perspective. It is a document which maps requirements to test cases. By preparing a traceability matrix, we can ensure that we have covered all the required functionalities of the application in our test cases.



Some of the features of the traceability matrix:
It is a method for tracing each requirement from its point of origin, through each development phase and work product, to the delivered product


Can indicate through identifiers where the requirement is originated, specified, created, tested, and delivered
Will indicate for each work product the requirement(s) this work product satisfies


Facilitates communications, helping customer relationship management and commitment negotiation


A traceability matrix answers the following basic questions of any software project:
How is it possible to ensure, for each phase of the lifecycle, that I have correctly accounted for all the customer’s needs?
How can I ensure that the final software product will meet the customer’s needs? For example, suppose there is a requirement that entering an invalid password in the password field makes the application display the error message “Invalid password”. The traceability matrix lets us make sure this requirement is captured in a test case.


Some more challenges we can overcome with a traceability matrix:
Demonstrate to the customer that the requested contents have been developed
Ensure that all requirements are correct and included in the test plan and the test cases
Ensure that developers are not creating features that no one has requested
The system that is built may not have the necessary functionality to meet the customers’ and users’ needs and expectations. How do we identify the missing parts?
If there are modifications in the design specifications, there is no means of tracking the changes
If there is no mapping of test cases to the requirements, it may result in missing a major defect in the system
The completed system may have “Extra” functionality that may have not been specified in the design specification, resulting in wastage of manpower, time and effort.
If the code component that constitutes the customer’s high priority requirements is not known, then the areas that need to be worked first may not be known thereby decreasing the chances of shipping a useful product on schedule
A seemingly simple request might involve changes to several parts of the system, and if a proper traceability process is not followed, the work needed to satisfy the request may not be evaluated correctly


Step by step process of creating a Traceability Matrix from requirements:



step1: Identify all the testable requirements at a granular level from the various requirement specification documents. These documents vary from project to project. Typical requirements you need to capture are as follows: use cases (with all the flows captured), error messages, business rules, functional rules, SRS, FRS, and so on.
example requirements: login functionality, generate report, update something etc.



step2: In every project you must be creating test cases to test the functionality as defined by the requirements. In this case you want to extend the traceability to those test cases. In the example table below the test cases are identified with a TC_ prefix. Put all those requirements in the top row of a spreadsheet, and use the left hand column of the spreadsheet to jot down all the test cases you have written. In most cases you will have multiple test cases written to test one requirement. See the sample spreadsheet below:


Sample traceability matrix
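An illustrative layout, with hypothetical requirement and test case identifiers (the only ones taken from the text are REQ1 UC1.1 and its three test cases):

             REQ1 UC1.1   REQ1 UC1.2   REQ2 UC2.1
TC1.1.1          x
TC1.1.2                        x
TC1.1.3          x
TC1.1.4                                     x
TC1.1.5          x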


step3: Put a cross against each test case for each requirement that the test case checks, partially or completely. In the table above you can see that REQ1 UC1.1 is checked by three test cases (TC1.1.1, TC1.1.3, TC1.1.5).
Another example is a traceability matrix in which requirement documents (use cases) are mapped back to the test cases.

Database Testing: Five Key Elements

Programs that interact with databases have common elements, and testing each requires a different approach. These elements include
1. Mapping application layer interactions
2. Mapping the data layer interactions
3. Functional interactions between the application and database
4. Embedded code
5. Database Migration


Mapping the application layer interactions is often most easily handled with the normal isolation techniques of unit testing; mapped objects can often be replaced by stubs. When testing larger subsystems, database stubs may be useful, although embedded and in-memory databases reduce the need for them.

Mapping the data layer interactions often benefits from using a real database of some kind. Differences in behavior between different types of target database can be identified or verified prior to integration. In most cases, such a database needs to be available to each developer. These days, a multi-processor desktop has more than enough power to run multiple virtual machines hosting "real" databases such as Oracle, Microsoft SQL Server or Sybase. For checking the consistency of the basic mapping layer, something like SQLite is usually sufficient.
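As a sketch (assuming the SQLite C API is available; the table, columns and data are made up), a data layer check can create, populate and query an in-memory database entirely within the test, so no shared or production database is touched:

#include <assert.h>
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;
    char *err = NULL;
    int rc;

    /* An in-memory database: created for this test, discarded afterwards. */
    rc = sqlite3_open(":memory:", &db);
    assert(rc == SQLITE_OK);

    rc = sqlite3_exec(db,
        "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);"
        "INSERT INTO customer (id, name) VALUES (1, 'Smith');",
        NULL, NULL, &err);
    assert(rc == SQLITE_OK);

    /* Check the mapping assumption: exactly one row with id = 1. */
    rc = sqlite3_prepare_v2(db, "SELECT name FROM customer WHERE id = 1;",
                            -1, &stmt, NULL);
    assert(rc == SQLITE_OK);
    rc = sqlite3_step(stmt);
    assert(rc == SQLITE_ROW);
    printf("found: %s\n", (const char *)sqlite3_column_text(stmt, 0));
    rc = sqlite3_step(stmt);
    assert(rc == SQLITE_DONE);

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}

(Link against the SQLite library, for example with -lsqlite3.)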
No development work should ever be done against the production database; this is a recipe for disaster. Mistakes can easily destroy production data, and repeatable tests depend on being able to reset the database to a known state, which cannot safely be automated against production. If a problem cannot be replicated in the development or quality assurance environments, that suggests something is wrong with the development and quality assurance resources.

Functional interactions between the application and the database are best tested using real databases. As mentioned earlier, virtual machines are useful for this. Indeed, I know of several organizations in which a reduced version of the entire production network runs on every developer's PC. In the coming years, we can expect this kind of comprehensive simulation environment to become more common.

Embedded code refers to code that runs in the database. Triggers and stored procedures are the most commonly used types of embedded code. The only way to test this code is directly against a database.

Database migration must be tested against the target databases. Migration should be tested against both the schemas and realistic data sets.

Wednesday, April 15, 2009

SQL Introduction


What is SQL?
SQL stands for Structured Query Language
SQL lets you access and manipulate databases
SQL is an ANSI (American National Standards Institute) standard

What Can SQL do?
SQL can execute queries against a database
SQL can retrieve data from a database
SQL can insert records in a database
SQL can update records in a database
SQL can delete records from a database
SQL can create new databases
SQL can create new tables in a database
SQL can create stored procedures in a database
SQL can create views in a database
SQL can set permissions on tables, procedures, and views

SQL is a Standard - BUT....
Although SQL is an ANSI (American National Standards Institute) standard, there are many different versions of the SQL language.
However, to be compliant with the ANSI standard, they all support at least the major commands (such as SELECT, UPDATE, DELETE, INSERT, WHERE) in a similar manner.
Note: Most of the SQL database programs also have their own proprietary extensions in addition to the SQL standard!

Using SQL in Your Web Site
To build a web site that shows some data from a database, you will need the following:
An RDBMS database program (e.g. MS Access, SQL Server, MySQL)
A server-side scripting language, like PHP or ASP
SQL
HTML / CSS
RDBMS

RDBMS stands for Relational Database Management System.
RDBMS is the basis for SQL, and for all modern database systems like MS SQL Server, IBM DB2, Oracle, MySQL, and Microsoft Access.
The data in RDBMS is stored in database objects called tables.
A table is a collection of related data entries and it consists of columns and rows.
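As a concrete illustration (using the SQLite C API simply as a convenient host, with a made-up table and data), the major commands mentioned above look like this:

#include <stdio.h>
#include <sqlite3.h>

/* Called once for each row returned by the SELECT. */
static int print_row(void *unused, int ncols, char **values, char **names)
{
    (void)unused;
    for (int i = 0; i < ncols; i++) {
        printf("%s = %s  ", names[i], values[i] ? values[i] : "NULL");
    }
    printf("\n");
    return 0;
}

int main(void)
{
    sqlite3 *db = NULL;
    char *err = NULL;

    sqlite3_open(":memory:", &db);

    const char *sql =
        /* a table is a collection of related data entries in columns and rows */
        "CREATE TABLE persons (id INTEGER PRIMARY KEY, city TEXT);"
        "INSERT INTO persons (id, city) VALUES (1, 'London');"
        "INSERT INTO persons (id, city) VALUES (2, 'Paris');"
        "UPDATE persons SET city = 'Oslo' WHERE id = 2;"
        "DELETE FROM persons WHERE id = 1;"
        "SELECT id, city FROM persons;";

    if (sqlite3_exec(db, sql, print_row, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "SQL error: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}

The statements themselves are standard SQL; only the surrounding API and any proprietary extensions differ between database products.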

Tuesday, April 14, 2009

Test Director Interview Questions ...

1. What is Test Director?
It is Mercury Interactive's test management tool. It includes all the features we need to organize and manage the testing process.

2. What are the main features of Test Director?
It enables us to create a database of tests, execute tests, and report and track defects detected in the software.

3. How is the assessment of the application carried out in Test Director?
As we test, Test Director allows us to continuously assess the status of our application by generating sophisticated reports and graphs. By integrating all tasks involved in software testing, Test Director helps us to ensure that our software is ready for deployment.

4. What does planning tests involve in Test Director?
It is used to develop a test plan and create tests. This includes defining goals and strategy, designing tests, automating tests where beneficial, and analyzing the plan.

5. What does running tests involve in Test Director?
It executes the tests created in the planning phase and analyzes the test results.

6. What does tracking defects involve in Test Director?
It is used to monitor the software quality. It includes reporting defects, determining repair priorities, assigning tasks, and tracking repair progress.

7. What are the three main views available in Test Director?
Plan Tests, Run Tests and Track Defects. Each view includes all the tools we need to complete each phase of the testing process.

8. What is a test plan tree?
A test plan tree enables you to organize and display your tests hierarchically, according to your testing requirements.

9. What can a test plan tree contain?
A test plan tree can include several types of tests: manual test scripts, WinRunner test scripts, batches of WinRunner test scripts, Visual API test scripts, and LoadRunner scenario scripts and Vuser scripts.

10. What is a test step?
A test step includes the action to perform in our application, the input to enter, and its expected output