Friday, October 24, 2008

Testing Stop Process

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:
1. Deadlines (release deadlines, testing deadlines)
2. Test cases completed with a certain percentage passed
3. Test budget depleted
4. Coverage of code/functionality/requirements reaches a specified point
5. The rate at which bugs are being found is too small
6. Beta or alpha testing period ends
7. The risk in the project is under an acceptable limit
Practically, we feel that the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by formal risk analysis, but for a small-duration / low-budget / low-resources project, risk can be gauged simply by the measures below (a short sketch follows the list):
1. Measuring test coverage.
2. Counting the number of test cycles.
3. Counting the number of open high-priority bugs.
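
As a rough illustration of how these three measures might feed a stop/no-stop decision, here is a minimal sketch in Python; the thresholds and names are hypothetical, not prescribed by any standard:

    # Illustrative only: a naive "can we stop testing?" check built from the
    # three informal risk measures above. All thresholds are hypothetical.
    def can_stop_testing(coverage_pct: float,
                         test_cycles_run: int,
                         open_high_priority_bugs: int,
                         min_coverage: float = 90.0,
                         min_cycles: int = 3) -> bool:
        """Return True when all three informal exit criteria are met."""
        return (coverage_pct >= min_coverage           # 1. coverage target reached
                and test_cycles_run >= min_cycles      # 2. enough test cycles run
                and open_high_priority_bugs == 0)      # 3. no open high-priority bugs

    # Example: 92.5% coverage, 4 cycles, no open high-priority bugs -> True
    print(can_stop_testing(92.5, 4, 0))

In practice, the thresholds themselves would come from whatever level of risk management has agreed to accept.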

The Software Assurance Technology Center (SATC) in the Systems Reliability and Safety Office at Goddard Space Flight Center (GSFC) is investigating the use of software error data as an indicator of testing status. Items of interest for determining the status of testing include projections of the number of errors remaining in the software and the expected amount of time to find some percentage of the remaining errors. To project the number of errors remaining in software, one needs an estimate of the total number of errors in the software at the start of testing and a count of the errors found and corrected throughout testing. There are a number of models that reasonably fit the rate at which errors are found in software; the most commonly used is referred to here as the Musa model. This model is not easily applicable at GSFC, however, due to the availability and the quality of the error data.
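
For reference, the exponential form usually associated with this family of models can be written as follows (the notation is mine, not from the original; N_0 is the estimated total number of errors at the start of testing and b is an error-discovery rate constant):

    % Exponential error-discovery model (notation assumed, not from the source):
    %   N_0 : estimated total errors present at the start of testing
    %   b   : error-discovery rate constant
    %   n(t): cumulative errors found by time t
    n(t) = N_0 \left( 1 - e^{-b t} \right),
    \qquad
    N_0 - n(t) = N_0 \, e^{-b t} \quad \text{(projected errors remaining)}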
At GSFC, useful error data is not easy to obtain for projects outside the Software Engineering Laboratory. Of the projects studied by the SATC, only a few had an organized accounting scheme for tracking errors, but they often did not have a consistent format for recording errors. Some projects recorded errors that were found but did not record any information about the resources applied to testing. The error data frequently contained the date of entry of the error data rather than the actual date of error discovery. In order to use traditional models such as the Musa model for estimating the cumulative number of errors, one needs fairly precise data on the time of discovery of errors and on the level of resources applied to testing. Real-world software projects are generally not very accommodating when it comes to either the accuracy or the completeness of error data. The models developed by the SATC to perform trending and prediction on error data attempt to compensate for these shortcomings in the quantity and availability of project data.
In order to compensate for the quality of the error data, the SATC developed software error trending models using two techniques, each based on the basic Musa model but with the constant in the exponential term replaced by a function of time that describes the 'intensity' of the testing effort. The shape and the parameters of this function can be estimated using measures such as CPU time or staff hours devoted to testing. The first technique involves fitting cumulative error data to the modified Musa model using a least-squares fit based on gradient methods. This technique requires data on errors found and the number of staff hours devoted to testing each week of the testing activity. The second technique uses a Kalman filter to estimate both the total number of errors in the software and the level of testing being performed. This technique requires error data and initial estimates of the total number of errors and the initial amount of effort applied to testing.
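
A minimal sketch of the first technique follows, under my reading that replacing "the constant in the exponential term" amounts to putting cumulative testing effort E(t) in the exponent; the weekly numbers are invented for illustration, and the Kalman-filter variant is not shown:

    # Sketch of the first SATC technique described above: a least-squares fit
    # of an exponential (Musa-style) model whose exponent depends on cumulative
    # testing effort E(t) rather than on time alone. Data below are made up.
    import numpy as np
    from scipy.optimize import curve_fit

    weekly_staff_hours = np.array([10, 25, 40, 40, 35, 20, 10], dtype=float)
    cum_errors_found   = np.array([ 4, 15, 30, 42, 50, 54, 56], dtype=float)

    # Cumulative testing effort at the end of each week.
    cum_effort = np.cumsum(weekly_staff_hours)

    def modified_musa(E, N0, b):
        """Expected cumulative errors found after cumulative effort E."""
        return N0 * (1.0 - np.exp(-b * E))

    (N0_hat, b_hat), _ = curve_fit(modified_musa, cum_effort, cum_errors_found,
                                   p0=(60.0, 0.01))  # rough initial guesses
    print(f"estimated total errors N0 ~ {N0_hat:.1f}, rate b ~ {b_hat:.4f}")
    print(f"projected errors remaining ~ {N0_hat - cum_errors_found[-1]:.1f}")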
The SATC has so far examined and modeled error data from a limited number of projects. Generally, only the date on which an error was entered into the error tracking system was available, not the date of discovery of the error. No useful data was available on human or computer resources expended for testing. What is needed for the most accurate model is the total time expended for testing, even if the times are approximate. Using the sum of reported times to find/fix individual errors did not produce any reasonable correlation with the required resource function. Some indirect attempts to estimate resource usage, however, led to some very good fits. On one project, errors were reported along with the name of the person who found the error. Resource usage for testing was estimated as follows: a person was assumed to be working on the testing effort over the period beginning with the first error they reported and ending with the last error they reported. The percentage of time each person worked during that period was assumed to be an unknown constant that did not differ from person to person. Using this technique led to a resource curve that closely resembled the Rayleigh curve (Figure 1).

Figure 1: Test Resource Levels for Project A
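
For readers unfamiliar with it, the Rayleigh curve referenced in Figure 1 has the standard form below (this is the textbook expression, not taken from the SATC write-up); effort ramps up, peaks at t = sigma, and then tails off:

    % Rayleigh curve (standard textbook form); sigma sets where effort peaks.
    r(t) = \frac{t}{\sigma^{2}} \exp\!\left( -\frac{t^{2}}{2 \sigma^{2}} \right),
    \qquad t \ge 0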

On most of the projects, there was good conformity between the trend model and the reported error data. More importantly, estimates of the total number of errors and the error discovery parameter, made fairly early in the testing activity, seemed to provide reliable indicators of the total number of errors actually found and the time it took to find future errors. Figure 2 shows the relationship between reported errors and the SATC trend model for one project. The graph represents data available at the conclusion of the project. This close fit was also found on other projects when sufficient data was available.

Figure 2: Cumulative Software Errors for Project A

On another project, different estimates of the total number of errors were obtained when estimates were made over different testing time intervals. That is, the agreement between the trend model and the error data was inconsistent across time intervals. Through subsequent discussion with the project manager it was learned that the rate of error reporting by the project went from approximately 100% during integration testing to 40% during acceptance testing. Furthermore, there was a significant amount of code rework, and testing of the software followed a sequential strategy of completely testing a single functional area before proceeding to the next functional area of the code. Thus, the instability of the estimates of the total errors was a useful indicator of a significant change in either the project's testing process or its reporting process. Figure 3 shows the results for this project. Note the change in slope of the reported number of errors around day 150. The data curve flattens at the right end due to a pause in testing, rather than a lack of error detection. This project is still undergoing testing.



Figure 3: Cumulative S/W Errors for Project B - Flight S/W

If error data is broken into the distinct testing phases of the life cycle (e.g., unit, system, integration), the projected error curve using the SATC model closely fits the rate at which errors are found in each phase.
Some points need to be clarified about the SATC error trend model. The formulation of the SATC equation is the direct result of assuming that at any instant of time, the rate of discovery of errors is proportional to the number of errors remaining in the software and to the resources applied to finding errors. Additional conditions needed in order for the SATC trending model to be valid are:
1. The code being tested is not being substantially altered during the testing process, especially through the addition or rework of large amounts of code.
2. All errors found are reported.
3. All of the software is tested, and testing of the software is uniform throughout the time of the testing activity.
Condition 1 is present to ensure that the total number of errors is a relatively stable number throughout the testing activity. Conditions 2 and 3 are present to ensure that the estimate of the total number of errors is in fact an estimate of the total errors present in the software at the start of testing - no new errors are introduced during testing. If testing is not "uniform" then the rate of error discovery will not necessarily be proportional to the number of errors remaining in the software and so the equation will not be an appropriate model for errors found. No attempt will be made here to make precise the meaning of the word "uniform".
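
Written out, the proportionality assumption above gives a differential equation whose solution generalizes the exponential model quoted earlier (again, the notation is my own; r(t) is the testing resource level):

    % Rate of discovery proportional to errors remaining and resources applied:
    \frac{dn}{dt} = b \, r(t) \, \bigl( N_0 - n(t) \bigr)
    \quad \Longrightarrow \quad
    n(t) = N_0 \left( 1 - e^{-b \int_0^t r(\tau) \, d\tau} \right)

With a constant resource level r(t) = r_0, this collapses back to the plain exponential form, which is why the "uniform testing" condition matters.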
The SATC developed this model rather than using the standard Musa model because it seems less sensitive to data inaccuracy and provides for non-constant testing resource levels. An additional benefit from this work is the
application of the Rayleigh curve for estimating resource usage. Future work by the SATC will continue to seek a practical balance between available trend analysis theory and the constraints imposed by the limited accuracy and availability of data from real-world projects.

Testing Start Process

Testing is sometimes incorrectly thought of as an after-the-fact activity, performed after programming is done for a product. Instead, testing should be performed at every development stage of the product. Test data sets must be derived, and their correctness and consistency should be monitored throughout the development process.

If we divide the lifecycle of software development into “Requirements Analysis”, “Design”, “Programming/Construction” and “Operation and Maintenance”, then testing should accompany each of these phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs. Not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity; rather, it should be involved throughout the SDLC in order to produce a quality product.

Wednesday, October 22, 2008

RAD Model

RAD is a linear sequential software development process model that emphasizes an extremely short development cycle using a component-based construction approach. If the requirements are well understood and well defined, and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a very short time period.

What is RAD?

RAD (rapid application development) is a concept holding that products can be developed faster and at higher quality through:

Gathering requirements using workshops or focus groups
Prototyping and early, reiterative user testing of designs
The re-use of software components
A rigidly paced schedule that defers design improvements to the next product version
Less formality in reviews and other team communication

Some companies offer products that provide some or all of the tools for RAD software development. (The concept can be applied to hardware development as well.) These products include requirements gathering tools, prototyping tools, computer-aided software engineering tools, language development environments such as those for the Java platform, groupware for communication among development members, and testing tools. RAD usually embraces object-oriented programming methodology, which inherently fosters software re-use. The most popular object-oriented programming languages, C++ and Java, are offered in visual programming packages often described as providing rapid application development.


Development Methodology

The traditional software development cycle follows a rigid sequence of steps with a formal sign-off at the completion of each. A complete, detailed requirements analysis is done that attempts to capture the system requirements in a Requirements Specification. Users are forced to "sign-off" on the specification before development proceeds to the next step. This is followed by a complete system design and then development and testing.
But, what if the design phase uncovers requirements that are technically unfeasible, or extremely expensive to implement? What if errors in the design are encountered during the build phase? The elapsed time between the initial analysis and testing is usually a period of several months. What if business requirements or priorities change or the users realize they overlooked critical needs during the analysis phase? These are many of the reasons why software development projects either fail or don’t meet the user’s expectations when delivered.
RAD is a methodology for compressing the analysis, design, build, and test phases into a series of short, iterative development cycles. This has a number of distinct advantages over the traditional sequential development model.



RAD projects are typically staffed with small integrated teams composed of developers, end users, and IT technical resources. Small teams, combined with short, iterative development cycles, optimize speed, unity of vision and purpose, effective informal communication, and simple project management.

RAD Model Phases

RAD model has the following phases:
Business Modeling: The information flow among business functions is defined by answering questions such as: what information drives the business process, what information is generated, who generates it, where does the information go, and who processes it.

Data Modeling: The information collected from business modeling is refined into a set of data objects (entities) that are needed to support the business. The attributes (characteristics of each entity) are identified and the relationships between these data objects (entities) are defined.

Process Modeling: The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object (see the sketch after the phase list).

Application Generation: Automated tools are used to facilitate construction of the software, often employing fourth-generation (4GL) techniques.

Testing and Turnover: Many of the program components have already been tested, since RAD emphasizes reuse. This reduces overall testing time. However, new components must be tested and all interfaces must be fully exercised.
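
As promised under Process Modeling, here is a minimal sketch of what the add/modify/delete/retrieve processing descriptions boil down to in code; the Customer entity and its fields are invented for the example:

    # Illustrative CRUD operations on a hypothetical "Customer" data object,
    # echoing the add / modify / delete / retrieve processing descriptions
    # from the Process Modeling phase. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class Customer:
        customer_id: int
        name: str
        email: str

    class CustomerStore:
        def __init__(self):
            self._rows = {}  # customer_id -> Customer

        def add(self, c: Customer) -> None:
            self._rows[c.customer_id] = c

        def modify(self, cid: int, **changes) -> None:
            for attr, value in changes.items():
                setattr(self._rows[cid], attr, value)

        def delete(self, cid: int) -> None:
            del self._rows[cid]

        def retrieve(self, cid: int) -> Customer:
            return self._rows[cid]

    store = CustomerStore()
    store.add(Customer(1, "Ada", "ada@example.com"))
    store.modify(1, email="ada@example.org")
    print(store.retrieve(1))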


Advantages and Disadvantages

RAD reduces development time, and the reusability of components helps to speed up development. All functions are modularized, so the system is easy to work with.
For large projects, RAD requires highly skilled engineers on the team. Both the end customer and the developer must be committed to completing the system in a much abbreviated time frame; if commitment is lacking, RAD will fail. RAD is based on an object-oriented approach, and if it is difficult to modularize the project, RAD may not work well.

Tuesday, October 7, 2008

Spiral Model

History :
The spiral model was defined by Barry Boehm in his 1988 article "A Spiral Model of Software Development and Enhancement". This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters. As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.

The Spiral Model :
The spiral model, also known as the spiral lifecycle model, is a systems development method (SDM) used in
information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive, and complicated projects.
The steps in the spiral model can be generalized as follows:
1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.


Applications :
For a typical shrink-wrap application, the spiral model might mean that you have a rough-cut of user elements (without the polished / pretty graphics) as an operable application, add features in phases, and, at some point, add the final graphics. The spiral model is used most often in large projects. For smaller projects, the concept of agile software development is becoming a viable alternative. The
US military has adopted the spiral model for its Future Combat Systems program.

Advantages :

1. Estimates (i.e. budget, schedule, etc.) become more realistic as work progresses, because important issues are discovered earlier.
2. It is better able to cope with the (nearly inevitable) changes that software development generally entails.
3. Software engineers (who can get restless with protracted design processes) can get their hands in and start working on a project earlier.


Disadvantages :

1. Highly customized, limiting re-usability
2. Applied differently for each application
3. Risk of not meeting budget or schedule


WaterFall Model

WaterFall Model :
The Waterfall approach was the first process model to be introduced and widely followed in software engineering to ensure the success of a project. In the Waterfall approach, the whole process of software development is divided into separate phases. The phases in the Waterfall model are: Requirements Specification, Software Design, Implementation, and Testing & Maintenance. These phases cascade into one another, so that the second phase starts once the defined set of goals for the first phase has been achieved and signed off; hence the name "Waterfall Model". All the methods and processes undertaken in the Waterfall model are more visible.


The stages of "The Waterfall Model" are:

Requirement Analysis & Definition:
All possible requirements of the system to be developed are captured in this phase. Requirements are the set of functionalities and constraints that the end-user (who will be using the system) expects from the system. The requirements are gathered from the end-user through consultation; they are then analyzed for their validity, and the possibility of incorporating them in the system to be developed is also studied. Finally, a Requirement Specification document is created, which serves as a guideline for the next phase of the model.

System & Software Design:
Before starting the actual coding, it is highly important to understand what we are going to create and what it should look like. The requirement specifications from the first phase are studied in this phase and the system design is prepared. System design helps in specifying hardware and system requirements and also helps in defining the overall system architecture. The system design specifications serve as input for the next phase of the model.

Implementation & Unit Testing:
On receiving the system design documents, the work is divided into modules/units and actual coding begins. The system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality; this is referred to as Unit Testing. Unit testing mainly verifies that the modules/units meet their specifications.
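
To make this concrete, here is a minimal sketch of a unit test; the unit under test (a discount-calculation function) and its specification are invented for the example:

    # Illustrative unit test: verifying a single unit against its specification.
    # The unit under test and its specification are hypothetical.
    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Unit under test: apply a percentage discount to a price."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_zero_discount(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()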

Integration & System Testing:
As specified above, the system is first divided into units which are developed and tested for their functionality. These units are integrated into a complete system during the Integration phase and tested to check whether all modules/units coordinate with each other and the system as a whole behaves as per the specifications. After the software is successfully tested, it is delivered to the customer.

Operations & Maintenance:
This phase of "The Waterfall Model" is a virtually never-ending phase (very long). Generally, problems with the system (which were not found during the development life cycle) come up after its practical use starts, so the issues related to the system are solved after deployment. Not all problems come to light immediately; they arise from time to time and need to be solved; hence this process is referred to as Maintenance.