
Thursday, May 28, 2009

Function Points

Function Points and the Function Point Model are measurement tools to manage software. Function Points, with other business measures, become Software Metrics.
Function Points measure Software size. Function Points measure functionality by objectively measuring functional requirements. Function Points quantify and document assumptions in Estimating software development. Function Points and Function Point Analysis are objective; Function Points are consistent, and Function Points are auditable. Function Points are independent of technology. Function Points even apply regardless of design. But Function Points do not measure people directly. Function Points are a macro tool, not a micro tool. Function Points are the foundation of a Software Metrics program.
Software Metrics include Function Points as a normalizing factor for comparison. Function Points in conjunction with time yield Productivity Software Metrics. Function Points in conjunction with defects yield Quality Software Metrics. Function Points with costs provide Unit Cost, Return on Investment, and Efficiency Software Metrics, never before available.
Function Points connect Software Metrics to measure Risk. Function Points can verify Staffing metrics. Function Points can evaluate Build, Buy and/or Outsource decisions. Function Points combine with SEI CMM measures, TQM measures, Baldrige measures, ISO and/or other software and business measures to prove overall status and value.
Doing The Right Things!
All of the above Software Metrics can prove your organization is Doing Things Right! But the real and biggest value of Function Points and Software Metrics is proving you are Doing The Right Things!
Function Points and Usage or Volume measures create Software Metrics that demonstrate an organization's ability to Leverage software's business impact. The Leverage of E Commerce is obvious, but until now unmeasured. Function Points support Customer Satisfaction measures to create Value Software Metrics. Function Points and Skill measures provide Software Metrics for Employee Service Level Agreements to meet current and future company skill needs. Function Points can even measure the Corporate Vision and generate Software Metrics to report progress toward meeting it.
Function Points, Function Point Analysis, the Function Point Model, Supplemental Software Measures, and the Software Metrics they generate are only the third measure that transcends every part of every organization. (The other two are time and money.) Without them your organization is only two thirds whole.




Tuesday, May 26, 2009

Load Test Overview

  1. What is load testing? - Load testing verifies that the application works correctly under the load that results from a large number of simultaneous users and transactions, and determines whether it can handle peak usage periods.
  2. What is Performance testing? - Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction.
  3. Did you use LoadRunner? What version? - Yes. Version 7.2.
  4. Explain the Load testing process? -
    Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives.
    Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
    Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve, and LoadRunner automatically builds the scenario for us.
    Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the test, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
    Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
    Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.
  5. When do you do load and performance Testing? - We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, does it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.
  6. What are the components of LoadRunner? - The components of LoadRunner are the Virtual User Generator, the Controller, the Agent process, LoadRunner Analysis and Monitoring, and LoadRunner Books Online.
  7. What Component of LoadRunner would you use to record a Script? - The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.
  8. What Component of LoadRunner would you use to play Back the script in multi user mode? - The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.
  9. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
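The synchronization idea behind a rendezvous point can be sketched in plain Python with a threading.Barrier. This is only an illustration of the concept, not LoadRunner code; the Vuser count and the "deposit" task are made up:

```python
import threading
import time

NUM_VUSERS = 5
deposits = []
lock = threading.Lock()

# the Barrier plays the role of the rendezvous point: every "Vuser"
# blocks here until all NUM_VUSERS have arrived, then all are released at once
rendezvous = threading.Barrier(NUM_VUSERS)

def vuser(vuser_id):
    time.sleep(0.01 * vuser_id)    # Vusers reach the rendezvous at different times
    rendezvous.wait()              # wait for the others to arrive
    with lock:
        deposits.append(vuser_id)  # all perform the "deposit" near-simultaneously

threads = [threading.Thread(target=vuser, args=(i,)) for i in range(NUM_VUSERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(deposits))  # 5 deposits, all fired together after the barrier released
```

The barrier is what turns five staggered arrivals into one simultaneous burst of load, which is exactly what a rendezvous point does to Vusers.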
  10. What is a scenario? - A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
  11. Explain the recording mode for web Vuser script? - We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.
  12. Why do you create parameters? - Parameters are like script variables. They are used to vary input to the server and to emulate real users: different sets of data are sent to the server each time the script is run. Parameterization also better simulates the usage model for more accurate testing from the Controller, since one script can emulate many different users on the system.
  13. What is correlation? Explain the difference between automatic correlation and manual correlation? - Correlation is used to obtain data that are unique for each run of the script and that are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (avoiding nested queries). Automatic correlation is where we set some rules for correlation; it can be application server specific. Here, values are replaced by data created by these rules. In manual correlation, we scan for the value we want to correlate and use Create Correlation to correlate it.
  14. How do you find out where correlation is required? Give few examples from your projects? - Two ways: first, we can scan for correlations and see the list of values that can be correlated, then pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look in the difference file for the values that need to be correlated. In my project, there was a unique ID generated for each customer (an insurance number); it was generated automatically, was sequential, and was unique. I had to correlate this value in order to avoid errors while running my script. I did this using scan for correlation.
  15. Where do you set automatic correlation options? - Automatic correlation from the web point of view can be set in the recording options, correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done using show output window, scan for correlation, picking the correlate query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just create a correlation for the value and specify how the value is to be created.
  16. What is a function to capture dynamic values in the web Vuser script? - Web_reg_save_param function saves dynamic data information to a parameter.
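Conceptually, what web_reg_save_param does is capture a dynamic value from the server response between a left and a right boundary. A minimal Python sketch of the same idea, using a made-up response body (the sessionId field is hypothetical):

```python
import re

# hypothetical server response containing a dynamic session token
response_body = '<input type="hidden" name="sessionId" value="A1B2C3D4">'

# extract the value between a left boundary and a right boundary,
# analogous to web_reg_save_param's "LB=" and "RB=" arguments
left_boundary = 'name="sessionId" value="'
right_boundary = '"'
match = re.search(re.escape(left_boundary) + '(.*?)' + re.escape(right_boundary),
                  response_body)
session_id = match.group(1)

print(session_id)  # the captured dynamic value, ready to replay in later requests
```

This boundary-based extraction is also what correlation (questions 13–15) automates: the captured value replaces the hard-coded one recorded in the script.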
  17. When do you disable log in Virtual User Generator? When do you choose standard and extended logs? - Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log Option: When you select Standard log, it creates a standard log of the functions and messages sent during script execution, to use for debugging. Disable this option for large load testing scenarios. Extended Log Option: Select Extended log to create an extended log that includes warnings and other messages. Disable this option for large load testing scenarios as well. We can specify which additional information should be added to the extended log using the Extended log options.
  18. How do you debug a LoadRunner script? - VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.
  19. How do you write user defined functions in LR? Give me few functions you wrote in your previous project? - Before we create a user defined function, we need to create the external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we can assign the user defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). GetVersion, GetCurrentTime, and GetPlatform are some of the user defined functions used in my earlier project.
  20. What are the changes you can make in run-time settings? - The Run Time Settings that we make are: a) Pacing - It has the iteration count. b) Log - Under this we have Disable Logging, Standard Log, and Extended Log. c) Think Time - In think time we have two options: Ignore think time and Replay think time. d) General - Under the General tab we can set the Vusers to run as a process or as multithreaded, and whether each step is a transaction.
  21. Where do you set Iteration for Vuser testing? - We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run time settings, Pacing tab, set number of iterations.
  22. How do you perform functional testing under load? - Functionality under load can be tested by running several Vusers concurrently. By increasing the number of Vusers, we can determine how much load the server can sustain.
  23. What is Ramp up? How do you set this? - This option is used to gradually increase the number of Vusers/load on the server. An initial value is set, and a value to wait between intervals can be specified. To set Ramp Up, go to the ‘Scenario Scheduling Options’.
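The ramp-up schedule itself is simple arithmetic. A sketch, assuming hypothetical numbers (start 10 Vusers every 30 seconds until 100 are running):

```python
def ramp_up_schedule(total_vusers, batch_size, interval_seconds):
    """Return (elapsed_seconds, running_vusers) pairs for a gradual ramp-up."""
    schedule = []
    running = 0
    elapsed = 0
    while running < total_vusers:
        running = min(running + batch_size, total_vusers)
        schedule.append((elapsed, running))
        elapsed += interval_seconds
    return schedule

# 10 Vusers every 30 seconds, up to 100 Vusers
print(ramp_up_schedule(100, 10, 30)[:3])  # [(0, 10), (30, 20), (60, 30)]
```

Gradual ramp-up like this is what lets you see at which load level response times start to degrade, rather than hitting the server with the full load at once.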
  24. What is the advantage of running the Vuser as thread? - VuGen provides the facility to use multithreading, which enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, taking up a large amount of memory; this limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.
  25. If you want to stop the execution of your script on error, how do you do that? - The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the “Continue on error” option in Run-Time Settings.
  26. What is the relation between Response Time and Throughput? - The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.
  27. Explain the Configuration of your systems? - The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.
  28. How do you identify the performance bottlenecks? - Performance Bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.
  29. If web server, database and Network are all fine where could be the problem? - The problem could be in the system itself or in the application server or in the code written for the application.
  30. How did you find web server related issues? - Using Web resource monitors, we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.
  31. How did you find database related issues? - By running the “Database” monitor and with the help of the “Data Resource Graph” we can find database related issues. E.g., you can specify the resource you want to measure before running the Controller, and then you can see database related issues.
  32. Explain all the web recording options?
  33. What is the difference between Overlay graph and Correlate graph? - Overlay Graph: It overlays the contents of two graphs that share a common x-axis. The left Y-axis on the merged graph shows the current graph's values, and the right Y-axis shows the values of the graph that was merged. Correlate Graph: It plots the Y-axes of two graphs against each other. The active graph's Y-axis becomes the X-axis of the merged graph, and the Y-axis of the graph that was merged becomes the merged graph's Y-axis.
  34. How did you plan the Load? What are the Criteria? - Load test is planned to decide the number of users, what kind of machines we are going to use and from where they are run. It is based on 2 important documents, Task Distribution Diagram and Transaction profile. Task Distribution Diagram gives us the information on number of users for a particular transaction and the time of the load. The peak usage and off-usage are decided from this Diagram. Transaction profile gives us the information about the transactions name and their priority levels with regard to the scenario we are deciding.
  35. What does vuser_init action contain? - Vuser_init action contains procedures to login to a server.
  36. What does vuser_end action contain? - Vuser_end section contains log off procedures.
  37. What is think time? How do you change the threshold? - Think time is the time that a real user waits between actions. Example: When a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as the think time. Changing the Threshold: Threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of the Vugen.
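The threshold behaves as a simple filter on the pauses captured during recording: waits below it are discarded rather than replayed. A sketch with made-up recorded pauses:

```python
THRESHOLD_SECONDS = 5  # VuGen's default think-time threshold

# hypothetical pauses (in seconds) captured while recording a business process
recorded_waits = [0.4, 7.2, 1.1, 12.5]

# pauses below the threshold are ignored; the rest would be replayed
# as think time between steps
kept_think_times = [w for w in recorded_waits if w >= THRESHOLD_SECONDS]

print(kept_think_times)  # [7.2, 12.5]
```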
  38. What is the difference between standard log and extended log? - The standard log sends a subset of functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. This is mainly used during debugging when we want information about: parameter substitution, data returned by the server, and advanced trace.
  39. Explain the following functions: - lr_debug_message - The lr_debug_message function sends a debug message to the output log when the specified message class is set. lr_output_message - The lr_output_message function sends notifications to the Controller Output window and the Vuser log file. lr_error_message - The lr_error_message function sends an error message to the LoadRunner Output window. lrd_stmt - The lrd_stmt function associates a character string (usually a SQL statement) with a cursor. This function sets a SQL statement to be processed. lrd_fetch - The lrd_fetch function fetches the next row from the result set.
  40. Throughput - If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.
  41. Types of Goals in Goal-Oriented Scenario - LoadRunner provides you with five different types of goals in a goal-oriented scenario:
    • The number of concurrent Vusers
    • The number of hits per second
    • The number of transactions per second
    • The number of pages per minute
    • The transaction response time that you want your scenario to achieve
  42. Analysis Scenario (Bottlenecks): In the Running Vusers graph correlated with the Response Time graph, you can see that as the number of Vusers increases, the average response time of the check itinerary transaction very gradually increases. In other words, the average response time steadily increases as the load increases. At 56 Vusers, there is a sudden, sharp increase in the average response time. We say that the test broke the server; that is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.



Sunday, May 24, 2009

Introduction to Six Sigma

What Is Six Sigma?

Six Sigma is a methodology that aligns core business processes with customer and business requirements; systematically eliminates defects from existing processes, products, and services; or designs new processes, products, and services that reliably and consistently meet customer and business requirements. It essentially boils down to an approach for quantifying how well a business is meeting stakeholder expectations and then applying tactics for ensuring that those expectations are met virtually every time.

A major difference between Six Sigma and other quality programs, such as Total Quality Management (TQM), is that Six Sigma incorporates a control phase with ongoing checks to ensure that once improvements are achieved, they are not a one-time or temporary phenomenon but are maintained over time. The Six Sigma methodology gives those who use it a structured yet flexible process to follow, a large and expanding tool set to employ, a configuration to clarify roles and responsibilities, and a governance structure to ensure compliance. Six Sigma can be used to improve existing processes or to create a new product or process.

The “sigma” in Six Sigma is the Greek letter that statisticians use to represent the standard deviation of a population: it tells how much variability there is within a group of items (the “population”). The more variation there is, the bigger the standard deviation is. Thus, the sigma level is tied directly to the number of defects: the fewer the defects, the higher the sigma level, and the better the quality.
Each time that a process or product does not meet stakeholders' expectations, it is counted as a defect. To achieve Six Sigma, a process must not produce more than 3.4 defects per million opportunities. To put this in perspective, if you were a publisher and a misspelled word was considered a defect, 99 percent quality would mean that for every 300,000 words that were read by the customers who purchased your books, 3,000 would still be misspelled. Six Sigma strives for near perfection; therefore, reaching Six Sigma quality would mean that for the same 300,000-word opportunity, only 1 word would be misspelled. For training programs, a defect would be anything that did not meet customer requirements. It could be a misspelled word, a simulation that has incorrect information or does not work properly, a hyperlink that is broken, or a course taking too long to complete or costing too much. In short, anything that does not meet a customer requirement is considered a defect.


The Six Sigma philosophy can be captured as a methodology that allows companies to:

1. Consistently meet customer requirements
2. Use data to drive all decision making
3. Do everything with quality


In other words, Six Sigma is a customer-focused, data-driven, measurement-based strategy that allows companies to meet customer requirements virtually every time.


The History of Six Sigma

When many people think of Six Sigma, the first name that comes to mind is Jack Welch. Welch was the CEO of General Electric who championed Six Sigma and in the process made it a household word in corporate America. Welch launched the effort in late 1995 with two hundred projects and intensive training programs, moved to three thousand projects and more training in 1996, and undertook six thousand projects and still more training in 1997. The initiative was a stunning success, delivering far more benefits than Welch had first envisioned. Six Sigma delivered $320 million in productivity gains and profits, more than double Welch's original goal of $150 million.

In fact, Six Sigma greatly predates Welch and his experience at GE. A Motorola engineer, Bill Smith, is the individual credited with coining the term Six Sigma. In the mid-1980s, Motorola engineers decided that the traditional quality levels for measuring defects did not provide enough granularity. At that time, it was common practice to measure how many defects occurred for every 1,000 opportunities, but these engineers wanted to measure the defects per 1 million opportunities. Motorola developed this new standard and then created the methodology.

As a measurement standard, however, Six Sigma goes even further back, to Carl Friedrich Gauss (1777–1855), the German mathematician who introduced the concept of the normal curve. And as a measurement standard in product variation, it dates to the 1920s, when Walter Shewhart (who is credited with combining creative management thinking with statistical analysis) showed that three sigma from the mean (or average) is the point where a process requires correction.
Understanding the history of this methodology and the backgrounds of the individuals who shaped this approach sheds light on the rationale for the methodology. Shewhart spent most of his career at the Bell Telephone Laboratories, where he worked on statistical tools to examine when a corrective action must be applied to a process. Bill Smith was a Motorola engineer who introduced the statistical approach to increasing profitability by decreasing defects. The work of Jack Welch and how he turned GE around, largely as a result of adopting Six Sigma, is well documented. None of these men were dealing purely in theory; they worked in businesses and were responsible for quantifying their contributions to the organization in a way that the business respected. As a result, they understood the need to show business results.

Three concepts are at the core of Six Sigma: the concept of the customer, a defect, and tollgate reviews. These concepts apply to Six Sigma regardless of the model and will be discussed throughout this book.

The Customer

With Six Sigma, everything begins and ends with the customer. According to iSixSigma, a customer is “one who buys or rates our process/product (in terms of requirements), and gives the final verdict on the same” (www.isixsigma.com). Motorola University teaches that there are two types of customer classifications: internal and external.
Internal customers are stakeholders, departments, and employees within the company. They are frequently referred to as process partners. They may use their company's products or services or may be part of the value chain that helps to produce the product. In developing a training program, a process partner might be a subject matter expert or the manager of an employee who will take the training. The requirements of internal customers are frequently referred to as the voice of the business.

External customers are individuals or organizations outside the company. They use or purchase a product or service in its final form and are referred to as end users. They are the reason an organization is in business. If the training group is designing a training program for the accounting department, then the accountants who are taking the training are considered the customers. Whoever is paying for the course development is also considered an external customer. The same is true if the training is being developed for individuals who do not work for the company. The requirements of external customers are referred to as the voice of the customer.

The same rigor that is applied to external customers needs to be applied in understanding internal customer needs. Improvements made for an internal customer ultimately lead to a quantitative improvement for the external customers.

The Six Sigma philosophy holds that these two entities, the internal and the external customers, dictate the requirements for specific products or services and thus quality. The highest level of quality means meeting the expectations of both internal and external customers.


The Defect

Internal and external customers dictate the requirements for a product or service. Not meeting a requirement is considered a defect. To use a simple example from the training world, let's assume that as a training manager, you are commissioned by the accounting department to develop a program that teaches employees how to use a new accounting system. One of the requirements is that the class is no longer than one hour. You build the course, but it is one hour and ten minutes long. The course length is then a defect. Let's assume that another requirement is that a specific logo must appear on every page of a PowerPoint presentation. Each time the logo does not appear is considered a defect. If the PowerPoint presentation is 100 pages and the logo is absent from fifteen slides, the defects per million opportunities (DPMO) would be equal to the total defects divided by the total opportunities, multiplied by 1 million:

DPMO = (total defects / total opportunities) × 1,000,000.
Applying this formula to the training example:
(15/100) × 1,000,000 = 150,000 DPMO.
One hundred and fifty thousand defects per million opportunities would translate into a sigma level of less than 2.5.
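The calculation above is a direct translation of the DPMO formula:

```python
def dpmo(total_defects, total_opportunities):
    """Defects per million opportunities."""
    return total_defects / total_opportunities * 1_000_000

# the PowerPoint example: the logo is missing from 15 of 100 slides
print(dpmo(15, 100))  # 150000.0
```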

The number of DPMOs translates into your sigma level: the fewer defects, the higher the sigma level. The performance in the example would yield a sigma level of between 2.0 and 2.5, meaning that it would be meeting critical customer requirements (CCRs) less than 84 percent of the time.
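The DPMO-to-sigma mapping is normally read off a conversion table. A sketch using the conventional published ceilings (which include the standard 1.5-sigma shift); the table values are standard, but the whole-number granularity is a simplification:

```python
# (sigma level, maximum DPMO at that level), strictest first
SIGMA_TABLE = [
    (6.0, 3.4),
    (5.0, 233),
    (4.0, 6_210),
    (3.0, 66_807),
    (2.0, 308_537),
]

def sigma_level(dpmo_value):
    """Return the highest whole sigma level the given DPMO achieves."""
    for sigma, max_dpmo in SIGMA_TABLE:
        if dpmo_value <= max_dpmo:
            return sigma
    return 1.0  # worse than 2 sigma

print(sigma_level(150_000))  # 2.0
```

150,000 DPMO clears the 2-sigma ceiling of 308,537 but not the 3-sigma ceiling of 66,807, so the process sits between 2 and 3 sigma.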



Quality Audit

Quality audits are typically performed at predefined time intervals and ensure that the institution has clearly-defined internal quality monitoring procedures linked to effective action. This can help determine if the organization complies with the defined quality system processes and can involve procedural or results-based assessment criteria.
Several countries (New Zealand, Australia, Sweden, Finland, Norway, and the USA) have adopted quality audits in their higher education systems. Initiated in the UK, the process of quality audit in the education system focused primarily on procedural issues rather than on the results or the efficiency of a quality system implementation.
The processes and tasks that a quality audit involves can be managed using a wide variety of software and self-assessment tools. Some of these relate specifically to quality in terms of fitness for purpose and conformance to standards, while others relate to Quality costs or, more accurately, to the Cost of poor quality. In analyzing quality costs, a cost of quality audit can be applied across any organization rather than just to conventional production or assembly processes.
An evaluator can use the six steps below to do the audit for a specific organization, group, committee, task force, etc.

1. Look at the organization (group, committee, task force, etc.)
Assess the mission and goals of the organization, what it is supposed to produce, and the overriding principles by which it operates.
2. Examine the jobs
Examine each job in the organization. Ask whether the job is necessary, whether it makes full use of the employee’s capabilities, and whether it is important in accomplishing the mission and goals of the organization.
3. Assess employees’ performance
Evaluate each employee’s performance in relation to the organization’s mission and goals. For each job being performed, ask if the employee is doing what should be done, is using his or her skills effectively, likes his or her job, and has enthusiasm and interest in performing the job.
4. Evaluate how employees feel about their manager or leader
Good organizational climate requires good leadership. Determine whether each employee within the group likes his or her manager, whether they follow or ignore the requests of their manager, and whether they attempt to protect their manager (i.e., make their manager look good).
5. Create a dialog with the members of the group
Interact with each employee, asking a series of hypothetical questions to identify the employee's true feelings toward the organization. Questions such as "Do you feel the organization supports your suggestions?" can help draw out the true feelings of each employee.
6. Rate organizational climate
Based on the responses to steps 1-5, evaluate the climate on the following five-point scale:
Ideal (5 points)
A fully cooperative environment in which managers and staff work as a team to
accomplish the mission.
Good (4 points)
Some concerns about the health of the climate, but overall it is cooperative and
productive.
Average (3 points)
The organizational climate is one of accomplishing the organization's mission and
goals, but no more.
Below average (2 points)
The individuals are more concerned about their individual performance, development, and promotion than accomplishing the organization’s mission.
Poor (1 point)
There is open hostility in the group and a non-cooperative attitude. As a result, the
mission and goals are typically not met.

A negative climate of three points or less often results from individuals having too much responsibility without the authority to fulfill those responsibilities, or management’s failure to recognize the abilities of the employees. Negative organizational climates can be improved with the following:
Develop within the organization a shared vision of what needs to be changed. Get feedback from the employees, and through discussion and compromise agree upon the mission and goals for the organization.
Change the organization's procedures, as the climate rarely improves without procedural changes. Develop a plan for accomplishing the organization's mission that is understood and acceptable to its members. This is normally accomplished if the members help develop the plan.
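As an illustration only (the names are hypothetical, not part of any audit standard), the five-point climate scale and the "three points or less" warning threshold described above can be captured in a few lines of Python:

```python
# The five-point organizational climate scale from the audit steps.
CLIMATE_SCALE = {5: "Ideal", 4: "Good", 3: "Average", 2: "Below average", 1: "Poor"}

def assess_climate(score):
    """Return the scale label and whether the climate counts as negative
    (three points or less, per the audit guidance above)."""
    label = CLIMATE_SCALE[score]
    needs_action = score <= 3
    return label, needs_action

print(assess_climate(2))  # ('Below average', True)
```

A score flagged `True` signals that the improvement steps above (shared vision, procedural change) should be applied.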

Friday, May 22, 2009

Basics of QTP

What is the full form of QTP?
QuickTest Professional


What's QTP?
QTP is Mercury Interactive's functional testing tool.


What’s the basic concept of QTP? QTP is based on two concepts-
• Recording
• Playback


Which scripting language is used by QTP?
QTP uses VBScript.


How many types of recording facility are available in QTP?
QTP provides three types of recording methods-

• Context Recording (Normal)
• Analog Recording
• Low Level Recording


There are three types of recording modes available in QuickTest Pro:
1. Normal mode
2. Analog mode
3. Low-level recording mode

Normal mode: This is the default recording mode in QTP; the object and the operation performed on it are recorded. This mode takes full advantage of QuickTest's test object model, recognizing the objects in the AUT regardless of their location on the screen.

Analog mode: This mode records the exact mouse and keyboard operations you perform in relation to the screen or application window. It is useful for operations that you cannot record at the object level, such as drawing a picture or recording a signature. The steps recorded in Analog mode are saved in a separate data file; QuickTest adds to your test a statement that runs the recorded analog file, and the file is stored with the action in which the analog steps were created. Steps recorded in Analog mode cannot be edited within QuickTest.

Low-level recording mode: This mode enables you to record any object or operation in your AUT, whether or not QuickTest recognizes it. Low-level recording is useful when the exact location of an object or operation in your AUT is important for your test. This mode records in terms of X,Y coordinates. Unlike Analog mode, the steps can be seen in the test script as well as in the Keyword view.

To summarize the three modes: In Normal mode (the default), objects are captured in GUI format and an object repository is created with the objects and their properties stored at recording time; mouse movements are not captured. In Analog mode, objects are not captured, so no object repository is created; only mouse movements are captured, stored in a file called a track file, and movements relative to the screen or application window can be recorded. In Low-level mode, objects are captured in WinObject format with their properties stored in the object repository (you will notice a different icon for objects stored this way); in this mode you cannot capture overlapping objects, e.g., menu drop-downs.


How many types of Parameters are available in QTP?
QTP provides three types of parameter-
• Method Argument
• Data Driven
• Dynamic


What's the QTP testing process?

The QTP testing process consists of seven steps-
• Preparing to record
• Recording
• Enhancing your script
• Debugging
• Running
• Analyzing
• Reporting defects


What’s the Active Screen?
It provides snapshots of your application as it appeared when you performed certain steps during the recording session.


What’s the Test Pane?
Test Pane contains Tree View and Expert View tabs.


What’s Data Table?
It assists you in parameterizing the test.


What’s the Test Tree?
It provides a graphical representation of the operations you have performed on your application.


Which environments does QTP support?
ERP/CRM, Java/J2EE, VB, .NET, Multimedia, XML, Web objects, ActiveX controls, SAP, Oracle, Siebel, PeopleSoft, Web services, terminal emulators, IE, NN, AOL


How can you view the Test Tree?
The Test Tree is displayed through Tree View tab.


What’s the Expert View?
Expert View display the Test Script.


Which keyboard shortcut is used for normal recording?
F3


Which keyboard shortcut is used to run the test script?
F5


Which keyboard shortcut is used to stop recording?
F4

Which keyboard shortcut is used for Analog Recording?
Ctrl+Shift+F4


Which keyboard shortcut is used for Low Level Recording?
Ctrl+Shift+F3


Which keyboard shortcut is used to switch between Tree View and Expert View?
Ctrl+Tab


Note:
> QTP records each step you perform and generates a test tree and test script.

> By default, QTP records in normal recording mode.

> If you are creating a test on web object, you can record your test on one browser and run it on another browser.
> Analog Recording and Low Level Recording require more disk space than normal recording mode.



What's a transaction?
You can measure how long it takes to run a section of your test by defining transactions.
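The idea of timing a named section of a test can be sketched outside QTP too. The following Python context manager is an illustrative analogy only (the `transaction` helper is hypothetical, not a QTP API):

```python
import time
from contextlib import contextmanager

@contextmanager
def transaction(name, results):
    """Time a named section of a test run, mimicking the transaction concept."""
    start = time.perf_counter()
    yield
    results[name] = time.perf_counter() - start

results = {}
with transaction("login", results):
    time.sleep(0.01)  # stand-in for the test steps being measured

print(results["login"] > 0)  # True
```

Each named section gets its own elapsed time, just as each defined transaction in a test reports its own duration.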


Where you can view the results of the checkpoint?
You can view the results of the checkpoints in the Test Result Window.


Note: If you want to retrieve the return value of a checkpoint (a Boolean value that indicates whether the checkpoint passed or failed) you must add parentheses around the checkpoint argument in the statement in the Expert View.


What’s the Standard Checkpoint?
A Standard Checkpoint checks the property value of an object in your application or web page.


Which environment is supported by Standard Checkpoint?
Standard Checkpoint is supported for all add-in environments.


What’s the Image Checkpoint?
Image Checkpoint checks the value of an image in your application or web page.


Which environments are supported by Image Checkpoint?
Image Checkpoint supports only Web environment.


What’s the Bitmap Checkpoint?
Bitmap Checkpoint checks the bitmap images in your web page or application.


Which environment is supported by Bitmap Checkpoints?
Bitmap Checkpoints are supported in all add-in environments.


What are the Table Checkpoints?
Table Checkpoint checks the information within a table.


Which environments are supported by Table Checkpoint?
Table Checkpoints support only the ActiveX environment.


What’s the Text Checkpoint?
Text Checkpoint checks that a text string is displayed in the appropriate place in your application or web page.


Which environments are supported by Text Checkpoint?
Text Checkpoints are supported in all add-in environments.

Some more questions I have on QTP follow; I hope they help you.


1. What are the features and benefits of QuickTest Pro (QTP)?
1. Keyword-driven testing
2. Suitable for both client-server and web-based applications
3. VBScript as the scripting language
4. Better error-handling mechanism
5. Excellent data-driven testing features


2. Where can I get QuickTest Pro (QTP) software? (This is just for information purposes.) Introduction to QuickTest Professional 8.0, Computer Based Training: a step-by-step tutorial and an evaluation copy of the software are available. The full CBT is 162 MB. You will have to create an account to be able to download evaluation copies of the CBT and software.


3. How do you handle exceptions using the Recovery Scenario Manager in QTP? You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps:
1. Triggered events 2. Recovery steps 3. Post-recovery test run





4. What is the use of a text output value in QTP? Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus, by creating output values, we can capture the values the application takes for each run and output them to the data table.


5. How do you use the Object Spy in QTP 8.0? There are two ways to spy on objects in QTP: 1) Through the File toolbar: click the last toolbar button (an icon showing a person with a hat). 2) Through the Object Repository dialog: click the "Object Spy..." button, then in the Object Spy dialog click the button showing a hand symbol. The pointer changes into a hand, and you point at the object to spy its state. If the object is not visible, or the window is minimized, hold the Ctrl key, activate the required window, and then release Ctrl.


6. What are the file extensions of the code file and the object repository files in QTP? Per-test object repository: filename.mtr. Shared object repository: filename.tsr. The code file extension is script.mts.

7. Explain the concept of object repository & how QTP recognises objects?
Object Repository: displays a tree of all objects in the current component, in the current action, or in the entire test (depending on the object repository mode you selected). We can view or modify the test object description of any test object in the repository, or add new objects to the repository. QuickTest learns the default property values and determines which test object class the object fits. If that is not enough, it adds assistive properties, one by one, to the description until it has compiled a unique description. If no assistive properties are available, it adds a special ordinal identifier, such as the object's location on the page or in the source code.

8. What are the properties you would use for identifying a browser & page when using descriptive programming ?
"name" would be another property apart from "title" that we can use. We can also use the property "micClass", e.g.: Browser("micClass:=browser").Page("micClass:=page")

9. What are the different scripting languages you could use when working with QTP ?
Visual Basic (VB), XML, JavaScript, Java, HTML

10. Give me an example where you have used a COM interface in your QTP project?

11. A few basic questions on commonly used Excel VBA functions. Common functions include: coloring a cell, auto-fitting a cell, setting navigation from a link in one cell to another, and saving.

12. Explain the keyword CreateObject with an example.
CreateObject creates and returns a reference to an Automation object. Syntax: CreateObject(servername.typename [, location]). Arguments: servername (required) is the name of the application providing the object; typename (required) is the type or class of the object to create; location (optional) is the name of the network server where the object is to be created. For example: Set fso = CreateObject("Scripting.FileSystemObject").

13. Explain in brief about the QTP Automation Object Model.
Essentially all configuration and run functionality provided via the QuickTest interface is in some way represented in the QuickTest automation object model via objects, methods, and properties. Although a one-on-one comparison cannot always be made, most dialog boxes in QuickTest have a corresponding automation object, most options in dialog boxes can be set and/or retrieved using the corresponding object property, and most menu commands and other operations have corresponding automation methods. You can use the objects, methods, and properties exposed by the QuickTest automation object model, along with standard programming elements such as loops and conditional statements to design your program.

14. How do you handle dynamic objects in QTP? QTP has a feature called Smart Identification. QTP generally identifies an object by matching its test object and run-time object properties, and it may fail to recognize dynamic objects whose properties change during run time. Hence it has an option to enable Smart Identification, wherein it can identify objects even if their properties change during run time. If QuickTest is unable to find any object that matches the recorded object description, or if it finds more than one object that fits the description, it ignores the recorded description and uses the Smart Identification mechanism to try to identify the object. While the Smart Identification mechanism is more complex, it is more flexible; if configured logically, a Smart Identification definition can help QuickTest identify an object, if it is present, even when the recorded description fails. The Smart Identification mechanism uses two types of properties. Base filter properties are the most fundamental properties of a particular test object class, whose values cannot be changed without changing the essence of the original object; for example, if a Web link's tag were changed, you could no longer call it the same object. Optional filter properties are other properties that can help identify objects of a particular class, as they are unlikely to change on a regular basis, but which can be ignored if they are no longer applicable.

15. What is a Run-Time Data Table? Where can I find and view this table?
In QTP, there is a data table that is used at run time. In QTP, select View > Data Table. This is basically an Excel file stored in the folder of the test created; its name is Default.xls by default.

16. How does Parameterization and Data-Driving relate to each other in QTP?
To data-drive, we have to parameterize; i.e., we make the constant value a parameter so that in each iteration (cycle) it takes a value supplied in the run-time data table. Only through parameterization can we drive a transaction (action) with different sets of data. Running the script with the same set of data several times is not advisable, and is also of no use.
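The relationship between parameterization and data-driving can be sketched generically: the same action runs once per data-table row, with the parameterized values swapped in each iteration. The table and the login action below are hypothetical Python stand-ins for QTP's Data Table and a recorded action:

```python
# Hypothetical stand-in for a run-time data table: one row per iteration.
data_table = [
    {"user": "alice", "password": "a1"},
    {"user": "bob",   "password": "b2"},
]

def run_login_action(user, password):
    """Stand-in for a parameterized action; returns a per-iteration result."""
    return f"logged in as {user}"

# Data-driving: the same action executes once per data-table row.
results = [run_login_action(row["user"], row["password"]) for row in data_table]
print(results)  # ['logged in as alice', 'logged in as bob']
```

Without parameterization there would be nothing for each row to substitute, which is why the two concepts go together.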

17. What is the difference between Call to Action and Copy Action.?
Call to Action: changes made in a called action will be reflected in the original action (from where the script is called), whereas in Copy Action, changes made in the script will not affect the original action.

18. Discuss QTP Environment.
The QuickTest Pro environment uses a graphical interface and ActiveScreen technologies; a testing process for creating test scripts that relates manual test requirements to automated verification features; and data driving to use several sets of data with one test script.

19. Explain the concept of how QTP identifies object.
During recording, QTP looks at the object and stores it as a test object. For each test object, QuickTest learns a set of default properties called mandatory properties and checks whether these properties are enough to uniquely identify the object. During the test run, QuickTest searches for the run-time objects that match the test object it learned while recording.

20. Differentiate the two Object Repository Types of QTP.
The object repository is used to store all the objects in the application being tested. There are two types of object repository: per-action and shared. A shared repository is one centralized repository for all tests, whereas with per-action repositories, a separate repository is created for each action.

21. What are the differences, and what is the best practical application of each? Per Action: for each action, one object repository is created. Shared: one object repository is used by the entire application.

22. Explain the difference between a Shared Repository and a Per-Action Repository.
Shared Repository: the entire application uses one object repository, similar to the Global GUI Map file in WinRunner. Per Action: for each action, one object repository is created, like the per-test GUI map file in WinRunner.

23. Have you ever written a compiled module? If yes tell me about some of the functions that you wrote.
I used functions for capturing dynamic data during run time, and functions for capturing the desktop, browser, and pages.

24. What projects have you used WinRunner on? Tell me about some of the challenges that arose and how you handled them.
Problems: WinRunner fails to identify some objects in the GUI. If there is a non-standard window object WinRunner cannot recognize, we use GUI Spy to handle such situations.

25. Can you do more than just capture and playback?
I have dynamically captured objects during run time, with no recording, no playback, and no use of a repository at all. It was done with Windows scripting, using the DOM (Document Object Model).

26. How long have you used the product?


27. How do you do the scripting? Are there any in-built functions in QTP? How do you handle script issues?
Yes, there is an in-built feature called the Step Generator (Insert > Step > Step Generator, F7), which generates scripts as you enter the appropriate steps.

28. What is the difference between a checkpoint and an output value?
To add to Kalpana's comments: an output value is a value captured during the test run and written at run time to a specified location, e.g., a location in the Data Table (Global or local sheet).

29. If we use batch testing, the result is shown for the last action only. How can I get the result for every action? You can click the icon in the tree view to view the result of every action.

30. How can exception handling be done using QTP?
It can be done using the Recovery Scenario Manager, which provides a wizard that guides you through the process of defining a recovery scenario. The wizard can be accessed via Tools > Recovery Scenario Manager.

31. How do you test siebel application using qtp?

32. How many types of actions are there in QTP? There are three kinds of actions: a non-reusable action can be called only in the test with which it is stored, and only once; a reusable action can be called multiple times by the test with which it is stored (the local test) as well as by other tests; an external action is a reusable action stored with another test. External actions are read-only in the calling test, but you can choose to use a local, editable copy of the Data Table information for the external action.

33. How do you data drive an external spreadsheet?


34. I want to open a Notepad window without recording a test, and I do not want to use the SystemUtil.Run command. How do I do this?

You can still open Notepad without recording or using a SystemUtil script, by specifying the path of notepad.exe in the "Windows Applications" tab of the "Record and Run Settings" window.


1. . What are the Features & Benefits of Quick Test Pro(QTP)..?

1. Key word driven testing

2. Suitable for both client server and web based application

3. Vb script as the scriot language

4. Better error handling mechanism

5. Excellent data driven testing features


2. Where can I get Quck Test pro(QTP Pro) software.. This is Just for Information purpose Only.
Introduction to QuickTest Professional 8.0, Computer Based Training: Please find the step to get QuickTest Professional 8.0 CBT Step by Step Tutorial and Evaluation copy of the software. The full CBT is 162 MB. You will have to create account to be able to download evaluation copies of CBT and Software.

3. How to handle the exceptions using recovery secnario manager in Qtp?

You can instruct QTP to recover unexpected events or errors that occured in your testing environment during test run. Recovery scenario manager provides a wizard that guides you through the defining recovery scenario. Recovery scenario has three steps
1. Triggered Events 2. Recovery steps 3. Post Recovery Test-Run

4. what is the use of Text output value in Qtp?
Output values enable to view the values that the application talkes during run time.When paramaterised, the values change for each iteration.Thus by creating output values, we can capture the values that the application takes for each run and output them to the data table.

5. How to use the Object spy in QTP 8.0 version?

There are two ways to Spy the objects in QTP
1) Thru file toolbar —In the File ToolBar click on the last toolbar button (an icon showing a person with hat). 2) Tru Object repository Dialog —In Objectrepository dialog click on the button”object spy…” In the Object spy Dialog click on the button showing hand symbol. the pointer now changes in to a hand symbol and we have to point out the object to spy the state of the object if at all the object is not visible..or window is minimised then Hold the Ctrl button and activate the required window to and release the Ctrl button.

6. What is the file extension of the code file & object repository file in QTP?
File extension of
– Per test object rep :- filename.mtr – Shared Oject rep :- filename.tsr Code file extension id script.mts

7. Explain the concept of object repository & how QTP recognises objects?

Object Repository: displays a tree of all objects in the current component or in the current action or entire test( depending on the object repository mode you selected).
we can view or modify the test object description of any test object in the repository or to add new objects to the repository. Quicktest learns the default property values and determines in which test object class it fits.If it is not enough it adds assistive properties, one by one to the description until it has compiled the unique description.If no assistive properties are available, then it adds a special Ordianl identifier such as objects location onthe page or in the source code.

8. What are the properties you would use for identifying a browser & page when using descriptive programming ?
“name” would be another property apart from “title” that we can use. OR We can also use the property “micClass”. ex: Browser(”micClass:=browser”).page(”micClass:=page”)….

9. What are the different scripting languages you could use when working with QTP ?

Visual Basic (VB),XML,JavaScript,Java,HTML


10. Give me an example where you have used a COM interface in your QTP project?


11. Few basic questions on commonly used Excel VBA functions.
common functions are: Coloring the cell Auto fit cell setting navigation from link in one cell to other saving

12. Explain the keyword createobject with an example. Creates and returns a reference to an Automation object syntax: CreateObject(servername.typename [, location]) Arguments servername:Required. The name of the application providing the object. typename : Required. The type or class of the object to create. location : Optional. The name of the network server where the object is to be created.

13. Explain in brief about the QTP Automation Object Model. Essentially all configuration and run functionality provided via the QuickTest interface is in some way represented in the QuickTest automation object model via objects, methods, and properties. Although a one-on-one comparison cannot always be made, most dialog boxes in QuickTest have a corresponding automation object, most options in dialog boxes can be set and/or retrieved using the corresponding object property, and most menu commands and other operations have corresponding automation methods. You can use the objects, methods, and properties exposed by the QuickTest automation object model, along with standard programming elements such as loops and conditional statements to design your program.

14. How to handle dynamic objects in QTP?
QTP has a unique feature called Smart Object Identification/recognition. QTP generally identifies an object by matching its test object and run time object properties. QTP may fail to recognise the dynamic objects whose properties change during run time. Hence it has an option of enabling Smart Identification, wherein it can identify the objects even if their properties changes during run time. Check this out- If QuickTest is unable to find any object that matches the recorded object description, or if it finds more than one object that fits the description, then QuickTest ignores the recorded description, and uses the Smart Identification mechanism to try to identify the object. While the Smart Identification mechanism is more complex, it is more flexible, and thus, if configured logically, a Smart Identification definition can probably help QuickTest identify an object, if it is present, even when the recorded description fails. The Smart Identification mechanism uses two types of properties: Base filter properties—The most fundamental properties of a particular test object class; those whose values cannot be changed without changing the essence of the original object. For example, if a Web link’s tag was changed from to any other value, you could no longer call it the same object. Optional filter properties—Other properties that can help identify objects of a particular class as they are unlikely to change on a regular basis, but which can be ignored if they are no longer applicable.

15. What is a Run-Time Data Table? Where can I find and view this table?

In QTP, there is data table used , which is used at runtime.
-In QTP, select the option View->Data tabke. -This is basically an excel file, which is stored in the folder of the test created, its name is Default.xls by default.

16. How does Parameterization and Data-Driving relate to each other in QTP?

To datadrive we have to parameterize.i.e. we have to make the constant value as parameter, so that in each iteraration(cycle) it takes a value that is supplied in run-time datatable. Through parameterization only we can drive a transaction(action) with different sets of data. You know running the script with the same set of data several times is not suggestable, & it’s also of no use.



17. What is the difference between Call to Action and Copy Action.?
Call to Action : The changes made in Call to Action , will be reflected in the orginal action( from where the script is called).But where as in Copy Action , the changes made in the script ,will not effect the original script(Action)

18. Discuss QTP Environment.
QuickTest Pro environment using the graphical interface and ActiveScreen technologies - A testing process for creating test scripts, relating manual test requirements to automated verification features - Data driving to use several sets of data using one test script.

19. Explain the concept of how QTP identifies object. During recording qtp looks at the object and stores it as test object.For each test object QT learns a set of default properties called mandatory properties,and look at the rest of the objects to check whether this properties are enough to uniquely identify the object. During test run,QT searches for the run time obkects that matches with the test object it learned while recording.

20. Differentiate the two Object Repository Types of QTP.
Object repository is used to store all the objects in the application being tested.2 types of oject repositoy per action and shared. In shared repository only one centralised repository for all the tests. where as in per action.for each test a separate per action repostory is created.

21. What the differences are and best practical application of each.
Per Action: For Each Action, one Object Repository is created. Shared : One Object Repository is used by entire application


22. Explain what the difference between Shared Repository and Per_Action Repository
Shared Repository: Entire application uses one Object Repository , that similar to Global GUI Map file in WinRunner Per Action: For each Action ,one Object Repository is created, like GUI map file per test in WinRunner

23. Have you ever written a compiled module? If yes tell me about some of the functions that you wrote.
I Used the functions for Capturing the dynamic data during runtime. Function used for Capturing Desktop, browser and pages.


24. What projects have you used WinRunner on? Tell me about some of the challenges that arose and how you handled them. pbs :WR fails to identify the object in gui. If there is a non std window obk wr cannot recognize it ,we use GUI SPY for that to handle such situation.


25. Can you do more than just capture and playback?
I have done Dynamically capturing the objects during runtime in which no recording, no playback and no use of repository is done AT ALL. -It was done by the windows scripting using the DOM(Document Object Model) of the windows.

26. How long have you used the product?


27. How do you do the scripting? Are there any in-built functions in QTP as in QTP-S? What is the difference between them? How do you handle script issues?
Yes, there is an in-built facility called the Step Generator (Insert -> Step -> Step Generator, or F7), which will generate the script as you enter the appropriate steps.


28. What is the difference between a checkpoint and an output value?
A checkpoint compares the current value of a property against its expected value and reports pass or fail. An output value is a value captured during the test run and written to a specified location, e.g. the Data Table (Global sheet or local sheet), for use later in the run.


29. If we use batch testing, the result is shown for the last action only. How can I get the result for every action?
You can click on the icon in the tree view to view the result of every action.


30. How can exception handling be done using QTP?
It can be done using the Recovery Scenario Manager, which provides a wizard that guides you through the process of defining a recovery scenario. The wizard can be accessed via Tools -> Recovery Scenario Manager.


31. How do you test siebel application using qtp?



32. How many types of Actions are there in QTP?
There are three kinds of actions: a non-reusable action, which can be called only in the test with which it is stored, and only once; a reusable action, which can be called multiple times by the test with which it is stored (the local test) as well as by other tests; and an external action, which is a reusable action stored with another test. External actions are read-only in the calling test, but you can choose to use a local, editable copy of the Data Table information for the external action.


33. How do you data drive an external spreadsheet?


34. I want to open a Notepad window without recording a test, and I do not want to use the SystemUtil.Run command either. How do I do this?
You can still make Notepad open without recording or a SystemUtil script, just by mentioning the path of notepad.exe (i.e. where notepad.exe is stored on the system) in the "Windows Applications" tab of the Record and Run Settings window.

Thursday, May 21, 2009

SQL Fundamentals


What is SQL and where does it come from?

Structured Query Language (SQL) is a language that provides an interface to relational database systems. SQL was developed by IBM in the 1970s for use in System R, and is a de facto standard, as well as an ISO and ANSI standard. SQL is often pronounced SEQUEL.

In common usage SQL also encompasses DML (Data Manipulation Language), used for INSERT, UPDATE and DELETE statements, and DDL (Data Definition Language), used for creating and modifying tables and other database structures.

The development of SQL is governed by standards. A major revision to the SQL standard was completed in 1992, called SQL2. SQL3 supports object extensions and will be (partially?) implemented in Oracle8.


What are the difference between DDL, DML and DCL commands?


DDL is Data Definition Language statements. Some examples:

· CREATE - to create objects in the database

· ALTER - alters the structure of the database

· DROP - delete objects from the database

· TRUNCATE - remove all records from a table; all space allocated for the records is also removed

· COMMENT - add comments to the data dictionary

· GRANT - gives users access privileges to the database

· REVOKE - withdraw access privileges given with the GRANT command


DML is Data Manipulation Language statements. Some examples:

· SELECT - retrieve data from a database

· INSERT - insert data into a table

· UPDATE - updates existing data within a table

· DELETE - deletes records from a table; the space for the records remains

· CALL - call a PL/SQL or Java subprogram

· EXPLAIN PLAN - explain access path to data

· LOCK TABLE - control concurrency



DCL is Data Control Language statements. Some examples:

· COMMIT - save work done

· SAVEPOINT - identify a point in a transaction to which you can later roll back

· ROLLBACK - restore the database to its state as of the last COMMIT

· SET TRANSACTION - Change transaction options like what rollback segment to use
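The COMMIT / SAVEPOINT / ROLLBACK interplay can be sketched portably. This is a minimal illustration using Python's sqlite3 module rather than Oracle; the `accounts` table and its data are invented for the demonstration.

```python
import sqlite3

# Commit a row, then make a tentative change inside a savepoint and
# roll it back: the committed value must survive.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.commit()  # COMMIT - save work done

conn.execute("SAVEPOINT before_update")          # SAVEPOINT - mark a point
conn.execute("UPDATE accounts SET balance = 0")  # tentative change
conn.execute("ROLLBACK TO before_update")        # ROLLBACK - undo to the mark

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(balance)  # → 100
```

The same statements (with Oracle's SET TRANSACTION options added) apply in SQL*Plus.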


How can I eliminate duplicates values in a table?


Choose one of the following queries to identify or remove duplicate rows from a table leaving one record:

Method 1:

SQL> DELETE FROM table_name A WHERE ROWID > (

2 SELECT min(rowid) FROM table_name B

3 WHERE A.key_values = B.key_values);

Method 2:

SQL> create table table_name2 as select distinct * from table_name1;

SQL> drop table table_name1;

SQL> rename table_name2 to table_name1;

Method 3:

SQL> Delete from my_table where rowid not in(

SQL> select max(rowid) from my_table

SQL> group by my_column_name );

Method 4:

SQL> delete from my_table t1

SQL> where exists (select 'x' from my_table t2

SQL> where t2.key_value1 = t1.key_value1

SQL> and t2.key_value2 = t1.key_value2

SQL> and t2.rowid > t1.rowid);

Note: If you create an index on the joined fields in the inner loop, you, for all intents and purposes, eliminate the N^2 operations (there is no need to loop through the entire table on each pass by a record).
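Method 2 (rebuild the table from SELECT DISTINCT) is the most portable of the four. A minimal sketch using SQLite via Python, with an invented table; note SQLite spells the rename as ALTER TABLE ... RENAME TO rather than Oracle's RENAME:

```python
import sqlite3

# Deduplicate by copying distinct rows into a new table, then swapping names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (key_value TEXT, amount INTEGER)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                 [("a", 1), ("a", 1), ("b", 2), ("b", 2), ("b", 2)])

conn.execute("CREATE TABLE t2 AS SELECT DISTINCT * FROM t1")
conn.execute("DROP TABLE t1")
conn.execute("ALTER TABLE t2 RENAME TO t1")  # SQLite's form of RENAME

rows = conn.execute("SELECT * FROM t1 ORDER BY key_value").fetchall()
print(rows)  # → [('a', 1), ('b', 2)]
```

The trade-off versus Method 1 is that indexes, grants and constraints on the original table are lost and must be recreated.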

How can I generate primary key values for my table?

Create your table with a NOT NULL column (say SEQNO). This column can then be populated with unique values:

SQL> UPDATE table_name SET seqno = ROWNUM;

or use a sequence generator:

SQL> CREATE SEQUENCE sequence_name START WITH 1 INCREMENT BY 1;
SQL> UPDATE table_name SET seqno = sequence_name.NEXTVAL;

Finally, create a unique index on this column.



How can I get the time difference between two date columns

Look at this example query:

select floor(((date1-date2)*24*60*60)/3600)

|| ' HOURS ' ||

floor((((date1-date2)*24*60*60) -

floor(((date1-date2)*24*60*60)/3600)*3600)/60)

|| ' MINUTES ' ||

round((((date1-date2)*24*60*60) -

floor(((date1-date2)*24*60*60)/3600)*3600 -

(floor((((date1-date2)*24*60*60) -

floor(((date1-date2)*24*60*60)/3600)*3600)/60)*60)))

|| ' SECS ' time_difference

from ...
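The arithmetic in that query is easier to follow outside SQL. This sketch performs the same hours/minutes/seconds breakdown of (date1 - date2) with Python's datetime; the two timestamps are made-up sample values.

```python
from datetime import datetime

# Two sample timestamps standing in for date1 and date2.
date2 = datetime(2009, 5, 21, 8, 0, 0)
date1 = datetime(2009, 5, 21, 10, 35, 42)

total_secs = int((date1 - date2).total_seconds())  # (date1-date2)*24*60*60
hours, rest = divmod(total_secs, 3600)             # floor(total/3600)
minutes, secs = divmod(rest, 60)                   # remainder split into m/s

difference = f"{hours} HOURS {minutes} MINUTES {secs} SECS"
print(difference)  # → 2 HOURS 35 MINUTES 42 SECS
```

Each divmod corresponds to one floor(...) term in the SQL expression.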



How does one count different data values in a column?

select dept, sum( decode(sex,'M',1,0)) MALE,

sum( decode(sex,'F',1,0)) FEMALE,

count(decode(sex,'M',1,'F',1)) TOTAL

from my_emp_table

group by dept;
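DECODE is Oracle-specific; the same conditional count can be written with ANSI CASE, which runs anywhere. A sketch on SQLite via Python, with an invented dept/sex table:

```python
import sqlite3

# Count males, females and totals per department using SUM(CASE ...),
# the portable equivalent of SUM(DECODE(...)).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_emp_table (dept TEXT, sex TEXT)")
conn.executemany("INSERT INTO my_emp_table VALUES (?, ?)",
                 [("10", "M"), ("10", "F"), ("10", "M"), ("20", "F")])

rows = conn.execute("""
    SELECT dept,
           SUM(CASE sex WHEN 'M' THEN 1 ELSE 0 END) AS male,
           SUM(CASE sex WHEN 'F' THEN 1 ELSE 0 END) AS female,
           COUNT(*) AS total
      FROM my_emp_table
     GROUP BY dept
     ORDER BY dept
""").fetchall()
print(rows)  # → [('10', 2, 1, 3), ('20', 0, 1, 1)]
```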



How does one count/sum RANGES of data values in a column?

A value x will be between values y and z if GREATEST(x, y) = LEAST(x, z). Look at this example:

select f2,

sum(decode(greatest(f1,59), least(f1,100), 1, 0)) "Range 60-100",

sum(decode(greatest(f1,30), least(f1, 59), 1, 0)) "Range 30-59",

sum(decode(greatest(f1, 0), least(f1, 29), 1, 0)) "Range 00-29"

from my_table

group by f2;

For equal size ranges it might be easier to calculate it with DECODE(TRUNC(value/range), 0, rate_0, 1, rate_1, ...). Eg.

select ename "Name", sal "Salary",

decode( trunc(sal/1000, 0), 0, 0.0,

1, 0.1,

2, 0.2,

3, 0.31) "Tax rate"

from my_table;
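The GREATEST/LEAST trick can also be expressed with ANSI CASE ... BETWEEN, which reads more directly and is portable. A sketch on SQLite via Python; the f1/f2 sample values are invented:

```python
import sqlite3

# Count how many f1 values fall into each range, per f2 group.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (f1 INTEGER, f2 TEXT)")
conn.executemany("INSERT INTO my_table VALUES (?, ?)",
                 [(95, "x"), (45, "x"), (10, "x"), (70, "y")])

rows = conn.execute("""
    SELECT f2,
           SUM(CASE WHEN f1 BETWEEN 60 AND 100 THEN 1 ELSE 0 END) AS "60-100",
           SUM(CASE WHEN f1 BETWEEN 30 AND 59  THEN 1 ELSE 0 END) AS "30-59",
           SUM(CASE WHEN f1 BETWEEN 0  AND 29  THEN 1 ELSE 0 END) AS "00-29"
      FROM my_table
     GROUP BY f2
     ORDER BY f2
""").fetchall()
print(rows)  # → [('x', 1, 1, 1), ('y', 1, 0, 0)]
```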



Can one retrieve only the Nth row from a table?

This solution selects the Nth row from a table:

SELECT f1 FROM t1

WHERE rowid = (

SELECT rowid FROM t1

WHERE rownum <= 10

MINUS

SELECT rowid FROM t1

WHERE rownum < 10);

Alternatively...

SELECT * FROM emp WHERE rownum=1 AND rowid NOT IN

(SELECT rowid FROM emp WHERE rownum < 10);

Please note, there is no explicit row order in a relational database. However, this query is quite fun and may even help in the odd situation.
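On databases that support LIMIT/OFFSET (SQLite, MySQL, PostgreSQL) the Nth row of an ordered result is much simpler. A sketch via Python's sqlite3 with invented data; an ORDER BY is included because, as noted, rows have no inherent order:

```python
import sqlite3

# Fetch the Nth row (here the 3rd) of an ordered result with LIMIT/OFFSET.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (f1 TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?)",
                 [("row1",), ("row2",), ("row3",), ("row4",)])

n = 3  # the row we want
nth = conn.execute(
    "SELECT f1 FROM t1 ORDER BY f1 LIMIT 1 OFFSET ?", (n - 1,)).fetchone()[0]
print(nth)  # → row3
```

The same LIMIT/OFFSET form (with a larger LIMIT) answers the "rows X to Y" question below as well.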



Can one retrieve only rows X to Y from a table?

To display rows 5 to 7, construct a query like this:

SELECT *

FROM tableX

WHERE rowid in (

SELECT rowid FROM tableX

WHERE rownum <= 7

MINUS

SELECT rowid FROM tableX

WHERE rownum < 5);

Please note, there is no explicit row order in a relational database. However, this query is quite fun and may even help in the odd situation.



How does one select EVERY Nth row from a table?

One can easily select all even, odd, or Nth rows from a table using SQL queries like this:

Method 1: Using a subquery

SELECT *

FROM emp

WHERE (ROWID,0) IN (SELECT ROWID, MOD(ROWNUM,4)

FROM emp);

Method 2: Use dynamic views (available from Oracle7.2):

SELECT *

FROM ( SELECT rownum rn, empno, ename

FROM emp

) temp

WHERE MOD(temp.rn, 4) = 0;

Please note, there is no explicit row order in a relational database. However, these queries are quite fun and may even help in the odd situation.
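As a portable sketch, SQLite's implicit rowid can stand in for ROWNUM here: after sequential inserts into a fresh table, rowid runs 1, 2, 3, ..., so a modulus picks every Nth row (for tables with deletes, a window-function row number would be safer). The table and data are invented:

```python
import sqlite3

# Select every 4th row using SQLite's implicit rowid as a row counter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT)")
conn.executemany("INSERT INTO emp VALUES (?)",
                 [(f"emp{i}",) for i in range(1, 9)])

rows = [r[0] for r in conn.execute(
    "SELECT ename FROM emp WHERE rowid % 4 = 0")]
print(rows)  # → ['emp4', 'emp8']
```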



How does one select the TOP N rows from a table?

From Oracle8i one can have an inner query with an ORDER BY clause. Look at this example, which returns the top 10 rows:

SELECT *

FROM (SELECT * FROM my_table ORDER BY col_name_1 DESC)

WHERE ROWNUM <= 10;

Use this workaround with prior releases:

SELECT *

FROM my_table a

WHERE 10 >= (SELECT COUNT(DISTINCT maxcol)

FROM my_table b

WHERE b.maxcol >= a.maxcol)

ORDER BY maxcol DESC;
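Outside Oracle, the usual top-N form is ORDER BY plus LIMIT (SQLite, MySQL, PostgreSQL), which does what the inline-view ROWNUM query does on Oracle8i+. A sketch via Python's sqlite3 with invented salaries:

```python
import sqlite3

# Top 2 rows by salary: sort descending, keep the first two.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (ename TEXT, sal INTEGER)")
conn.executemany("INSERT INTO my_table VALUES (?, ?)",
                 [("a", 100), ("b", 300), ("c", 200), ("d", 400)])

top2 = conn.execute(
    "SELECT ename, sal FROM my_table ORDER BY sal DESC LIMIT 2").fetchall()
print(top2)  # → [('d', 400), ('b', 300)]
```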



How does one code a tree-structured query?

Tree-structured queries are definitely non-relational (enough to kill Codd and make him roll in his grave). Also, this feature is not often found in other database offerings.

The SCOTT/TIGER database schema contains a table EMP with a self-referencing relation (EMPNO and MGR columns). This table is perfect for testing and demonstrating tree-structured queries, as the MGR column contains the employee number of the "current" employee's boss.

The LEVEL pseudo-column is an indication of how deep in the tree one is. Oracle can handle queries with a depth of up to 255 levels. Look at this example:

select LEVEL, EMPNO, ENAME, MGR

from EMP

connect by prior EMPNO = MGR

start with MGR is NULL;

One can produce an indented report by using the level number to substring or lpad() a series of spaces, and concatenate that to the string. Look at this example:

select lpad(' ', LEVEL * 2) || ENAME ........

One uses the "start with" clause to specify the start of the tree; more than one record can match the starting condition. One disadvantage of the "connect by prior" clause is that you cannot perform a join to other tables. The "connect by prior" clause is rarely implemented in other database offerings, and doing this programmatically is difficult, as one has to do the top-level query first and then, for each of the records, open a cursor to look for child nodes.

One way of working around this is to use PL/SQL: open the driving cursor with the "connect by prior" statement, then select the matching records from other tables on a row-by-row basis, inserting the results into a temporary table for later retrieval.
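CONNECT BY is Oracle-specific, but the standard SQL equivalent is a recursive common table expression, which most other databases now support. A sketch on SQLite via Python, with a tiny invented EMP table (empno, ename, mgr) mimicking SCOTT.EMP:

```python
import sqlite3

# Walk the manager tree top-down with WITH RECURSIVE, tracking the depth
# the way Oracle's LEVEL pseudo-column does.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT, mgr INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(1, "KING", None), (2, "BLAKE", 1), (3, "CLARK", 1),
                  (4, "MILLER", 3)])

rows = conn.execute("""
    WITH RECURSIVE tree(level, empno, ename, mgr) AS (
        SELECT 1, empno, ename, mgr FROM emp WHERE mgr IS NULL  -- start with
        UNION ALL
        SELECT tree.level + 1, emp.empno, emp.ename, emp.mgr    -- connect by
          FROM emp JOIN tree ON emp.mgr = tree.empno
    )
    SELECT level, ename FROM tree ORDER BY level, ename
""").fetchall()
print(rows)  # → [(1, 'KING'), (2, 'BLAKE'), (2, 'CLARK'), (3, 'MILLER')]
```

Unlike "connect by prior", a recursive CTE can be joined freely to other tables.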



How does one code a matrix report in SQL?

Look at this example query with sample output:

SELECT *

FROM (SELECT job,

sum(decode(deptno,10,sal)) DEPT10,

sum(decode(deptno,20,sal)) DEPT20,

sum(decode(deptno,30,sal)) DEPT30,

sum(decode(deptno,40,sal)) DEPT40

FROM scott.emp

GROUP BY job)

ORDER BY 1;

JOB DEPT10 DEPT20 DEPT30 DEPT40

--------- ---------- ---------- ---------- ----------

ANALYST 6000

CLERK 1300 1900 950

MANAGER 2450 2975 2850

PRESIDENT 5000

SALESMAN 5600
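The same pivot works anywhere with ANSI CASE in place of DECODE. A sketch on SQLite via Python, using a small invented subset of SCOTT.EMP (two departments shown for brevity):

```python
import sqlite3

# Matrix report: one row per job, one summed-salary column per department.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (job TEXT, deptno INTEGER, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [("CLERK", 10, 1300), ("CLERK", 20, 1900),
                  ("MANAGER", 10, 2450), ("MANAGER", 20, 2975)])

rows = conn.execute("""
    SELECT job,
           SUM(CASE deptno WHEN 10 THEN sal END) AS dept10,
           SUM(CASE deptno WHEN 20 THEN sal END) AS dept20
      FROM emp
     GROUP BY job
     ORDER BY job
""").fetchall()
print(rows)  # → [('CLERK', 1300, 1900), ('MANAGER', 2450, 2975)]
```

As in the Oracle output, a job with no rows in a department would show NULL in that cell.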



How does one implement IF-THEN-ELSE in a select statement?

The Oracle decode function acts like a procedural statement inside an SQL statement to return different values or columns based on the values of other columns in the select statement.

Some examples:

select decode(sex, 'M', 'Male',

'F', 'Female',

'Unknown')

from employees;

select a, b, decode( abs(a-b), a-b, 'a > b',

0, 'a = b',

'a < b')

from tableX;

select decode( GREATEST(A,B), A, 'A is greater than B', 'B is greater than A')...

Note: The decode function is not ANSI SQL and is rarely implemented in other RDBMS offerings. It is one of the good things about Oracle, but use it sparingly if portability is required.

From Oracle 8i one can also use CASE statements in SQL. Look at this example:

SELECT ename, CASE WHEN sal>1000 THEN 'Over paid' ELSE 'Under paid' END

FROM emp;



How can one dump/ examine the exact content of a database column?

SELECT DUMP(col1)

FROM tab1

WHERE cond1 = val1;

DUMP(COL1)

-------------------------------------

Typ=96 Len=4: 65,66,67,32

For this example the type is 96, indicating CHAR, and the last byte in the column is 32, which is the ASCII code for a space. This tells us that this column is blank-padded.




Can one drop a column from a table?

From Oracle8i one can DROP a column from a table with the command ALTER TABLE table_name DROP COLUMN column_name;

With previous releases one can use Joseph S. Testa's DROP COLUMN package that can be downloaded from http://www.oracle-dba.com/ora_scr.htm.

Other workarounds:

1. SQL> update t1 set column_to_drop = NULL;

SQL> rename t1 to t1_base;

SQL> create view t1 as select <all columns except column_to_drop> from t1_base;

2. SQL> create table t2 as select <all columns except column_to_drop> from t1;

SQL> drop table t1;

SQL> rename t2 to t1;
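Workaround 2 (recreate the table without the unwanted column) works on any database. A sketch on SQLite via Python, with invented columns keep_me / drop_me; again SQLite spells the rename as ALTER TABLE ... RENAME TO:

```python
import sqlite3

# "Drop" a column by rebuilding the table with only the columns to keep.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (keep_me TEXT, drop_me TEXT)")
conn.execute("INSERT INTO t1 VALUES ('kept', 'gone')")

conn.execute("CREATE TABLE t2 AS SELECT keep_me FROM t1")  # columns to keep
conn.execute("DROP TABLE t1")
conn.execute("ALTER TABLE t2 RENAME TO t1")

cols = [d[1] for d in conn.execute("PRAGMA table_info(t1)")]
print(cols)  # → ['keep_me']
```

As with the Oracle workaround, indexes, constraints and grants on the original table must be recreated afterwards.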



Can one rename a column in a table?

No, this is listed as Enhancement Request 163519. Some workarounds:

1. -- Use a view with correct column names...

rename t1 to t1_base;

create view t1 <column list with the new name> as select * from t1_base;

2. -- Recreate the table with correct column names...

create table t2 as select * from t1;

drop table t1;

rename t2 to t1;

3. -- Add a column with a new name and drop an old column...

alter table t1 add ( newcolname datatype );

update t1 set newcolname=oldcolname;

alter table t1 drop column oldcolname;

How can I change my Oracle password?

Issue the following SQL command: ALTER USER <username> IDENTIFIED BY <new_password>;

From Oracle8 you can just type "password" from SQL*Plus, or if you need to change another user's password, type "password user_name".

How does one find the next value of a sequence?

Perform an "ALTER SEQUENCE ... NOCACHE" to unload the unused cached sequence numbers from the Oracle library cache. This way, no cached numbers will be lost. If you then select from the USER_SEQUENCES dictionary view, you will see the correct high water mark value that would be returned by the next NEXTVAL call. Afterwards, perform an "ALTER SEQUENCE ... CACHE" to restore caching.

You can use the above technique to prevent sequence number loss before a SHUTDOWN ABORT, or any other operation that would cause gaps in sequence values.



Workaround for snapshots on tables with LONG columns

You can use the SQL*Plus COPY command instead of snapshots if you need to copy LONG and LONG RAW columns from one location to another. E.g.:

COPY TO SCOTT/TIGER@REMOTE -

CREATE IMAGE_TABLE USING -

SELECT IMAGE_NO, IMAGE -

FROM IMAGES;

Note: If you run Oracle8, convert your LONGs to LOBs, as LOBs can be replicated.