Experimental design is concerned with the various ways an experiment can be structured, and with how a well-designed experiment improves the quality of the conclusions drawn from it.
Basic Principles of Experimental Designs
Professor Fisher has enumerated three principles of experimental designs:
- the Principle of Replication;
- the Principle of Randomization;
- the Principle of Local Control.
According to the Principle of Replication, the experiment should be repeated more than once, so that each treatment is applied to many experimental units instead of just one. Doing so increases the statistical accuracy of the experiment. For example, suppose we are to examine the effect of two varieties of rice. For this purpose we may divide the field into two parts, grow one variety in one part and the other variety in the other part, and then compare the yields of the two parts and draw conclusions on that basis. But if we apply the principle of replication to this experiment, we first divide the field into several parts, grow one variety in half of these parts and the other variety in the remaining parts. We can then collect the yield data for the two varieties and draw conclusions by comparing them. The result so obtained will be more reliable than a conclusion drawn without applying the principle of replication. Sometimes the entire experiment can be repeated several times for better results. Conceptually, replication does not present any difficulty, but computationally it does. For example, if an experiment requiring a two-way analysis of variance is replicated, it will then require a three-way analysis of variance, since replication itself may be a source of variation in the data. However, it should be remembered that replication is introduced in order to increase the precision of a study; that is to say, to increase the accuracy with which the main effects and interactions can be estimated.
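The precision gain from replication can be made concrete with a small sketch in Python (the yield figures are hypothetical, chosen only for illustration): the standard error of the mean yield shrinks roughly with the square root of the number of replicate plots.

```python
import statistics

# Hypothetical yields (kg per plot) for one rice variety, measured on
# six replicate plots -- illustrative numbers only.
yields = [42.0, 45.0, 43.5, 44.0, 41.5, 46.0]

def standard_error(sample):
    """Standard error of the sample mean: s / sqrt(n)."""
    n = len(sample)
    return statistics.stdev(sample) / n ** 0.5

# With six replicates the mean yield is estimated more precisely than
# with two; with a single plot the error could not be estimated at all.
se_six = standard_error(yields)
se_two = standard_error(yields[:2])
```

Running this shows `se_six` is well below `se_two`, which is the sense in which replication "increases the statistical accuracy of the experiment".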
The Principle of Randomization provides protection, when we conduct an experiment, against the effects of extraneous factors. In other words, this principle indicates that we should design or plan the experiment in such a way that the variations caused by extraneous factors can all be combined under the general heading of "chance". For instance, if we grow one variety of rice in the first half of the parts of a field and the other variety in the other half, it is quite possible that the soil fertility differs between the two halves. If this is so, our results would not be realistic. In such a situation, we may assign the variety of rice to be grown in the different parts of the field on the basis of some random sampling technique, i.e., we may apply the randomization principle and protect ourselves against the effects of extraneous factors (soil fertility differences in the given case). As such, through the application of the principle of randomization, we can obtain a better estimate of the experimental error.
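A minimal sketch of this randomization step, assuming a field of eight plots and two varieties (the plot count and seed are arbitrary choices for illustration):

```python
import random

# Hypothetical field split into 8 plots; assign the two rice varieties
# at random so that soil-fertility differences fall under "chance".
plots = list(range(8))
rng = random.Random(42)          # fixed seed so the sketch is reproducible
rng.shuffle(plots)
variety_a = sorted(plots[:4])    # plots receiving variety A
variety_b = sorted(plots[4:])    # plots receiving variety B
```

Because every plot is equally likely to receive either variety, any fertility gradient across the field is spread over both treatments rather than systematically favouring one.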
The Principle of Local Control is another important principle of experimental designs. Under it the extraneous factor, the known source of variability, is made to vary deliberately over as wide a range as necessary, and this needs to be done in such a way that the variability it causes can be measured and hence eliminated from the experimental error. This means that we should plan the experiment in a manner that lets us perform a two-way analysis of variance, in which the total variability of the data is divided into three components attributed to treatments (varieties of rice in our case), the extraneous factor (soil fertility in our case) and experimental error. In other words, according to the principle of local control, we first divide the field into several homogeneous parts, known as blocks, and then each such block is divided into parts equal to the number of treatments. The treatments are then randomly assigned to these parts of a block. This activity is known as 'blocking'. In general, blocks are the levels at which we hold an extraneous factor fixed, so that we can measure its contribution to the total variability of the data by means of a two-way analysis of variance. In brief, through the principle of local control we can eliminate the variability due to extraneous factor(s) from the experimental error.
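The blocking step described above can be sketched as follows (the block and treatment names are hypothetical; the point is that randomization is confined to within each block):

```python
import random

# Blocking sketch: divide the field into homogeneous blocks, then randomly
# assign every treatment exactly once within each block.
treatments = ["variety_A", "variety_B", "variety_C"]
blocks = ["block_1", "block_2", "block_3", "block_4"]  # hypothetical fertility strata
rng = random.Random(0)

layout = {}
for block in blocks:
    order = treatments[:]
    rng.shuffle(order)           # randomization happens within the block
    layout[block] = order
```

Since every treatment appears once in every block, block-to-block differences (e.g. in soil fertility) can be estimated and removed from the experimental error in the subsequent two-way analysis of variance.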
Important Experimental Designs
Experimental design refers to the framework or structure of an experiment, and as such there are several experimental designs. We can classify experimental designs into two broad categories: informal experimental designs and formal experimental designs. Informal experimental designs are those that normally use a less sophisticated form of analysis based on differences in magnitudes, whereas formal experimental designs offer relatively more control and use precise statistical procedures for analysis. Important experimental designs are as follows:
- Informal experimental designs:
- Before-and-after without control design.
- After-only with control design.
- Before-and-after with control design.
- Formal experimental designs:
- Completely randomized design (C. R. design).
- Randomized block design (R. B. design).
- Latin square design (L. S. design).
We may briefly deal with each of the above stated informal as well as formal experimental designs.
- Before-and-after without control design: In such a design a single test group or area is selected, and the dependent variable is measured before the introduction of the treatment. The treatment is then introduced and the dependent variable is measured again after the treatment has been introduced. The effect of the treatment is taken to be the level of the phenomenon after the treatment minus the level of the phenomenon before the treatment. The main difficulty of such a design is that, with the passage of time, considerable extraneous variation may creep into the measured treatment effect.
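The arithmetic of this design is trivial but worth pinning down; a sketch with hypothetical figures:

```python
# Before-and-after without control: the treatment effect is simply
# (level after) minus (level before). Figures are hypothetical.
before = 120.0   # e.g. weekly sales before a promotion
after = 150.0    # weekly sales after the promotion
treatment_effect = after - before   # time-related changes are confounded here
```

The computed effect (30.0 in this sketch) includes anything else that changed over the period, which is exactly the weakness noted above.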
- After-only with control design: In this design two groups or areas (test area and control area) are selected, and the treatment is introduced into the test area only. The dependent variable is then measured in both areas at the same time. The treatment impact is assessed by subtracting the value of the dependent variable in the control area from its value in the test area. The basic assumption in such a design is that the two areas are identical with respect to their behavior towards the phenomenon considered. If this assumption is not true, there is the possibility of extraneous variation entering into the treatment effect. However, data can be collected in such a design without the problems that come with the passage of time. In this respect this design is superior to the before-and-after without control design.
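The corresponding computation, again with hypothetical figures:

```python
# After-only with control: measure the dependent variable in both areas
# after the treatment, and subtract the control value from the test value.
test_area = 150.0      # hypothetical post-treatment level in the test area
control_area = 130.0   # level in the untouched control area at the same time
treatment_effect = test_area - control_area
```

The estimate is valid only under the stated assumption that the two areas would otherwise behave identically; any pre-existing difference between them is silently attributed to the treatment.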
- Before-and-after with control design: In this design two areas are selected, and the dependent variable is measured in both areas for an identical time-period before the treatment. The treatment is then introduced into the test area only, and the dependent variable is measured in both areas for an identical time-period after the introduction of the treatment. The treatment effect is determined by subtracting the change in the dependent variable in the control area from the change in the dependent variable in the test area. This design is superior to the above two designs for the simple reason that it avoids extraneous variation resulting both from the passage of time and from non-comparability of the test and control areas. But at times, due to lack of historical data, time or a comparable control area, we may prefer to select one of the first two informal designs stated above.
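The "change minus change" rule of this design (in modern terminology, a difference-in-differences) can be sketched with hypothetical figures:

```python
# Before-and-after with control: subtract the change in the control area
# from the change in the test area. All four levels are hypothetical.
test_before, test_after = 120.0, 150.0
control_before, control_after = 118.0, 130.0

effect = (test_after - test_before) - (control_after - control_before)
```

Here the test area changed by 30.0 and the control area by 12.0, so the 12.0 attributable to the passage of time is netted out, leaving an effect of 18.0.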
- Completely Randomized Design (C. R. design) involves only two principles of experimental designs, viz., the principle of replication and the principle of randomization. It is the simplest possible design and its procedure of analysis is also easy. The essential characteristic of this design is that subjects are randomly assigned to experimental treatments (or vice versa). For instance, if we have 10 subjects and wish to test 5 under treatment A and 5 under treatment B, the randomization process gives every possible group of 5 subjects selected from the set of 10 an equal opportunity of being assigned to treatment A or treatment B. One-way analysis of variance (one-way ANOVA) is used to analyze such a design. Even unequal replications can work in this design. It provides the maximum number of degrees of freedom for the error. Such a design is generally used when experimental areas happen to be homogeneous. Technically, when all the variations due to uncontrolled extraneous factors are included under the heading of chance variation, we refer to the design of the experiment as a C. R. design.
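The random assignment at the heart of a C. R. design can be sketched directly (subject labels and seed are arbitrary): `random.sample` gives every possible group of 5 out of the 10 subjects the same chance of becoming the treatment-A group.

```python
import random

# Completely randomized design: randomly assign 10 subjects, 5 to
# treatment A and 5 to treatment B. Every 5-subject group is equally likely.
subjects = [f"S{i}" for i in range(1, 11)]
rng = random.Random(7)                                  # fixed seed for reproducibility
group_a = rng.sample(subjects, 5)                       # treatment A
group_b = [s for s in subjects if s not in group_a]     # treatment B
```

After the experiment, the two groups' responses would be compared with a one-way ANOVA, as the text notes.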
We can present a brief description of the two forms of such a design as under:
Two-group simple randomized design: In a two-group simple randomized design, first of all the population is defined, and then a sample is selected randomly from the population. A further requirement of this design is that items, after being selected randomly from the population, be randomly assigned to the experimental and control groups (such random assignment of items to the two groups is technically described as the principle of randomization). Thus, this design yields two groups as representatives of the population. Since in the simple randomized design the elements constituting the sample are randomly drawn from the same population and randomly assigned to the experimental and control groups, it becomes possible to draw conclusions on the basis of samples applicable to the population. The two groups (experimental and control groups) in such a design are given different treatments of the independent variable. This design of experiment is quite common in research studies concerning behavioral sciences. The merit of such a design is that it is simple and randomizes the differences among the sample items. But its limitation is that the individual differences among those conducting the treatments are not eliminated, i.e., it does not control the extraneous variable, and as such the result of the experiment may not depict a correct picture. This can be illustrated by an example. Suppose the researcher wants to compare two groups of students who have been randomly selected and randomly assigned. Two different treatments, viz., the usual training and the specialized training, are given to the two groups. The researcher hypothesizes greater gains for the group receiving the specialized training. To determine this, he tests each group before and after the training, and then compares the amount of gain for the two groups to accept or reject his hypothesis.
This is an illustration of the two-group randomized design, wherein individual differences among students are being randomized. But this does not control the differential effects of the extraneous independent variables (in this case, the individual differences among those conducting the training programme).
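The gain comparison in the training example can be sketched numerically; all scores below are hypothetical, invented purely to show the computation:

```python
import statistics

# Two-group simple randomized design: each group is tested before and
# after its training, and the mean gains are compared. Scores are hypothetical.
usual_before = [52, 55, 49, 61, 58]
usual_after = [58, 60, 53, 66, 62]
special_before = [51, 57, 50, 60, 59]
special_after = [63, 68, 61, 72, 70]

gain_usual = statistics.mean(a - b for a, b in zip(usual_after, usual_before))
gain_special = statistics.mean(a - b for a, b in zip(special_after, special_before))
```

In this sketch the specialized-training group shows the larger mean gain, but, as the text cautions, part of that difference could still reflect differences between the trainers rather than between the treatments.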