Transcript and Presenter's Notes

Title: Design and Analysis of Experiments One Factor Experiments


1
Design and Analysis of Experiments One Factor
Experiments
  • Probability and Statistics for Scientists and
    Engineers

2
Design and Analysis of One Factor Designs
  1. Experimental Design
  2. Analysis of Variance
  3. Tests for Equality of Means; Bartlett's Test

3
Experimental Design
4
Completely Randomized Designs
  • A design in which the treatments are assigned
    completely at random to the experimental units,
    or vice versa. It imposes no restrictions, such
    as blocking, or the allocation of the treatments
    to the experimental units.
  • Used because of its simplicity
  • Restricted to cases in which homogeneous
    experimental units are available
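
A completely randomized assignment can be produced by shuffling treatment labels over the experimental units. The sketch below is illustrative only; it uses four treatments with ten units each (matching the detergent example later in the deck) and numpy's random permutation, so the seed and labels are assumptions rather than anything from the slides.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    treatments = ["A", "B", "C", "D"]   # treatment labels (illustrative)
    n_per_treatment = 10                # replications per treatment (illustrative)

    # Build the full list of treatment labels, then shuffle it so that each
    # experimental unit receives a treatment completely at random.
    labels = np.repeat(treatments, n_per_treatment)
    assignment = rng.permutation(labels)

    for unit, trt in enumerate(assignment, start=1):
        print(f"experimental unit {unit:2d} -> treatment {trt}")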

5
Model
  • The basic assumption for a completely randomized
    design with one observation per experimental unit
    is that the observations may be represented
    mathematically by the linear statistical model
  • Yij = μ + αi + εij ,  i = 1, 2, ..., k
  •                      j = 1, 2, ..., n
  • where
  • Yij is the observation associated with the ith
    treatment and jth experimental unit,
  • μ is the true mean effect (constant),
  • αi is the true effect of the ith treatment,
  • and εij is the experimental error
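
To make the model concrete, the short sketch below simulates data from Yij = μ + αi + εij with normally distributed errors. The numerical values of μ, the αi, the error standard deviation, and the sample sizes are made up for illustration; they are not from the slides.

    import numpy as np

    rng = np.random.default_rng(seed=2)

    mu = 50.0                           # true mean effect (illustrative value)
    alpha = np.array([-2.0, 0.5, 1.5])  # true treatment effects (k = 3, illustrative)
    sigma = 1.0                         # std. dev. of experimental error (illustrative)
    n = 5                               # observations per treatment

    # Y[i, j] = mu + alpha[i] + eps[i, j]
    eps = rng.normal(0.0, sigma, size=(len(alpha), n))
    Y = mu + alpha[:, None] + eps
    print(Y.round(2))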

6
Statistical Layout
  • The results of a completely randomized experiment
    with one observation per experimental unit may be
    exhibited as follows

                          Treatment
                1       2      ...      K        Total
               Y11     Y21     ...     YK1
               Y12     Y22     ...     YK2
                .       .       .       .
                .       .       .       .
               Y1n     Y2n     ...     YKn
Totals         Y1.     Y2.     ...     YK.        Y..
Number of Obs.  n       n      ...      n         nK
Means          Ȳ1.     Ȳ2.     ...     ȲK.        Ȳ..
7
Analysis of Variance
8
Analysis of Variance
  • In partitioning the total variation of the
    observations into the variation attributable to the
    mean, treatments, and random error, the
    sum of squares is used
  • Total Sum of Squares = Treatment Sum of Squares
    + Error Sum of Squares
  • SST = SSA + SSE
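
A minimal numeric check of this partition, assuming equal sample sizes n per treatment as in the layout above. The 3 x 4 data array is hypothetical and only serves to show that SST = SSA + SSE holds exactly.

    import numpy as np

    # Hypothetical data: k = 3 treatments (rows), n = 4 observations each (columns).
    Y = np.array([[11.0, 13.0, 12.0, 14.0],
                  [16.0, 18.0, 17.0, 15.0],
                  [12.0, 11.0, 13.0, 12.0]])

    grand_mean = Y.mean()
    treatment_means = Y.mean(axis=1)
    n = Y.shape[1]

    SST = ((Y - grand_mean) ** 2).sum()                     # total sum of squares
    SSA = n * ((treatment_means - grand_mean) ** 2).sum()   # treatment sum of squares
    SSE = ((Y - treatment_means[:, None]) ** 2).sum()       # error sum of squares

    print(SST, SSA + SSE)   # the two printed numbers agree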

9
Analysis of Variance
  • where
  •   SST = Σi Σj (Yij − Ȳ..)²
  •   SSA = n Σi (Ȳi. − Ȳ..)²
  • and
  •   SSE = Σi Σj (Yij − Ȳi.)²

10
Analysis of Variance - Computational Formulas
  • SST = Σi Σj Yij² − Y..²/(nK)
  • SSA = (Σi Yi.²)/n − Y..²/(nK)
  • SSE = SST − SSA
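
The shortcut formulas can be evaluated directly from the treatment totals Yi. and the grand total Y.., as sketched below with the same hypothetical 3 x 4 array used in the previous sketch; the printed values match the definitional sums of squares.

    import numpy as np

    # Hypothetical 3 x 4 data array (k = 3 treatments, n = 4 observations each).
    Y = np.array([[11.0, 13.0, 12.0, 14.0],
                  [16.0, 18.0, 17.0, 15.0],
                  [12.0, 11.0, 13.0, 12.0]])

    k, n = Y.shape
    Yi_dot = Y.sum(axis=1)   # treatment totals Y_i.
    Y_dotdot = Y.sum()       # grand total Y..

    SST = (Y ** 2).sum() - Y_dotdot ** 2 / (n * k)
    SSA = (Yi_dot ** 2).sum() / n - Y_dotdot ** 2 / (n * k)
    SSE = SST - SSA
    print(SST, SSA, SSE)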

11
Analysis of Variance Table (AOV Table or ANOVA
Table)
Sources of    Degrees of    Sum of     Mean                   F-        Critical
Variation     Freedom       Squares    Square                 Ratio     Value of F
Treatments    K − 1         SSA        MSA = SSA/(K − 1)      MSA/MSE   Fα(K−1, K(n−1))
Error         K(n − 1)      SSE        MSE = SSE/[K(n − 1)]
Total         nK − 1        SST
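
The table entries can be computed directly once the sums of squares are known. The sketch below is a generic helper, not code from the slides: it assumes a k x n array with equal sample sizes and uses scipy.stats.f for the critical value.

    import numpy as np
    from scipy.stats import f

    def one_factor_anova_table(Y, alpha=0.05):
        """Y is a k x n array: k treatments, n observations per treatment."""
        k, n = Y.shape
        grand_mean = Y.mean()
        SST = ((Y - grand_mean) ** 2).sum()
        SSA = n * ((Y.mean(axis=1) - grand_mean) ** 2).sum()
        SSE = SST - SSA
        df_a, df_e = k - 1, k * (n - 1)
        MSA, MSE = SSA / df_a, SSE / df_e
        F_ratio = MSA / MSE
        F_crit = f.ppf(1 - alpha, df_a, df_e)
        print(f"Treatments  df={df_a:3d}  SS={SSA:9.3f}  MS={MSA:8.3f}  "
              f"F={F_ratio:6.3f}  F_crit={F_crit:5.3f}")
        print(f"Error       df={df_e:3d}  SS={SSE:9.3f}  MS={MSE:8.3f}")
        print(f"Total       df={k * n - 1:3d}  SS={SST:9.3f}")

    # Hypothetical 3 x 4 example array, as in the earlier sketches.
    Y = np.array([[11.0, 13.0, 12.0, 14.0],
                  [16.0, 18.0, 17.0, 15.0],
                  [12.0, 11.0, 13.0, 12.0]])
    one_factor_anova_table(Y)
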
12
Example
  • Suppose that an appliance manufacturer is interested in
    determining whether the brand of laundry detergent used
    affects the amount of dirt removed from standard household
    laundry loads. In particular, the manufacturer wants to
    compare four different brands of detergent (labeled A, B, C,
    and D). Suppose that, after a random assignment of ten loads
    to each brand, the amount of dirt removed (measured in
    milligrams) was determined, with the results summarized as
    follows

13
Example Continued
14
Example Solution
The statistical layout is

                        Treatment (Brand)
                     A       B       C       D       Total
                    11      12      18      11
                    13      14      16      12
                     .       .       .       .
                     .       .       .       .
                    14      18      20      18
Totals             139     172     183     149         643
Number of Obs.      10      10      10      10          40
Means             13.9    17.2    18.3    14.9      16.075
15
Example Solution Continued
16
Example Solution Continued
  • The sums of squares are calculated as follows
  •   SST = 336.775,  SSA = 123.275,  SSE = SST − SSA = 213.5
  • The mean squares are
  •   MSA = SSA/3 = 41.09,  MSE = SSE/36 = 5.93
  • The calculated F-ratio is
  •   F = MSA/MSE = 6.93

17
Example Solution Continued
Analysis of Variance Table

Sources of    Degrees of    Sum of      Mean       F-       Critical
Variation     Freedom       Squares     Square     Ratio    Value of F
Treatments         3        123.275     41.09      6.93       2.866
Error             36        213.5        5.93
Total             39        336.775
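
The critical value and the p-value quoted in the conclusion on the next slide can be reproduced from the F distribution with 3 and 36 degrees of freedom. The check below uses scipy.stats.f and only the summary numbers from the table above.

    from scipy.stats import f

    df_treat, df_error = 3, 36
    F_ratio = 41.09 / 5.93                       # MSA / MSE, approximately 6.93

    crit = f.ppf(0.95, df_treat, df_error)       # 5% critical value, about 2.87
    p_value = f.sf(F_ratio, df_treat, df_error)  # upper-tail probability, roughly 0.001

    print(f"F = {F_ratio:.2f}, critical value = {crit:.3f}, p-value = {p_value:.4f}")
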
18
Example - Conclusion
  • Since the probability of obtaining an F statistic
    of 6.93 or larger when the null hypothesis is
    true is approximately 0.001, which is less than the
    specified α of 0.05, the null hypothesis is
    rejected.
  • or
  • Since the calculated F-ratio of 6.93 is greater
    than the critical value of 2.87, the null
    hypothesis is rejected.
  • Therefore, conclude that the different brands of
    detergent are not equally effective.

19
Test of Equality of Means; Bartlett's Test
20
Tests for the Equality of Several Variances
21
Tests for the Equality of Several Variances
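
As a hedged sketch of how a test for the equality of several variances is typically run in practice, scipy.stats.bartlett computes the Bartlett statistic and its p-value from raw samples; the three data lists below are hypothetical and are not the example from the original slides.

    from scipy.stats import bartlett

    # Hypothetical samples from three treatments (not the slide example).
    group1 = [11, 13, 12, 14, 15, 13]
    group2 = [16, 18, 17, 15, 19, 16]
    group3 = [12, 11, 13, 12, 14, 12]

    stat, p_value = bartlett(group1, group2, group3)
    print(f"Bartlett statistic = {stat:.3f}, p-value = {p_value:.3f}")
    # A small p-value indicates that the variances are not all equal, which
    # would call the equal-variance assumption of the ANOVA into question.
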
22
Example
23
Example
24
Example
25
Design and Analysis of Experiments Basic
Concepts
  • Probability and Statistics for Scientists and
    Engineers

26
Method of Experimental Design
  • Provides a systematic approach to planning what
    data is required and the analysis to be
    performed.
  • Designing an experiment - Planning an experiment
    so that data will be collected which is relevant to
    the problem under investigation.
  • The design of an experiment is the complete
    sequence of steps taken ahead of time to ensure
    that the appropriate data will be obtained in a
    way which permits an objective analysis leading
    to valid inferences with respect to the stated
    problem.

27
Some requisites of a good experiment
  1. There must be a clearly defined objective.
  2. The effects of the factors should not be obscured
    by other variables.
  3. The results should not be influenced by
    conscious or unconscious bias in the experiment
    or on the part of the experimenter.
  4. The experiment should provide some measure of
    precision.
  5. The experiment must have sufficient precision to
    accomplish its purpose.

28
Experimental Units
  • That unit to which a single treatment, which may
    be a combination of many factors, is applied in
    one replication of the basic experiment.
  • The term factor refers to an independent
    variable.

29
Treatment and Treatment Combinations
  • Implies the particular set of experimental
    conditions which will be imposed on an
    experimental unit within the confines of the
    chosen design.

30
Blocking
  • The allocation of the experimental units to
    blocks in such a manner that the units within a
    block are relatively homogeneous while the greater
    part of the predictable variation among units has
    been confounded with the effect of blocks.

31
Grouping
  • The placing of a set of homogeneous experimental
    units into groups in order that the different
    groups may be subjected to different treatments

32
Balancing
  • Obtaining the experimental units, the grouping,
    the blocking and the assignment of the treatments
    to the experimental units in such a way that a
    balanced configuration exists.

33
Experimental Error
  • The results of experiments are affected not only
    by the action of the treatments, but also by
    extraneous variations which tend to mask the
    effects of the treatments. The term experimental
    error is often applied to these variations,
    where the word error is not synonymous with
    mistakes but includes all types of extraneous
    variation.
  • Two main sources of experimental error are
  • 1) Inherent variability in the experimental
    material
  • 2) Lack of uniformity in the physical conduct of
    the experiment, or failure to standardize the
    experimental technique.

34
Basic Principles of Experimental Design
  1. Replication
  2. Randomization
  3. Local Control

35
Replication
  • Repetition of the basic experiment. In order to
    evaluate the effects of factors, a measure of
    precision must be available. In situations where
    the measurement of precision must be obtained
    from the experiment itself, replication provides
    the measure. It also provides an opportunity for the
    effects of uncontrolled factors to balance out,
    and thus aids randomization as a bias-decreasing
    tool.
  • Replication will also help to spot gross errors
    in measurement.
  • Replication makes a test of significance
    possible.

36
Randomization
  • By insisting on a random assignment of treatments
    to the experimental units, we can proceed as
    though the assumption that observations are
    independently distributed is true.
  • Randomization makes the test valid by making it
    appropriate to analyze the data as though the
    assumption of independent error is true.

37
Local Control
  • The amount of balancing, blocking, and grouping
    of the experimental units that is employed in the
    adopted statistical design. The function of local
    control is to make the experimental design more
    efficient. That is, local control makes any test
    of significance more sensitive. This increase in
    efficiency (or sensitivity) results because a
    proper use of local control will reduce the
    magnitude of the estimate of experimental error.

38
Steps in Designing an Experiment
  • A statistically designed experiment consists of
    the following steps
  • 1) Statement of the problem.
  • 2) Formulation of hypothesis.
  • 3) Devising of experimental technique and design.
  • 4) Examination of possible outcomes and reference
    back to the reasons for the inquiry to be sure
    the experiment provides the required information
    to an adequate extent.
  • 5) Consideration of the possible results from the
    point of view of the statistical procedures which
    will be applied to them, to ensure that the
    conditions necessary for these procedures to be
    valid are satisfied.

39
Steps in Designing an Experiment Continued
  • 6) Performance of experiment.
  • 7) Application of statistical techniques to the
    experimental results.
  • 8) Drawing conclusions with measures of the
    reliability of estimates of any quantities that
    are evaluated, careful consideration being given
    to the validity of the conclusions for the
    population of objects or events to which they are
    to apply.
  • 9) Evaluation of the whole investigation,
    particularly with other investigations on the
    same or similar problem.
  • Note: Frequently, there is a formidable barrier
    to communication which must be overcome.

40
Check List for Planning Test Programs
  • A. Obtain a clear statement of the problem.
  • B. Collect available background information.
  • C. Design a test program
  • 1. Hold a conference of all parties concerned.
  • 2. Design the program in preliminary form.
  • 3. Review the design with all concerned.
  • D. Plan and carry out the experimental work.
  • E. Analyze the data.
  • F. Interpret the results.
  • G. Prepare the report.