Quantitative Methods - PowerPoint PPT Presentation
Provided by: FayeLin

Transcript and Presenter's Notes

Title: Quantitative Methods
1
Quantitative Methods
  • Survey Research

2
The Measurement Process
  • Measurement- refers to the process of describing
    abstract concepts in terms of specific
    indicators. You take a concept, idea or
    construct and develop a measure by which one can
    observe the idea empirically.
  • Indicator- is an observation that is assumed to
    be evidence of the attributes or properties of
    some phenomenon.
  • Item- a single indicator of a variable.
  • Index or Scale- multiple items measuring a
    concept form an index or a scale.

3
Conceptualization
  • Refers to the process of taking a construct and
    refining it by giving it a conceptual or
    theoretical definition.
  • A conceptual definition is a definition in
    abstract, theoretical terms. It refers to other
    ideas or constructs.
  • To do this one must think about the meanings of a
    construct.

4
Operationalization
  • Operationalization links a conceptual definition
    to a specific set of measurement techniques or
    procedures.
  • The construct's operational definition is a
    definition in terms of the specific operations
    or actions a researcher carries out. It could be
    a survey questionnaire, a method of observing
    events in a field setting, a way to measure
    symbolic content in the mass media, or any
    process carried out by the researcher that
    reflects, documents, or represents the abstract
    construct as it is expressed in the conceptual
    definition.

5
Five Suggestions for Coming Up With a Measure
  • 1) Remember the Conceptual Definition- the
    measure should match your conceptual definition
    of your construct.
  • 2) Keep an Open Mind- be creative and always look
    for new measures.
  • 3) Borrow from Others- borrow and cite others'
    measures.
  • 4) Anticipate Difficulties- logical and practical
    problems often arise when trying to measure
    variables of interest. Try to anticipate and
    mitigate them.
  • 5) Do not forget your units of analysis- you need
    to be able to generalize your findings to the
    area of interest.

6
Ways of Measuring
  • Verbal Reports
  • Observation
  • Archival Records

7
Types of Variables
  • Independent variable- the variable that acts on
    something else. It is unaffected by changes in
    the dependent variable.
  • Dependent variable- the variable that is the
    effect or the result or outcome of another
    variable.
  • Intervening variable- a variable that comes
    between the independent and dependent variable
    as part of a relationship. It is the link or
    mechanism between them.

8
Levels of Measurement
  • Nominal Variables- have no numeric value, only
    named categories. E.g. race, gender, political
    party, religion.
  • Ordinal Variables- are rank ordered, but there is
    no definable numeric distance between ranks. E.g.
    highest degree earned, socioeconomic status.
  • Interval Variables- are numeric with equal
    spacing between categories, but the zero point is
    arbitrary. E.g. IQ: someone with a 140 IQ is not
    twice as intelligent as a person with a 70 IQ.
  • Ratio Variables- are numeric with equal spacing
    between categories, and the zero point is fixed
    (0 is not arbitrary). E.g. income, the Kelvin
    scale of temperature.
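The interval-versus-ratio distinction above can be checked with simple arithmetic. A minimal sketch (the temperatures are an assumed example, not from the slides): Celsius stands in for an interval scale with an arbitrary zero, while Kelvin is the corresponding ratio scale.

```python
# Ratios are meaningful on a ratio scale (Kelvin) but not on an
# interval scale (Celsius), because only Kelvin has a
# non-arbitrary zero point.

def c_to_k(celsius: float) -> float:
    """Convert degrees Celsius (interval scale) to Kelvin (ratio scale)."""
    return celsius + 273.15

warm, cool = 40.0, 20.0                    # degrees Celsius
naive_ratio = warm / cool                  # 2.0 -- looks "twice as hot"
true_ratio = c_to_k(warm) / c_to_k(cool)   # ~1.07 -- it is not

print(naive_ratio)
print(round(true_ratio, 3))
```

The same reasoning explains the IQ example on this slide: because the IQ zero point is arbitrary, a 140/70 ratio carries no "twice as intelligent" meaning.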

9
Discrete versus Continuous Variables
  • Discrete Variables- possess a finite number of
    distinct and separate values or categories. Ie.
    Sex, race, household size, number of days absent
  • Continuous Variables- could theoretically be
    divided into an infinite number of categories.
    Ie. Height- could use smaller and smaller units

10
Reliability and Validity
  • All quantitative studies can be evaluated in
    terms of reliability and validity.

11
Reliability
  • Refers to a measure's ability to yield consistent
    results.
  • Dependability
  • A measure is reliable if the measurement does not
    change when the concept being measured remains
    constant in value.

12
Two Principles of Reliability
  • Stability- idea that a reliable measure should
    not change from one application to the next.
  • Equivalence- idea that all items that make up a
    measuring instrument should be consistent with
    one another.

13
Three Types of Reliability
  • Stability Reliability
  • Representative Reliability
  • Equivalence Reliability

14
Stability Reliability
  • is reliability across time. Does the measure
    deliver the same answer when applied in different
    time periods?
  • Test-Retest Method
  • Multiple Forms
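The test-retest method above can be sketched as a correlation between two administrations of the same measure to the same respondents. The scores and the helper function below are hypothetical, not from the slides; a high correlation suggests stability reliability.

```python
# Test-retest reliability: administer the same measure twice and
# correlate the two sets of scores (Pearson product-moment r).

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 17]   # scores at first administration
time2 = [13, 14, 10, 19, 18]  # same respondents, weeks later
print(round(pearson_r(time1, time2), 3))
```

A value near 1.0 indicates the measure delivered nearly the same ordering of respondents at both time points.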

15
Representative Reliability
  • is reliability across sub-populations or groups
    of people. Does the indicator deliver the same
    types of answers when applied to different
    groups?

16
Steps to Increase Representative Reliability
  • 1) Care must be taken to assess whether
    measurements might lead to a distorted view of
    minorities.
  • 2) Researchers can immerse themselves in the
    culture of the group to be studied, experiencing
    the daily activities of life and the cultural
    products as the natives do.
  • 3) Researchers should use key informants-people
    who participate routinely in the culture of the
    group to be studied- to help assess the
    measurement instrument.
  • 4) When translating an instrument from English
    into another language, researchers should use the
    most effective translation methods, usually
    double translation.
  • 5) After developing or translating measuring
    instruments for use with minority populations,
    the instruments should be tested for validity and
    reliability on that population.

17
Equivalence Reliability
  • applies when researchers use multiple indicators
    in the operationalization of a concept.
  • Split-half method
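The split-half approach can be sketched as follows: split a multi-item measure into two halves, correlate the half scores, then apply the Spearman-Brown correction to estimate the reliability of the full-length instrument. The item scores below are hypothetical.

```python
# Split-half equivalence reliability with Spearman-Brown correction.

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def split_half_reliability(items: list[list[float]]) -> float:
    """items[i][j] = respondent i's score on item j (odd/even split)."""
    odd = [sum(row[0::2]) for row in items]   # items 1, 3, ...
    even = [sum(row[1::2]) for row in items]  # items 2, 4, ...
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown correction

scores = [  # five respondents, four items each (hypothetical)
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [1, 2, 1, 1],
]
print(round(split_half_reliability(scores), 3))
```

The correction is needed because each half has only half the items, and shorter tests are less reliable than the full instrument.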

18
Improving Reliability
  • Clearly conceptualize constructs
  • Reliability increases when a single construct or
    subdimension of a construct is measured.
  • Use a precise level of measurement
  • Use multiple indicators
  • Use pilot tests
  • Interview subjects about measuring devices
  • Use the highest level of measurement available
  • Conduct an item by item assessment of multiple
    item measures.

19
Validity
  • Refers to the accuracy of a measure: does it
    accurately measure the variable that it is
    intended to measure?
  • Are you measuring what you think you are
    measuring?

20
Three Types of Validity
  • Face Validity
  • Content Validity
  • Criterion Validity (two forms Concurrent
    Validity and Predictive Validity)

21
Face Validity
  • Involves assessing whether a logical relationship
    exists between the variable and the proposed
    measure- a commonsense comparison of what
    comprises the measure and the theoretical
    definition of the variable.
  • This is accomplished by simply looking over the
    measure and determining if it is valid on the
    face of it.
  • Does the definition and method of measurement
    seem to fit?
  • Is there consensus on this fact?
  • The researcher should include relevant literature
    or other studies which have assessed similar
    concepts.

22
Content Validity
  • A special type of face validity. It addresses
    the question is the full content of a
    definition represented in the measure?
  • A conceptual definition holds ideas it is a
    space containing ideas and concepts.
  • First, specify the content in a construct's
    definition.
  • Then, sample from all areas of the definition.
  • Develop an indicator that taps all of the parts
    of the definition.

23
Criterion Validity
  • Refers to establishing validity by showing a
    correlation between a measuring device and some
    other criterion or standard that we know or
    believe accurately measures the variable under
    consideration.
  • Is established by comparing scores on the measure
    with pre-established criteria. Its two forms
    refer to its ability to match other measures
    given at the same time (concurrent) and its
    ability to predict future events (predictive).

24
Concurrent Validity
  • Type of criterion validity in which the
    instrument being evaluated is compared to some
    already existing criterion such as the results of
    another measuring device.

25
Predictive Validity
  • Second form of criterion validity. An instrument
    is used to predict some future state of affairs.

26
Relationships between Reliability and Validity
  • Reliability is necessary for validity and it is
    easier to achieve. It does not, however,
    guarantee validity. You can measure something
    consistently and still not be measuring what you
    think you are, or you can obtain an invalid
    measure reliably.

27
Other Types of Reliability and Validity
  • Inter-coder Reliability- if there are several
    coders, observers, raters, etc., you may be
    concerned with inter-coder reliability. Is
    everyone coding the same way?
  • Internal Validity- within the data. Does the
    difference really exist, or can it be explained
    as a measurement artifact? There are no errors
    internal to the design of the research project.
  • External Validity- to whom do the data apply?
    To what populations can the data be applied?
    The ability to generalize findings from a
    specific setting and small groups to a broad
    range of settings and people.
  • Statistical Validity- the correct statistical
    procedure is chosen and its assumptions are
    fully met.
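The inter-coder reliability idea above can be sketched for two coders as raw percent agreement plus Cohen's kappa, a standard statistic that corrects for agreement expected by chance. The category codes below are hypothetical.

```python
# Inter-coder reliability for two coders on nominal codes:
# percent agreement and Cohen's kappa (chance-corrected).
from collections import Counter

def percent_agreement(a: list[str], b: list[str]) -> float:
    """Share of units on which both coders assigned the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Agreement corrected for the agreement expected by chance."""
    n = len(a)
    po = percent_agreement(a, b)                 # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

coder1 = ["pos", "neg", "pos", "neu", "pos", "neg", "neu", "pos"]
coder2 = ["pos", "neg", "neu", "neu", "pos", "neg", "neu", "neg"]
print(percent_agreement(coder1, coder2))  # 0.75
print(round(cohens_kappa(coder1, coder2), 3))
```

Kappa is lower than raw agreement here because some of the coders' matches would be expected even if they coded at random.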