I/O Psychology Research Methods - PowerPoint PPT Presentation

Title: I/O Psychology Research Methods
Author: Lisa Schultz
Slides: 41

Transcript and Presenter's Notes

1
I/O Psychology Research Methods
2
What is Science?
  • Science: An approach that involves the understanding, prediction, and control of some phenomenon of interest.
  • Scientific Knowledge is:
  • Logical and Concerned with Understanding
  • Empirical
  • Communicable and Precise
  • Probabilistic (Disprove, NOT Prove)
  • Objective / Disinterestedness

3
Goals of Science
  • Ex: We want to study absenteeism in an organization
  • Description: What is the current state of affairs?
  • Prediction: What will happen in the future?
  • Explanation: What is the cause of the phenomena we're interested in?

4
What is research?
  • Systematic study of phenomena according to
    scientific principles.
  • A set of procedures used to obtain empirical and
    verifiable information from which we then make
    informed, educated conclusions.

5
The Empirical Research Process
  • 1. Statement of the Problem
  • 2. Design of the Research Study
  • 3. Measurement of Variables
  • 4. Analysis of Data
  • 5. Interpretation/Conclusions

6
Step 1: Statement of the Problem
  • Theory: a statement that explains the relationship among phenomena; it gives us a framework within which to conduct research.
  • "There is nothing quite so practical as a good theory." (Kurt Lewin)
  • Two Approaches:
  • Inductive (theory building): use data to derive theory.
  • Deductive (theory testing): start with theory and collect data to test that theory.

7
Step 1: Statement of the Problem
  • Hypothesis:
  • A testable statement about the status of a variable or the relationship among multiple variables
  • Must be falsifiable!

8
Step 1: Statement of the Problem
  • Types of variables:
  • Independent Variables (IV): variables that are manipulated by the researcher.
  • Dependent Variables (DV): the outcomes of interest.
  • Predictors and Criterion (the non-experimental counterparts of IV and DV)
  • Confounding variables: uncontrolled extraneous variables that permit alternative explanations for the results of a study.

9
Moderator Variable
  • A special type of IV that influences the relationship between two other variables
  • X → Y, with M acting on the strength of the X–Y relationship
  • Example:
  • Gender → Hiring rate, with M = type of job
  • The relationship between gender and hiring rate may change depending on the type of job individuals are applying for (see the sketch below).
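
A minimal computational sketch of how moderation is commonly tested statistically: as an interaction term in a regression. The data and variable names below are invented for illustration; the slide itself does not prescribe an analysis.

    # Hypothetical sketch: moderation tested as a regression interaction.
    # All data are invented; names mirror the slide's example.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    gender = rng.integers(0, 2, n)        # X (0/1 coded)
    job_type = rng.integers(0, 2, n)      # M: type of job (0/1 coded)
    # Hiring depends on gender only for one job type -> an interaction.
    hiring = 0.5 + 0.2 * gender * job_type + rng.normal(0, 0.1, n)

    # Columns: intercept, X, M, and the X*M interaction term.
    X = np.column_stack([np.ones(n), gender, job_type, gender * job_type])
    betas = np.linalg.lstsq(X, hiring, rcond=None)[0]
    print("interaction coefficient:", betas[3])  # nonzero => moderation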

10
Mediator Variable
  • A special type of IV that accounts for the relation between the IV and the DV.
  • Mediation implies a causal chain in which the IV causes a mediator, which causes the DV:
  • IV → MED → DV
  • Example (see the sketch below):
  • IV: negative feedback
  • MED: negative thoughts
  • DV: willingness to participate
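
A minimal sketch of a regression-based mediation check in the Baron and Kenny style, one common approach that the slide does not itself prescribe. The data are invented to match the example; the point is that the IV's direct effect on the DV should shrink once the mediator is controlled.

    # Hypothetical sketch of a regression-based mediation check.
    # Invented data; a real analysis would also test the indirect effect.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    feedback = rng.normal(0, 1, n)                   # IV: negative feedback
    thoughts = 0.6 * feedback + rng.normal(0, 1, n)  # MED: negative thoughts
    willing = -0.5 * thoughts + rng.normal(0, 1, n)  # DV: willingness

    def coefs(X, y):
        """Least-squares coefficients of y on the columns of X."""
        return np.linalg.lstsq(X, y, rcond=None)[0]

    one = np.ones(n)
    c = coefs(np.column_stack([one, feedback]), willing)[1]   # total effect
    a = coefs(np.column_stack([one, feedback]), thoughts)[1]  # IV -> MED
    c_prime, b = coefs(np.column_stack([one, feedback, thoughts]), willing)[1:]
    print(f"total c={c:.2f}, direct c'={c_prime:.2f}, indirect a*b={a*b:.2f}")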

11
Moderator vs. Mediator
  • A moderator variable is one that influences the
    strength of a relationship between two other
    variables.
  • A mediator variable is one that explains the
    relationship between the two other variables.

12
Example
  • You are an I/O psychologist working for an insurance company. You want to assess which of two training methods is more effective for training new secretaries. You give one group of secretaries on-the-job training and a booklet to study at home. You give the second group of secretaries on-the-job training and have them watch a 30-minute video.

13
Step 2: Research Design
  • A research design is the structure or
    architecture for the study.
  • A plan for how to treat variables that can
    influence results so as to rule out alternative
    interpretations.
  • Primary Research Methods
  • Experimental (Laboratory vs. Field Research)
  • Quasi-Experimental
  • Non-Experimental (Observational, Survey)

14
Step 2: Research Design
  • Secondary Research Methods:
  • Meta-analysis: a statistical method for combining/analyzing the results from many studies to draw a general conclusion about relationships among variables (p. 61); see the sketch below.
  • Qualitative Research Methods:
  • Rely on observation, interview, case study, and analysis of diaries to produce narrative descriptions of events or processes.
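
As a concrete illustration of the meta-analysis idea, here is a minimal fixed-effect pooling of correlations via Fisher's z transform, one standard textbook approach. The study correlations and sample sizes are invented.

    # Minimal fixed-effect meta-analysis of correlations via Fisher's z.
    # The four study correlations and sample sizes are invented.
    import numpy as np

    r = np.array([0.25, 0.30, 0.18, 0.40])  # correlations from four studies
    n = np.array([120, 80, 200, 60])        # their sample sizes

    z = np.arctanh(r)            # Fisher r-to-z transform
    w = n - 3                    # inverse-variance weights for z
    z_bar = (w * z).sum() / w.sum()
    print("pooled r:", np.tanh(z_bar))  # back-transform to a correlation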

15
Evaluating Research Design
  • Internal validity (Control)
  • Does X cause Y?
  • Lab studies eliminate distracting variables
    through experimental control.
  • Using statistical techniques to control for the influence of certain variables is statistical control.
  • External validity (Generalizability)
  • Does the relation of X and Y hold in other
    settings and with other participants and stimuli?

16
Threats to Internal Validity
  • History
  • Instrumentation
  • Selection
  • Maturation
  • Mortality/Attrition
  • Testing
  • Experimenter Bias
  • Awareness of Being a Subject

17
Step 3: Measurement
  • Goal: Quantify the IV and DV
  • Psychological Measurement: the process of quantifying variables (called constructs)
  • The process of assigning numerical values to represent individual differences, that is, variations among individuals on the attribute of interest
  • A Measure:
  • Any mechanism, procedure, tool, etc., that purports to translate attribute differences into numerical values

18
Step 3: Measurement
  • Two classes of measured variables:
  • Categorical (or Qualitative): differ in type but not amount
  • Continuous (or Quantitative): differ in amount

19
Step 4: Data Analysis
  • Statistics are what we use to summarize relationships among variables and to estimate the odds that they reflect more than mere chance.
  • Descriptive Statistics: summarize, organize, and describe a sample of data.
  • Inferential Statistics: used to make inferences from sample data to a larger sample or population.
  • Distributions

20
Descriptive Statistics
  • Measures of Central Tendency
  • Mean, Median, Mode
  • Measures of Variability
  • Range, Variance, SD
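
A minimal sketch computing each of these descriptive statistics on an invented sample of absence days (echoing the absenteeism example from slide 3), using only Python's standard library.

    # Descriptive statistics on an invented sample of absence days.
    import statistics

    absences = [2, 4, 4, 5, 7, 9, 12]
    print("mean:", statistics.mean(absences))
    print("median:", statistics.median(absences))
    print("mode:", statistics.mode(absences))
    print("range:", max(absences) - min(absences))
    print("variance:", statistics.variance(absences))  # sample variance
    print("SD:", statistics.stdev(absences))           # sample SD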

21
Differences in Variance
[Figure: three distribution curves illustrating low variance, normal, and high variance]
22
Inferential Statistics
  • Compares a hypothesis to an alternative
  • Statistical Significance: the likelihood that the observed difference would be obtained if the null hypothesis were true
  • Statistical Power: the likelihood of finding a statistically significant difference when a true difference exists
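
One common inferential test, sketched with invented scores for the two training groups from slide 12. The slide does not specify a test; the independent-samples t-test here is an assumption.

    # Hypothetical two-sample t-test on invented training scores.
    from scipy import stats

    booklet_group = [78, 82, 74, 90, 85, 79, 88]
    video_group = [84, 91, 87, 95, 89, 92, 90]

    t, p = stats.ttest_ind(booklet_group, video_group)
    # p: the probability of observing a difference this large if the
    # null hypothesis (no true difference between methods) were true.
    print(f"t = {t:.2f}, p = {p:.3f}")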

23
Correlation
  • Correlation:
  • Used to assess the relationship between 2 variables
  • Represented by the correlation coefficient r
  • r can take on values from −1 to +1
  • Size denotes the magnitude of the relationship
  • 0 means no relationship
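
A minimal sketch of computing r, with invented satisfaction and absence scores.

    # Pearson correlation between two invented variables.
    import numpy as np

    satisfaction = [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.0]
    absences = [9, 4, 11, 2, 5, 12, 6]

    r = np.corrcoef(satisfaction, absences)[0, 1]
    print(f"r = {r:.2f}")  # negative: more satisfied, fewer absences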

24
Correlation and Regression
  • Correlation
  • Scatterplot
  • Regression Line
  • Linear vs. Non-Linear
  • Multiple Correlations
  • Correlation and Causation

25
Prediction of the DV with one IV
  • Correlations allow us to make predictions

[Scatterplot: regression line predicting a DV value of 86 from an IV value of 115]
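
A minimal sketch of the prediction the scatterplot illustrates: fit a least-squares line, then read off the predicted DV at a given IV value. The data points are invented; only the IV = 115, DV ≈ 86 pairing echoes the slide.

    # Fit a regression line and predict the DV from one IV value.
    # Invented data chosen so that IV = 115 predicts roughly DV = 86.
    import numpy as np

    iv = np.array([95, 100, 105, 110, 115, 120])
    dv = np.array([70, 74, 77, 80, 84, 86])

    slope, intercept = np.polyfit(iv, dv, 1)  # least-squares line
    print("predicted DV at IV = 115:", slope * 115 + intercept)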
26
Interpretation: Evaluating Measures
  • How do you determine the usefulness of the
    information gathered from our measures?
  • The Answer
  • Reliability Evidence
  • Validity Evidence

27
Interpretation: Evaluating Measures
  • Reliability: consistency or stability of a measure.
  • A measure should yield a similar score each time it is given.
  • We can get a reliable measure by reducing errors of measurement: any factor that affects obtained scores but is not related to the thing we want to measure.
  • Errors of measurement:
  • Random factors, practice effects, etc.

28
Evaluating Measures: Reliability
  • Test-Retest (Index of Stability)
  • Method: give the same test on two occasions and correlate the sets of scores (coefficient of stability)
  • Error: anything that differentially influences scores across time for the same test
  • Issue: how long should the time interval be?
  • Limitations:
  • Not good for tests that are supposed to assess change
  • Not good for tests of things that change quickly (e.g., mood)
  • Difficult and expensive to retest
  • Memory/practice effects are likely

29
Evaluating Measures: Reliability
  • Equivalent Forms (Index of Equivalence)
  • Method: give two versions of a test and correlate the scores (coefficient of equivalence)
  • Reflects the extent to which the two different versions are measuring the same concept in the same way
  • Issues: are the tests really parallel? How long should the interval be?
  • Limitations:
  • Difficult and expensive
  • Testing time
  • Unique estimate for each interval

30
Evaluating Measures: Reliability
  • Internal Consistency Reliability
  • Method: take a single test and look at how well the items on the test relate to each other
  • Split-half: similar to alternate forms (e.g., odd vs. even items)
  • Cronbach's Alpha: mathematically equivalent to the average of all possible split-half estimates (see the sketch below)
  • Limitations:
  • Only usable for multiple-item tests
  • Some tests are not designed to be homogeneous
  • Doesn't assess stability over time
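
A direct implementation of Cronbach's alpha from its definitional formula, applied to an invented 5-respondent by 4-item response matrix.

    # Cronbach's alpha on an invented respondents-by-items matrix.
    import numpy as np

    scores = np.array([[4, 5, 4, 5],
                       [2, 3, 2, 2],
                       [5, 5, 4, 4],
                       [3, 3, 3, 4],
                       [1, 2, 2, 1]])

    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")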

31
Evaluating Measures: Reliability
  • Inter-Rater Reliability
  • Method: two different raters rate the same target and the ratings are correlated
  • The correlation reflects the proportion of consistency among the ratings
  • Issue: reliability doesn't imply accuracy
  • Limitations:
  • Need informed, trained raters
  • Ratings are not a good way to measure many attributes

32
Interpretation: Evaluating Measures
  • Validity:
  • The accuracy of inferences made based on data.
  • Whether a measure accurately and completely represents what was intended to be measured.
  • Validity is not a property of the test itself;
  • It is a property of the inferences we make from the test scores.

33
Evaluating Measures: Validity
  • Criterion-Related
  • Predictive
  • Concurrent
  • Content-Related
  • Construct-Related
  • Reliability is a necessary but not sufficient
    condition for validity

34
Content Validity
  • The extent to which a predictor provides a representative sample of the thing we're measuring
  • Example: First Exam
  • Content: history, research methods, criterion theory, job analysis, measurement in selection
  • Evidence:
  • SME (subject-matter expert) evaluation

35
Criterion-Related Validity
  • The extent to which a predictor relates to a
    criterion
  • Evidence
  • Correlation (called the validity coefficient)
  • A good validity coefficient is around .3 to .4
  • Concurrent Validity
  • Predictive Validity

36
Construct Validity
  • The extent to which a test is an accurate
    representation of the construct it is trying to
    measure
  • Construct validity results from the slow
    accumulation of evidence (multiple methods)
  • Evidence
  • Content validity and criterion-related validity
    can provide support for construct validity
  • Convergent validity
  • Divergent (discriminant) validity

37
Step 5 Conclusions From Research
  • You are making inferences!
  • What if your inferences seem wrong?
  • Theory is wrong?
  • Information (data) is bad?
  • Bad measurement?
  • Bad research design?
  • Bad sample?
  • Analysis was wrong?

38
Step 5 Conclusions From Research
  • Cumulative Process
  • Dissemination
  • Conference presentations, journal publications
  • Boundary conditions
  • Generalizability
  • Causation
  • Serendipity

39
Research Ethics
  • Informed consent
  • Welfare of subjects
  • Conflicting obligations to the organization and
    to the participants
