1
Music and Self-Generated Images: Applying Dual
Coding Theory, Generative Theory of Reading
Comprehension, and Cognitive Load Theory to
Web-Based Music Instruction
George J. Henik
Educational Communication and Technology Program
New York University
United States
www.georgehenik.com
  • ed.ss.webmaster@nyu.edu
  • georgehenik@yahoo.com

2
Problem
  • What is an effective computer/web-based
    theoretical approach for teaching music?

3
Proposed Solutions
  • Dual-Coding Theory (DCT) (Clark & Paivio,
    1991; Paivio, 1971, 1990)
  • Sound and Image Connection (from DCT)
    (Clark et al., 1974; Datteri, 2000; Henik,
    2002; Lipscomb & Kendall, 1994; Hiraoka &
    Umemoto, 1981)
  • Generative Theory of Reading Comprehension
    (Wittrock, 1990)
  • Visual Complexity & Cognitive Load
    (Henik & Allen, 2002; Miller, 1956; Sweller,
    1994)

4
The Study
  • Image and Music Connection?
  • Can't assume music-to-language mapping
  • Limited to interval recognition tasks
  • Distance between two notes (see sketch below)
  • Short
  • Simplest music ear-training task
  • User-Generated Images
  • Users generate meaning
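For concreteness, here is a minimal sketch (not part of the original study materials) of how the three target intervals reduce to distances in semitones; the function name and MIDI note numbers are illustrative assumptions.

    # Illustrative sketch: the study's three target intervals expressed as
    # semitone distances between two notes (standard music-theory values).
    INTERVALS = {
        4: "Major third (M3)",
        5: "Perfect fourth (P4)",
        6: "Tritone",
    }

    def label_interval(low_midi: int, high_midi: int) -> str:
        """Name the interval between two MIDI note numbers, if it is one
        of the three intervals used in the study."""
        distance = abs(high_midi - low_midi)
        return INTERVALS.get(distance, f"{distance} semitones (not studied)")

    # Example: C4 (MIDI 60) up to E4 (MIDI 64) is a major third.
    print(label_interval(60, 64))  # -> Major third (M3)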

5
The Study
  • Must take interface into consideration
  • Would the keyboard be viewed as one piece or many
    separate parts?

6
Research Question 1
  • What is the relationship between using
    self-generated images for interval recognition
    and
  • pre-test scores
  • music background
  • drawing background
  • attitudes towards the study

7
Research Question 2
  • What are the effects of a more visually complex
    interface on learning musical intervals through
    user-generated images?

8
Sample
  • 2 groups of Graduate Students
  • 17 from Pilot 1
  • 14 from Pilot 2

9
Materials
  • Computers connected online
  • Flash 5 plugin
  • Headphones

10
Procedures
  • Pretest
  • Music Questionnaire
  • Treatment
  • Practice Screen
  • Three separate intervals to learn (M3, P4,
    Tritone)
  • Learner draws a picture for each
  • Pilot 1 & Pilot 2

11
Procedures
12
Procedures
  • Drawing Questionnaire
  • Posttest
  • 35 - 45 Minutes

13
Results (RQ1)
  • What is the relationship between using
    user-generated images for interval recognition
    and
  • a) prior music knowledge
  • b) prior drawing knowledge
  • c) music interest and
  • d) drawing interest?

14
Results (RQ1)
  • Sig. correlations on the exit survey
  • Felt confused (CONFUSED)
  • r = -.683, p = .007 (posttest)
  • Able to hear differences (HEARDIF)
  • r = .551, p = .041 (pretest)
  • Became bored (BORED)
  • r = -.604, p = .022 (posttest)

15
Results (RQ1)
  • Hierarchical multiple regression
  • Model 1: Pretest
  • Model 2: Drawing and Music Background
  • Model 3: CONFUSED, HEARDIF, BORED
  • R2 = .872 (p = .011)
  • POSTTEST = 8.497 + 1.485(PRETEST) -
    0.711(MUSIC_BACKGROUND) + 0.477(DRAWING_BACKGROUND) +
    0.515(CONFUSED) + 1.113(HEARDIF) + 0.599(BORED)
    (MUSIC_BACKGROUND, CONFUSED, and BORED were not
    significant in the model.)
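A minimal sketch (assumed setup, not the original analysis code) of a three-block hierarchical regression matching the models above; the data file is hypothetical, and the column names simply mirror the slide.

    # Sketch: hierarchical (blockwise) multiple regression.
    # Model 1 = pretest; Model 2 adds background; Model 3 adds survey items.
    # File name is hypothetical; columns mirror the slide's variable names.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("pilot_data.csv")

    blocks = [
        "POSTTEST ~ PRETEST",
        "POSTTEST ~ PRETEST + MUSIC_BACKGROUND + DRAWING_BACKGROUND",
        "POSTTEST ~ PRETEST + MUSIC_BACKGROUND + DRAWING_BACKGROUND"
        " + CONFUSED + HEARDIF + BORED",
    ]

    for i, formula in enumerate(blocks, start=1):
        model = smf.ols(formula, data=df).fit()
        print(f"Model {i}: R2 = {model.rsquared:.3f}")
        print(model.params.round(3))   # coefficients (cf. the equation above)
        print(model.pvalues.round(3))  # which predictors reach significance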

16
Results (Pilot 1 vs 2)
  • What are the effects of a more visually complex
    interface on learning musical intervals through
    user-generated images?

17
Results (Pilot 1 vs 2)
  • ANCOVA
  • Need to determine group differences between
    Pilots 1 & 2 while controlling for the pretest
  • Pretest scores as the covariate
  • Treatment group as a fixed factor
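A minimal sketch (assumed setup, not the original analysis code) of an ANCOVA with the pretest as covariate and pilot group as the fixed factor; file and column names are hypothetical.

    # Sketch: ANCOVA with pretest as covariate, pilot group as fixed factor.
    # File name and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("pilot_data.csv")

    model = smf.ols("POSTTEST ~ PRETEST + C(GROUP)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))     # F and p for covariate and factor
    print(f"Overall R2 = {model.rsquared:.3f}")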

18
Results (Pilot 1 vs 2)
  • ANCOVA
  • Covariate (pretest) significant
    (F = 7.385, p = .011)
  • IV (treatment group): scores decreased, but not
    significantly
  • Overall R2 = .254

19
Conclusions
  • Why was music background found not to be
    significant in Pilot 2?
  • Particular sample had a lower range for music
    background than for either Pilot 1 or drawing
    background

20
Conclusions
  • The only exit-survey question that was
    significant in the overall model was the
    participants' ability to hear differences
    (HEARDIF)
  • Though the other questions correlated with
    either the pre- or post-test, those relationships
    are not individually important once combined
    with the other factors

21
Conclusions
  • The ANCOVA showed that the pretest was
    significant, but there were no significant
    differences between groups
  • Addition of keyboard combined with low music
    background may have increased cognitive load

22
Conclusions
  • By generating visual information in conjunction
    with aural information, users engage in deeper
    cognitive processing and learn these intervals
  • How effective is this method?
  • Further research

23
Implications
  • Only one of many future studies needed to help
    develop guidelines for musical multimedia
    environments
  • Careful attention to interface

24
Implications
  • Take advantage of multimedia!
  • Advantage over traditional learning: images can
    be displayed dynamically and generated by the
    user

25
Future Research
  • Using homogeneous groups of music students
  • Comparing different levels of images (none,
    static, user-generated) in a repeated-measures
    design
  • Effect of learner differences on interval
    recognition tasks (e.g., field dependency)

26
Thank You
  • Any Questions?
  • http://www.georgehenik.com