Assessment and evaluation of pedagogical outcomes in the area of Networked Supported Collaborative Learning

1
Assessment and evaluation of pedagogical outcomes
in the area of Networked Supported Collaborative
Learning
  • Symeon Retalis (retal_at_unipi.gr)
  • University of Piraeus
  • Department of Technology Education and Digital
    Systems
  • Computer Supported Learning Engineering Lab

http://www.softlab.ece.ntua.gr/research/research_projects/tell/
2
Why do we care about the effectiveness of
e-learning?
  • Research on e-learning has been driven by what
    many are calling the information revolution.
  • E-learning, which was once a poor and often
    unwelcome stepchild within the academic
    community, is becoming increasingly more visible
    as a part of all educational levels.
  • The attitudes and satisfaction of students using
    e-learning also are characterized as generally
    positive.
  • Thomas Russell, in his recently published
    annotated bibliography entitled The No
    Significant Difference Phenomenon, lists 355
    sources dating back as early as 1928 that seem
    to bolster these arguments
    (http://www.nosignificantdifference.org).
  • However, a closer look at the evidence suggests
    a more cautious view of the effectiveness of
    e-learning.

3
Contemporary research on the effectiveness
  • Research Approaches
  • Descriptive research involves the collection of
    data to answer specific questions.
  • Data are usually collected through
    questionnaires, interviews, or standardized
    attitude scales. An important component of
    descriptive research is the validation of the
    questionnaire in order to determine if it
    measures what it was developed to measure.
  • Typical descriptive studies are concerned with
    the assessment of attitudes, opinions, and
    conditions.
  • A case study is an in-depth investigation of one
    learning unit.
  • The researcher can use a variety of methods to
    gather data, however, the explanation of the unit
    is generally written in narrative form.
  • Correlational research involves collecting data
    in order to determine whether, and to what
    degree, a relationship exists between two or more
    quantifiable variables.
  • An estimate is provided of just how related two
    variables are. It is important to note that
    correlational research almost never establishes a
    cause-effect relationship.
  • One example of a correlational study might be
    determining the relationship (correlation)
    between student satisfaction with an instructor
    and the type of technology used.
  • Experimental research is the only type of
    research that can truly test hypotheses
    concerning cause- and-effect relationships.
  • In an experimental study, the researcher
    manipulates at least one independent variable and
    observes the effect on one or more dependent
    variables. In other words, the researcher
    determines who gets what, i.e. which group of
    subjects receives which treatment.
  • The groups are generally referred to as
    experimental and control groups.
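The experimental-vs-control comparison described above can be sketched numerically. The following Python fragment is illustrative only: the scores are fabricated, and no such script appears in the source. It computes Welch's t statistic, the usual test when the two groups may have unequal variances:

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (unequal variances allowed)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2   # sample variances
    se = (va / len(a) + vb / len(b)) ** 0.5  # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Fabricated post-test scores for the two groups:
experimental = [78, 85, 82, 90, 74, 88, 81, 79, 86, 83]
control      = [75, 80, 79, 84, 72, 83, 78, 77, 81, 80]

t = welch_t(experimental, control)
print(f"t = {t:.2f}")
# A |t| below the critical value (about 2.1 for samples this
# small) would not be judged significant at the 0.05 level --
# the "no significant difference" pattern discussed later.
```

In a real study the statistic would be compared against the t distribution with the Welch-Satterthwaite degrees of freedom to obtain a p-value.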

4
Key shortcomings of original research (I)
  • Much of the research does not control for
    extraneous variables and therefore cannot show
    cause and effect.
  • Most experimental studies of e-learning are
    designed to measure how a specific technology
    (the "cause") impacts some type of learning
    outcome or influences the attitudes of students
    (the "effect").
  • To accurately assess this relationship, other
    potential causes must not influence the
    measured outcomes. If other variables influence
    outcomes, it is impossible to attribute cause
    to the technology being used.
  • The validity and reliability of the instruments
    used to measure student outcomes and attitudes
    are questionable.
  • An important component of good educational
    research relates to proper measurement of
    learning outcomes and/or student attitudes. In
    short, do the instruments (such as final
    examinations, quizzes, questionnaires, or
    attitude scales) measure what they are supposed
    to measure?
  • A well-conducted study would include the validity
    and reliability of the instruments so that the
    reader can have confidence in the results.
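Instrument reliability of the kind discussed here is commonly quantified with Cronbach's alpha. A minimal sketch in Python, using fabricated Likert-scale responses rather than data from any study cited here:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.
    `items` is one list of scores per questionnaire item,
    all of equal length (one score per respondent)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all items:
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Fabricated 5-point Likert data: 4 items x 6 respondents
responses = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
    [3, 5, 3, 4, 2, 4],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
# Values above roughly 0.7 are conventionally read as
# acceptable internal consistency.
```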

5
Key shortcomings of original research (II)
  • Many studies do not adequately control for the
    feelings and attitudes of the students and
    faculty, what the educational research literature
    refers to as reactive effects.
  • Reactive effects are a number of factors
    associated with the way in which a study is
    conducted and the feelings and attitudes of the
    students involved.
  • One reactive effect, known as the Novelty Effect,
    refers to increased interest, motivation, or
    participation on the part of students simply
    because they are doing something different, not
    better per se.
  • Other shortcomings
  • There is a lack of a theoretical or conceptual
    framework
  • No systematic methods for collecting and
    analysing/interpreting data have been followed
  • How the different learning styles of students
    relate to the use of particular technologies is
    not taken into consideration

6
TELL: Towards Effective network supported
coLLaborative learning activities
http://www.softlab.ece.ntua.gr/research/research_projects/tell/
  • University of Piraeus, GR
  • Politecnico di Milano, IT
  • Maastricht Learning Lab, NL
  • National Technical University of Athens, GR
  • University of Valladolid, ES
  • University of Patras, GR
  • A Priory Ltd, UK

7
Aim and objectives
  • This project is a methodical and systematic
    effort
  • to support the understanding of the learning
    process that happens in networked supported
    collaborative learning (NSCL) environments,
  • to provide methods and tools to measure the
    effectiveness of networked supported
    collaborative learning activities,
  • to offer means for training the human actors
    involved (or who would like to get involved) in
    collaborative learning activities and
  • to support the design of new effective
    technological tools for collaborative learning.

8
Project focus
  • This project focuses on specifying the concept of
    effective network supported learning activities
    within a variety of contexts, and as a synergy of
  • instructional methods,
  • technology,
  • subject matter,
  • and other contextual factors which provide the
    conditions necessary to support learning seen
    as both knowledge construction and skill
    acquisition.

9
Project activities
  • The project consortium will also exchange
    know-how and experiences about
  • the evaluation process and
  • tools for networked supported collaboration and
    interaction between the actors (students, tutors
    etc.)
  • in multiple and diverse learning environments,
  • in order to provide holistic conceptual
    evaluation frameworks and systematize the
    measurement of effectiveness in quantitative and
    qualitative approaches.
  • Moreover, the project partners will cooperate in
    order to create software system architectural
    frameworks that will allow network supported
    collaborative learning tools to interchange data
    with other tools and with evaluation tools.

10
Project Workflow
WP1: Peer reviews of evaluation studies
WP2: User trials and evaluation field research on
NSCL
Resources about evaluation methods and tools (WP1
deliverable)
WP3: Design patterns construction
WP4: Interchangeability of data among NSCL tools
Project Management
Tutorials and Workshops
Dissemination
11
Project Deliverables
  • This project will offer
  • a set of design patterns for NSCL
  • Building on the experience of the E-LEN project
    (http://www.tisip.no/E-LEN)
  • Resources about evaluation methods and tools for
    network supported collaborative learning, on
    which an evaluation toolkit for networked
    supported collaborative learning will be based.
  • It will develop a meta-study of evaluation
    studies of network supported collaborative
    learning and document it both online as well as
    in a paper based handbook.
  • A report based on multiple evaluation studies
    that will happen in real educational environments
    (schools, universities, workplaces, etc.) using
    different networked supported collaborative
    learning systems (Blackboard Polaris, Learning
    Space, 3D active worlds, group based simulations,
    etc.) and strategies (synchronous, asynchronous,
    workplace collaborative learning, etc.)
  • A conceptual framework for the interchangeability
    of data among different networked supported
    collaborative learning systems
  • Tutorials and workshops
  • Dissemination events and resources (web site,
    papers, etc.)
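The "interchangeability of data" deliverable implies some tool-neutral record format. As a purely hypothetical sketch (the field names and structure below are illustrative assumptions, not the TELL project's actual framework), an NSCL tool might export its interaction log as JSON for an evaluation tool to import:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical minimal event record; the fields are
# illustrative assumptions, not part of any TELL specification.
@dataclass
class CollaborationEvent:
    actor: str      # student or tutor identifier
    action: str     # e.g. "post_message", "read_message"
    tool: str       # the source NSCL system
    timestamp: str  # ISO 8601 time of the event

def export_events(events):
    """Serialise events to a tool-neutral JSON document
    that an evaluation tool could import."""
    return json.dumps([asdict(e) for e in events], indent=2)

log = [
    CollaborationEvent("s01", "post_message", "forum", "2004-03-01T10:15:00"),
    CollaborationEvent("t01", "read_message", "forum", "2004-03-01T10:20:00"),
]
print(export_events(log))
```

The point of such a format is that evaluation tools can consume logs from any compliant NSCL system without per-system adapters.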

12
Needs in evaluation
  • The point to understand about evaluation is that
    it needs to
  • operate within a shared framework and
  • recognise the different needs of the main
    stakeholder groups (such as teachers, learners,
    parents, employers and government, or, in other
    settings, families with young children, single
    adults and retired people).
  • Without some kind of shared framework there are
    no possibilities for shared understanding.

13
Evaluands
  • In education, we tend to find the following kinds
    of things featuring as evaluands (things to be
    evaluated)
  • A learning resource (such as a textbook, a
    multimedia programme, a website)
  • A learning tool/platform (such as a slide rule,
    a modeling program)
  • A course (a connected set of objectives,
    activities and resources usually intended for a
    defined target group of learners and usually also
    involving some form of assessment to see if
    learning objectives have been met)
  • A teaching strategy (such as lecturing to the
    whole class or problem-based learning)
  • A learning environment (a connected set of
    resources and tools, arranged in space and
    usually inhabited by a known group of learners
    and one or more teachers)
  • An innovation project (a planned sequence of
    activities intended to create a defined output,
    and/or achieve some defined outcomes, within a
    defined envelope of time and resources)

The main point to accept here is that things
out of context are hard to evaluate. We need to
plan to evaluate a complex configuration of
things and processes.
14
Evaluation in the domain of network supported
collaborative learning
  • There is a small, specialized literature on this
    topic (see e.g. Berge & Myers, 2000; Rossman,
    1999)
  • The main new issues we need to take into account
    are
  • the specific educational processes and goals
    associated with NSCL, for example that it is
    concerned with collaborative, not individualistic,
    learning and that it emphasizes learning through
    social interaction (mediated by technology)
  • the foregrounding of tools and infrastructure (we
    usually want to know something about the
    qualities, advantages, etc of new tools or
    infrastructure)
  • the importance of defined and implicit ways of
    working, assumptions and expectations about how
    to collaborate over the Internet, etc.

15
Findings in NSCL (i)
  • In general, studies of this kind tend not to
    reveal significant differences between groups of
    students involved in NSCL and groups taught in
    other ways.
  • One of the largest studies in the literature
    in terms of student numbers is that reported by
    Carswell et al (2000), using data from an
    undergraduate course in computer science.
  • Lewis (2002) reports another study which shows no
    clear advantages of NSCL over conventional forms
    of learning: in some assessment tests the NSCL
    group performed better; in others the
    conventional group performed better.
  • Lewis's analysis led to a suggestion that it was
    not so much the use or otherwise of NSCL that
    made the difference. A more important factor was
    how students engaged in NSCL, with a higher level
    of engagement leading to better learning outcomes.
  • Kashy et al. (2000) report a highly positive
    success rate from introducing NSCL in a large
    (500-student) on-campus introductory physics
    course.
  • Final grades in a web-based class correlated with
    the number of messages read and posted by
    students during the semester (Wang & Newlin,
    2000).
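Correlational findings such as Wang & Newlin's are typically reported as Pearson's r. A self-contained sketch with fabricated data (not the original study's data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fabricated data: messages read + posted vs final grade (0-100)
messages = [12, 45, 30, 8, 60, 25, 50, 15]
grades   = [62, 81, 74, 58, 90, 70, 84, 65]

print(f"r = {pearson_r(messages, grades):.2f}")
# A strongly positive r is consistent with the reported
# finding, but -- as slide 3 cautions -- correlation alone
# does not establish that participation causes better grades.
```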

16
Findings in NSCL (ii)
  • Need especially to comment on the problem that
  • most comparative studies (NSCL vs conventional)
    show no significant differences, while
  • major stakeholders want evidence of value for
    money with respect to their investment in NSCL.
  • The answer may be to focus more on the following
  • there are some good signs of educational benefit,
    but they are in areas that involve complex
    problem solving, higher order thinking, serious
    discussion of difficult ideas, etc. Educational
    improvements here won't be picked up unless the
    tests used are geared to these higher order
    learning outcomes
  • there's a serious need for studies which are
    theoretically and methodologically
    sophisticated/powerful, in the sense that you
    need a good theoretical model of the potential
    benefits of NSCL to know what to look for.
  • Good examples are studies by Jonassen & Kwon
    (2001) and Benbunan-Fich & Hiltz (1999)

17
Findings in NSCL (iii)
  • Pinelle & Gutwin (2000), A Review of Groupware
    Evaluations
  • The main findings are that
  • almost one-third (1/3) of the groupware systems
    were not evaluated in any formal way,
  • that only about one-quarter (1/4) of the articles
    included evaluations in a real-world setting,
  • and that a wide variety of evaluation techniques
    are in use.
  • From the studies included in this survey, 41% of
    the articles that included evaluations were of
    actual real-world software implementations, but
    only 25% considered the software's organizational
    and work impact.

Their main conclusions from the review are that
more attention must be paid to evaluating
groupware systems and that there is room for
additional evaluation techniques that are simple
and low in cost.
18
On the mainstreaming of NSCL
  • '...using the web and communicating with others
    online are taken for granted.
  • IM and SMS are no more exotic to this generation,
    it seems, than note-passing and talking on the
    telephone were to mine, and blogging is just the
    modern analog of keeping a personal journal...
  • In short, after barely more than 30 years of
    existence, NSCL has become more of a practical
    necessity than an object of fascination and
    fetish.'
  • (Herring, 2004, p. 33)

No novelty effect anymore! We need to examine
what works and what does not in a complex
e-learning environment in relation to higher
order learning outcomes
19
Step forward
  • As Michael Patton (1996) has pointed out, there
    is a long history of evaluation reports having no
    effect on programme and project implementation,
    policy and funding decisions, etc.
  • The main reason is that insufficient attention
    gets paid to the interface between evaluation
    reporting and decision-making.
  • In Patton's view, this means that there has to be
    close liaison between the evaluation team and the
    decision-makers (at whatever level they sit),
  • so that the evaluators can understand the key
    concerns of the decision-makers and the
    decision-makers can anticipate how the
    evaluators' evidence will help them make better
    decisions.
  • This close liaison does not have to compromise
    the independence and objectivity of the
    evaluators.

20
Need for an evaluation conceptual model
"The most important part of a successful design is
the underlying conceptual model. The hard part of
design is formulating an appropriate conceptual
model and then assuring that everything else be
consistent with it." (Donald Norman)
21
Questions?
http://www.softlab.ece.ntua.gr/research/research_projects/tell/
Thanks to TELL project partners for their input