Transcript and Presenter's Notes

Title: EECS 690


1
EECS 690
  • April 9

2
A top-down approach
  • This approach is meant to generate a rule set
    from one or more specific ethical theories.
  • Wallach and Allen start off pessimistic about the
    viability of this approach, but point out that
    adherence to rules is an aspect of morality that
    must still be captured.

3
The Big Picture Theories (in Western thought)
  • Utilitarianism
  • Deontology
  • The authors' question about these theories will
    be what the computability requirements would be
    for each. This approach may shed a unique light on
    the practice of morality itself.

4
Consequentialism
  • Utilitarianism (a subset of consequentialism)
    might initially appeal to us because of Bentham's
    focus on calculability.
  • In 1995, James Gips supplied this list of
    computational requirements for a consequentialist
    robot (sketched in code after the list):
  • A way of describing the situation in the world
  • A way of generating possible actions
  • A means of predicting the situation that would
    result if an action were taken given the current
    situation
  • A method of evaluating a situation in terms of
    its goodness or desirability
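
Gips's four requirements map naturally onto a simple
decision loop. The following is a minimal sketch in
Python; the names (State, possible_actions, predict,
utility) and the toy scenario are invented for
illustration, not Gips's own formulation.

from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    # Requirement 1: a way of describing the situation in the world.
    people_helped: int = 0

def possible_actions(state):
    # Requirement 2: a way of generating possible actions.
    return ["help", "wait", "leave"]

def predict(state, action):
    # Requirement 3: predict the situation that would result if the
    # action were taken, given the current situation.
    if action == "help":
        return State(people_helped=state.people_helped + 1)
    return state

def utility(state):
    # Requirement 4: evaluate a situation in terms of its goodness
    # or desirability.
    return float(state.people_helped)

def choose(state):
    # Consequentialist choice: the action with the best predicted outcome.
    return max(possible_actions(state),
               key=lambda a: utility(predict(state, a)))

print(choose(State()))  # -> "help"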

5
Some difficulties
  • How can one assign numbers to something as
    subjective as happiness?
  • Do we aim for total or average happiness? (See
    the toy comparison after this list.)
  • What are the morally relevant features of any
    given situation? (People, animals, ecosystems?)
  • How far/wide should the calculation of effect go?
  • How much time is a moral agent allowed to devote
    to the decision-making process?
  • Note that these are not only problems that arise
    when thinking about the computability of moral
    theories; they also concern people's application
    of these theories, and they are issues that have
    not been widely settled, and not for lack of
    discussion.
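
The total-versus-average question, in particular, is
easy to state computationally. A toy comparison with
invented happiness scores, chosen so that the two
standards disagree:

# Invented happiness scores for the people affected by two outcomes.
outcome_a = [5, 5, 5]   # three people, moderately happy
outcome_b = [9]         # one person, very happy

# Total utilitarianism favors A; average utilitarianism favors B.
print(sum(outcome_a), sum(outcome_b))     # 15 vs 9
print(sum(outcome_a) / len(outcome_a),
      sum(outcome_b) / len(outcome_b))    # 5.0 vs 9.0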

6
A note
  • The authors do a good job of avoiding the
    question "How do humans do this?" when discussing
    ethical algorithms and behaviors. It may well be
    that general human behavior is not a good model
    for ethical systems to emulate.
  • This raises the question of what standard to hold
    ethical systems to. Do we tolerate in these
    systems the same range of moral failure we
    tolerate in humans? These questions might fit
    here, but for the sake of organization they are
    addressed later in the book.

7
Asimov's Laws of Robotics
  • A robot may not injure a human being or, through
    inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human
    beings, except where such orders would conflict
    with the First Law.
  • A robot must protect its own existence as long as
    such protection does not conflict with the First
    or Second Laws.
  • (Later, a Zeroth Law was added: A robot may not
    harm humanity, or, by inaction, allow humanity to
    come to harm. A toy encoding of the three laws
    follows below.)
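
Because each law yields to the laws before it, the
three laws behave like a lexicographic filter over
candidate actions. A toy encoding in Python; the
predicates that detect harm and disobedience are
invented placeholders, and building them is exactly
where the real difficulty lies.

def permissible(actions, harms_human, disobeys_order, harms_self):
    # First Law: eliminate any action that injures a human
    # (or, through inaction, allows a human to come to harm).
    survivors = [a for a in actions if not harms_human(a)]
    # Second Law: among the remainder, prefer actions that obey orders.
    obedient = [a for a in survivors if not disobeys_order(a)]
    if obedient:
        survivors = obedient
    # Third Law: finally, prefer actions that preserve the robot.
    safe = [a for a in survivors if not harms_self(a)]
    return safe or survivors

choices = permissible(
    ["shield_human", "follow_order", "flee"],
    harms_human=lambda a: a == "flee",             # fleeing abandons a human
    disobeys_order=lambda a: a == "shield_human",  # the standing order differed
    harms_self=lambda a: a == "shield_human",
)
print(choices)  # -> ['follow_order']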

8
Laws of Robotics
  • Asimov was really serious about this, and was (I
    think foolishly) optimistic about the usefulness
    of the robotic laws as stated. (Asimov's short
    essay on the robotic laws is forthcoming in the
    Further Resources section.)
  • This doesn't fit with consequentialist theories
    very well, because of its reliance on special
    duties.
  • The zeroth law is hopelessly vague for an
    action-guiding principle, and the first law alone
    can generate conflicts.
  • There may be a real, pressing difficulty with
    negative responsibility (being responsible for
    harms one fails to prevent, not only harms one
    causes).

9
Specific versus Abstract
  • Specific rules are very easy to apply, but have
    limited usefulness in novel situations. Still,
    perhaps part of what ethical systems require is a
    few specific rules for specific circumstances,
    though these alone would not be sufficient.
  • Abstract rules are more generally useful, as they
    allow adaptation, but are correspondingly
    difficult to apply. (A sketch of this tradeoff
    follows.)
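
One way to picture the tradeoff is a lookup table of
specific rules backed by an abstract fallback. A
hypothetical sketch; the rules and the expected_harm
parameter are invented for illustration.

# Specific rules: trivial to apply, but they cover nothing novel.
SPECIFIC_RULES = {
    "administer_approved_medication": "permit",
    "exceed_speed_limit": "forbid",
}

def judge(action, expected_harm):
    if action in SPECIFIC_RULES:
        return SPECIFIC_RULES[action]
    # Abstract fallback: covers novel cases, but applying it requires
    # the (hard) judgment encoded in expected_harm.
    return "forbid" if expected_harm > 0 else "permit"

print(judge("exceed_speed_limit", expected_harm=0.0))  # 'forbid' (specific rule)
print(judge("novel_situation", expected_harm=0.3))     # 'forbid' (abstract fallback)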

10
The Categorical Imperative
  • Act only on that maxim which you could will to
    become universal law.
  • A computer would need to appreciate:
  • a goal
  • a maxim (a behavior-guiding means to the goal)
  • an understanding of the implications for
    achievement of the goal by making the maxim
    universal
  • Lying, for example, could not be a universal law,
    because its goal would be thwarted by its being
    universalized. (Critics of Kant see problems with
    this test.) A toy version of the test follows.
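
The slide's three requirements suggest a schematic
universalization test. A toy sketch, assuming an
invented model in which a maxim's goal can depend on
how widely the maxim is adopted; this illustrates the
structure of the test, not a serious formalization of
Kant.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Maxim:
    goal: str                               # the end the agent pursues
    achieves_goal: Callable[[float], bool]  # given the fraction of agents
                                            # acting on the maxim, is the
                                            # goal still achievable?

def universalizable(m):
    # The test, schematically: does the maxim still achieve its goal
    # when everyone (adoption = 1.0) acts on it?
    return m.achieves_goal(1.0)

# Lying to be believed: lies work only while most people are honest,
# so the goal is thwarted once the maxim becomes universal.
lying = Maxim(goal="be believed",
              achieves_goal=lambda adoption: adoption < 0.5)

print(universalizable(lying))  # -> False: lying fails the test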

11
Language, vagueness, and morality
  • Our language is full of words that are vague but
    that have clear applications and misapplications.
    (E.g., baldness is a vague concept, but Captain
    Picard IS bald, and the members of the band ZZ
    Top are not.)
  • Perhaps by focusing on the clear applications of
    moral rules, we might achieve something useful
    for the less clear cases. (A toy classifier
    follows.)
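
That suggestion corresponds to a three-valued
classifier that commits only on the clear cases. A
hypothetical sketch; the scalp_coverage measure and
the thresholds are invented for illustration.

def bald(scalp_coverage):
    # scalp_coverage in [0, 1]; the thresholds are invented.
    if scalp_coverage < 0.1:
        return "clearly bald"       # e.g. Captain Picard
    if scalp_coverage > 0.9:
        return "clearly not bald"   # e.g. the members of ZZ Top
    return "unclear"                # the vague middle; the rule stays silent

print(bald(0.02), bald(0.95), bald(0.5))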