1
The Ethical Status of Artificial Agents With
and Without Consciousness
E-Intentionality, 9/11/06
  • Steve Torrance
  • Middlesex University, UK
  • and
  • University of Sussex, UK
  • stevet@sussex.ac.uk

2
  • This is an expanded version of a talk given at a
    conference of the ETHICBOTS project in Naples,
    Oct 17-18, 2006.
  • See S. Torrance, "The Ethical Status of
    Artificial Agents With and Without
    Consciousness" (extended abstract), in G.
    Tamburrini and E. Datteri (eds), Ethics of Human
    Interaction with Robotic, Bionic and AI Systems:
    Concepts and Policies, Napoli: Istituto Italiano
    per gli Studi Filosofici, 2006.
  • See also S. Torrance, "Ethics and Consciousness
    in Artificial Agents", submitted to Artificial
    Intelligence and Society

3
What this talk covers
  • Artificial Agency (AA)
  • Artificial Consciousness (AC)
  • Artificial Ethics (AE)
  • Artificial Intelligence (AI)
  • our interaction with them
  • and our ethical relation to them.

4
Artificial X
  • One kind of definition-schema
  • → Creating machines which perform in ways which
    require X when humans perform in those ways
  • (or which justify the attribution of X?)
  • Outward performance, versus
  • psychological reality within?

5
Artificial Consciousness
  • Artificial Consciousness (AC)
  • → creating machines which perform in ways which
    require consciousness when humans perform in
    those ways (?)
  • Where is the psychological reality of
    consciousness in this?
  • → functional versus phenomenal
    consciousness?

6
Shallow and deep AC research
  • Shallow AC: developing functional replications
    of consciousness in artificial agents
  • Without any claim to inherent psychological
    reality
  • Deep AC: developing psychologically real
    (phenomenal) consciousness

7
Continuum or divide?
  • Continuum or divide?
  • Is deep AC realizable using current
    computationally-based technologies (or does it
    require biological replications)?
  • Thin versus thick phenomenality
  • (See S. Torrance, "Two Concepts of Machine
    Phenomenality", to be submitted to JCS)

8
Real versus simulated AC - an ethically
significant boundary?
  • Psychologically real versus just simulated
    artificial consciousness
  • This appears to mark an ethically significant
    boundary
  • (perhaps unlike the comparable boundary in AI?)
  • Not to deny that debates like the Chinese Room
    have aroused strong passions over many years
  • Working in the area of AC
  • (unlike working in AI?)
  • puts special ethical responsibilities on the
    shoulders of researchers

9
Techno-ethics
  • This takes us into the area of techno-ethics
  • Reflection on the ethical responsibilities of
    those who are involved in technological R&D
  • (including the technologies of artificial agents
    (AI, robotics, MC, etc.))
  • Broadly, techno-ethics can be defined as
  • Reflection on how we, as developers and users of
    technologies, ought to use such technologies to
    best meet our existing ethical ends, within
    existing ethical frameworks
  • Much of the ethics of artificial agent research
    comes under the general techno-ethics umbrella

10
From techno-ethics to artificial ethics
  • What's special about artificial agent research is
    that the artificial agents so produced may count
    (in various senses) as ethical agents in their own
    right
  • This may involve a revision of our existing
    ethical conceptions in various ways
  • Particularly when we are engaged in research in
    (progressively deeper) artificial consciousness
  • Bearing this in mind, we need to distinguish
    between techno-ethics and artificial ethics
  • (The latter may overlap with the former)

11
Towards artificial ethics (AE)
  • A key puzzle in AE
  • Perhaps ethical reality (or real ethical status)
    goes together with psychological reality??

12
Shallow and deep AE
  • Shallow AE
  • Developing ways in which the artificial agents we
    produce can conform to, or simulate, the ethical
    constraints we believe desirable
  • (Perhaps a sub-field of techno-ethics?)
  • Deep AE
  • Creating beings with inherent ethical status?
  • Rights?
  • Responsibilities?
  • The boundaries between shallow and deep AE may be
    perceived as fuzzy
  • And may be intrinsically fuzzy

13
Proliferation
  • A reason for taking this issue seriously
  • AA, AC, etc. as potential mass-technologies
  • Tendency for successful technologies to
    proliferate across the globe
  • What if AC becomes a widely adopted technology?
  • This should raise questions both of a
    techno-ethical kind and of a kind specific to AE

14
Techno-ethical considerations
[Image: Benz 3-wheeler, 1.7 litres (Germany, 1885)]
  • The responsibilities of current researchers in
    robotics, etc. can be compared to those of the
    founding fathers of automobile design or powered
    flight
  • A certain sort of innocence in relation to the
    implications of proliferation that might have
    been anticipated

[Image: car wreck (USA, 2005), http://www.car-accidents.com/]
15
Considerations that seem to transcend the merely
techno-ethical
  • There may be deep controversies concerning the
    ethical status of life-like artificial agents
    (shallow / deep AC agents)
  • There could be enormous shifts in our ethical
    landscape
  • Our conception of the ethical community, of who
    we are, may become hotly contested

16
Instrumentality
  • Instrumental versus intrinsic stance
  • Normally we take our technologies as our tools or
    instruments
  • Instrumental/intrinsic division in relation to
    psychological reality of consciousness?
  • As we progress towards deep AC there could be a
    blurring of the boundaries between the two
  • (already seen in a small way with emerging
    'caring' attitudes of humans towards
    people-friendly robots)
  • This is one illustration of the move from
    conventional techno-ethics to artificial ethics

17
Artificial Ethics (AE)
  • AE could be defined as
  • The activity of creating systems which perform in
    ways which imply (or confer) the possession of
    ethical status when humans perform in those ways.
    (?)
  • The emphasis on performance could be questioned
  • → What is the relation between AE and AC?
  • What is ethical (moral) status?

18
Two key elements of X's moral status (in the eyes
of Y)
  • X's being the recipient or target of moral
    concern by Y (moral consumption): Y → X
  • X's being the source of moral concern towards Y
    (moral production): X → Y

19
Two key elements of X's moral status
  • X's being the recipient or target of moral
    concern by Y (moral consumption)
  • X's being the source of moral concern towards Y
    (moral production)

(a) moral agent (natural/artificial)
(b) moral community (totality of moral agents)
20
Two key elements of moral status
  • Being the recipient or target of moral concern
    (moral consumption)
  • Being the source of moral concern (moral
    production)

(a) moral agent (natural/artificial)
(b) moral community (totality of moral agents)
21
Ethical status in the absence of consciousness
  • Trying to refine our conception of the relation
    between AC and AE
  • What difference does consciousness make to
    artificial agency?
  • In order to shed light on this question we need
    to investigate
  • the putative ethical status of artificial agents
    (AAs) when (psychologically real) consciousness
    is acknowledged to be ABSENT.

22
Our ethical interaction with non-conscious
artificial agents
  • Could non-conscious artificial agents
  • have genuine moral status?
  • (a) As moral consumers?
  • → (having moral claims on us)
  • (b) As moral producers?
  • → (having moral responsibilities towards us (and
    themselves))

23
A Strong View of AE
  • Psychologically real consciousness is necessary
    for AAs to be considered BOTH
  • (a) as genuine moral consumers
  • AND
  • (b) as genuine moral producers
  • AND there are strong constraints on what counts
    as psychologically real consciousness.
  • → So, on the strong view, non-conscious AAs
    will have no real ethical status

24
  • One way to weaken the strong view
  • by accepting weaker criteria for what counts as
    psychologically real consciousness
  • e.g. by saying "Of course you need consciousness
    for ethical status, but soon robots, etc. will be
    conscious in a psychologically real sense."

25
A weaker view
  • Psychologically real consciousness is NOT
    necessary for an AA to be considered
  • (a) as a genuine moral producer
  • (i.e. as having genuine moral responsibilities)
  • But it may be necessary for an AA to be
    considered
  • (b) as a genuine moral consumer
  • (i.e. as having genuine moral claims on the moral
    community)

26
A version of the weaker view
  • A version of the weaker view is to be found in
  • Floridi, L. and Sanders, J. 2004. "On the Morality
    of Artificial Agents", Minds and Machines, 14(3):
    349-379.
  • Floridi & Sanders: Some (quite weak kinds of)
    artificial agents may be considered as having a
    genuine kind of moral accountability
  • even if not moral responsibility in
  • a full-blooded sense
  • (i.e. this kind of moral status may attach to
    such agents quite independently of their status
    as conscious agents)

27
Examining the strong view
  • See Steve Torrance, "Ethics and Consciousness in
    Artificial Agents",
  • submitted to Artificial Intelligence and Society
  • Being a fully morally responsible agent requires
  • empathetic intelligence or rationality
  • moral emotions or sensibilities
  • → These seem to require the presence of
    psychologically real consciousness

BUT...
28
Shallow artificial ethics: a paradox
  • Paradox
  • Even if not conscious, we will expect artificial
    agents to behave responsibly
  • → To perform outwardly to ethical standards of
    conduct
  • This creates an urgent and very challenging
    programme of research for now
  • → developing appropriate shallow ethical
    simulations

29
Locus of responsibility
  • Where would the locus of responsibility of such
    systems lie?
  • For example, when they break down, give wrong
    advice, etc?
  • On the current consensus: with designers and
    operators, rather than with the AA itself.
  • If only with human designers/users, then such
    'moral' AAs don't seem to have genuine moral
    status even as moral producers?
  • BUT

30
Moral implications of increasing cognitive
superiority of AAs
  • We'll communicate with artificial agents (AAs) in
    richer and subtler ways
  • We may look to AAs for moral advice and support
  • We may defer to their normative decisions
  • E.g. when a multiplicity of factors requires
    cognitive powers superior to humans'
  • → Automated moral pilot systems?

31
Non-conscious AAs as moral producers
  • None of these properties seem to require
    consciousness
  • So the strong view seems to be in doubt?
  • Perhaps non-conscious AAs can be genuine moral
    producers
  • On the question of "When can we trust a moral
    judgment given by a machine?"
  • → See Blay Whitby, "Computing Machinery and
    Morality",
  • submitted to AI and Society

32
So
  • So non-conscious artificial agents perhaps could
    be genuine moral producers
  • At least in limited sorts of ways

33
  • I've challenged this in my paper "Ethics and
    Consciousness in Artificial Agents"
  • I said that having the capacity for genuine
    morally responsible judgment and action requires a
    kind of empathic rationality
  • And it's difficult to see how such empathic
    rationality could exist in a being which didn't
    have psychologically real consciousness
  • but I'm far from sure

34
  • In any case, it will be a hard and complex job to
    ensure that non-conscious AAs simulate moral
    production in an ethically acceptable way.

35
Non-conscious AAs as moral consumers
  • What about non-conscious AAs as moral consumers?
  • (i.e. as candidates for our moral concern)?
  • Could it ever be rational for us to consider
    ourselves as having genuine moral obligations
    towards non-conscious AAs?

36
Consciousness and moral consumption
  • At first sight being a true moral consumer
    seems to require being able to consciously
    experience pain, distress, need, satisfaction,
    joy, sorrow, etc.
  • i.e. psychologically real consciousness
  • Otherwise why waste resources?
  • → The coach crash scenario

BUT
37
The case of property ownership
  • AAs may come to have interests which we may be
    legally (and morally?) obliged to respect
  • Andrew Martin, the robot in Bicentennial Man
  • Acquires (through the courts) legal entitlement to
    own property in his own person

38
Bicentennial Man
  • A household android is acquired by the Martin
    family and christened 'Andrew'
  • His decorative products
  • exquisitely crafted from driftwood
  • become highly prized collectors' items

39
Bicentennial Man (2)
  • Andrew's owner wins for him the legal right to
    have a bank account and legally own the wealth
    accumulated from sales of his artworks (though
    Andrew is still deemed a machine)
  • Conflict between Andrew and his owner, who
    refuses to give him his freedom, leads to Andrew
    moving to his own home

40
  • Andrew, arguably, has moral, not just legal,
    rights to his property
  • It would be morally wrong for us not to respect
    them (e.g. to steal from him)
  • His rights to maintain his property
  • (and our obligation not to infringe those rights)
  • do not depend on our attributing
    consciousness to him

41
A case of robot moral (not just legal) rights?
  • Andrew, arguably, has moral, not just legal
    rights to his property
  • Would it not be morally wrong for us not to
    respect his legal rights?
  • (morally wrong, e.g., to steal from him?)

42
Does it matter if he is non-conscious?
  • Arguably, Andrew's moral rights to maintain his
    property
  • (and our moral obligation not to infringe those
    rights)
  • do not depend on our attributing
    consciousness to him

43
  • On the legal status of artificial agents, see
  • David Calverley, "Imagining a Non-Biological
    Machine as a Legal Person",
  • submitted to Artificial Intelligence and Society
  • For further related discussion of Asimov's
    Bicentennial Man, see
  • Susan Leigh Anderson, "Asimov's Three Laws of
    Robotics and Machine Metaethics",
  • ibid.

44
Conclusions
  • 1. We need to distinguish between shallow and deep
    AC and AE
  • 2. We need to distinguish techno-ethics from
    artificial ethics (especially strong AE)
  • 3. There seems to be a link between an artificial
    agent's status as a conscious being and its
    status as an ethical being
  • 4. A strong view of AE says that genuine ethical
    status in artificial agents (both as ethical
    consumers and ethical producers) requires
    psychologically real consciousness in such agents.

45
Conclusions, continued
  • 5. Questions can be raised about the strong view
  • (automated ethical advisors; property
    ownership)
  • 6. There are many important ways in which a kind
    of (shallow) ethics has to be developed for
    present-day and future non-conscious agents.
  • 7. But in an ultimate, deep sense, perhaps AC
    and AE go together closely
  • (NB: In my paper "Ethics and Consciousness in
    Artificial Agents"
  • I defend the strong view much more robustly, as
    the 'organic view'.)