Secure Interaction Design

Transcript and Presenter's Notes
1
Secure Interaction Design
  • Ka-Ping Yee
  • 10 December 2002
  • 4th ICICS, Singapore

2
Overview
  • Defining security
  • Security myths
  • Previous work
  • User model
  • Design principles
  • Acknowledgements
  • Summary

3
What is security?
  • Functional soundness?
  • Reliability?
  • (Provable) correctness?
  • Secrecy, integrity, availability?

4
What is security?
  • Application
    • buffer overflow
    • password
    • web application
  • Network
    • sniffing
    • spoofing
    • session hijacking

classification suggested by Changwoo Pyo and Gyungho
Lee (yesterday)
5
What is security?
  • My claim:
  • Software correctness is not sufficient for security.

6
What is security?
  • Consider e-mail viruses (e.g. Melissa, ILOVEYOU).
  • E-mail is delivered to recipient correctly.
  • Message is decoded correctly.
  • Attachment is opened correctly.
  • Program is executed correctly.
  • Address book functions correctly.
  • Mail API sends out mail correctly.
  • Where is the security problem?
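None of the steps above is incorrect, which is the point of the example. The sketch below is a deliberately harmless illustration (all function names are invented; this is not any real mail API): once the attachment runs, it holds every ability the user holds, so "correct" components compose into a virus.

```python
# Illustration only: every component below behaves "correctly",
# yet together they reproduce the shape of an e-mail virus.
# All names are hypothetical; this touches no real mail system.

def read_address_book():
    # Runs as the user, so reading the user's own data is "correct".
    return ["alice@example.com", "bob@example.com"]

def send_mail(recipient, subject, body):
    # A correct mail API sends exactly what the caller asks.
    print(f"mail to {recipient}: {subject}")

def run_attachment():
    # The opened attachment: it does nothing the user could not do,
    # and nothing the user actually wanted.
    for recipient in read_address_book():
        send_mail(recipient, "ILOVEYOU", "please open the attachment")

run_attachment()
```

The failure is not in any one step but in the fact that "open an attachment" silently confers all of the user's abilities on unknown code.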

7
What is security?
  • A computer is secure if you can depend on it and
    its software to behave as you expect.
  • Garfinkel and Spafford, Practical UNIX and Internet Security

8
What is security?
  • A computer is secure if you can depend on it and
    its software to behave as you expect.
  • Garfinkel and Spafford, Practical UNIX and Internet Security
  • Who is "you"?
  • What do you expect?

9
Myths about computer security
  • If the software is correct, then it is secure.
  • Security is only for experts.
  • Computer security is a hard science.
  • A more secure system is always harder to use.

10
Myths about computer security
  • If the software is correct, then it is secure.

Viruses show that correctness is insufficient.
11
Myths about computer security
  • If the software is correct, then it is secure.
  • Security is only for experts.

The expert cannot always know what the user wants.
12
Myths about computer security
  • If the software is correct, then it is secure.
  • Security is only for experts.
  • Computer security is a hard science.

Security requires an understanding of user
expectations.
13
Myths about computer security
  • If the software is correct, then it is secure.
  • Security is only for experts.
  • Computer security is a hard science.
  • A more secure system is always harder to use.

Good security makes systems easier to use.
14
Why trade off usability?
  • A computer is secure from a particular user's
    perspective if the user can depend on it and its
    software to behave as the user expects.
  • Acceptable security is a requirement for
    usability.
  • Acceptable usability is a requirement for
    security.

15
Previous work
  • User studies of security software
  • Dhamija, Sasse: passwords
  • Karat: iterative testing
  • Karvonen: SPKI
  • Mosteller: error messages
  • Whitten: PGP
  • Zurko: authorization

16
Previous work
  • User-centred design of security software
  • Zurko: role-based access control
  • Design recommendations
  • Holmström: secure business card metaphor (but
    results not very convincing)
  • Karvonen: know your user, list all possible
    security issues, involve the user as little as
    possible

17
Everybody says...
  • "Users are ignorant."
  • Karvonen, Nikander, ...
  • "Users don't care about security."
  • Schneier, Spafford, ...
  • "People are the weakest link."
  • Adams, Sasse, Karvonen, ...

18
...but...
  • Every system has a user.

19
Microsoft fallacies
  • Law 1: If a bad guy can persuade you to run his
    program on your computer, it's not your computer
    anymore.
  • When you choose to run a program, you are making
    a decision to turn over control of your computer
    to it. Once a program is running, it can do
    anything, up to the limits of what you yourself
    can do on the machine.
  • Microsoft

20
Microsoft fallacies
  • Law 1: If a bad guy can persuade you to run his
    program on your computer, it's not your computer
    anymore.
  • When you choose to run a program, you are making
    a decision to turn over control of your computer
    to it. Once a program is running, it can do
    anything, up to the limits of what you yourself
    can do on the machine.
  • Microsoft

BLAMING THE VICTIM
21
Microsoft fallacies
  • BackOrifice does not expose or exploit any
    security issue with the Windows platform or the
    BackOffice suite of products.
  • BackOrifice does not compromise the security of
    a Windows network. Instead, it relies on the user
    to install it and, once installed, has only the
    rights and privileges that the user has on
    the computer.
  • Microsoft

22
Microsoft fallacies
  • BackOrifice does not expose or exploit any
    security issue with the Windows platform or the
    BackOffice® suite of products.
  • BackOrifice does not compromise the security of
    a Windows network. Instead, it relies on the user
    to install it and, once installed, has only the
    rights and privileges that the user has on
    the computer.
  • Microsoft

BLAMING THE VICTIM
23
What does the user expect?
  • I propose the Actor-Ability Model.
  • Set of actors A = {A0, A1, A2, ...}, where A0 is
    the user.
  • Each actor Ai has a set of potential abilities Pi
    and a set of real abilities Ri.
  • The user's state is ⟨A0, A1, A2, ..., P0, P1,
    P2, ...⟩.

24
What does the user expect?
  • No-surprise condition:
  • P0 ⊆ R0
  • Pi ⊇ Ri (for i > 0)
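A minimal executable rendering of the model and its no-surprise condition, using Python sets (the actors and the ability names are invented for illustration):

```python
# Actor 0 is the user.  P[i] = abilities the user believes actor i
# has; R[i] = abilities actor i really has.  All names are invented.
P = {
    0: {"read_own_files", "send_mail"},
    1: {"display_message"},                  # actor 1: an attachment
}
R = {
    0: {"read_own_files", "send_mail", "change_settings"},
    1: {"display_message", "send_mail", "read_address_book"},
}

def no_surprise(P, R):
    # P0 <= R0: the user can really do everything they think they can.
    # Pi >= Ri (i > 0): no other actor can do more than the user expects.
    return P[0] <= R[0] and all(P[i] >= R[i] for i in R if i > 0)

print(no_surprise(P, R))   # False: the attachment exceeds expectations
```

On these example sets the condition fails for exactly the e-mail-virus reason: the attachment's real abilities exceed what the user expects of it.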

25
Design principles
  • Path of Least Resistance
  • Appropriate Boundaries
  • Explicit Authorization
  • Expected Ability
  • Visibility
  • Revocability
  • Trusted Path
  • Identifiability
  • Clarity
  • Expressiveness

26
Design principles
  • Path of Least Resistance
  • To the greatest extent possible, the natural way
    to do a task should be the secure way.
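A hedged reading of this principle as API design (the share function and its defaults are invented): the zero-effort call grants the least authority, and anything broader takes deliberate extra arguments.

```python
# Hypothetical sharing API: the natural call is also the secure one.
def share(path, recipient, writable=False, expires_days=7):
    # Defaults grant the least authority: one file, one recipient,
    # read-only, expiring soon.  Broader grants must be spelled out.
    mode = "read-write" if writable else "read-only"
    print(f"shared {path} with {recipient} ({mode}, {expires_days} days)")

share("report.pdf", "alice@example.com")        # path of least resistance
share("report.pdf", "bob@example.com",
      writable=True, expires_days=365)          # deliberate, visible effort
```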

27
Interlude: Least resistance

28
Design principles
  • Appropriate Boundaries
  • The interface should expose, and the system
    should enforce, distinctions between objects and
    between actions that matter to the user.

29
Interlude: Bad boundaries
  • This is a real dialog in Internet Explorer.
  • I'm forced to make an all-or-nothing choice!
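A sketch of the distinction the dialog fails to expose: instead of one global "active content" flag, keep the decision per site, which is the boundary that matters to the user here (the site names are invented).

```python
# All-or-nothing, as in the dialog above: one bit for every site.
allow_scripts_everywhere = False

# Appropriate boundaries: one decision per object the user cares about.
script_policy = {
    "bank.example": True,     # trusted by this user
    "ads.example": False,     # explicitly refused
}

def scripts_allowed(site):
    # Default-deny for sites the user has never decided about.
    return script_policy.get(site, False)

for site in ("bank.example", "ads.example", "unknown.example"):
    print(site, "->", scripts_allowed(site))
```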

30
Design principles
  • Explicit Authorization
  • A user's authorities must only be provided to
    other actors as a result of an explicit action
    that is understood to imply granting.
  • This is Pi ⊇ Ri (for i > 0).
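One way such an explicit, granting action can look in code is a capability-style file chooser, in the spirit of later "powerbox" designs. This sketch is an assumption-laden toy, not any real toolkit's API: the user's act of picking a file is the grant, and the program receives authority over that one file only.

```python
# Toy powerbox: choosing a file *is* the act of granting access to it.
class FileCapability:
    """Authority over exactly one file, nothing else."""
    def __init__(self, path):
        self._path = path

    def read(self):
        with open(self._path) as f:
            return f.read()

def choose_file(prompt):
    # Stand-in for a trusted file dialog.  The explicit pick is
    # understood by the user to imply granting -- and grants only this.
    return FileCapability(input(prompt))

cap = choose_file("File to open: ")   # explicit user action
print(cap.read()[:100])               # the program got this file, not the disk
```

Because nothing reaches the program except what the user explicitly handed over, Ri stays inside Pi for that actor.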

31
Interlude: When do we ask?

32
Interlude: When do we ask?

33
Design principles
  • Visibility
  • The interface should allow the user to easily
    review any active authority relationships that
    would affect security-relevant decisions.
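Concretely, visibility presupposes that active grants are recorded somewhere reviewable. A minimal sketch (the grant records and their fields are invented):

```python
from datetime import date

# One record per active authority relationship, in the user's terms.
grants = [
    {"actor": "PhotoPrinter", "object": "vacation photos",
     "rights": "read", "since": date(2002, 11, 3)},
    {"actor": "backup.example.com", "object": "all documents",
     "rights": "read", "since": date(2002, 9, 14)},
]

def review_grants():
    # The review the principle calls for: everything active, at a glance.
    for g in grants:
        print(f"{g['actor']:20} {g['rights']:5} {g['object']} "
              f"(since {g['since']})")

review_grants()
```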

34
Interlude: What do we show?

[Screenshot: raw output of the Unix "top" command: uptime, load
averages, CPU and memory statistics, and a long process list]
Not this
35
Interlude: What do we show?

36
Design principles
  • Identifiability
  • The interface should enforce that distinct
    objects and distinct actions have unspoofably
    identifiable and distinguishable representations.
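A small, concrete corner of this principle: refuse to let two distinct objects take indistinguishable names. The sketch below is an assumption (a toy registry that uses Unicode compatibility normalization and case folding as its notion of "looks the same"):

```python
import unicodedata

class NameRegistry:
    """Toy registry: distinct objects must keep visibly distinct names."""
    def __init__(self):
        self._seen = {}

    def _display_form(self, name):
        # Collapse the cheap spoofs: fullwidth/compatibility characters
        # and case differences all map to one display form.
        return unicodedata.normalize("NFKC", name).casefold()

    def register(self, name, obj):
        key = self._display_form(name)
        if key in self._seen:
            raise ValueError(f"{name!r} is indistinguishable "
                             f"from an existing name")
        self._seen[key] = obj

reg = NameRegistry()
reg.register("paypal.example", "the real site")
try:
    reg.register("ＰａｙＰａｌ.example", "a lookalike")   # fullwidth spoof
except ValueError as err:
    print("rejected:", err)
```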

37
Interlude: Violating identifiability

38
Interlude: Fixing identifiability

39
Design principles
  • Clarity
  • The effect of any security-relevant action must
    be apparent before the action is taken.

40
Interlude: Violating Clarity

What program? What source? What privileges?
What purpose? How long? How to revoke?
Remember this decision? What decision?
Might as well click Yes, it's the default.
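Each unanswered question above corresponds to a field a clarity-respecting prompt would have to state before asking for a decision. A sketch with entirely invented wording, where the default answer is also the safe one:

```python
# Hypothetical grant summary: Clarity demands every field be known
# and shown *before* the action is taken.  All wording is invented.
pending_grant = {
    "program":    "PhotoPrinter 2.1",
    "source":     "download.example.com",
    "privileges": "read files in vacation/ only",
    "purpose":    "print the photos you selected",
    "duration":   "until you close the program",
    "revoke":     "Settings > Permissions, at any time",
}

def confirm(grant):
    for field, value in grant.items():
        print(f"{field:10}: {value}")
    # Unlike the dialog above, the default here is the safe answer: No.
    return input("Allow this? [y/N] ").strip().lower() == "y"

print("granted" if confirm(pending_grant) else "nothing changed")
```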
41
Acknowledgements
  • Morgan Ames, Verna Arts, Nikita Borisov, Jeff
    Dunmall, Tal Garfinkel, Marti Hearst, Norm Hardy,
    Johann Hibschman, Josh Levenberg, Lisa Megna,
    Mark S. Miller, Chip Morningstar, Kragen Sitaker,
    Marc Stiegler, Dean Tribble, Doug Tygar, David
    Wagner, Miriam Walker, David Waters

42
Summary
  • This talk has
  • argued that usability and security are
    complementary
  • presented the Actor-Ability Model
  • directly addressed questions avoided in earlier
    work
  • proposed ten design principles for secure
    interaction
  • shown how they apply in practice
  • http://zesty.ca/sid
  • Please come talk to me, and pick up a supplement
    to the paper.
