Usable Privacy and Security: Trust, Phishing, and Pervasive Computing

Transcript and Presenter's Notes

1
Usable Privacy and Security: Trust, Phishing,
and Pervasive Computing
Jason I. Hong, Carnegie Mellon University
2
Everyday Privacy and Security Problem
3
Everyday Privacy and Security Problem
4
Usable Privacy and Security Is Important
  • People increasingly asked to make trust decisions
  • Consequences of wrong decision can be dramatic
  • Emerging ubicomp technologies leading to new
    risks

Find Friends
Smart Homes
Smart Stores
5
Grand Challenge
  • "Give end-users security controls they can
    understand and privacy they can control for the
    dynamic, pervasive computing environments of the
    future."
  • - Computing Research Association, 2003

6
Our Usable Privacy and Security Work
  • Supporting Trust Decisions
  • Interviews to understand decision-making
  • Embedded training
  • Anti-Phishing Phil
  • User-Controllable Privacy and Security in
    Pervasive Computing
  • Contextual instant messaging
  • Person Finder
  • Access control to resources

7
Project: Supporting Trust Decisions
  • Goal here is to help people make better decisions
  • Context here is anti-phishing
  • Large multi-disciplinary team project
  • Supported by NSF, ARO, CMU CyLab
  • Six faculty, five PhD students
  • Computer science, human-computer interaction,
    public policy, social and decision sciences,
    CERT

8
Fast Facts on Phishing
  • A semantic attack aimed directly at people
    rather than computers
  • "Please update your account"
  • "Fill out survey and get $25"
  • "Question about your auction"
  • Rapidly growing in scale and damage
  • Estimated 3.5 million phishing victims
  • 7,000 new phishing sites in Dec 2005 alone
  • $1-2 billion in damages
  • More profitable (and safer) to phish than to rob
    a bank

9
Outline: Supporting Trust Decisions
  • Human Side of Anti-Phishing
  • Interviews to understand decision-making
  • Embedded Training
  • Anti-Phishing Phil
  • Computer Side
  • PILFER: Email Anti-Phishing Filter
  • Automated Testbed for Anti-Phishing Toolbars
  • CANTINA: Our Anti-Phishing Algorithm
  • Automate where possible, support where necessary

10
  • What do users know about phishing?

11
Interview Study
  • Interviewed 40 Internet users, including 35
    non-experts
  • Mental models interviews included email role
    play and open-ended questions
  • Interviews recorded and coded
  • J. Downs, M. Holbrook, and L. Cranor. Decision
    Strategies and Susceptibility to Phishing. In
    Proceedings of the 2006 Symposium On Usable
    Privacy and Security, 12-14 July 2006,
    Pittsburgh, PA.

12
Little Knowledge of Phishing
  • Only about half knew the meaning of the term
    "phishing"
  • "Something to do with the band Phish, I take it."

13
Little Attention Paid to URLs
  • Only 55% of participants said they had ever
    noticed an unexpected or strange-looking URL
  • Most did not consider them to be suspicious

14
Some Knowledge of Scams
  • 55% of participants reported being cautious when
    email asks for sensitive financial info
  • But very few reported being suspicious of email
    asking for passwords
  • Knowledge of financial phish reduced likelihood
    of falling for these scams
  • But did not transfer to other scams, such as
    amazon.com password phish

15
Naive Evaluation Strategies
  • The most frequent strategies don't help much in
    identifying phish
  • "This email appears to be for me"
  • "It's normal to hear from companies you do
    business with"
  • "Reputable companies will send emails"
  • "I will probably give them the information that
    they asked for. And I would assume that I had
    already given them that information at some point,
    so I will feel comfortable giving it to them
    again."

16
Other Findings
  • Web security pop-ups are confusing
  • "Yeah, like the certificate has expired. I don't
    actually know what that means."
  • Minimal knowledge of lock icon
  • Don't know what encryption means
  • Summary
  • People generally not good at identifying scams
    they haven't specifically seen before
  • People don't use good strategies to protect
    themselves

17
  • Can we train people not to fall for phishing?

18
Web Site Training Study
  • Laboratory study of 28 non-expert computer users
  • Two conditions, both asked to evaluate 20 web
    sites
  • Control group evaluated 10 web sites, took 15
    minute break to read email or play solitaire,
    evaluated 10 more web sites
  • Experimental group same as above, but spent 15
    minute break reading web-based training materials
  • Experimental group performed significantly
    better at identifying phish after training
  • Less reliance on professional-looking designs
  • Looking at and understanding URLs
  • Web site asks for too much information

People can learn from web-based training
materials, if only we could get them to read
them!
19
How Do We Get People Trained?
  • Most people don't proactively look for training
    materials on the web
  • Many companies send security notice emails to
    their employees and/or customers
  • But these tend to be ignored
  • Too much to read
  • People don't consider them relevant
  • People think they already know how to protect
    themselves

20
Embedded Training
  • Can we train people during their normal use of
    email to avoid phishing attacks?
  • Periodically, people get sent a training email
  • Training email looks like a phishing attack
  • If person falls for it, intervention warns and
    highlights what cues to look for, in a succinct
    and engaging format (sketched below)
  • P. Kumaraguru, Y. Rhee, A. Acquisti, L. Cranor,
    J. Hong, and E. Nunge. Protecting People from
    Phishing: The Design and Evaluation of an
    Embedded Training Email System.
  • To be presented at CHI 2007

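To make the training flow above concrete, here is a minimal Python sketch of the idea; it is not the actual embedded training system, and the helper names (send_email, show_intervention) and the template data are assumptions for illustration.

  # Sketch of the embedded-training idea (hypothetical names, not the real system):
  # occasionally send a benign "training" email styled like a phish; if the
  # recipient clicks its link, show a short intervention instead of a real site.
  import random

  TRAINING_TEMPLATES = [
      {"subject": "Please update your account",
       "link_id": "train-001",
       "intervention": "comic_strip"},   # comic strip intervention (see evaluation results below)
  ]

  def maybe_send_training_email(send_email, user, rate=0.05):
      """Occasionally (e.g., 5% of the time) send a simulated phishing email."""
      if random.random() < rate:
          msg = random.choice(TRAINING_TEMPLATES)
          send_email(to=user, subject=msg["subject"], link_id=msg["link_id"])

  def on_link_clicked(link_id, show_intervention):
      """If the clicked link came from a training email, the user 'fell for it':
      show a succinct intervention explaining the cues they missed."""
      for msg in TRAINING_TEMPLATES:
          if msg["link_id"] == link_id:
              show_intervention(style=msg["intervention"])
              return True   # it was a training email
      return False          # a normal link; proceed as usual
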
21
Diagram Intervention
22
Diagram Intervention
Explains why they are seeing this message
23
Diagram Intervention
Explains how to identify a phishing scam
24
Diagram Intervention
Explains what a phishing scam is
25
Diagram Intervention
Explains simple things you can do to protect yourself
26
Comic Strip Intervention
27
Embedded Training Evaluation
  • Lab study comparing our prototypes to standard
    security notices
  • eBay, PayPal notices
  • Diagram that explains phishing
  • Comic strip that tells a story
  • 10 participants in each condition (30 total)
  • Roughly: go through 19 emails, with 4 phishing
    attacks scattered throughout, plus 2 training
    emails
  • Emails are in context of working in an office

28
Embedded Training Results
  • Existing practice of security notices is
    ineffective
  • Diagram intervention somewhat better
  • Comic strip intervention worked best
  • Statistically significant

29
Next Steps
  • Iterate on intervention design
  • Have already created newer designs, ready for
    testing

30
Next Steps
  • Iterate on intervention design
  • Have already created newer designs, ready for
    testing
  • Understand why comic strip worked better
  • Story? Comic format? Less text to read?
  • Preparing for larger scale deployment
  • More participants
  • Evaluate retention over time
  • Deploy outside lab conditions if possible
  • Real world deployment and evaluation
  • Trademark issues (though possible workaround?)
  • Also need corporate partners

31
Anti-Phishing Phil
  • A game to teach people not to fall for phish
  • Embedded training focuses on email
  • Game focuses on web browser, URLs
  • Goals
  • How to parse URLs (see the sketch below)
  • Where to look for URLs
  • Use search engines instead
  • Available on our website soon

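Since the game's first goal is teaching people to read URLs, here is a minimal Python sketch of the kind of check it teaches; this is illustrative only, not code from Anti-Phishing Phil, and the naive "last two labels" heuristic is an assumption that ignores country-code TLDs like .co.uk.

  # Illustrative only: the kind of URL reading Anti-Phishing Phil teaches.
  # What identifies the site is the registered domain of the hostname,
  # regardless of what appears in the path or in earlier subdomains.
  from urllib.parse import urlparse
  import ipaddress

  def registered_domain(url):
      host = urlparse(url).hostname or ""
      try:
          ipaddress.ip_address(host)
          return host               # a bare IP address is itself a warning sign
      except ValueError:
          pass
      labels = host.split(".")
      # Naive heuristic: keep the last two labels (ignores ccTLDs like .co.uk).
      return ".".join(labels[-2:]) if len(labels) >= 2 else host

  print(registered_domain("http://www.paypal.com/signin"))          # paypal.com
  print(registered_domain("http://www.paypal.com.attacker.net/x"))  # attacker.net
  print(registered_domain("http://192.0.2.7/paypal/login"))         # 192.0.2.7
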
32
Anti-Phishing Phil
33
Usable Privacy and Security Work
  • Supporting Trust Decisions
  • Interviews to understand decision-making
  • Embedded training
  • Anti-Phishing Phil
  • User-Controllable Privacy and Security in
    Pervasive Computing
  • Contextual instant messaging
  • Person Finder
  • Access control to resources

34
The Problem
  • Mobile devices becoming integrated into everyday
    life
  • Mobile communication
  • Sharing location information with others
  • Remote access to home
  • Mobile e-commerce
  • Managing security and privacy policies is hard
  • Preferences hard to articulate
  • Policies hard to specify
  • Limited input and output
  • Leads to new sources of vulnerability and
    frustration

35
Our Goal
  • Develop core set of technologies for managing
    privacy and security on mobile devices
  • Simple UIs for specifying policies
  • Clear notifications and explanations of what
    happened
  • Better visualizations to summarize results
  • Machine learning for learning preferences
  • Start with small evaluations, continue with
    large-scale ones
  • Large multi-disciplinary team and project
  • Six faculty, 1.5 postdocs, six students
  • Supported by NSF, CMU CyLab
  • Roughly 1 year into project

36
Usable Privacy and Security Work
  • Supporting Trust Decisions
  • Interviews to understand decision-making
  • Embedded training
  • Anti-Phishing Phil
  • User-Controllable Privacy and Security in
    Pervasive Computing
  • Contextual instant messaging
  • Person Finder
  • Access control to resources

37
Contextual Instant Messaging
  • Facilitate coordination and communication by
    letting people request contextual information via
    IM
  • Interruptibility (via SUBTLE toolkit)
  • Location (via Place Lab WiFi positioning)
  • Active window
  • Developed a custom client and robot on top of AIM
  • Client (Trillian plugin) captures and sends
    context to robot
  • People can query the imbuddy411 robot for info
  • e.g., "howbusyis username"
  • Robot also contains privacy rules governing
    disclosure (sketched below)

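A toy Python sketch of how an imbuddy411-style robot could gate disclosures with group-based rules; the data structures, the extra query names ("whereis", "whatsup"), and the screen names are assumptions for illustration, not the deployed system.

  # Toy sketch of group-based disclosure rules for an imbuddy411-style robot.
  # Owner-defined groups of screen names, and what each group may see.
  PRIVACY_GROUPS = {
      "classmates": {"members": {"alice123", "bob456"},
                     "allowed": {"interruptibility", "location"}},
      "family":     {"members": {"mom_sn"},
                     "allowed": {"interruptibility", "location", "active_window"}},
  }
  DEFAULT_ALLOWED = set()   # unknown requesters see nothing

  def allowed_info(requester):
      for group in PRIVACY_GROUPS.values():
          if requester in group["members"]:
              return group["allowed"]
      return DEFAULT_ALLOWED

  def handle_query(requester, query, context, audit_log):
      """e.g., handle_query("alice123", "howbusyis", current_context, log)"""
      kind = {"howbusyis": "interruptibility",
              "whereis": "location",
              "whatsup": "active_window"}.get(query)
      audit_log.append((requester, query))          # audit every request
      if kind and kind in allowed_info(requester):
          return context[kind]
      return "Sorry, that information is not shared with you."
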
38
Contextual Instant Messaging: Privacy Mechanisms
  • Web-based specification of privacy preferences
  • Users can create groups and put screen names
    into groups
  • Users can specify what each group can see

39
Contextual Instant Messaging: Privacy Mechanisms
  • Notifications of requests

40
Contextual Instant Messaging: Privacy Mechanisms
  • Social translucency

41
Contextual Instant Messaging: Privacy Mechanisms
  • Audit logs

42
Contextual Instant Messaging: Evaluation
  • Recruited ten people for two weeks
  • Selected people highly active in IM (i.e.,
    undergrads)
  • Each participant had about 90 buddies and about
    1300 incoming and outgoing messages per week
  • Notified other parties of the imbuddy411 service
  • Updated AIM profile to advertise
  • Would notify other parties at start of
    conversation

43
Contextual Instant Messaging: Results
  • Total of 242 requests for contextual information
  • 53 distinct screen names, 13 repeat users

44
Contextual Instant Messaging: Results
  • 43 privacy groups, about 4 per participant
  • Groups organized by class, major, clubs, gender,
    work, location, ethnicity, family
  • 6 groups revealed no information
  • 7 groups disclosed all information
  • Only two instances of changes to rules
  • In both cases, friend asked participant to
    increase level of disclosure

45
Contextual Instant Messaging: Results
  • Likert scale survey at end
  • 1 is strongly disagree, 5 is strongly agree
  • All participants agreed the contextual
    information was sensitive
  • Interruptibility 3.6, location 4.1, window 4.9
  • Participants were comfortable using our controls
    (4.1)
  • Easy to understand (4.4) and modify (4.2)
  • Good sense of who had seen what (3.9)
  • Participants also suggested improvements
  • Notification of offline requests
  • Better notifications to reduce interruptions
    (abnormal use)
  • Better summaries ("User X asked for location 5
    times today")

46
Contextual Instant Messaging: Current Status
  • Preparing for another round of deployment
  • Larger group of people
  • A few more kinds of contextual information
  • Developing privacy controls that scale better
  • More people, more kinds of information

47
Usable Privacy and Security Work
  • Supporting Trust Decisions
  • Interviews to understand decision-making
  • Embedded training
  • Anti-Phishing Phil
  • User-Controllable Privacy and Security in
    Pervasive Computing
  • Contextual instant messaging
  • Person Finder
  • Access control to resources

48
People Finder
  • Location useful for micro-coordination
  • Meeting up
  • Okayness checking
  • Developed phone-based client
  • GSM localization (Intel)
  • Conducted studies to see how people specify
    rules (and how well)
  • See how well machine learning can learn
    preferences

49
People Finder: Machine Learning
  • Using case-based reasoning (CBR)
  • "My colleagues can only see my location on
    weekdays and only between 8am and 6pm"
  • It's now 6:15pm, so the CBR might allow, or
    interactively ask (see the sketch below)
  • Chose CBR over other machine learning approaches
  • Better dialogs with users (i.e., more
    understandable)
  • Can be done as you go (rather than accumulating
    a large corpus and doing post-hoc analysis)

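As a concrete illustration of the "weekdays, 8am to 6pm" example above, here is a minimal Python sketch of a time-bounded disclosure rule plus a nearest-case fallback in the spirit of case-based reasoning; the rule and case structures are assumptions for illustration, not the actual People Finder engine.

  # Minimal sketch: a time-bounded location-disclosure rule, with past cases
  # used as a fallback when the rule does not cover the request.
  from datetime import datetime

  RULE = {"group": "colleagues", "days": range(0, 5),   # Monday-Friday
          "start_hour": 8, "end_hour": 18}              # 8am-6pm

  def rule_allows(requester_group, when):
      return (requester_group == RULE["group"]
              and when.weekday() in RULE["days"]
              and RULE["start_hour"] <= when.hour < RULE["end_hour"])

  # Past decisions the user already confirmed: (group, hour, allowed?)
  CASES = [("colleagues", 18, True),     # user once allowed a request just after 6pm
           ("colleagues", 22, False)]

  def cbr_decision(requester_group, when):
      if rule_allows(requester_group, when):
          return "allow"
      # Outside the rule: consult the closest past case, else ask the user.
      similar = [allowed for (grp, hour, allowed) in CASES
                 if grp == requester_group and abs(hour - when.hour) <= 1]
      if similar:
          return "allow" if any(similar) else "deny"
      return "ask user"

  print(cbr_decision("colleagues", datetime(2007, 3, 5, 18, 15)))  # 6:15pm Monday -> allow
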
50
People Finder: Study on Preferences and Rules
  • How well people could specify rules, and if
    machine learning could do better
  • 13 participants (1 for pilot study)
  • Specify rules at beginning of study
  • Presented a series of thirty scenarios
  • Shown what their rules would do, asked whether
    the result was correct and how useful it was
  • Given option to change rule if desired

51
People Finder: Study on Rules
52
People Finder: Results on User Burden
                    Mean (sec)    Std dev (sec)
Rule Creation       321.53        206.10
Rule Maintenance    101.15        110.02
Total               422.69        213.48
53
People Finder: Results on Accuracy
54
People Finder: Current Conclusions
  • Roughly 5 rules per participant
  • Users not good at specifying rules
  • Time consuming, and low accuracy (61%) even when
    they can refine their rules over time (67%)
  • Interesting contrast with imbuddy411, where
    people were comfortable
  • Possible our scenarios were biased towards exceptions
  • CBR seems better in terms of accuracy and burden
  • Additional experiments still needed

55
People Finder: Current Work
  • Small-scale deployment of phone-based People
    Finder with a group of friends
  • Still needs more value, people finder by itself
    not sufficient
  • Trying to understand pain points on next
    iteration
  • Need more accurate location
  • GSM localization accuracy haphazard
  • Integration with imbuddy411
  • Smart phones expensive, IM vastly increases user
    base

56
Usable Privacy and Security Work
  • Supporting Trust Decisions
  • Interviews to understand decision-making
  • Embedded training
  • Anti-Phishing Phil
  • User-Controllable Privacy and Security in
    Pervasive Computing
  • Contextual instant messaging
  • Person Finder
  • Access control to resources

57
Grey: Access Control to Resources
  • Distributed smartphone-based access control
    system
  • physical resources like office doors, computers,
    and coke machines
  • electronic ones like computer accounts and
    electronic files
  • currently only physical doors
  • Proofs assembled from credentials
  • No central access control list
  • End-users can create flexible policies

58
Grey: Creating Policies
  • Proactive policies
  • Manually create a policy beforehand
  • "Alice can always enter my office"
  • Reactive policies
  • Create a policy based on a request
  • "Can I get into your office?"
  • Grey sees who is responsible for the resource,
    and forwards the request
  • Might select from multiple people (owner,
    secretary, etc.)
  • Can add the user, and add time limits too (see
    the sketch below)

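A toy Python sketch of Grey-style end-user policies as delegations with optional expirations; this is illustrative only, since Grey actually assembles formal proofs from signed credentials rather than consulting a central access control list, and the names and dates here are assumptions.

  # Toy sketch: proactive and reactive policies as a list of grants,
  # optionally time-limited (not Grey's actual proof-based mechanism).
  from datetime import datetime

  # Each entry: a resource owner grants a principal access, possibly time-limited.
  POLICIES = [
      {"owner": "bob", "resource": "bob_office_door",
       "grantee": "alice", "expires": None},   # proactive: "Alice can always enter my office"
  ]

  def grant_reactive(owner, resource, grantee, expires=None):
      """Owner responds to a request ("Can I get into your office?") by
      adding the requester, optionally with a time limit."""
      POLICIES.append({"owner": owner, "resource": resource,
                       "grantee": grantee, "expires": expires})

  def may_open(grantee, resource, now=None):
      now = now or datetime.now()
      return any(p["grantee"] == grantee and p["resource"] == resource
                 and (p["expires"] is None or now <= p["expires"])
                 for p in POLICIES)

  grant_reactive("bob", "bob_office_door", "carol",
                 expires=datetime(2007, 6, 1))                       # time-limited grant
  print(may_open("alice", "bob_office_door"))                        # True
  print(may_open("carol", "bob_office_door", datetime(2007, 7, 1)))  # False (expired)
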
59
Grey: Deployment at CMU
  • 25 participants (9 part of the Grey team)
  • Floor plan with Grey-enabled Bluetooth doors

60
Grey: Evaluation
  • Monitored Grey usage over several months
  • Interviews with each participant every 4-8 weeks
  • Time on task in using a shared kitchen door

61
Grey: Results of Time on Task of a Shared Kitchen Door
62
Grey: Results of Time on Task of a Shared Kitchen Door
63
Grey: Results of Time on Task of a Shared Kitchen Door
64
Grey: Surprises
  • Grey policies did not mirror physical keys
  • Grey more flexible and easier to change
  • Lots of non-research obstacles
  • user perception that the system was slow
  • system failures causing users to get locked out
  • need network effects to study some interesting
    issues
  • Security is about keeping unauthorized users
    out, but our users were more concerned with how
    easy it was for them to get in
  • never mentioned security concerns when interviewed

65
Grey: Current Work
  • Iterating on the user interfaces
  • More wizard-based UIs for less-used features
  • Adding more resources to control
  • Visualizations of accesses
  • Relates to abnormal situations noted in
    contextual IM

66
Grey: Current Work in Visualizations
67
Some Early Lessons
  • Many indirect issues in studying usable privacy
    and security (value proposition, network effects)
  • People seem willing to use apps if given good
    enough control and feedback for privacy and
    security
  • Lots of iterative design needed
  • Cornwell, J., et al. User-Controllable Security
    and Privacy for Pervasive Computing. In
    Proceedings of the 8th IEEE Workshop on Mobile
    Computing Systems and Applications (HotMobile
    2007).

68
Conclusions
  • Supporting Trust Decisions
  • People not very good at protecting themselves
    from phishing
  • Developing training programs, user interfaces,
    and algorithms for anti-phishing
  • Embedded training and Anti-Phishing Phil
  • User-Controllable Privacy and Security in
    Pervasive Computing
  • Core set of technologies for specifying and
    managing policies
  • Contextual Instant Messaging, People Finder, Grey

69
Questions?
  • Alessandro Acquisti
  • Lorrie Cranor
  • Sven Dietrich
  • Julie Downs
  • Mandy Holbrook
  • Jason Hong
  • Jinghai Rao
  • Norman Sadeh
  • NSF CNS-0627513
  • NSF IIS-0534406
  • ARO D20D19-02-1-0389
  • CyLab
  • Jason Cornwell
  • Serge Egelman
  • Ian Fette
  • Gary Hsieh
  • P. Kumaraguru (PK)
  • Madhu Prabaker
  • Yong Rhee
  • Steve Sheng
  • Karen Tang
  • Kami Vaniea
  • Yue Zhang

70
People Finder: Results on Accuracy
71
Difficult to Build Usable Interfaces
72
(No Transcript)
73
(No Transcript)
74
(No Transcript)
75
People Finder: Study on Preferences and Rules
  • First conducted informal studies to understand
    factors important for location disclosures
  • Asked people to describe in natural language
  • Social relation, time, location
  • "My colleagues can only see my location on
    weekdays and only between 8am and 6pm"

76
Future Privacy and Security Problem
  • You think you are in one context, but are
    actually overlapped with many others
  • Without this understanding, you cannot act
    appropriately