Transcript and Presenter's Notes

Title: Can Machine Learning Be Secure


1
Can Machine Learning Be Secure?
2
Co-authors
  • Marco Barreno
  • Blaine Nelson
  • Russell Sears
  • Anthony Joseph

3
Continuous retraining
  • Key idea: the system learns what to trust and what not to trust
  • Applications
  • Spam detection
  • Intrusion detection systems
  • Questions
  • Can an attacker manipulate or degrade the learning system?
  • Are there defenses against attacks on learning?

4
Continuous retraining
  • Key idea: the system learns what to trust and what not to trust
  • Applications
  • Spam detection
  • Intrusion detection systems
  • Questions
  • Can an attacker manipulate or degrade the learning system? Yes
  • Are there defenses against attacks on learning? Yes!
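The retraining loop behind these questions might look like the sketch below. The toy learner and the message stream are invented for illustration; the point is that self-labelled "trusted" data re-enters the training set, which is exactly the channel a causative attacker can exploit.

```python
# Minimal continuous-retraining loop (illustrative sketch).
# Messages the current model trusts are fed back into training.

def continuous_retraining(initial_set, stream, learn):
    """learn(training_set) -> classifier mapping msg -> 'trusted'/'untrusted'."""
    training_set = list(initial_set)
    classifier = learn(training_set)
    decisions = []
    for msg in stream:
        label = classifier(msg)
        decisions.append((msg, label))
        if label == "trusted":                    # self-labelled data re-enters training
            training_set.append(msg)
            classifier = learn(training_set)      # retraining step
    return decisions

# Toy learner: trust a message iff it shares a word with something seen before.
def learn(training_set):
    vocab = set(w for m in training_set for w in m.split())
    return lambda msg: "trusted" if set(msg.split()) & vocab else "untrusted"

decisions = continuous_retraining(["hello world"], ["hello there", "buy pills"], learn)
```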

5
What is machine learning?
  • A function f maps events (msgs) → classes {trusted, untrusted}
  • We have a training set S (drawn from the event distribution)
  • Supervised learning: S includes labels {trusted, untrusted}
  • Unsupervised learning: no labels
  • Goal: find an algorithm
  • Input (to the algorithm): training set S
  • Output: learned function f ∈ F
  • Minimize Σ cost of error of f(x), x ∈ S
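The minimization above can be illustrated with a toy hypothesis class; the threshold family F and the tiny training set S below are invented examples, not from the talk.

```python
# Supervised learning as cost minimization (sketch): pick the f in a
# small hypothesis class F that minimizes total error cost over S.

S = [(0.2, "trusted"), (0.4, "trusted"), (1.1, "untrusted"), (1.5, "untrusted")]

def make_f(theta):
    return lambda x: "trusted" if x < theta else "untrusted"

F = [make_f(t / 10) for t in range(20)]   # candidate functions f in F

def cost(f):                              # number of misclassified points in S
    return sum(1 for x, y in S if f(x) != y)

f_star = min(F, key=cost)                 # the learned function
```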

6
Attack models: Influence
  • Exploratory
  • Probe or offline analysis to find weaknesses
  • Causative
  • Alter the training process

7
Attack models: Specificity
  • Indiscriminate
  • Any false negative
  • Targeted
  • Focus on a specific point or set of points

8
Attack models: Security violation
  • Availability
  • Increase classification errors; undermine confidence
  • Integrity
  • False negatives: intrusion points viewed as normal

9
Attack models: summary
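The taxonomy from the three preceding slides can be summarized as a triple, one value per axis. The example attack at the end is an illustrative labelling (poisoning a spam filter so one particular message gets through), not taken from the talk.

```python
from enum import Enum

# The three independent axes of the attack taxonomy.
class Influence(Enum):
    EXPLORATORY = "probe or offline analysis to find weaknesses"
    CAUSATIVE = "alter the training process"

class Specificity(Enum):
    INDISCRIMINATE = "any false negative will do"
    TARGETED = "focus on a specific point or set of points"

class Violation(Enum):
    AVAILABILITY = "increase classification errors, undermine confidence"
    INTEGRITY = "intrusion points viewed as normal (false negatives)"

# An attack is one choice per axis, e.g. poisoning training data so a
# specific message is misclassified as normal:
attack = (Influence.CAUSATIVE, Specificity.TARGETED, Violation.INTEGRITY)
```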
10
Simple learning model
  • Hypersphere-based approach
  • Assume a fixed radius R
  • Retraining step
  • Form the new centroid from the mean of the points inside the hypersphere
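A minimal sketch of this retraining step, assuming Euclidean distance and a 2-D feature space (the data points are invented):

```python
import math

# Hypersphere detector with fixed radius R: points within distance R of
# the centroid are "normal"; retraining moves the centroid to the mean
# of the points that fell inside.

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrain(centroid, R, points):
    inside = [p for p in points if dist(p, centroid) <= R]
    if not inside:
        return centroid
    dim = len(centroid)
    return tuple(sum(p[i] for p in inside) / len(inside) for i in range(dim))

centroid, R = (0.0, 0.0), 1.0
data = [(0.1, 0.0), (0.9, 0.0), (5.0, 5.0)]   # last point is an outlier
new_centroid = retrain(centroid, R, data)      # outlier is excluded
```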

11
Hypersphere attack
(figure: outliers)
12
Example: exploiting hyperspheres
13
Example: exploiting hyperspheres
14
Bounding the attacker's effort
  • Lower bound on the attacker's effort
  • M = number of attack points per round
  • T = number of rounds (> 1)
  • Δx = displacement of the centroid
  • R = radius of the hypersphere
  • M ≥ (e^(Δx/R) − 1) / (T − 1)
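The flavor of this bound can be checked with a small 1-D simulation (illustrative, not from the talk; it assumes retraining averages all points seen so far, with the attacker placing M points at the sphere's leading edge each round). Displacement grows only logarithmically in the number of rounds, so reaching a distant goal costs exponentially many attack points.

```python
import math

# 1-D simulation of the hypersphere-shift attack (sketch).
# Each round the attacker adds M points at centroid + R, pulling the
# centroid toward the goal; the centroid is the mean of all points so far.

def attack_displacement(M, T, R=1.0):
    points = [0.0]                     # initial training data at the origin
    centroid = 0.0
    for _ in range(T):
        points += [centroid + R] * M   # M attack points per round
        centroid = sum(points) / len(points)
    return centroid                    # total displacement Delta-x

# Doubling the rounds adds only about R*ln(2) of displacement:
# diminishing returns, i.e. exponential effort in Delta-x / R.
d1 = attack_displacement(M=10, T=100)
d2 = attack_displacement(M=10, T=200)
```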

15
Defenses
  • Regularization
  • Prior distributions using domain knowledge
  • Detection
  • Clustering; multiple learning algorithms
  • Disinformation
  • Switches the roles of attacker and defender
  • Randomization
  • Increases the attacker's effort
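As one concrete instance of the randomization defense (the detector, threshold, and jitter width below are invented for illustration): jittering the decision boundary means an exploratory attacker probing for the exact boundary sees only a noisy one, so mapping it takes more probes.

```python
import random

# Randomization defense (sketch): add jitter to the detector's threshold.

def make_randomized_detector(threshold, jitter, rng):
    def classify(score):
        # A fresh random offset per query blurs the boundary.
        return "untrusted" if score > threshold + rng.uniform(-jitter, jitter) else "trusted"
    return classify

rng = random.Random(0)
detect = make_randomized_detector(threshold=0.5, jitter=0.2, rng=rng)

# Near the boundary, repeated probes give inconsistent answers;
# far from it, classification stays stable.
near = {detect(0.5) for _ in range(50)}
far = {detect(0.9) for _ in range(50)}
```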

16
Defenses: summary
17
Summary
  • Machine learning as a defense
  • We need to understand the security of machine learning
  • The traditional PAC (Probably Approximately Correct) learning model doesn't fit well
  • Attacks and defenses: many open problems!
  • Overarching theme
  • Arms race: techniques must evolve constantly

19
Keystroke-recognition pipeline (figure)
  • Initial training: wave signal → Feature Extraction → Unsupervised Learning → Language Model Correction → Sample Collector → Classifier Builder → Keystroke Classifier
  • Subsequent recognition: wave signal → Feature Extraction → Keystroke Classifier → Language Model Correction (optional) → recovered keystrokes
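The data flow of this two-phase pipeline can be sketched with stub stages; every stage body below is an invented placeholder, and only the wiring between stages follows the slide.

```python
# Two-phase keystroke-recognition pipeline (stub sketch).

def feature_extraction(wave_signal):
    # Stand-in: one "feature" per keystroke sound (here, chunk length).
    return [len(chunk) for chunk in wave_signal]

def unsupervised_learning(features):
    # Stand-in clustering: map each distinct feature to a cluster id.
    ids = {f: i for i, f in enumerate(sorted(set(features)))}
    return [ids[f] for f in features]

def language_model_correction(clusters):
    # Stand-in: relabel cluster ids as letters ('a', 'b', ...).
    return [chr(ord("a") + c) for c in clusters]

def classifier_builder(features, labels):
    lookup = dict(zip(features, labels))
    return lambda f: lookup.get(f, "?")

# Initial training: signal -> features -> clustering -> correction -> classifier.
train_features = feature_extraction(["aa", "bbb", "aa"])
labels = language_model_correction(unsupervised_learning(train_features))
classify = classifier_builder(train_features, labels)

# Subsequent recognition: signal -> features -> classifier -> keystrokes.
recovered = [classify(f) for f in feature_extraction(["aa", "bbb"])]
```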

