1
Malicious Code Detection and
Security Applications
Prof. Bhavani Thuraisingham, The University of
Texas at Dallas
September 8, 2008, Lecture 5

2
Outline
  • Data mining overview
  • Intrusion detection and Malicious code detection
    (worms and virus)
  • Digital forensics and UTD work
  • Algorithms for Digital Forensics

3
What is Data Mining?
4
What's going on in data mining?
  • What are the technologies for data mining?
  • Database management, data warehousing, machine
    learning, statistics, pattern recognition,
    visualization, parallel processing
  • What can data mining do for you?
  • Data mining outcomes: Classification, Clustering,
    Association, Anomaly detection, Prediction,
    Estimation, . . .
  • How do you carry out data mining?
  • Data mining techniques: Decision trees, Neural
    networks, Market-basket analysis, Link analysis,
    Genetic algorithms, . . .
  • What is the current status?
  • Many commercial products mine relational
    databases
  • What are some of the challenges?
  • Mining unstructured data, extracting useful
    patterns, web mining, Data mining, security and
    privacy

5
Data Mining for Intrusion Detection Problem
  • An intrusion can be defined as any set of
    actions that attempt to compromise the integrity,
    confidentiality, or availability of a resource.
  • Attacks are
  • Host-based attacks
  • Network-based attacks
  • Intrusion detection systems are split into two
    groups
  • Anomaly detection systems
  • Misuse detection systems
  • Use audit logs
  • Capture all activities in the network and hosts.
  • But the amount of data is huge!

6
Misuse Detection
  • Misuse Detection

7
Problem: Anomaly Detection
  • Anomaly Detection

8
Our Approach Overview
[Diagram: training data is grouped by hierarchical clustering (DGSOT: dynamically growing self-organizing tree) and used for SVM class training; the resulting model is then applied to the testing data.]
9
Our Approach: Hierarchical Clustering
[Flow chart: hierarchical clustering combined with SVM.]
10
Results
[Table: training time, false positive (FP) and false negative (FN) rates of the various methods.]
11
Introduction: Detecting Malicious Executables
using Data Mining
  • What are malicious executables?
  • Harm computer systems
  • Virus, Exploit, Denial of Service (DoS), Flooder,
    Sniffer, Spoofer, Trojan etc.
  • Exploit software vulnerabilities on a victim
  • May remotely infect other victims
  • Incur great losses. Example: the Code Red epidemic
    cost $2.6 billion
  • Malicious code detection: traditional approach
  • Signature-based
  • Requires signatures to be generated by human
    experts
  • So, not effective against zero-day attacks

12
State of the Art in Automated Detection
  • Automated detection approaches
  • Behavioural: analyse behaviours like source,
    destination address, attachment type, statistical
    anomaly, etc.
  • Content-based: analyse the content of the
    malicious executable
  • Autograph (H. Ah-Kim, CMU): based on an automated
    signature generation process
  • N-gram analysis (Maloof, M.A. et al.): based on
    mining features and using machine learning.

13
Our New Ideas (Khan, Masud and Thuraisingham)
  • Content-based approaches consider only
    machine code (byte code).
  • Is it possible to consider higher-level source
    code for malicious code detection?
  • Yes: disassemble the binary executable and
    retrieve the assembly program
  • Extract important features from the assembly
    program
  • Combine with machine-code features

14
Feature Extraction
  • Binary n-gram features
  • Sequence of n consecutive bytes of binary
    executable
  • Assembly n-gram features
  • Sequence of n consecutive assembly instructions
  • System API call features
  • DLL function call information

15
The Hybrid Feature Retrieval Model
  • Collect training samples of normal and malicious
    executables.
  • Extract features
  • Train a Classifier and build a model
  • Test the model against test samples

16
Hybrid Feature Retrieval (HFR)
  • Training

17
Hybrid Feature Retrieval (HFR)
  • Testing

18
Feature Extraction
  • Binary n-gram features
  • Features are extracted from the byte codes in the
    form of n-grams, where n = 2, 4, 6, 8, 10 and so on.
  • Example
  • Given the 11-byte sequence 0123456789abcdef012345,
  • The 2-grams (2-byte sequences) are 0123, 2345,
    4567, 6789, 89ab, abcd, cdef, ef01, 0123, 2345
  • The 4-grams (4-byte sequences) are 01234567,
    23456789, 456789ab, ..., ef012345, and so on
    (see the extraction sketch below)
  • Problem
  • Large dataset. Too many features (millions!).
  • Solution
  • Use secondary memory, efficient data structures
  • Apply feature selection
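A minimal sketch of byte n-gram extraction, applied to the example sequence above; the file-reading line at the end is a placeholder, and the real feature set is far larger and is pruned by the feature selection described on the following slides.

from collections import Counter

def byte_ngrams(data, n=2):
    # Slide an n-byte window over the raw bytes and count each n-gram (hex-encoded).
    return Counter(data[i:i + n].hex() for i in range(len(data) - n + 1))

sample = bytes.fromhex("0123456789abcdef012345")   # the 11-byte example above
print(byte_ngrams(sample, 2))                      # Counter({'0123': 2, '2345': 2, '4567': 1, ...})
# For a real executable: byte_ngrams(open("sample.exe", "rb").read(), n=4)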

19
Feature Extraction
  • Assembly n-gram features
  • Features are extracted from the assembly programs
    in the form of n-grams, where n = 2, 4, 6, 8, 10
    and so on.
  • Example
  • Three instructions:
  • push eax; mov eax, dword [0f34]; add ecx, eax
  • 2-grams:
  • (1) push eax; mov eax, dword [0f34]
  • (2) mov eax, dword [0f34]; add ecx, eax
  • Problem
  • Same problem as binary
  • Solution
  • Same solution (see the instruction n-gram sketch
    below)
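A similar sketch for instruction-level n-grams, assuming the binary has already been disassembled into a list of instruction strings; the three instructions below are the ones from the example above.

def asm_ngrams(instructions, n=2):
    # Each n-gram is a tuple of n consecutive assembly instructions.
    return [tuple(instructions[i:i + n]) for i in range(len(instructions) - n + 1)]

instrs = ["push eax", "mov eax, dword [0f34]", "add ecx, eax"]
print(asm_ngrams(instrs, 2))
# [('push eax', 'mov eax, dword [0f34]'), ('mov eax, dword [0f34]', 'add ecx, eax')]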

20
Feature Selection
  • Select Best K features
  • Selection criterion: Information Gain
  • Gain of an attribute A on a collection of
    examples S is given by
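In its standard form, the information gain of attribute A over example set S is
\[
\mathrm{Gain}(S, A) \;=\; \mathrm{Entropy}(S) \;-\; \sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|}\,\mathrm{Entropy}(S_v),
\qquad
\mathrm{Entropy}(S) \;=\; -\sum_{c} p_c \log_2 p_c,
\]
where \(S_v\) is the subset of \(S\) for which \(A\) has value \(v\) and \(p_c\) is the fraction of examples in \(S\) belonging to class \(c\).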

21
Experiments
  • Dataset
  • Dataset1: 838 malicious and 597 benign
    executables
  • Dataset2: 1082 malicious and 1370 benign
    executables
  • Collected malicious code from VX Heavens
    (http://vx.netlux.org)
  • Disassembly
  • Pedisassem (http://www.geocities.com/sangcho/index.html)
  • Training, Testing
  • Support Vector Machine (SVM)
  • C-Support Vector Classifiers with an RBF kernel
    (see the sketch below)
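An illustrative scikit-learn sketch of the training/testing step; the feature matrix below is random placeholder data, not the actual n-gram features or the VX Heavens dataset.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 500))     # 200 executables x 500 selected n-gram features (placeholder)
y = rng.integers(0, 2, 200)    # 1 = malicious, 0 = benign (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf")        # C-support vector classification with an RBF kernel
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))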

22
Results
  • HFS: Hybrid Feature Set
  • BFS: Binary Feature Set
  • AFS: Assembly Feature Set

23
Results
  • HFS: Hybrid Feature Set
  • BFS: Binary Feature Set
  • AFS: Assembly Feature Set

24
Results
  • HFS: Hybrid Feature Set
  • BFS: Binary Feature Set
  • AFS: Assembly Feature Set

25
Future Plans
  • System calls
  • seem to be very useful.
  • Need to consider: frequency of calls
  • Call sequence patterns (following program paths)
  • Actions immediately preceding or following a call
  • Detect malicious code by program slicing
  • requires further analysis

26
Data Mining for Buffer Overflow Introduction
  • Goal
  • Intrusion detection.
  • e.g. worm attack, buffer overflow attack.
  • Main Contribution
  • 'Worm' code detection by data mining coupled with
    'reverse engineering'.
  • Buffer overflow detection by combining data
    mining with static analysis of assembly code.

27
Background
  • What is 'buffer overflow'?
  • A situation in which a fixed-size buffer is
    overflowed by a larger input.
  • How does it happen?
  • Example:

........
char buff[100];
gets(buff);
........

[Diagram: the input string is copied into buff in stack memory.]
28
Background (cont...)
  • Then what?

[Diagram: the oversized input overruns buff in stack memory, overwriting the saved return address; the new return address points to the attacker's code placed in memory.]
29
Background (cont...)
  • So what?
  • Program may crash or
  • The attacker can execute arbitrary code
  • It can now
  • Execute any system function
  • Communicate with some host and download some
    'worm' code and install it!
  • Open a backdoor to take full control of the
    victim
  • How to stop it?

30
Background (cont...)
  • Stopping buffer overflow
  • Preventive approaches
  • Detection approaches
  • Preventive approaches
  • Finding bugs in source code. Problem: can only
    work when source code is available.
  • Compiler extension. Same problem.
  • OS/HW modification
  • Detection approaches
  • Capture code running symptoms. Problem: may
    require a long running time.
  • Automatically generating signatures of buffer
    overflow attacks.

31
CodeBlocker (Our approach)
  • A detection approach
  • Based on the Observation
  • Attack messages usually contain code while normal
    messages contain data.
  • Main Idea
  • Check whether message contains code
  • Problem to solve
  • Distinguishing code from data

32
Some Statistics
  • Statistics to support this observation: (a) On
    Windows platforms,
  • most web servers (port 80) accept data only
  • remote access services (ports 111, 137, 138, 139)
    accept data only
  • Microsoft SQL Servers (port 1434) accept data only
  • workstation services (ports 139 and 445) accept
    data only.
  • (b) On Linux platforms, most
  • Apache web servers (port 80) accept data only
  • BIND (port 53) accepts data only
  • SNMP (port 161) accepts data only
  • most Mail Transport (port 25) accepts data only
  • Database servers (Oracle, MySQL, PostgreSQL) at
    ports 1521, 3306 and 5432 accept data only.

33
Severity of the problem
  • It is not easy to determine the actual instruction
    sequence from a given string of bits

34
Our solution
  • Apply data mining.
  • Formulate the problem as a classification problem
    (code vs. data)
  • Collect a set of training examples containing
    both instances
  • Train on the data with a machine learning algorithm
    to obtain the model
  • Test this model against a new message

35
CodeBlocker Model
36
Feature Extraction
37
Disassembly
  • We apply the SigFree tool
  • implemented by Xinran Wang et al. (Penn State)

38
Feature extraction
  • Features are extracted using
  • N-gram analysis
  • Control flow analysis
  • N-gram analysis

What is an n-gram? A sequence of n instructions. In the traditional approach, the flow of control is ignored; the 2-grams are 02, 24, 46, ..., CE.
[Figure: assembly program and the corresponding instruction flow graph (IFG).]
39
Feature extraction (cont...)
  • Control-flow Based N-gram analysis

What is an n-gram? A sequence of n instructions. In the proposed control-flow based approach, the flow of control is considered; the 2-grams are 02, 24, 46, ..., CE, E6.
[Figure: assembly program and the corresponding instruction flow graph (IFG).]
40
Feature extraction (cont...)
  • Control flow analysis. Generated features:
  • Invalid Memory Reference (IMR)
  • Undefined Register (UR)
  • Invalid Jump Target (IJT)
  • Checking IMR
  • A memory location is referenced using register
    addressing and the register value is undefined
  • e.g., mov ax, [dx+5]
  • Checking UR
  • Check if the register value is set properly
  • Checking IJT
  • Check that the jump target does not violate an
    instruction boundary (a simplified register check
    is sketched below)
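A simplified sketch of the undefined-register (UR) check, assuming instructions have already been parsed into (mnemonic, destination, source-registers) tuples; this is an illustration of the idea, not the actual CodeBlocker implementation.

def undefined_register_uses(instructions):
    defined = set()      # registers assigned so far
    violations = 0
    for mnemonic, dest, sources in instructions:
        # Reading a register that no earlier instruction defined is a violation.
        violations += sum(1 for reg in sources if reg not in defined)
        if dest:
            defined.add(dest)
    return violations

# "mov ax, [dx+5]" reads dx before anything defines it -> 1 violation
print(undefined_register_uses([("mov", "ax", ["dx"]),
                               ("add", "cx", ["ax"])]))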

41
Putting it together
  • Why n-gram analysis?
  • Intuition: in general, disassembled executables
    should have a different pattern of instruction
    usage than disassembled data.
  • Why control flow analysis?
  • Intuition: there should be no invalid memory
    references or invalid jump targets.
  • Approach
  • Compute all possible n-grams
  • Select best k of them
  • Compute the feature vector (binary vector) for
    each training example (see the sketch below)
  • Supply these vectors to the training algorithm
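A small sketch of the binary feature-vector step; the selected n-grams below are placeholders standing in for the best-k features chosen by information gain.

def to_binary_vector(example_ngrams, selected_ngrams):
    # 1 if the example contains the selected n-gram, 0 otherwise.
    present = set(example_ngrams)
    return [1 if g in present else 0 for g in selected_ngrams]

selected = ["0123", "89ab", "ef01"]                            # best-k n-grams (placeholder)
print(to_binary_vector(["0123", "4567", "ef01"], selected))    # [1, 0, 1]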

42
Experiments
  • Dataset
  • Real traces of normal messages
  • Real attack messages
  • Polymorphic shellcodes
  • Training, Testing
  • Support Vector Machine (SVM)

43
Results
  • CFBn: Control-flow based n-gram feature
  • CFF: Control-flow feature

44
Novelty, Advantages, Limitations, Future
  • Novelty
  • We introduce the notion of control-flow based
    n-grams
  • We combine control flow analysis with data mining
    to distinguish code from data
  • Significant improvement over other methods (e.g.
    SigFree)
  • Advantages
  • Fast testing
  • Signature free operation
  • Low overhead
  • Robust against many obfuscations
  • Limitations
  • Need samples of attack and normal messages.
  • May not be able to detect a completely new type
    of attack.
  • Future
  • Find more features
  • Apply dynamic analysis techniques
  • Semantic analysis

45
Analysis of Firewall Policy Rules Using Data
Mining Techniques
  • The firewall is the de facto core technology of
    today's network security
  • First line of defense against external network
    attacks and threats
  • Firewall controls or governs network access by
    allowing or denying the incoming or outgoing
    network traffic according to firewall policy
    rules.
  • Manual definition of rules often results in
    anomalies in the policy
  • Detecting and resolving these anomalies manually
    is a tedious and error-prone task
  • Solutions
  • Anomaly detection
  • Theoretical framework for the resolution of
    anomalies
  • A new algorithm will simultaneously detect and
    resolve any anomaly that is present in the
    policy rules
  • Traffic mining: mine the traffic and detect
    anomalies

46
Traffic Mining
  • To bridge the gap between what is written in the
    firewall policy rules and what is being observed
    in the network, we analyze the traffic and packet
    logs: traffic mining
  • Network traffic trends may show that some rules
    are outdated or not used recently

Firewall Policy Rule
47
  • Traffic Mining Results

1 TCP,INPUT,129.110.96.117,ANY,...,80,DENY
2 TCP,INPUT,...,ANY,...,80,ACCEPT
3 TCP,INPUT,...,ANY,...,443,DENY
4 TCP,INPUT,129.110.96.117,ANY,...,22,DENY
5 TCP,INPUT,...,ANY,...,22,ACCEPT
6 TCP,OUTPUT,129.110.96.80,ANY,...,22,DENY
7 UDP,OUTPUT,...,ANY,...,53,ACCEPT
8 UDP,INPUT,...,53,...,ANY,ACCEPT
9 UDP,OUTPUT,...,ANY,...,ANY,DENY
10 UDP,INPUT,...,ANY,...,ANY,DENY
11 TCP,INPUT,129.110.96.117,ANY,129.110.96.80,22,DENY
12 TCP,INPUT,129.110.96.117,ANY,129.110.96.80,80,DENY
13 UDP,INPUT,...,ANY,129.110.96.80,ANY,DENY
14 UDP,OUTPUT,129.110.96.80,ANY,129.110.10.,ANY,DENY
15 TCP,INPUT,...,ANY,129.110.96.80,22,ACCEPT
16 TCP,INPUT,...,ANY,129.110.96.80,80,ACCEPT
17 UDP,INPUT,129.110..,53,129.110.96.80,ANY,ACCEPT
18 UDP,OUTPUT,129.110.96.80,ANY,129.110..,53,ACCEPT
Rule 1, Rule 2 -> GENERALIZATION
Rule 1, Rule 16 -> CORRELATED
Rule 2, Rule 12 -> SHADOWED
Rule 4, Rule 5 -> GENERALIZATION
Rule 4, Rule 15 -> CORRELATED
Rule 5, Rule 11 -> SHADOWED
Anomaly Discovery Result
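A toy sketch of one of these checks (shadowing), assuming each rule is a tuple of fields in which "ANY" and "..." act as wildcards; the actual detection and resolution algorithm also handles the other anomaly types.

WILD = {"ANY", "..."}

def covers(general, specific):
    # Every match field of the earlier rule is a wildcard or equal.
    return all(g in WILD or g == s for g, s in zip(general[:-1], specific[:-1]))

def shadowed(rules):
    # Rule j is shadowed if an earlier rule i matches all of its packets
    # but takes a different action.
    return [(i + 1, j + 1) for j, rj in enumerate(rules)
            for i, ri in enumerate(rules[:j])
            if covers(ri, rj) and ri[-1] != rj[-1]]

rule2  = ("TCP", "INPUT", "...", "ANY", "...", "80", "ACCEPT")
rule12 = ("TCP", "INPUT", "129.110.96.117", "ANY", "129.110.96.80", "80", "DENY")
print(shadowed([rule2, rule12]))   # [(1, 2)]: the second rule is shadowed by the first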
48
Worm Detection Introduction
  • What are worms?
  • Self-replicating programs; exploit software
    vulnerabilities on a victim; remotely infect other
    victims
  • Evil worms
  • Severe effect: the Code Red epidemic cost $2.6
    billion
  • Goals of worm detection
  • Real-time detection
  • Issues
  • Substantial Volume of Identical Traffic, Random
    Probing
  • Methods for worm detection
  • Count number of sources/destinations; count
    number of failed connection attempts
  • Worm Types
  • Email worms, Instant Messaging worms, Internet
    worms, IRC worms, File-sharing Networks worms
  • Automatic signature generation possible
  • EarlyBird System (S. Singh, UCSD); Autograph (H.
    Ah-Kim, CMU)

49
Email Worm Detection using Data Mining
Task: given some training instances of both
normal and viral emails, induce a hypothesis
to detect viral emails.
We used Naïve Bayes and SVM.
[Diagram: features are extracted from outgoing emails; a machine learning algorithm builds the model (classifier) from the training data, and test data is then labeled clean or infected.]
50
Assumptions
  • Features are based on outgoing emails.
  • Different users have different normal
    behaviour.
  • Analysis should be on a per-user basis.
  • Two groups of features
  • Per email (number of attachments, HTML in body,
    text/binary attachments)
  • Per window (mean words in body, variance of words
    in subject)
  • Total of 24 features identified
  • Goal: identify normal and viral emails based
    on these features

51
Feature sets
  • Per email features
  • Binary valued Features
  • Presence of HTML; script tags/attributes;
    embedded images; hyperlinks
  • Presence of binary/text attachments; MIME types
    of file attachments
  • Continuous-valued Features
  • Number of attachments; number of words/characters
    in the subject and body
  • Per window features
  • Number of emails sent; number of unique email
    recipients; number of unique sender addresses;
    average number of words/characters per subject and
    body; average word length; variance in number of
    words/characters per subject and body; variance in
    word length
  • Ratio of emails with attachments (a small
    extraction sketch follows)
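An illustrative sketch of a few of the per-email features; the field names and the attachment-extension check are placeholders, not the exact 24-feature set used in the study.

def per_email_features(subject, body, attachments):
    return {
        "num_attachments": len(attachments),
        "has_binary_attachment": any(name.lower().endswith((".exe", ".scr", ".pif"))
                                     for name in attachments),
        "has_html": "<html" in body.lower(),
        "words_in_subject": len(subject.split()),
        "words_in_body": len(body.split()),
    }

print(per_email_features("hello", "<html><body>see attachment</body></html>",
                         ["photo.scr"]))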

52
Data Mining Approach
[Diagram: two-stage classification; a test instance is passed through the SVM and Naïve Bayes classifiers and labeled clean or infected.]
53
Data set
  • Collected from UC Berkeley.
  • Contains instances for both normal and viral
    emails.
  • Six worm types
  • bagle.f, bubbleboy, mydoom.m,
  • mydoom.u, netsky.d, sobig.f
  • Originally: six sets of data
  • training instances: normal (400) + five worms
    (5 x 200)
  • testing instances: normal (1200) + the sixth worm
    (200)
  • Problem: not balanced, no cross-validation
    reported
  • Solution: re-arrange the data and apply
    cross-validation

54
Our Implementation and Analysis
  • Implementation
  • Naïve Bayes: assume a normal distribution of
    numeric and real data; smoothing applied
  • SVM with the parameter settings: one-class SVM
    with the radial basis function kernel, using
    gamma = 0.015 and nu = 0.1 (see the sketch below).
  • Analysis
  • NB alone performs better than other techniques
  • SVM alone also performs better if parameters are
    set correctly
  • mydoom.m and VBS.Bubbleboy data sets are not
    sufficient (very low detection accuracy in all
    classifiers)
  • The feature-based approach seems to be useful
    only when we have
  • identified the relevant features
  • gathered enough training data
  • implemented classifiers with the best parameter
    settings
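A minimal scikit-learn sketch of the stated settings; the feature matrices below are random placeholders rather than the real email features.

import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X_normal = rng.random((400, 24))            # e.g. 400 normal emails x 24 features (placeholder)

ocsvm = OneClassSVM(kernel="rbf", gamma=0.015, nu=0.1)
ocsvm.fit(X_normal)                         # one-class SVM trained on normal emails only
print(ocsvm.predict(X_normal[:5]))          # +1 = normal, -1 = outlier (infected)

X = rng.random((100, 24))                   # labeled placeholder data for Naive Bayes
y = rng.integers(0, 2, 100)                 # 0 = clean, 1 = infected (placeholder)
nb = GaussianNB().fit(X, y)                 # assumes normally distributed numeric features
print(nb.predict(X[:5]))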

55
Digital Forensics and UTD Work
  • Machines are infected through unauthorized
    intrusions, worms and viruses
  • Therefore data has to be acquired from the
    machine; we skip this step, as we get the data
    from open-source web sites
  • We then apply our analysis tools based on data
    mining
  • Our current research at UTD is focusing mainly on
    Botnets and also to some extent Honeypots.
  • We are also conducting research on Active
    Defense, trying to find out what the adversary is
    up to.

56
Algorithms for Digital Forensics
  • http://www.dfrws.org/2007/proceedings/p49-beebe.pdf
  • http://portal.acm.org/citation.cfm?id=1113034.1113074&coll=GUIDE&dl=GUIDE&idx=J79&part=periodical&WantType=periodical&title=Communications%20of%20the%20ACM