Title: Data Mining and Knowledge Discovery - Lecture 3: Data Pre-processing, Classification

1. Data Mining and Knowledge Discovery
Lecture 3: Data Pre-processing, Classification, Association Rule Mining
2. Data Pre-processing
- Data cleaning
  - Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
- Data integration
  - Integration of multiple databases, data cubes, or files
- Data transformation
  - Normalisation and aggregation
- Data reduction
  - Obtains a reduced representation that is smaller in volume but produces the same or similar analytical results
- Data discretisation
  - Part of data reduction, of particular importance for numerical data
3. Why Data Preprocessing?
- Data in the real world is dirty
  - incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
  - noisy: containing errors or outliers (exceptions or anomalies)
  - inconsistent: containing discrepancies in codes or names
- No quality data, no quality mining results!
  - Quality decisions must be based on quality data
  - A data warehouse needs consistent integration of quality data
4. Data Cleaning
- Data cleaning tasks
  - Fill in missing values
  - Identify outliers and smooth out noisy data
  - Correct inconsistent data
5. Missing Data
- Data is not always available
  - E.g., many tuples have no recorded value for several attributes, such as customer income in sales data
- Missing data may be due to
  - equipment malfunction
  - data inconsistent with other recorded data and thus deleted
  - data not entered due to misunderstanding
  - certain data not considered important at the time of entry
  - failure to register history or changes of the data
- Missing data may need to be inferred.
6. How to Handle Missing Data?
- Fill in the missing value manually: tedious, possibly infeasible
- Use a global constant such as "unknown" to fill in the missing value: risks creating a spurious new class
- Use the attribute mean to fill in the missing value
- Use the attribute mean of all samples belonging to the same class to fill in the missing value: smarter
- Use the most probable value to fill in the missing value: inference-based, e.g., a Bayesian formula or a decision tree
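As a small sketch (not part of the original slides), the two mean-based strategies above can be written in a few lines of plain Python; the income values and class labels here are hypothetical:

```python
def mean_impute(values):
    """Replace missing entries (None) with the global attribute mean."""
    known = [v for v in values if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in values]

def class_mean_impute(values, labels):
    """Replace missing entries with the mean of samples from the same class."""
    means = {}
    for c in set(labels):
        known = [v for v, l in zip(values, labels) if l == c and v is not None]
        means[c] = sum(known) / len(known)
    return [means[l] if v is None else v for v, l in zip(values, labels)]

incomes = [30, None, 50, 40, None]
classes = ["low", "low", "high", "high", "high"]
print(mean_impute(incomes))                  # [30, 40.0, 50, 40, 40.0]
print(class_mean_impute(incomes, classes))   # [30, 30.0, 50, 40, 45.0]
```

Note how the class-conditional version fills the two gaps with different values (30.0 for "low", 45.0 for "high"), which is why the slide calls it smarter.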
7. Noisy Data
- Noise: random error or variance in a measured variable
- Incorrect attribute values may be due to
  - faulty data collection instruments
  - data entry problems
  - data transmission problems
  - technology limitations
  - inconsistency in naming conventions
- Other data problems which require data cleaning
  - duplicate records
  - incomplete data
  - inconsistent data
8. How to Handle Noisy Data?
- Binning methods
  - first sort the data and partition it into (equi-depth) bins
  - then smooth by bin means, bin medians, bin boundaries, etc.
- Clustering
  - detect and remove outliers
9. Binning Methods for Data Smoothing
- Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
- Partition into (equi-depth) bins:
  - Bin 1: 4, 8, 9, 15
  - Bin 2: 21, 21, 24, 25
  - Bin 3: 26, 28, 29, 34
- Smoothing by bin means:
  - Bin 1: 9, 9, 9, 9
  - Bin 2: 23, 23, 23, 23
  - Bin 3: 29, 29, 29, 29
- Smoothing by bin boundaries:
  - Bin 1: 4, 4, 4, 15
  - Bin 2: 21, 21, 25, 25
  - Bin 3: 26, 26, 26, 34
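A minimal Python sketch of the smoothing steps above (assuming the input is already sorted, as on the slide); it reproduces all three bins:

```python
def equidepth_bins(sorted_vals, n_bins):
    """Partition sorted values into equi-depth bins of equal size."""
    size = len(sorted_vals) // n_bins
    return [sorted_vals[i * size:(i + 1) * size] for i in range(n_bins)]

def smooth_by_means(bins):
    """Replace every value in a bin by the (rounded) bin mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace every value by the nearer of the bin's two boundary values."""
    out = []
    for b in bins:
        lo, hi = b[0], b[-1]
        out.append([lo if v - lo <= hi - v else hi for v in b])
    return out

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
bins = equidepth_bins(prices, 3)
print(smooth_by_means(bins))       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins))  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```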
10. Cluster Analysis
11. Data Integration
- Data integration
  - combines data from multiple sources into a coherent store
- Schema integration
  - integrate metadata from different sources
  - Entity identification problem: identify real-world entities from multiple data sources, e.g., A.cust-id ? B.cust-
- Detecting and resolving data value conflicts
  - for the same real-world entity, attribute values from different sources differ
  - possible reasons: different representations, different scales, e.g., metric vs. British units
12. Handling Redundant Data
- Redundant data often occur when multiple databases are integrated
  - The same attribute may have different names in different databases
  - One attribute may be derived from attributes in another table, e.g., annual revenue
- Redundant data may be detectable by correlation analysis
- Careful integration of the data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality
13. Data Transformation
- Smoothing: remove noise from the data
- Aggregation: summarisation, data cube construction
- Generalisation: concept hierarchy climbing
- Attribute/feature construction
  - New attributes constructed from the given ones
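Normalisation, listed under data transformation on slide 2, is not spelled out here; a common choice is min-max scaling, sketched below (the slides do not fix a particular method):

```python
def min_max_normalise(values, new_min=0.0, new_max=1.0):
    """Linearly rescale values so the minimum maps to new_min and the maximum to new_max."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min for v in values]

print(min_max_normalise([10, 20, 30]))  # [0.0, 0.5, 1.0]
```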
14. Data Reduction Strategies
- A warehouse may store terabytes of data: complex data analysis/mining may take a very long time to run on the complete data set
- Data reduction
  - obtains a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
- Data reduction strategies
  - Dimensionality reduction
  - Compression
  - Histograms
  - Clustering
15. Dimensionality Reduction
- Feature selection (i.e., attribute subset selection)
  - Select a minimum set of features such that the probability distribution of the classes given the values of those features is as close as possible to the original distribution given the values of all features
  - Fewer attributes appear in the discovered patterns, making the patterns easier to understand
- Heuristic methods (because the number of choices is exponential)
  - step-wise forward selection
  - step-wise backward elimination
  - combined forward selection and backward elimination
  - decision-tree induction
16. Example of Decision Tree Induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}
(Figure: the induced tree tests A4 at the root, then A6 and A1, with leaves labelled Class 1 and Class 2.)
Reduced attribute set: {A1, A4, A6}
17. Heuristic Feature Selection Methods
- There are 2^d possible sub-features of d features
- Several heuristic feature selection methods:
  - Best single features under the feature independence assumption: choose by significance tests
  - Best step-wise feature selection
    - The best single feature is picked first
    - Then the next best feature conditioned on the first, ...
  - Step-wise feature elimination
    - Repeatedly eliminate the worst feature
  - Best combined feature selection and elimination
  - Optimal branch and bound
    - Use feature elimination and backtracking
18. Data Compression
- String compression
  - There are extensive theories and well-tuned algorithms
  - Typically lossless
  - But only limited manipulation is possible without expansion
- Audio/video compression
  - Typically lossy compression, with progressive refinement
  - Sometimes small fragments of the signal can be reconstructed without reconstructing the whole
- Time sequences (which are not audio)
  - Typically short and varying slowly with time
19. Data Compression
(Figure: lossless compression maps the original data to compressed data and recovers it exactly; lossy compression recovers only an approximation of the original data.)
20. Histograms
- A popular data reduction technique
- Divide data into buckets and store the average (or sum) for each bucket
- Can be constructed optimally in one dimension using dynamic programming
- Related to quantisation problems
21. Clustering
- Partition the data set into clusters, and store only the cluster representations
- Can be very effective if the data is clustered, but not if it is smeared
- Clusterings can be hierarchical and stored in multi-dimensional index tree structures
22. Discretisation
- Three types of attributes:
  - Nominal: values from an unordered set
  - Ordinal: values from an ordered set
  - Continuous: real numbers
- Discretisation
  - divide the range of a continuous attribute into intervals
  - Some classification algorithms only accept categorical attributes
  - Reduce data size by discretisation
  - Prepare for further analysis
23. Discretisation and Concept Hierarchy
- Discretisation
  - reduce the number of values for a given continuous attribute by dividing its range into intervals. Interval labels can then be used to replace actual data values.
- Concept hierarchies
  - reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior).
24. Discretisation and Concept Hierarchy Generation for Numeric Data
- Binning (see earlier sections)
- Histogram analysis (see earlier sections)
- Clustering analysis (see earlier sections)
- Entropy-based discretisation
- Segmentation by natural partitioning
25. Entropy-Based Discretisation
- Fayyad and Irani (1993)
- Entropy-based methods use the class information present in the data.
- The entropy (or information content) is calculated on the basis of the class label. Intuitively, the method finds the best split so that the bins are as pure as possible, i.e. the majority of the values in a bin have the same class label. Formally, it is characterised by finding the split with the maximal information gain.
26. Entropy-Based Discretisation (cont.)
- Suppose we have the following (attribute value, class) pairs, and let S denote the 9 pairs given here:
  S = {(0,Y), (4,Y), (12,Y), (16,N), (16,N), (18,Y), (24,N), (26,N), (28,N)}
- Let p1 = 4/9 be the fraction of pairs with class Y, and p2 = 5/9 the fraction of pairs with class N.
- The entropy (or information content) of S is defined as
  Entropy(S) = -p1 log2(p1) - p2 log2(p2)
- In this case Entropy(S) = 0.991076.
- If the entropy is small, the set is relatively pure; the smallest possible value is 0.
- If the entropy is larger, the set is mixed; the largest possible value is 1, obtained when p1 = p2 = 0.5.
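The entropy value above can be checked directly; this short Python sketch computes Entropy(S) from the class labels of the 9 pairs:

```python
from math import log2

def entropy(labels):
    """Entropy of a list of class labels: -sum p_c * log2(p_c)."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * log2(p) for p in probs if p > 0)

S_labels = ["Y", "Y", "Y", "N", "N", "Y", "N", "N", "N"]  # 4 Y, 5 N
print(round(entropy(S_labels), 6))  # 0.991076
```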
27. Entropy-Based Discretisation (cont.)
- Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
  E(S,T) = (|S1|/|S|) Entropy(S1) + (|S2|/|S|) Entropy(S2)
  where |.| denotes cardinality. The boundaries T are chosen from the midpoints of the attribute values, i.e. 2, 8, 14, 16, 17, 21, 25, 27.
- For instance, if T = 14:
  - S1 = {(0,Y), (4,Y), (12,Y)} and S2 = {(16,N), (16,N), (18,Y), (24,N), (26,N), (28,N)}
  - E(S,T) = (3/9) Entropy(S1) + (6/9) Entropy(S2) = (3/9)(0) + (6/9)(0.6500224)
  - E(S,T) = 0.4333
- The information gain of the split is Gain(S,T) = Entropy(S) - E(S,T)
  - Gain = 0.9910 - 0.4333 = 0.5577
28. Entropy-Based Discretisation (cont.)
- Similarly, for T = 21 one obtains E(S,T) = 0.6121.
- Information gain = 0.9910 - 0.6121 = 0.3789; therefore T = 14 is the better partition.
- The goal of the algorithm is to find the split with the maximum information gain. Maximal gain is obtained when E(S,T) is minimal.
- The best split is found by examining all possible splits and then selecting the optimal one. The boundary that minimises the entropy function over all possible boundaries is selected as a binary discretisation.
- The process is applied recursively to the partitions obtained until some stopping criterion is met, e.g.
29. Entropy-Based Discretisation (cont.)
The recursion stops when

  Gain(S,T) < log2(N - 1)/N + Delta(S,T)/N

where N = |S| and

  Delta(S,T) = log2(3^c - 2) - [c Entropy(S) - c1 Entropy(S1) - c2 Entropy(S2)]

Here c is the number of classes in S, c1 is the number of classes in S1, and c2 is the number of classes in S2. This is called the Minimum Description Length Principle (MDLP).
30. Segmentation by Natural Partitioning
- The 3-4-5 rule can be used to segment numeric data into relatively uniform, natural intervals:
  - If an interval covers 3, 6, 7 or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals
  - If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals
  - If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals
31. Data Mining: Classification
- Predictive modelling
  - Based on the features present in the class-labelled training data, develop a description or model for each class. It is used for
    - better understanding of each class, and
    - prediction of certain properties of unseen data
- If the field being predicted is a numeric (continuous) variable, the prediction problem is a regression problem
- If the field being predicted is categorical, the prediction problem is a classification problem
- Predictive modelling is based on inductive learning (supervised learning)
32. Predictive Modelling (Classification)
(Figure: a scatter plot of debt vs. income, with a linear classifier and a non-linear classifier separating the two classes; the linear decision rule has the form a*income + b*debt < t => No loan!)
33. Predictive Modelling (Classification)
- Task: determine which of a fixed set of classes an example belongs to
- Input: training set of examples annotated with class values
- Output: induced hypotheses (models/concept descriptions/classifiers)

Learning: induce classifiers from training data
  Inductive Learning System: Training Data -> Classifiers (derived hypotheses)
Prediction: use the hypotheses to classify any example described in the same manner
  Classifier: Data to be classified -> Decision on class assignment
34. Classification Algorithms
Basic principle (inductive learning hypothesis): any hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over other, unobserved examples.

Typical algorithms:
- Decision trees
- Rule-based induction
- Neural networks
- Memory (case) based reasoning
- Genetic algorithms
- Bayesian networks
35. Decision Tree Learning
General idea: recursively partition the data into sub-groups.
- Select an attribute and formulate a logical test on that attribute.
- Branch on each outcome of the test, moving the subset of examples (training data) satisfying that outcome to the corresponding child node.
- Run recursively on each child node.
- A termination rule specifies when to declare a leaf node.
Decision tree learning is a heuristic, one-step lookahead (hill climbing), non-backtracking search through the space of all possible decision trees.
36. Decision Tree Example

Day  Outlook   Temperature  Humidity  Wind    Play Tennis
1    Sunny     Hot          High      Weak    No
2    Sunny     Hot          High      Strong  No
3    Overcast  Hot          High      Weak    Yes
4    Rain      Mild         High      Weak    Yes
5    Rain      Cool         Normal    Weak    Yes
6    Rain      Cool         Normal    Strong  No
7    Overcast  Cool         Normal    Strong  Yes
8    Sunny     Mild         High      Weak    No
9    Sunny     Cool         Normal    Weak    Yes
10   Rain      Mild         Normal    Weak    Yes
11   Sunny     Mild         Normal    Strong  Yes
12   Overcast  Mild         High      Strong  Yes
13   Overcast  Hot          Normal    Weak    Yes
14   Rain      Mild         High      Strong  No
37. Decision Tree Training

DecisionTree(examples) = Prune(Tree_Generation(examples))

Tree_Generation(examples) =
  IF termination_condition(examples)
  THEN leaf(majority_class(examples))
  ELSE LET Best_test = selection_function(examples) IN
       FOR EACH value v OF Best_test
         LET subtree_v = Tree_Generation({e in examples | e.Best_test = v}) IN
       Node(Best_test, {subtree_v})

Definitions:
- selection function: used to partition the training data
- termination condition: determines when to stop partitioning
- pruning algorithm: attempts to prevent overfitting
38. Selection Measure: the Critical Step
The basic approach to selecting an attribute is to examine each attribute and evaluate its likelihood of improving the overall decision performance of the tree. The most widely used node-splitting evaluation functions work by reducing the degree of randomness, or "impurity", in the current node: the entropy function and information gain (C4.5).
- ID3 and C4.5 branch on every value and use an entropy minimisation heuristic to select the best attribute.
- CART branches on all values or on one value only, and uses entropy minimisation or the Gini function.
- GIDDY formulates a test by branching on a subset of attribute values (selection by entropy minimisation)
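As a sketch of the entropy-based selection measure, the following Python computes the information gain of splitting on Outlook, using the Play Tennis table from the example slide (this illustrative helper is not from the slides themselves):

```python
from math import log2

def entropy(labels):
    """Entropy of a list of class labels."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * log2(p) for p in probs if p > 0)

def info_gain(attr_vals, labels):
    """Entropy(S) minus the weighted entropy of the subsets induced by the attribute."""
    n = len(labels)
    remainder = 0.0
    for v in set(attr_vals):
        subset = [l for a, l in zip(attr_vals, labels) if a == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Outlook column and Play Tennis labels for days 1..14.
outlook = ["Sunny", "Sunny", "Overcast", "Rain", "Rain", "Rain", "Overcast",
           "Sunny", "Sunny", "Rain", "Sunny", "Overcast", "Overcast", "Rain"]
play = ["No", "No", "Yes", "Yes", "Yes", "No", "Yes",
        "No", "Yes", "Yes", "Yes", "Yes", "Yes", "No"]
print(round(info_gain(outlook, play), 3))  # 0.247
```

Outlook has the highest gain of the four attributes, which is why it ends up at the root of the induced tree.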
39. Overfitting
- Consider the error of hypothesis h over
  - the training data: error_train(h)
  - the entire distribution D of the data: error_D(h)
- Hypothesis h overfits the training data if there is an alternative hypothesis h' such that
  - error_train(h) < error_train(h')
  - error_D(h) > error_D(h')
40. Preventing Overfitting
- Problem: we don't want these algorithms to fit to noise
- Reduced-error pruning:
  - Break the samples into a training set and a test set. The tree is induced completely on the training set.
  - Working backwards from the bottom of the tree, the subtree starting at each nonterminal node is examined.
  - If the error rate on the test cases improves by pruning it, the subtree is removed. The process continues until no improvement can be made by pruning a subtree.
  - The error rate of the final tree on the test cases is used as an estimate of the true error rate.
41. Decision Tree Pruning

physician fee freeze = n:
|   adoption of the budget resolution = y: democrat (151.0)
|   adoption of the budget resolution = u: democrat (1.0)
|   adoption of the budget resolution = n:
|   |   education spending = n: democrat (6.0)
|   |   education spending = y: democrat (9.0)
|   |   education spending = u: republican (1.0)
physician fee freeze = y:
|   synfuels corporation cutback = n: republican (97.0/3.0)
|   synfuels corporation cutback = u: republican (4.0)
|   synfuels corporation cutback = y:
|   |   duty free exports = y: democrat (2.0)
|   |   duty free exports = u: republican (1.0)
|   |   duty free exports = n:
|   |   |   education spending = n: democrat (5.0/2.0)
|   |   |   education spending = y: republican (13.0/2.0)
|   |   |   education spending = u: democrat (1.0)
physician fee freeze = u:
|   water project cost sharing = n: democrat (0.0)
|   water project cost sharing = y: democrat (4.0)
|   water project cost sharing = u:
|   |   mx missile = n: republican (0.0)
|   |   mx missile = y: democrat (3.0/1.0)
|   |   mx missile = u: republican (2.0)

Simplified decision tree:

physician fee freeze = n: democrat (168.0/2.6)
physician fee freeze = y: republican (123.0/13.9)
physician fee freeze = u:
|   mx missile = n: democrat (3.0/1.1)
|   mx missile = y: democrat (4.0/2.2)
|   mx missile = u: republican (2.0/1.0)

Evaluation on training data (300 items):

  Before pruning        After pruning
  Size   Errors         Size   Errors     Estimate
  25     8 (2.7%)       7      13 (4.3%)  (6.9%)
42. Evaluation of Classification Systems
- Training set: examples with class values, for learning.
- Test set: examples with class values, for evaluating.
- Evaluation: hypotheses are used to infer the classification of examples in the test set; the inferred classification is compared to the known classification.
- Accuracy: percentage of examples in the test set that are classified correctly.
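The accuracy definition above is a one-liner; the predicted and actual labels below are made up for illustration:

```python
def accuracy(predicted, actual):
    """Fraction of test-set examples whose predicted class matches the known class."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

print(accuracy(["Y", "N", "Y", "Y"], ["Y", "N", "N", "Y"]))  # 0.75
```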
43. Mining Association Rules in Large Databases
- Association rule mining
- Mining single-dimensional Boolean association rules from transactional databases
- From association mining to correlation analysis
44. What Is Association Mining?
- Association rule mining
  - Finding frequent patterns, associations, correlations, or causal structures among sets of items or objects in transaction databases, relational databases, and other information repositories.
- Applications
  - Basket data analysis, cross-marketing, catalogue design, loss-leader analysis, clustering, classification, etc.
- Examples
  - Rule form: Body => Head [support, confidence]
  - buys(x, "diapers") => buys(x, "beers") [0.5%, 60%]
  - major(x, "CS") ^ takes(x, "DB") => grade(x, "A") [1%, 75%]
45. Association Rule: Basic Concepts
- Given: (1) a database of transactions, (2) each transaction is a list of items (purchased by a customer in a visit)
- Find: all rules that correlate the presence of one set of items with that of another set of items
  - E.g., 98% of people who purchase tires and auto accessories also get automotive services done
- Applications
  - ? => Maintenance Agreement (what should the store do to boost Maintenance Agreement sales?)
  - Home Electronics => ? (what other products should the store stock up on?)
  - Attached mailing in direct marketing
  - Detecting "ping-ponging" of patients, faulty "collisions"
46. Rule Measures: Support and Confidence
- Find all the rules X ^ Y => Z with minimum confidence and support
  - support, s: probability that a transaction contains {X, Y, Z}
  - confidence, c: conditional probability that a transaction having {X, Y} also contains Z
- (Figure: Venn diagram of customers who buy diapers, customers who buy beer, and customers who buy both)
- With minimum support 50% and minimum confidence 50%, we have
  - A => C (50%, 66.6%)
  - C => A (50%, 100%)
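The two measures can be computed directly from a transaction list. The four transactions below are a commonly used illustration (they are an assumption, since the slide's database is not shown); they reproduce the A => C numbers:

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """supp(antecedent union consequent) / supp(antecedent)."""
    return (support(transactions, antecedent | consequent)
            / support(transactions, antecedent))

D = [{"A", "B", "C"}, {"A", "C"}, {"A", "D"}, {"B", "E", "F"}]
print(support(D, {"A", "C"}))                 # 0.5  (50%)
print(round(confidence(D, {"A"}, {"C"}), 3))  # 0.667 (66.6%)
```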
47. Support
- Support: a measure of the frequency with which an itemset occurs in a database:
  supp(A) = (# records that contain A) / m
  where m is the number of records.
- If an itemset has support higher than some specified threshold, we say that the itemset is supported or frequent (some authors use the term "large"). The support threshold is normally set reasonably low, say 1%.
48. Confidence
- Confidence: a measure, expressed as a ratio, of the support of an association rule compared to the support of its antecedent:
  conf(A => B) = supp(A u B) / supp(A)
- We say that we are confident in a rule if its confidence exceeds some threshold (normally set reasonably high, say 80%).
49. Association Rule Mining: A Road Map
- Boolean vs. quantitative associations (based on the types of values handled)
  - buys(x, "SQLServer") ^ buys(x, "DMBook") => buys(x, "DBMiner") [0.2%, 60%]
  - age(x, "30..39") ^ income(x, "42..48K") => buys(x, "PC") [1%, 75%]
- Single-dimensional vs. multi-dimensional associations (see the examples above)
- Single-level vs. multiple-level analysis
  - What brands of beers are associated with what brands of diapers?
- Various extensions
  - Correlation, causality analysis
    - Association does not necessarily imply correlation or causality
  - Maxpatterns and closed itemsets
  - Constraints enforced
    - E.g., do small sales (sum < 100) trigger big buys (sum > 1,000)?
50. Mining Association Rules in Large Databases
- Association rule mining
- Mining single-dimensional Boolean association rules from transactional databases
- Mining multilevel association rules from transactional databases
- From association mining to correlation analysis
- Summary
51. Mining Association Rules: An Example
Min. support 50%, min. confidence 50%
- For rule A => C:
  - support = support({A, C}) = 50%
  - confidence = support({A, C}) / support({A}) = 66.6%
- The Apriori principle:
  - Any subset of a frequent itemset must be frequent
52. Mining Frequent Itemsets: the Key Step
- Find the frequent itemsets: the sets of items that have minimum support
  - A subset of a frequent itemset must also be a frequent itemset, i.e., if {A, B} is a frequent itemset, both {A} and {B} must be frequent itemsets
  - Iteratively find frequent itemsets with cardinality from 1 to k (k-itemsets)
- Use the frequent itemsets to generate association rules
53. The Apriori Algorithm
- Join step: Ck is generated by joining Lk-1 with itself
- Prune step: any (k-1)-itemset that is not frequent cannot be a subset of a frequent k-itemset
- Pseudo-code:
    Ck: candidate itemsets of size k
    Lk: frequent itemsets of size k
    L1 = {frequent items}
    for (k = 1; Lk is not empty; k++) do begin
        Ck+1 = candidates generated from Lk
        for each transaction t in the database do
            increment the count of all candidates in Ck+1 that are contained in t
        Lk+1 = candidates in Ck+1 with min_support
    end
    return the union of all Lk
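The pseudo-code above can be turned into a compact runnable sketch. This is an illustrative implementation, not the slides' own code; the example database is the classic four-transaction one:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every frequent itemset (as a frozenset) mapped to its support count."""
    n = len(transactions)
    min_count = min_support * n
    frequent = {}
    # C1: all single items.
    current = {frozenset([item]) for t in transactions for item in t}
    k = 1
    while current:
        # Scan the database and count each candidate.
        counts = {c: sum(c <= t for t in transactions) for c in current}
        level = {c: cnt for c, cnt in counts.items() if cnt >= min_count}
        frequent.update(level)
        # Join step: union pairs of frequent k-itemsets into (k+1)-itemsets.
        k += 1
        sets = list(level)
        joined = {sets[i] | sets[j]
                  for i in range(len(sets)) for j in range(i + 1, len(sets))}
        # Prune step: keep only candidates all of whose (k-1)-subsets are frequent.
        current = {c for c in joined if len(c) == k and
                   all(frozenset(s) in level for s in combinations(c, k - 1))}
    return frequent

D = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]
freq = apriori(D, min_support=0.5)
print(sorted("".join(sorted(s)) for s in freq))
# ['A', 'AC', 'B', 'BC', 'BCE', 'BE', 'C', 'CE', 'E']
```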
54. The Apriori Algorithm: Example
(Figure: a trace of Apriori on an example database D. A first scan of D yields the candidate 1-itemsets C1 and the frequent 1-itemsets L1; joining L1 gives C2, and a second scan yields L2; joining L2 gives C3, and a final scan yields L3.)
55. How to Count Supports of Candidates?
- Why is counting the supports of candidates a problem?
  - The total number of candidates can be huge
  - One transaction may contain many candidates
- Method
  - Candidate itemsets are stored in a hash-tree
  - A leaf node of the hash-tree contains a list of itemsets and counts
  - An interior node contains a hash table
  - A subset function finds all the candidates contained in a transaction
56. Example of Generating Candidates
- L3 = {abc, abd, acd, ace, bcd}
- Self-joining: L3 * L3
  - abcd from abc and abd
  - acde from acd and ace
- Pruning:
  - acde is removed because ade is not in L3
- C4 = {abcd}
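The join and prune steps of this example can be sketched as a small Python function (an illustrative generalisation that joins any two k-itemsets sharing k-1 items):

```python
from itertools import combinations

def apriori_gen(Lk, k):
    """Join frequent k-itemsets into (k+1)-candidates, then prune any
    candidate that has an infrequent k-subset."""
    joined = {a | b for a in Lk for b in Lk if len(a | b) == k + 1}
    return {c for c in joined
            if all(frozenset(s) in Lk for s in combinations(c, k))}

L3 = {frozenset(s) for s in ["abc", "abd", "acd", "ace", "bcd"]}
C4 = apriori_gen(L3, 3)
print(C4)  # only abcd survives; abce and acde are pruned
```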
57. Methods to Improve Apriori's Efficiency
- Hash-based itemset counting: a k-itemset whose corresponding hashing bucket count is below the threshold cannot be frequent
- Transaction reduction: a transaction that does not contain any frequent k-itemset is useless in subsequent scans
- Partitioning: any itemset that is potentially frequent in the DB must be frequent in at least one of the partitions of the DB
- Sampling: mine a subset of the given data with a lower support threshold, plus a method to determine completeness
- Dynamic itemset counting: add new candidate itemsets only when all of their subsets are estimated to be frequent
58. Is Apriori Fast Enough? Performance Bottlenecks
- The core of the Apriori algorithm:
  - Use frequent (k-1)-itemsets to generate candidate frequent k-itemsets
  - Use database scans and pattern matching to collect counts for the candidate itemsets
- The bottleneck of Apriori: candidate generation
  - Huge candidate sets:
    - 10^4 frequent 1-itemsets will generate 10^7 candidate 2-itemsets
    - To discover a frequent pattern of size 100, e.g., {a1, a2, ..., a100}, one needs to generate 2^100, roughly 10^30, candidates
  - Multiple scans of the database:
    - Needs (n + 1) scans, where n is the length of the longest pattern
59. Quantitative Association Rules
- Numeric attributes are dynamically discretised
  - such that the confidence or compactness of the rules mined is maximised
- 2-D quantitative association rules: Aquan1 ^ Aquan2 => Acat
- Cluster adjacent association rules to form more general rules using a 2-D grid
- Example:
  age(X, "30-34") ^ income(X, "24K - 48K") => buys(X, "high resolution TV")
60. Mining Association Rules in Large Databases
- Association rule mining
- Mining single-dimensional Boolean association rules from transactional databases
- From association mining to correlation analysis
61. Interestingness Measurements
- Objective measures
  - Two popular measurements: support and confidence
- Subjective measures (Silberschatz & Tuzhilin, KDD'95)
  - A rule (pattern) is interesting if
    - it is unexpected (surprising to the user), and/or
    - actionable (the user can do something with it)
62. Criticism of Support and Confidence
- Example 1 (Aggarwal & Yu, PODS'98)
  - Among 5000 students:
    - 3000 play basketball
    - 3750 eat cereal
    - 2000 both play basketball and eat cereal
  - "play basketball => eat cereal [40%, 66.7%]" is misleading, because the overall percentage of students eating cereal is 75%, which is higher than 66.7%
  - "play basketball => not eat cereal [20%, 33.3%]" is far more accurate, although it has lower support and confidence
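The lift measure introduced on the next slides makes this precise; computed on the numbers above, the misleading rule has lift below 1 (negative correlation):

```python
def lift(n, n_a, n_b, n_ab):
    """lift(A => B) = P(A and B) / (P(A) * P(B)), from raw counts."""
    return (n_ab / n) / ((n_a / n) * (n_b / n))

# basketball => cereal: 2000 of 5000 do both, yet lift < 1.
print(round(lift(5000, 3000, 3750, 2000), 2))  # 0.89
# basketball => not cereal: 1000 play and don't eat cereal, 1250 don't eat cereal.
print(round(lift(5000, 3000, 1250, 1000), 2))  # 1.33
```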
63. Criticism of Support and Confidence (cont.)
- Example 2
  - X and Y are positively correlated; X and Z are negatively correlated
  - yet the support and confidence of X => Z dominate
- We need a measure of dependent or correlated events
- P(B|A)/P(B) is also called the lift of the rule A => B
64. Other Interestingness Measures: Interest
- Interest (correlation, lift): P(A and B) / (P(A) P(B))
  - takes both P(A) and P(B) into consideration
  - P(A and B) = P(A) P(B) if A and B are independent events
  - A and B are negatively correlated if the value is less than 1; otherwise A and B are positively correlated
65. Good Reference
- More on these topics and others related to KDD and data mining:
  http://www.netnam.vn/unescocourse/knowlegde/know_frm.htm