
**Data Mining Classification and Prediction** Presentation Transcript

1.Data Mining: Concepts and Techniques

2.Classification and Prediction

What is classification? What is prediction?

Issues regarding classification and prediction

Classification by decision tree induction

Bayesian Classification

Classification by backpropagation

Classification based on concepts from association rule mining

Other Classification Methods

Prediction

Classification accuracy

Summary

3.Classification vs. Prediction

Classification:

predicts categorical class labels

classifies data by constructing a model from the training set and the values (class labels) of a classifying attribute, then uses the model to classify new data

Prediction:

models continuous-valued functions, i.e., predicts unknown or missing values

Typical Applications

credit approval

target marketing

medical diagnosis

treatment effectiveness analysis

4.Classification—A Two-Step Process

Model construction: describing a set of predetermined classes

Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute

The set of tuples used for model construction: training set

The model is represented as classification rules, decision trees, or mathematical formulae

Model usage: for classifying future or unknown objects

Estimate accuracy of the model

The known label of test sample is compared with the classified result from the model

Accuracy rate is the percentage of test set samples that are correctly classified by the model

Test set is independent of training set, otherwise over-fitting will occur
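The accuracy-rate estimate described above can be sketched as a simple comparison over a held-out test set (the labels here are hypothetical):

```python
def accuracy(true_labels, predicted_labels):
    """Percentage of test-set samples correctly classified by the model."""
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return 100.0 * correct / len(true_labels)

# Hypothetical test-set labels vs. model output: 3 of 4 correct
print(accuracy(["yes", "no", "yes", "yes"], ["yes", "no", "no", "yes"]))  # 75.0
```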

5.Classification Process (1): Model Construction

6.Classification Process (2): Use the Model in Prediction

7.Supervised vs. Unsupervised Learning

Supervised learning (classification)

Supervision: The training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations

New data is classified based on the training set

Unsupervised learning (clustering)

The class labels of the training data are unknown

Given a set of measurements, observations, etc. with the aim of establishing the existence of classes or clusters in the data

8.Classification and Prediction


9.Issues regarding classification and prediction (1): Data Preparation

Data cleaning

Preprocess data in order to reduce noise and handle missing values

Relevance analysis (feature selection)

Remove the irrelevant or redundant attributes

Data transformation

Generalize and/or normalize data

10.Issues regarding classification and prediction (2): Evaluating Classification Methods

Predictive accuracy

Speed and scalability

time to construct the model

time to use the model

Robustness

handling noise and missing values

Scalability

efficiency in disk-resident databases

Interpretability:

understanding and insight provided by the model

Goodness of rules

decision tree size

compactness of classification rules

11.Classification and Prediction


12.Classification by Decision Tree Induction

Decision tree

A flow-chart-like tree structure

Internal node denotes a test on an attribute

Branch represents an outcome of the test

Leaf nodes represent class labels or class distribution

Decision tree generation consists of two phases

Tree construction

At start, all the training examples are at the root

Partition examples recursively based on selected attributes

Tree pruning

Identify and remove branches that reflect noise or outliers

Use of decision tree: Classifying an unknown sample

Test the attribute values of the sample against the decision tree

13.Training Dataset

14.Output: A Decision Tree for “buys_computer”

15.Algorithm for Decision Tree Induction

Basic algorithm (a greedy algorithm)

Tree is constructed in a top-down recursive divide-and-conquer manner

At start, all the training examples are at the root

Attributes are categorical (if continuous-valued, they are discretized in advance)

Examples are partitioned recursively based on selected attributes

Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)

Conditions for stopping partitioning

All samples for a given node belong to the same class

There are no remaining attributes for further partitioning – majority voting is employed for classifying the leaf

There are no samples left
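The basic greedy algorithm above can be sketched as an ID3-style recursion, using information gain as the selection measure; the attribute names in the usage below are illustrative:

```python
import math
from collections import Counter

def entropy(labels):
    """Expected information needed to classify a sample in this set."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Reduction in entropy obtained by splitting on attr."""
    n = len(labels)
    split = {}
    for row, lab in zip(rows, labels):
        split.setdefault(row[attr], []).append(lab)
    remainder = sum(len(s) / n * entropy(s) for s in split.values())
    return entropy(labels) - remainder

def build_tree(rows, labels, attrs):
    # Stopping conditions from the slide
    if len(set(labels)) == 1:          # all samples in the same class
        return labels[0]
    if not attrs:                      # no attributes left: majority voting
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, labels, a))
    branches = {}
    for value in {row[best] for row in rows}:
        idx = [i for i, row in enumerate(rows) if row[best] == value]
        branches[value] = build_tree([rows[i] for i in idx],
                                     [labels[i] for i in idx],
                                     [a for a in attrs if a != best])
    return (best, branches)
```

For example, `build_tree([{"outlook": "sunny"}, {"outlook": "rain"}], ["no", "yes"], ["outlook"])` splits on `outlook` and returns pure leaves.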

16.Attribute Selection Measure

Information gain (ID3/C4.5)

All attributes are assumed to be categorical

Can be modified for continuous-valued attributes

Gini index (IBM IntelligentMiner)

All attributes are assumed continuous-valued

Assume there exist several possible split values for each attribute

May need other tools, such as clustering, to get the possible split values

Can be modified for categorical attributes

17.Information Gain (ID3/C4.5)

Select the attribute with the highest information gain

Assume there are two classes, P and N

Let the set of examples S contain p elements of class P and n elements of class N
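With p elements of class P and n elements of class N, the expected information I(p, n) = -(p/(p+n))·log2(p/(p+n)) - (n/(p+n))·log2(n/(p+n)) can be computed as below; the 9/5 split is the classic play-tennis example:

```python
import math

def info(p, n):
    """Expected information I(p, n) for a set with p positives and n negatives."""
    total = p + n
    result = 0.0
    for count in (p, n):
        if count:  # skip empty classes (0 * log 0 is taken as 0)
            frac = count / total
            result -= frac * math.log2(frac)
    return result

print(round(info(9, 5), 3))  # 0.94
```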

18.Information Gain in Decision Tree Induction

19.Attribute Selection by Information Gain Computation

20.Gini Index (IBM IntelligentMiner)
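For comparison, the Gini index on a set's class counts uses the standard definition gini(T) = 1 - Σ p_j², where p_j is the relative frequency of class j:

```python
def gini(class_counts):
    """Gini index: 1 minus the sum of squared class-probability fractions."""
    total = sum(class_counts)
    return 1.0 - sum((c / total) ** 2 for c in class_counts)

print(round(gini([9, 5]), 3))  # 0.459 for the same 9/5 split
```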

21.Extracting Classification Rules from Trees

22.Avoid Overfitting in Classification

The generated tree may overfit the training data

Too many branches, some may reflect anomalies due to noise or outliers

Results in poor accuracy for unseen samples

Two approaches to avoid overfitting

Prepruning: Halt tree construction early—do not split a node if this would result in the goodness measure falling below a threshold

Difficult to choose an appropriate threshold

Postpruning: Remove branches from a “fully grown” tree—get a sequence of progressively pruned trees

Use a set of data different from the training data to decide which is the “best pruned tree”

23.Approaches to Determine the Final Tree Size

Separate training (2/3) and testing (1/3) sets

Use cross validation, e.g., 10-fold cross validation

Use all the data for training

but apply a statistical test (e.g., chi-square) to estimate whether expanding or pruning a node may improve the entire distribution

Use minimum description length (MDL) principle:

halting growth of the tree when the encoding is minimized
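The k-fold splitting mentioned above (e.g., 10-fold cross validation) can be sketched as:

```python
def k_fold_splits(n_samples, k=10):
    """Yield (train_indices, test_indices) pairs for k-fold cross validation.

    Each sample appears in exactly one test fold; fold sizes differ by
    at most one when k does not divide n_samples evenly.
    """
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        size = fold_size + (1 if fold < remainder else 0)
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        start += size
        yield train, test
```

A model would be trained on each `train` split and scored on the corresponding `test` split, averaging the k accuracy estimates.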

24.Enhancements to basic decision tree induction

Allow for continuous-valued attributes

Dynamically define new discrete-valued attributes that partition the continuous attribute value into a discrete set of intervals

Handle missing attribute values

Assign the most common value of the attribute

Assign probability to each of the possible values

Attribute construction

Create new attributes based on existing ones that are sparsely represented

This reduces fragmentation, repetition, and replication
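Of the two missing-value strategies above, the "most common value" case can be sketched as follows (the `None` sentinel and toy values are assumptions):

```python
from collections import Counter

def fill_missing_with_mode(values, missing=None):
    """Assign the most common observed value of the attribute to missing entries."""
    mode = Counter(v for v in values if v != missing).most_common(1)[0][0]
    return [mode if v == missing else v for v in values]

print(fill_missing_with_mode(["a", "b", "a", None]))  # ['a', 'b', 'a', 'a']
```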

25.Classification in Large Databases

Classification—a classical problem extensively studied by statisticians and machine learning researchers

Scalability: Classifying data sets with millions of examples and hundreds of attributes with reasonable speed

Why decision tree induction in data mining?

relatively faster learning speed (than other classification methods)

convertible to simple and easy to understand classification rules

can use SQL queries for accessing databases

comparable classification accuracy with other methods

26.Scalable Decision Tree Induction Methods in Data Mining Studies

SLIQ (EDBT’96 — Mehta et al.)

builds an index for each attribute; only the class list and the current attribute list reside in memory

SPRINT (VLDB’96 — J. Shafer et al.)

constructs an attribute list data structure

PUBLIC (VLDB’98 — Rastogi & Shim)

integrates tree splitting and tree pruning: stop growing the tree earlier

RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti)

separates the scalability aspects from the criteria that determine the quality of the tree

builds an AVC-list (attribute, value, class label)

27.Data Cube-Based Decision-Tree Induction

Integration of generalization with decision-tree induction (Kamber et al’97).

Classification at primitive concept levels

E.g., precise temperature, humidity, outlook, etc.

Low-level concepts, scattered classes, bushy classification-trees

Semantic interpretation problems.

Cube-based multi-level classification

Relevance analysis at multi-levels.

Information-gain analysis with dimension + level.

28.Presentation of Classification Results

29. Classification and Prediction


30.Bayesian Classification: Why?

Probabilistic learning: Calculate explicit probabilities for hypotheses, among the most practical approaches to certain types of learning problems

Incremental: Each training example can incrementally increase/decrease the probability that a hypothesis is correct. Prior knowledge can be combined with observed data.

Probabilistic prediction: Predict multiple hypotheses, weighted by their probabilities

Standard: Even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured

31.Bayesian Theorem

32.Naïve Bayes Classifier (I)

33.Naive Bayesian Classifier (II)

34.Bayesian classification

35.Estimating a-posteriori probabilities

36.Naïve Bayesian Classification

37.Play-tennis example: estimating P(xi|C)

38.Play-tennis example: classifying X
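The estimation and classification steps of the two play-tennis slides above can be sketched as a frequency-count naïve Bayes; the toy data and attribute names are illustrative:

```python
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    """Estimate P(C) and P(x_i | C) by relative frequencies (no smoothing)."""
    class_counts = Counter(labels)
    cond_counts = defaultdict(Counter)  # (attribute, class) -> value counts
    for row, lab in zip(rows, labels):
        for attr, value in row.items():
            cond_counts[(attr, lab)][value] += 1
    return class_counts, cond_counts

def classify(x, class_counts, cond_counts):
    """Pick the class maximizing P(C) * product of P(x_i | C)."""
    total = sum(class_counts.values())
    best, best_score = None, -1.0
    for c, count in class_counts.items():
        score = count / total  # prior P(C)
        for attr, value in x.items():
            score *= cond_counts[(attr, c)][value] / count  # P(x_i | C)
        if score > best_score:
            best, best_score = c, score
    return best
```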

39.The independence hypothesis…

40.Bayesian Belief Networks (I)

41.Bayesian Belief Networks (II)

A Bayesian belief network allows a subset of the variables to be conditionally independent

A graphical model of causal relationships

Several cases of learning Bayesian belief networks

Given both network structure and all the variables: easy

Given network structure but only some variables

When the network structure is not known in advance

42.Classification and Prediction


43.Neural Networks

Advantages

prediction accuracy is generally high

robust, works when training examples contain errors

output may be discrete, real-valued, or a vector of several discrete or real-valued attributes

fast evaluation of the learned target function

Criticism

long training time

difficult to understand the learned function (weights)

not easy to incorporate domain knowledge

44.A Neuron

45.Network Training

The ultimate objective of training

obtain a set of weights that classifies almost all the tuples in the training data correctly

Steps

Initialize weights with random values

Feed the input tuples into the network one by one

For each unit

Compute the net input to the unit as a linear combination of all the inputs to the unit

Compute the output value using the activation function

Compute the error

Update the weights and the bias
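The steps above can be sketched for a single sigmoid unit trained with the delta rule; the learning rate, epoch count, and weight-initialization range are illustrative assumptions:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, targets, epochs=5000, lr=0.5, seed=0):
    """Per the slide: random weights, feed tuples one by one, compute the
    net input (linear combination), the activation output, the error term,
    then update the weights and the bias."""
    rng = random.Random(seed)
    n = len(samples[0])
    weights = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    bias = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            net = sum(w * xi for w, xi in zip(weights, x)) + bias  # net input
            out = sigmoid(net)                                     # activation
            err = (t - out) * out * (1 - out)                      # error term
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias
```

For example, training on the four input/output pairs of logical AND yields a unit whose output exceeds 0.5 only for input (1, 1).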

46.Multi-Layer Perceptron

47.Network Pruning and Rule Extraction

48.Classification and Prediction


49.Association-Based Classification

Several methods for association-based classification

ARCS: Quantitative association mining and clustering of association rules (Lent et al’97)

Beats C4.5 mainly in scalability, and also in accuracy

Associative classification: (Liu et al’98)

It mines high support and high confidence rules in the form of “cond_set => y”, where y is a class label

CAEP (Classification by aggregating emerging patterns) (Dong et al’99)

Emerging patterns (EPs): the itemsets whose support increases significantly from one class to another

Mine EPs based on minimum support and growth rate

50.Classification and Prediction


51.Other Classification Methods

k-nearest neighbor classifier

case-based reasoning

Genetic algorithm

Rough set approach

Fuzzy set approaches

52.Instance-Based Methods

53.The k-Nearest Neighbor Algorithm

54.Discussion on the k-NN Algorithm
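The k-NN algorithm of the two slides above can be sketched as a majority vote over the k nearest training points in Euclidean space:

```python
import math
from collections import Counter

def knn_classify(query, points, labels, k=3):
    """Return the majority class among the k training points nearest to query."""
    dists = sorted(
        (math.dist(query, p), lab) for p, lab in zip(points, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]
```

With toy points clustered near the origin and near (5, 5), a query at (0, 0) is assigned the origin cluster's label.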

55.Case-Based Reasoning

56.Case-Based Reasoning (continued)

Also uses lazy evaluation and analysis of similar instances

Difference: Instances are not “points in a Euclidean space”

Example: Water faucet problem in CADET (Sycara et al’92)

Methodology

Instances represented by rich symbolic descriptions (e.g., function graphs)

Multiple retrieved cases may be combined

Tight coupling between case retrieval, knowledge-based reasoning, and problem solving

Research issues

Indexing based on a syntactic similarity measure; on failure, backtracking and adapting to additional cases

57.Genetic Algorithms

GA: based on an analogy to biological evolution

Each rule is represented by a string of bits

An initial population is created consisting of randomly generated rules

e.g., IF A1 AND NOT A2 THEN C2 can be encoded as the bit string 100

Based on the notion of survival of the fittest, a new population is formed consisting of the fittest rules and their offspring

The fitness of a rule is represented by its classification accuracy on a set of training examples

Offspring are generated by crossover and mutation
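The bit-string operations above can be sketched as follows; the 3-bit encoding mirrors the slide's "IF A1 AND NOT A2 THEN C2" -> "100" example:

```python
import random

def crossover(rule_a, rule_b, point):
    """Single-point crossover: swap the tails of two bit-string rules."""
    return (rule_a[:point] + rule_b[point:],
            rule_b[:point] + rule_a[point:])

def mutate(rule, rng=random):
    """Flip one randomly chosen bit of a bit-string rule."""
    i = rng.randrange(len(rule))
    flipped = "1" if rule[i] == "0" else "0"
    return rule[:i] + flipped + rule[i + 1:]

print(crossover("100", "011", 1))  # ('111', '000')
```

A full GA would repeatedly score rules by training-set accuracy (fitness) and keep the fittest rules and their offspring.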

58.Rough Set Approach

Rough sets are used to approximately or “roughly” define equivalence classes

A rough set for a given class C is approximated by two sets: a lower approximation (certain to be in C) and an upper approximation (cannot be described as not belonging to C)

Finding the minimal subsets (reducts) of attributes (for feature reduction) is NP-hard but a discernibility matrix is used to reduce the computation intensity

59.Fuzzy Set Approaches

Fuzzy logic uses truth values between 0.0 and 1.0 to represent the degree of membership (such as using fuzzy membership graph)

Attribute values are converted to fuzzy values

e.g., income is mapped into the discrete categories {low, medium, high} with fuzzy values calculated

For a given new sample, more than one fuzzy value may apply

Each applicable rule contributes a vote for membership in the categories

Typically, the truth values for each predicted category are summed
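The income example above can be sketched with triangular membership functions; the breakpoints (20k, 40k, 60k) are illustrative assumptions, not values from the slide:

```python
def fuzzy_income(income):
    """Map a crisp income to fuzzy memberships in {low, medium, high}."""
    def tri(x, a, b, c):
        # Triangular membership: rises over [a, b], falls over [b, c]
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    return {
        "low": 1.0 if income <= 20000 else tri(income, 20000, 20000, 40000),
        "medium": tri(income, 20000, 40000, 60000),
        "high": 1.0 if income >= 60000 else tri(income, 40000, 60000, 60000),
    }

print(fuzzy_income(30000))  # an income can belong partly to 'low' AND 'medium'
```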

60.Classification and Prediction


61.What Is Prediction?

Prediction is similar to classification

First, construct a model

Second, use model to predict unknown value

Major method for prediction is regression

Linear and multiple regression

Non-linear regression

Prediction is different from classification

Classification predicts categorical class labels

Prediction models continuous-valued functions
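Simple linear regression, the basic case of the regression methods listed above, can be sketched as a least-squares fit of y = w0 + w1·x:

```python
def linear_regression(xs, ys):
    """Least-squares fit of y = w0 + w1 * x (simple linear regression)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x
    w1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
         sum((x - mean_x) ** 2 for x in xs)
    w0 = mean_y - w1 * mean_x
    return w0, w1

print(linear_regression([1, 2, 3, 4], [3, 5, 7, 9]))  # recovers y = 1 + 2x
```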

62.Predictive Modeling in Databases

Predictive modeling: Predict data values or construct generalized linear models based on the database data.

One can only predict value ranges or category distributions

Method outline:

Minimal generalization

Attribute relevance analysis

Generalized linear model construction

Prediction

Determine the major factors which influence the prediction

Data relevance analysis: uncertainty measurement, entropy analysis, expert judgement, etc.

Multi-level prediction: drill-down and roll-up analysis

63.Regression Analysis and Log-Linear Models in Prediction

64. Locally Weighted Regression

65.Prediction: Numerical Data

66.Prediction: Categorical Data

67.Classification and Prediction


68.Classification Accuracy: Estimating Error Rates

69.Boosting and Bagging

Boosting increases classification accuracy

Applicable to decision trees or Bayesian classifier

Learn a series of classifiers, where each classifier in the series pays more attention to the examples misclassified by its predecessor

Boosting requires only linear time and constant space
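The boosting idea above (each classifier pays more attention to examples misclassified by its predecessor) can be sketched as an AdaBoost-style loop; the 1-D threshold-stump weak learner is an illustrative choice:

```python
import math

def boost(xs, ys, rounds=5):
    """AdaBoost-style sketch on 1-D data with threshold stumps.

    Labels are +1 / -1; each round the best stump is chosen under the
    current weights, then misclassified examples are up-weighted.
    """
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, sign)
    for _ in range(rounds):
        best = None
        for thr in xs:
            for sign in (1, -1):
                preds = [sign if x >= thr else -sign for x in xs]
                err = sum(w for w, p, y in zip(weights, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, sign, preds)
        err, thr, sign, preds = best
        err = max(err, 1e-10)
        if err >= 0.5:          # weak learner no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)   # classifier weight
        ensemble.append((alpha, thr, sign))
        # Up-weight misclassified examples, down-weight correct ones
        weights = [w * math.exp(-alpha * y * p)
                   for w, y, p in zip(weights, ys, preds)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps in the ensemble."""
    score = sum(a * (s if x >= t else -s) for a, t, s in ensemble)
    return 1 if score >= 0 else -1
```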

70.Boosting Technique (II) — Algorithm

71.Summary

Classification is an extensively studied problem (mainly in statistics, machine learning & neural networks)

Classification is probably one of the most widely used data mining techniques with a lot of extensions

Scalability is still an important issue for database applications: thus combining classification with database techniques should be a promising topic

Research directions: classification of non-relational data, e.g., text, spatial, multimedia, etc..
