
Data Warehousing & Data Mining

Wolf-Tilo Balke, Silviu Homoceanu
Institut für Informationssysteme, Technische Universität Braunschweig
http://www.ifis.cs.tu-bs.de

9. Business Intelligence

9.1 Business Intelligence Overview
9.2 Principles of Data Mining
9.3 Association Rule Mining


• What is Business Intelligence (BI)?

– The process, technologies, and tools needed to turn data into information, information into knowledge, and knowledge into plans that drive profitable business action

– BI comprises data warehousing, business analytic tools, and content/knowledge management


9.1 BI Overview

• Typical BI applications are

– Customer segmentation
– Propensity to buy (customer disposition to buy)
– Customer profitability
– Fraud detection
– Customer attrition (loss of customers)
– Channel optimization (connecting with the customer)


• Customer segmentation

– What market segments do my customers fall into, and what are their characteristics?
– Personalize customer relationships for higher customer satisfaction and retention


• Propensity to buy

– Which customers are most likely to respond to my promotion?
– Target customers based on their need to increase their loyalty to your product line
• Also, increase campaign profitability by focusing on the customers most likely to buy


• Customer profitability

– What is the lifetime profitability of my customer?
– Make individual business interaction decisions based on the overall profitability of customers


• Fraud detection

– How can I tell which transactions are likely to be fraudulent?
• If your wife has just proposed to increase your life insurance policy, you should probably order pizza for a while
– Quickly determine fraud and take immediate action to minimize damage


• Customer attrition

– Which customers are at risk of leaving?
– Prevent loss of high-value customers and let go of lower-value customers

• Channel optimization

– What is the best channel to reach my customers in each segment?
– Interact with customers based on their preferences and your need to manage cost


• BI architecture


• Automated decision tools

– Rule-based systems that provide a solution, usually in one functional area, to a specific repetitive management problem in one industry
• E.g., automated loan approval, intelligent price setting

• Business performance management (BPM)

– Based on the balanced scorecard methodology
– A framework for defining, implementing, and managing an enterprise's business strategy by linking objectives with factual measures


• Dashboards

– Provide a comprehensive visual view of corporate performance measures, trends, and exceptions from multiple business areas
• Allow executives to see hot spots in seconds and explore the situation

9.2 Principles of Data Mining

• What is data mining (knowledge discovery in databases)?

– Extraction of interesting (non-trivial, implicit, previously unknown, and potentially useful) information or patterns from data in large databases

• What is not data mining?

– (Deductive) query processing
– Expert systems or small ML/statistical programs


• Data mining applications

– Database analysis and decision support
• Market analysis and management
• Risk analysis and management, e.g., forecasting, customer retention, improved underwriting, quality control, competitive analysis
• Fraud detection and management

– Other applications
• Text mining (news groups, email, documents) and Web analysis (Google Analytics)
• Intelligent query answering


• Market analysis

– Target marketing
• Find clusters of “model” customers who share the same characteristics: interest, income level, spending habits, etc.

– Determine customer purchasing patterns over time
• Conversion of a single to a joint bank account: marriage, etc.

– Cross-market analysis
• Associations/correlations between product sales
• Prediction based on the association information


– Customer profiling
• Data mining can tell you what types of customers buy what products (clustering or classification)

– Identifying customer requirements
• Identifying the best products for different customers
• Use prediction to find what factors will attract new customers

– Provide summary information
• Various multidimensional summary reports
• Statistical summary information (data central tendency and variation)


• Corporate analysis and risk management

– Finance planning and asset evaluation
• Cash flow analysis and prediction
• Contingent claim analysis to evaluate assets
• Cross-sectional and time series analysis (financial-ratio, trend analysis, etc.)

– Resource planning
• Summarize and compare the resources and spending

– Competition
• Monitor competitors and market directions
• Group customers into classes and set a class-based pricing procedure
• Set pricing strategy in a highly competitive market


• Fraud detection and management

– Applications
• Widely used in health care, retail, credit card services, telecommunications (phone card fraud), etc.

– Approach
• Use historical data to build models of fraudulent behavior and use data mining to help identify similar instances

– Examples
• Car insurance: detect a group of people who stage accidents to collect claims
• Money laundering: detect suspicious money transactions (US Treasury's Financial Crimes Enforcement Network)


• Other applications

– Sports
• IBM Advanced Scout analyzed NBA game statistics (shots blocked, assists, and fouls) to gain a competitive advantage for the New York Knicks and Miami Heat

– Astronomy
• JPL and the Palomar Observatory discovered 22 quasars with the help of data mining

– Internet: Web Surf-Aid
• IBM Surf-Aid applies data mining algorithms to Web access logs for market-related pages to discover customer preference and behavior, analyze the effectiveness of Web marketing, improve Web site organization, etc.


• Architecture of DM systems

– Typical components: databases and a data warehouse (fed via ETL/filtering), a database or data warehouse server, a knowledge base, the data mining engine, a pattern evaluation module, and a graphical user interface

• DM functionalities

– Association (correlation and causality)
• Multi-dimensional vs. single-dimensional association
• age(X, “20..29”), income(X, “20..29K”) ⟶ buys(X, “PC”) [support = 2%, confidence = 60%]
• contains(T, “computer”) ⟶ contains(T, “software”) [1%, 75%]

– Classification and prediction
• Finding models (functions) that describe and distinguish classes or concepts for future predictions, e.g., classify countries based on climate, or classify cars based on gas mileage
• Presentation: decision tree, classification rules, neural network
• Prediction: predict some unknown or missing numerical values

– Cluster analysis
• Class label is unknown: group data to form new classes, e.g., cluster houses to find distribution patterns
• Clustering is based on the principle of maximizing the intra-class similarity and minimizing the inter-class similarity

– Outlier analysis
• Outlier: a data object that does not comply with the general behavior of the data
• Can be considered as noise or exceptions, but is quite useful in fraud detection and rare events analysis

– Trend and evolution analysis
• Trends and deviations can be detected by regression analysis
• Sequential pattern mining, periodicity analysis
• Similarity-based analysis

9.3 Association Rule Mining

• Association rule mining has the objective of finding all co-occurrence relationships (called associations) among data items

– Classical application: market basket data analysis, which aims to discover how items are purchased by customers in a supermarket
• E.g., Cheese ⟶ Wine [support = 10%, confidence = 80%] means that 10% of the customers buy cheese and wine together, and 80% of the customers buying cheese also buy wine


• Basic concepts of association rules

– Let I = {i1, i2, …, im} be a set of items, and let T = {t1, t2, …, tn} be a set of transactions, where each transaction ti is a set of items such that ti ⊆ I

– An association rule is an implication of the form:
X ⟶ Y, where X ⊂ I, Y ⊂ I and X ⋂ Y = ∅


• Association rule mining market basket analysis example

– I: the set of all items sold in a store
• E.g., i1 = Beef, i2 = Chicken, i3 = Cheese, …

– T: the set of transactions
• The content of a customer's basket
• E.g., t1: Beef, Chicken, Milk; t2: Beef, Cheese; t3: Cheese, Boots; t4: …

– An association rule might be
• Beef, Chicken ⟶ Milk, where {Beef, Chicken} is X and {Milk} is Y


• Rules can be weak or strong

– The strength of a rule is measured by its support and confidence

– The support of a rule X ⟶ Y is the percentage of transactions in T that contain X ∪ Y
• Can be seen as an estimate of the probability Pr(X ∪ Y ⊆ ti)
• With n as the number of transactions in T, the support of the rule X ⟶ Y is:

support(X ⟶ Y) = |{i | X ∪ Y ⊆ ti}| / n


– The confidence of a rule X ⟶ Y is the percentage of transactions in T containing X that also contain X ∪ Y
• Can be seen as an estimate of the probability Pr(Y ⊆ ti | X ⊆ ti)

confidence(X ⟶ Y) = |{i | X ∪ Y ⊆ ti}| / |{j | X ⊆ tj}|
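Both measures are straightforward to compute. A minimal Python sketch (not from the slides; the helper names are ours) evaluates them on the seven example baskets used on the following slides:

```python
# Minimal sketch: support and confidence of a rule X -> Y over
# transactions modeled as Python sets.
def support(X, Y, transactions):
    # Fraction of all transactions that contain every item of X and Y
    return sum(1 for t in transactions if X | Y <= t) / len(transactions)

def confidence(X, Y, transactions):
    # Fraction of the transactions containing X that also contain Y
    x_hits = sum(1 for t in transactions if X <= t)
    xy_hits = sum(1 for t in transactions if X | Y <= t)
    return xy_hits / x_hits if x_hits else 0.0

# The seven example baskets used on the following slides
T = [{"Beef", "Chicken", "Milk"}, {"Beef", "Cheese"}, {"Cheese", "Boots"},
     {"Beef", "Chicken", "Cheese"},
     {"Beef", "Chicken", "Clothes", "Cheese", "Milk"},
     {"Clothes", "Chicken", "Milk"}, {"Chicken", "Milk", "Clothes"}]

print(support({"Chicken", "Clothes"}, {"Milk"}, T))     # 3/7 ≈ 0.43
print(confidence({"Chicken", "Clothes"}, {"Milk"}, T))  # 3/3 = 1.0
```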


• How do we interpret support and confidence?

– If support is too low, the rule may just occur due to chance
• Acting on a rule with low support may not be profitable, since it covers too few cases

– If confidence is too low, we cannot reliably predict Y from X

• The objective of mining association rules is to discover all association rules in T that have support and confidence greater than a minimum threshold (minsup, minconf)!


• Finding rules based on support and confidence thresholds

– Let minsup = 30% and minconf = 80%
– Chicken, Clothes ⟶ Milk is valid [sup = 3/7 (42.86%), conf = 3/3 (100%)]
– Clothes ⟶ Milk, Chicken is also valid, and there are more…


Transactions
T1: Beef, Chicken, Milk
T2: Beef, Cheese
T3: Cheese, Boots
T4: Beef, Chicken, Cheese
T5: Beef, Chicken, Clothes, Cheese, Milk
T6: Clothes, Chicken, Milk
T7: Chicken, Milk, Clothes


• This is a rather simplistic view of shopping baskets

– Some important information is not considered, e.g., the quantity of each item purchased, the price paid, …

• There is a large number of rule mining algorithms

– They use different strategies and data structures
– Their resulting sets of rules are all the same
• Given a transaction data set T, a minimum support, and a minimum confidence, the set of association rules existing in T is uniquely determined


• Approaches in association rule mining

– Apriori algorithm
– Mining with multiple minimum supports
– Mining class association rules

9.3 Apriori Algorithm

• The best known mining algorithm is the Apriori algorithm

– Definition: a frequent itemset is an itemset whose support is ≥ minsup

– Two steps:
• Find all frequent itemsets (also called large itemsets)
• Use the frequent itemsets to generate rules

• E.g., a frequent itemset:
– {Chicken, Clothes, Milk} [sup = 3/7]

• And one rule from the frequent itemset:
– Clothes ⟶ Milk, Chicken [sup = 3/7, conf = 3/3]

9.3 Apriori Algorithm: Step 1

• Step 1: frequent itemset generation

– Key idea: the apriori property (downward closure property):
• Any subset of a frequent itemset is also a frequent itemset
• E.g., for minsup = 30%: {Chicken, Clothes, Milk} is frequent in the transactions above (sup = 3/7), and so are all its subsets {Chicken, Clothes}, {Chicken, Milk}, {Clothes, Milk}, {Chicken}, {Clothes}, {Milk}

• Finding frequent items

– Idea:
• Find all 1-item frequent itemsets; then all 2-item frequent itemsets, etc.
• In each iteration k, only consider itemsets that contain a frequent (k−1)-itemset
• Optimization: the algorithm assumes that items are sorted in lexicographic order
• The order is used throughout the algorithm: each itemset {w[1], w[2], …, w[k]} represents a k-itemset w consisting of items w[1], w[2], …, w[k], where w[1] < w[2] < … < w[k] according to the lexicographic order


– Initial step
• Find frequent itemsets of size 1: F1

– Generalization, k ≥ 2
• Ck = candidates of size k: those itemsets of size k that could be frequent, given Fk-1
• Fk = those itemsets that are actually frequent, Fk ⊆ Ck (need to scan the database once)


– Generation of candidates uses Fk-1 as input and returns a superset (the candidates) of the set of all frequent k-itemsets. It has two steps:

• Join step: generate all possible candidate itemsets Ck of length k, e.g., Ik = join(Ak-1, Bk-1) ⟺ Ak-1 = {i1, i2, …, ik-2, ik-1} and Bk-1 = {i1, i2, …, ik-2, i'k-1} and ik-1 < i'k-1; then Ik = {i1, i2, …, ik-2, ik-1, i'k-1}

• Prune step: remove those candidates in Ck that do not respect the downward closure property (i.e., include a non-frequent (k−1)-subset)


– Generation example: F3 = {{1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {1, 3, 5}, {2, 3, 4}}

• Join: {1, 2, 3} and {1, 2, 4} share the prefix {1, 2} and join to {1, 2, 3, 4}; {1, 3, 4} and {1, 3, 5} share the prefix {1, 3} and join to {1, 3, 4, 5}
• After the join, C4 = {{1, 2, 3, 4}, {1, 3, 4, 5}}

• Pruning: {1, 2, 3, 4} is a good candidate, since all its 3-subsets {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4} ∈ F3; {1, 3, 4, 5} is removed from C4, since {3, 4, 5} ∉ F3
• After pruning, C4 = {{1, 2, 3, 4}}

• Finding frequent items, example, minsup = 0.5

– First T scan ({item}:count)
• C1: {1}:2, {2}:3, {3}:3, {4}:1, {5}:3
• F1: {1}:2, {2}:3, {3}:3, {5}:3; {4} has a support of 1/4 < 0.5, so it does not belong to the frequent items
• C2 = prune(join(F1))
join: {1,2}, {1,3}, {1,5}, {2,3}, {2,5}, {3,5}
prune: C2: {1,2}, {1,3}, {1,5}, {2,3}, {2,5}, {3,5} (all their items belong to F1)


TID   Items
T100  1, 3, 4
T200  2, 3, 5
T300  1, 2, 3, 5
T400  2, 5

– Second T scan
• C2: {1,2}:1, {1,3}:2, {1,5}:1, {2,3}:2, {2,5}:3, {3,5}:2
• F2: {1,3}:2, {2,3}:2, {2,5}:3, {3,5}:2
• Join: we could join {1,3} only with {1,4} or {1,5}, but they are not in F2; the only possible join in F2 is {2,3} with {2,5}, resulting in {2,3,5}
• prune({2,3,5}): {2,3}, {2,5}, {3,5} all belong to F2, hence C3: {2,3,5}

– Third T scan
• {2,3,5}:2, so sup({2,3,5}) = 50% and the minsup condition is fulfilled; then F3: {2,3,5}
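The whole level-wise search fits in a short sketch. The following Python illustration is our own condensed version, not the lecture's reference implementation (for brevity, a generic join over all pairs replaces the prefix-based join); it reproduces the hand-computed result on the four transactions above:

```python
# Condensed Apriori sketch: level-wise generation of frequent itemsets.
from itertools import combinations

def apriori(transactions, minsup):
    n = len(transactions)
    items = {i for t in transactions for i in t}
    # F1: frequent 1-itemsets
    freq = {frozenset([i]) for i in items
            if sum(i in t for t in transactions) / n >= minsup}
    result, k = set(freq), 2
    while freq:
        # Join step: unite (k-1)-itemsets that share k-2 items
        candidates = {a | b for a in freq for b in freq if len(a | b) == k}
        # Prune step: drop candidates having an infrequent (k-1)-subset
        candidates = {c for c in candidates
                      if all(frozenset(s) in freq
                             for s in combinations(c, k - 1))}
        # One scan over T counts the surviving candidates
        freq = {c for c in candidates
                if sum(c <= t for t in transactions) / n >= minsup}
        result |= freq
        k += 1
    return result

T = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
print(sorted(map(sorted, apriori(T, 0.5))))
# [[1], [1, 3], [2], [2, 3], [2, 3, 5], [2, 5], [3], [3, 5], [5]]
```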

9.3 Apriori Algorithm: Step 2

• Step 2: generating rules from frequent itemsets

– Frequent itemsets are not the same as association rules
– One more step is needed to generate association rules: for each frequent itemset I and each proper nonempty subset X of I:
• Let Y = I \ X; X ⟶ Y is an association rule if confidence(X ⟶ Y) ≥ minconf, where
support(X ⟶ Y) = |{i | X ∪ Y ⊆ ti}| / n = support(I)
confidence(X ⟶ Y) = |{i | X ∪ Y ⊆ ti}| / |{j | X ⊆ tj}| = support(I) / support(X)
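A sketch of this step in Python (hypothetical helper names; `supports` maps every itemset recorded in step 1, including the subsets of the frequent itemsets, to its support):

```python
# Sketch of rule generation from recorded itemset supports.
from itertools import combinations

def generate_rules(supports, minconf):
    rules = []
    for itemset, sup_i in supports.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for xs in combinations(itemset, r):
                X = frozenset(xs)
                conf = sup_i / supports[X]     # support(I) / support(X)
                if conf >= minconf:
                    rules.append((set(X), set(itemset - X), sup_i, conf))
    return rules

# Supports recorded for the running example (T100..T400)
supports = {frozenset({2}): 0.75, frozenset({3}): 0.75, frozenset({5}): 0.75,
            frozenset({2, 3}): 0.5, frozenset({2, 5}): 0.75,
            frozenset({3, 5}): 0.5, frozenset({2, 3, 5}): 0.5}

for X, Y, s, c in generate_rules(supports, 0.5):
    if X | Y == {2, 3, 5}:                     # the rules derived below
        print(f"{sorted(X)} -> {sorted(Y)} [sup={s:.0%}, conf={c:.0%}]")
```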


• Rule generation example, minconf = 50%

– Suppose {2, 3, 5} is a frequent itemset with sup = 50%, as calculated in step 1
– Proper nonempty subsets: {2, 3}, {2, 5}, {3, 5}, {2}, {3}, {5}, with sup = 50%, 75%, 50%, 75%, 75%, 75%, respectively
– These generate the following association rules:
• 2,3 ⟶ 5, confidence = 100% (sup(I) = 50%; sup({2,3}) = 50%; 50/50 = 1)
• 2,5 ⟶ 3, confidence = 67% (50/75)
• 3,5 ⟶ 2, confidence = 100% (50/50)
• 2 ⟶ 3,5, confidence = 67%
• 3 ⟶ 2,5, confidence = 67%
• 5 ⟶ 2,3, confidence = 67%
– All rules have support = support(I) = 50%


• Rule generation, summary

– In order to obtain X ⟶ Y, we need to know support(I) and support(X)
– All the information required for the confidence computation has already been recorded during itemset generation
• No need to read the transaction data any more
• This step is not as time-consuming as frequent itemset generation


• Apriori algorithm, summary

– If k is the size of the largest itemset, it makes at most k passes over the data (in practice, k is bounded, e.g., by 10)
– The algorithm is very fast: under some conditions, all rules can be found in linear time
– It scales up to large data sets
– The mining exploits sparseness of the data and high minsup and minconf thresholds
– A high minsup threshold makes it impossible to find rules involving rare items in the data; the solution is a mining with multiple minimum supports approach

9.3 Multiple Minimum Supports

• Mining with multiple minimum supports

– A single minimum support assumes that all items in the data are of the same nature and/or have similar frequencies, which is incorrect…
– In practice, some items appear very frequently in the data, while others rarely appear
• E.g., in a supermarket, people buy cooking pans much less frequently than they buy bread and milk


• Rare item problem: if the frequencies of items vary significantly, we encounter two problems

– If minsup is set too high, rules that involve rare items will not be found
– To find rules that involve both frequent and rare items, minsup has to be set very low; this may cause a combinatorial explosion, because the frequent items will be associated with one another in all possible ways


• Multiple minimum supports

– Each item can have a minimum item support
• Different support requirements for different rules

– To prevent very frequent items and very rare items from appearing in the same itemset S, we introduce a support difference constraint (φ):
• max_{i∈S} sup(i) − min_{i∈S} sup(i) ≤ φ, where 0 ≤ φ ≤ 1 is user specified
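As a small sketch (assumed names), the constraint is a one-line check over the per-item supports:

```python
# Sketch: support difference constraint for an itemset S, given a dict
# of per-item supports and the user-specified threshold phi.
def within_support_difference(S, sup, phi):
    values = [sup[i] for i in S]
    return max(values) - min(values) <= phi
```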


• Minsup of a rule

– Let MIS(i) be the minimum item support value of item i. The minsup of a rule R is the lowest MIS value of the items in the rule:
• Rule R: i1, i2, …, ik ⟶ ik+1, …, ir satisfies its minimum support if its actual support is ≥ min(MIS(i1), MIS(i2), …, MIS(ir))
• E.g., with user-specified MIS values MIS(bread) = 2%, MIS(shoes) = 0.1%, MIS(clothes) = 0.2%:
clothes ⟶ bread [sup = 0.15%, conf = 70%] does not satisfy its minsup
clothes ⟶ shoes [sup = 0.15%, conf = 70%] satisfies its minsup
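A minimal check in code (assumed names; MIS values and supports written as fractions):

```python
# Sketch: a rule satisfies its minsup if its support reaches the lowest
# MIS value among all items occurring in the rule.
def satisfies_minsup(rule_items, rule_support, MIS):
    return rule_support >= min(MIS[i] for i in rule_items)

MIS = {"bread": 0.02, "shoes": 0.001, "clothes": 0.002}
print(satisfies_minsup({"clothes", "bread"}, 0.0015, MIS))  # False (0.15% < 0.2%)
print(satisfies_minsup({"clothes", "shoes"}, 0.0015, MIS))  # True  (0.15% >= 0.1%)
```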


• The downward closure property is not valid anymore

– E.g., consider four items 1, 2, 3 and 4 in a database, with minimum item supports
• MIS(1) = 10%, MIS(2) = 20%, MIS(3) = 5%, MIS(4) = 6%
• {1, 2} with a support of 9% is infrequent, since min(10%, 20%) > 9%, but {1, 2, 3} could be frequent if it had a support of, e.g., 7%
• If applied, downward closure eliminates {1, 2}, so that {1, 2, 3} is never evaluated


• How do we solve the downward closure property problem?

– Sort all items in I according to their MIS values (make it a total order)
• The order is used throughout the algorithm in each itemset

– Each itemset w is then of the following form:
• {w[1], w[2], …, w[k]}, consisting of items w[1], w[2], …, w[k], where MIS(w[1]) ≤ MIS(w[2]) ≤ … ≤ MIS(w[k])


• Multiple minimum supports mining is an extension of the Apriori algorithm

– Step 1: frequent itemset generation
• Initial step: produce the seeds for generating candidate itemsets
• Candidate generation for k = 2
• Generalization for k > 2, where the pruning step differs from the Apriori algorithm

– Step 2: rule generation

9.3 Multiple Minimum Supports: Step 1

• Step 1: frequent itemset generation

– Initial step (a sketch in code follows below)
• Sort I according to the MIS value of each item; let M represent the sorted items
• Scan the data once to record the support count of each item
• Go through the items in M to find the first item i that meets MIS(i); insert it into a list of seeds L
• For each subsequent item j in M (after i), if sup(j) ≥ MIS(i), then insert j into L
• Calculate F1 from L, based on the MIS of each item in L
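A sketch of this initial step (assumed helper name; it reproduces the example worked through below):

```python
# Sketch of the MSapriori initial step: build the seed list L and the
# frequent 1-itemsets F1 from per-item supports and MIS values.
def ms_initial_step(sup, MIS):
    M = sorted(sup, key=lambda i: MIS[i])      # items sorted by MIS value
    L, first = [], None
    for i in M:
        if first is None:
            if sup[i] >= MIS[i]:               # first item meeting its own MIS
                first = i
                L.append(i)
        elif sup[i] >= MIS[first]:             # later items only need MIS(first)
            L.append(i)
    F1 = [i for i in L if sup[i] >= MIS[i]]
    return L, F1

sup = {1: 0.09, 2: 0.25, 3: 0.06, 4: 0.03}
MIS = {1: 0.10, 2: 0.20, 3: 0.05, 4: 0.06}
print(ms_initial_step(sup, MIS))               # ([3, 1, 2], [3, 2])
```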


– E.g., I = {1, 2, 3, 4}, with MIS(1) = 10%, MIS(2) = 20%, MIS(3) = 5%, MIS(4) = 6%, and n = 100 transactions:
• Sort I into M = {3, 4, 1, 2}
• Record the support counts ({item}:count), e.g., {3}:6, {4}:3, {1}:9 and {2}:25
• MIS(3) = 5%; sup({3}) = 6% > MIS(3), so L = {3}
• sup({4}) = 3% < MIS(3), so L remains {3}
• sup({1}) = 9% > MIS(3), L = {3, 1}
• sup({2}) = 25% > MIS(3), L = {3, 1, 2}
• F1 = {{3}, {2}}, since sup({1}) = 9% < MIS(1)


– Candidate generation, k = 2; let φ = 10%
• Take each item (seed) from L in order; use L and not F1, due to the invalidity of the downward closure property!
• Test the chosen item against its MIS: sup({3}) ≥ MIS(3)? If true, we can use it to form a level-2 candidate; if not, go to the next element in L
• If true, e.g., sup({3}) = 6% ≥ MIS(3) = 5%, then try to form a level-2 candidate together with each of the next items in L, e.g., {3, 1}, then {3, 2}


Item  MIS   Sup
1     10%   9%
2     20%   25%
3     5%    6%
4     6%    3%

L = {3, 1, 2}

– {3, 1} is a candidate :⟺ sup({1}) ≥ MIS(3) and |sup({3}) − sup({1})| ≤ φ
• sup({1}) = 9%; MIS(3) = 5%; sup({3}) = 6%; φ = 10%
• 9% > 5% and |6% − 9%| < 10%, thus C2 = {{3, 1}}

– Now try {3, 2}
• sup({2}) = 25%; 25% > 5%, but |6% − 25%| > 10%, so this candidate is rejected due to the support difference constraint


– Pick the next seed from L, i.e., 1 (needed to try {1, 2})
• sup({1}) < MIS(1), so we cannot use 1 as a seed!

– Candidate generation for k = 2 thus ends with C2 = {{3, 1}}
• Now read the transaction list and calculate the support of each itemset in C2; let's assume sup({3, 1}) = 6%, which is larger than min(MIS(3), MIS(1))
• Thus F2 = {{3, 1}}
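The level-2 candidate generation just walked through can be sketched as follows (assumed names, same example data):

```python
# Sketch of MSapriori level-2 candidate generation from the seed list L.
def ms_level2_candidates(L, sup, MIS, phi):
    C2 = []
    for idx, i in enumerate(L):
        if sup[i] < MIS[i]:
            continue                           # i cannot act as a seed
        for j in L[idx + 1:]:
            if sup[j] >= MIS[i] and abs(sup[i] - sup[j]) <= phi:
                C2.append((i, j))
    return C2

L = [3, 1, 2]
sup = {1: 0.09, 2: 0.25, 3: 0.06, 4: 0.03}
MIS = {1: 0.10, 2: 0.20, 3: 0.05, 4: 0.06}
print(ms_level2_candidates(L, sup, MIS, 0.10))  # [(3, 1)]
```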


– Generalization, k > 2, uses Fk-1 as input and returns a superset (the candidates) of the set of all frequent k-itemsets. It has two steps:

• Join step: same as in the case of k = 2:
Ik = join(Ak-1, Bk-1) ⟺ Ak-1 = {i1, i2, …, ik-2, ik-1} and Bk-1 = {i1, i2, …, ik-2, i'k-1} and ik-1 < i'k-1 and |sup(ik-1) − sup(i'k-1)| ≤ φ; then Ik = {i1, i2, …, ik-2, ik-1, i'k-1}

• Prune step: for each (k−1)-subset s of Ik, if s is not in Fk-1, then Ik can be removed from Ck (it is not a good candidate). There is, however, one exception to this rule: when s does not include the first item of Ik


– Generalization, k > 2, example: consider F3 = {{1, 2, 3}, {1, 2, 5}, {1, 3, 4}, {1, 3, 5}, {1, 4, 5}, {1, 4, 6}, {2, 3, 5}}
• After the join we obtain {1, 2, 3, 5}, {1, 3, 4, 5} and {1, 4, 5, 6} (not considering the support difference constraint)
• After pruning we get C4 = {{1, 2, 3, 5}, {1, 3, 4, 5}}
{1, 2, 3, 5} is OK
{1, 3, 4, 5} is not deleted, although {3, 4, 5} ∉ F3, because MIS(3) > MIS(1); if MIS(3) = MIS(1), it could be deleted
{1, 4, 5, 6} is deleted, because {1, 5, 6} ∉ F3
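A sketch of the modified prune step with its exception (assumed names; the MIS values below are invented so that MIS(3) > MIS(1), matching the example):

```python
# Sketch of the MSapriori prune step for k > 2. Candidates are tuples
# sorted by increasing MIS; the (k-1)-subset that omits the first
# (lowest-MIS) item is exempt from the frequency check, unless the MIS
# of the second item ties with that of the first.
from itertools import combinations

def ms_prune(candidates, F_prev, MIS):
    kept = []
    for c in candidates:
        ok = True
        for s in combinations(c, len(c) - 1):
            if s in F_prev:
                continue
            if c[0] in s or MIS[c[1]] == MIS[c[0]]:
                ok = False                     # required subset is infrequent
                break
        if ok:
            kept.append(c)
    return kept

F3 = {(1, 2, 3), (1, 2, 5), (1, 3, 4), (1, 3, 5), (1, 4, 5), (1, 4, 6),
      (2, 3, 5)}
MIS = {1: 0.05, 2: 0.10, 3: 0.10, 4: 0.12, 5: 0.12, 6: 0.15}  # invented
print(ms_prune([(1, 2, 3, 5), (1, 3, 4, 5), (1, 4, 5, 6)], F3, MIS))
# [(1, 2, 3, 5), (1, 3, 4, 5)]  ((1, 4, 5, 6) dropped: (1, 5, 6) not in F3)
```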

9.3 Multiple Minimum Supports: Step 2

• Step 2: rule generation

– The downward closure property is not valid anymore; therefore we may have frequent k-itemsets that contain non-frequent (k−1)-sub-itemsets
• In the Apriori algorithm we only recorded the supports of frequent itemsets
• For those non-frequent sub-itemsets we do not have the support value recorded
• This problem arises when we form rules of the form A,B ⟶ C, where MIS(C) = min(MIS(A), MIS(B), MIS(C)); it is called the head-item problem

– Apart from the head-item problem, rule generation works exactly as in the Apriori algorithm


• Rule generation example

– {Shoes, Clothes, Bread} is a frequent itemset, since
• MIS({Shoes, Clothes, Bread}) = 0.1 < sup({Shoes, Clothes, Bread}) = 0.12

– However, {Clothes, Bread} is not (since 0.2 > 0.15)
• So we may not be able to calculate the confidence of the rules involving Shoes, i.e., the rules:
Clothes, Bread ⟶ Shoes
Clothes ⟶ Shoes, Bread
Bread ⟶ Shoes, Clothes


Item     MIS
Bread    2
Clothes  0.2
Shoes    0.1

Itemset                   Sup
{Clothes, Bread}          0.15
{Shoes, Clothes, Bread}   0.12

– Head-item problem, e.g., Clothes, Bread ⟶ Shoes; Clothes ⟶ Shoes, Bread; Bread ⟶ Shoes, Clothes
• If some item on the right side of a rule has the minimum MIS (e.g., Shoes), we may not be able to calculate the confidence without reading the data again
• The solution is to also record the support of exactly one non-frequent sub-itemset: the itemset obtained by eliminating the item with the minimum MIS, e.g.,
{Clothes, Bread, Shoes} − {Shoes} = {Clothes, Bread}
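With that extra support recorded, the head-item rule's confidence follows without another data scan; a tiny sketch using the numbers from the table above:

```python
# Sketch: confidence of the head-item rule from the recorded supports.
sup_I = 0.12   # sup({Shoes, Clothes, Bread}), frequent itemset
sup_X = 0.15   # sup({Clothes, Bread}), recorded although infrequent
print(f"conf(Clothes, Bread -> Shoes) = {sup_I / sup_X:.0%}")  # 80%
```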


• Advantages

– It is a more realistic model for practical applications
– The model enables us to find rare item rules, without producing a huge number of meaningless rules involving frequent items
– By setting the MIS values of some items to 100% (or more), we can effectively instruct the algorithm not to generate rules involving only these items

9.3 Class Association Rules

• Mining class association rules (CAR)

– Normal association rule mining does not have any target
• It finds all possible rules that exist in the data, i.e., any item can appear as a consequent or a condition of a rule

– However, in some applications the user is interested in specific targets
• E.g., the user has a set of text documents from some known topics and wants to find out what words are associated or correlated with each topic


• CAR, example

– A text document data set:
doc 1: Student, Teach, School : Education
doc 2: Student, School : Education
doc 3: Teach, School, City, Game : Education
doc 4: Baseball, Basketball : Sport
doc 5: Basketball, Player, Spectator : Sport
doc 6: Baseball, Coach, Game, Team : Sport
doc 7: Basketball, Team, City, Game : Sport

– Let minsup = 20% and minconf = 60%. Examples of class association rules:
Student, School ⟶ Education [sup = 2/7, conf = 2/2]
Game ⟶ Sport [sup = 2/7, conf = 2/3]


• CAR algorithm

– Unlike normal association rules, CARs can be mined directly, in one step
– The key operation is to find all rule-items that have support above minsup
• A rule-item is of the form (condset, y), where condset is a set of items from I (i.e., condset ⊆ I), and y ∈ Y is a class label, where I ⋂ Y = ∅

– Each rule-item basically represents a rule:
• condset ⟶ y

– The Apriori algorithm can be modified to generate CARs
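For illustration, counting the supports of 1-condset rule-items in one pass over the document data set above takes only a few lines (our own sketch, not the lecture's algorithm):

```python
# Sketch: one-pass counting of 1-condset rule-items (condset -> class).
from collections import Counter

docs = [({"Student", "Teach", "School"}, "Education"),
        ({"Student", "School"}, "Education"),
        ({"Teach", "School", "City", "Game"}, "Education"),
        ({"Baseball", "Basketball"}, "Sport"),
        ({"Basketball", "Player", "Spectator"}, "Sport"),
        ({"Baseball", "Coach", "Game", "Team"}, "Sport"),
        ({"Basketball", "Team", "City", "Game"}, "Sport")]

n, minsup, minconf = len(docs), 0.2, 0.6
rule_items = Counter((item, label) for items, label in docs for item in items)

for (item, label), c in rule_items.items():
    cond = sum(item in items for items, _ in docs)   # support count of condset
    if c / n >= minsup and c / cond >= minconf:
        print(f"{item} -> {label} [sup = {c}/{n}, conf = {c}/{cond}]")
# Among others, this prints: Game -> Sport [sup = 2/7, conf = 2/3]
```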


• CAR mining can also be extended with multiple minimum supports

– The user can specify different minimum supports for different classes, which effectively assigns a different minimum support to the rules of each class
• E.g., for a data set with two classes, Yes and No, we may want rules of class Yes to have a minimum support of 5% and rules of class No to have a minimum support of 10%

– By setting the minimum class supports to 100% (or more) for some classes, we can skip generating rules of those classes


• Tools

– Open source projects
• Weka
• RapidMiner

– Commercial
• Intelligent Miner, replaced by DB2 Data Warehouse Editions
• PASW Modeler, developed by SPSS
• Oracle Data Mining (ODM)


• Apriori algorithm on an auto characteristics data set

– Class values: unacc, acc, good, vgood
– Attributes:
• Buying cost: vhigh, high, med, low
• Maintenance costs: vhigh, high, med, low
• Number of doors: 2, 3, 4, 5more
• Persons: 2, 4, more
• Safety: low, med, high


• Apriori algorithm parameters

– Number of rules
– Support interval
• Upper and lower bound
– Class index
– Confidence


• Apriori algorithm results

– The largest frequent itemsets comprise 3 items
– The most powerful rules are simple rules
• Most people find 2-person cars unacceptable


– A lower-confidence rule (62%)
• An unacceptable 4-seat car is probably unsafe (rule 30)


• Open source projects also have their limits

– Car accidents data set
• 350,000 rows
• 54 attributes

Next lecture

• Data mining

– Time series data
• Trend and similarity search analysis

– Sequence patterns

