(1)

Machine learning with Bioconductor

M. T. Morgan (mtmorgan@fhcrc.org)
Fred Hutchinson Cancer Research Center
Seattle, WA

http://bioconductor.org
10 October 2006

(2)

Overview

A machine learning checklist

• Filter (see lab)

• Feature selection

• Metrics: distance measures

• Learn: unsupervised & supervised

• Assess: cross-validation & beyond

(3)

Distance measures

Packages: dist, bioDist, daisy, . . .

Typical distance measures (e.g., bioDist)

• euc: Euclidean distance (root of summed squared differences) between two vectors; sensitive to scale

• cor.dist: correlation-based (i.e., variance-standardized), so approximately scale-invariant

• spearman.dist, tau.dist: rank-based correlation, so more robust

• mutualInfo, MIdist: expression values binned, then mutual information

  I(X;Y) = Σ_{y∈Y} Σ_{x∈X} p(x,y) log[ p(x,y) / (p(x) p(y)) ]

• man: ‘Manhattan’ distance
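These measures all reduce to familiar ideas; as a rough illustration (not from the slides; the toy matrix m is an assumption), base R alone shows how the choice of metric changes which genes look ‘close’:

> ## g1 and g2 share a profile but differ in scale; g3 is unrelated
> m <- rbind(g1 = c(1, 2, 3, 4), g2 = c(10, 20, 30, 40), g3 = c(4, 1, 3, 2))
> dist(m, method = "euclidean")    ## scale-sensitive, in the spirit of euc
> dist(m, method = "manhattan")    ## in the spirit of man
> as.dist(1 - cor(t(m)))           ## correlation distance, in the spirit of cor.dist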

(4)

Euclidean distances

> library("Biobase")

> library("bioDist")

> library("ALL")

> data(ALL)

> allSubset = ALL[1:50, ALL$mol.biol %in%

+ c("BCR/ABL", "NEG")]

> allSubset$mol.biol <- factor(allSubset$mol.biol)

> eucDistance <- euc(allSubset)

Summarize, plot, and interpret. . .

> eucClust <- hclust(eucDistance)

> plot(as.dendrogram(eucClust), main = "euc")

(5)

[Figure: dendrogram of hierarchical clustering under Euclidean distance ("euc"); leaves are the 50 probe sets]

(6)

Distance metrics matter

• euc measures Euclidean distance; sensitive to measurement scale

• Between-gene expression values can be quite heterogeneous

> summary(apply(exprs(allSubset), 1, mean))

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  3.042   4.049   5.456   5.523   6.424   9.311

> summary(apply(exprs(allSubset), 1, var))

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.02291 0.05178 0.07497 0.16850 0.21710 1.22200

(7)

Scale-independent distances

Different from Euclidean distances?

> originalOptions <- par(mfrow = c(1, 2))

> eucClust <- hclust(euc(allSubset))

> plot(as.dendrogram(eucClust), main = "euc")

> corClust <- hclust(cor.dist(allSubset))

> plot(as.dendrogram(corClust), main = "cor.dist")

> par(originalOptions)

(8)

[Figure: side-by-side dendrograms of the same probe sets under Euclidean distance ("euc", left) and correlation distance ("cor.dist", right); the two metrics group the probe sets differently]

(9)

Visualizing dendrogram structure
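Note that heatmapColor is used below but never defined in these slides (it is presumably created in the accompanying lab); a plausible stand-in, assuming any sequential palette is acceptable, is:

> heatmapColor <- colorRampPalette(c("white", "darkblue"))(255)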

> eucMatrix <- as.matrix(euc(allSubset))

> heatmap(eucMatrix, symm = TRUE, col = heatmapColor,
+     distfun = as.dist, main = "euc")

[Figure: symmetric heatmap of pairwise Euclidean distances ("euc") among the probe sets, with marginal dendrograms]

(10)

> corMatrix <- as.matrix(cor.dist(allSubset))

> heatmap(corMatrix, symm = TRUE, col = heatmapColor,
+     distfun = as.dist, main = "cor.dist")

[Figure: symmetric heatmap of pairwise correlation distances ("cor.dist") among the probe sets, with marginal dendrograms]

(11)

Options for subsequent analysis

• Choose appropriate distance metric, if algorithm permits

• Transform data prior to measuring distance

> exprs(allSubset) <- t(apply(exprs(allSubset),
+     1, scale))
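After this transformation every gene should have mean 0 and variance 1; repeating the earlier summaries is a quick sanity check (an addition, not in the slides):

> summary(apply(exprs(allSubset), 1, mean))
> summary(apply(exprs(allSubset), 1, var))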

Better options indicated in the lab!

(12)

Machine learning

• Methods of inference used to create algorithms for prediction (classification of new samples)

Major types of machine learning

• Unsupervised: no prior information on the classification outcome, e.g., clustering; implicit in the visualizations of distance metrics above

• Supervised: a priori information on classification (such as tumor status) is available
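For contrast with the supervised examples that follow, a minimal unsupervised sketch (an addition, not from the slides; it assumes the allSubset object built earlier) clusters the samples without reference to mol.biol and only afterwards compares the clusters with the known labels:

> km <- kmeans(t(exprs(allSubset)), centers = 2)
> table(cluster = km$cluster, given = allSubset$mol.biol)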

(13)

Supervised machine learning

Overall scenario

• Use existing data with information on gene expression levels and phenotypes to devise an algorithm to classify samples with unknown phenotype

> levels(allSubset$mol.biol)

[1] "BCR/ABL" "NEG"

Steps

• Apply non-specific filters to identify informative genes

• Develop the classification algorithm

• Assess performance of classification algorithm, typically using cross-validation

(14)

Machine learning algorithms

Linear algorithms

g(x) = w0 + w^T x

• x: sample; w: weights determined during training; w0: threshold for classification

• ‘Linear’ indicates a linear combination of features

• Adjust weights to ‘best’ assign samples to their a priori types

• Weights are estimable parameters, and sample size limits the number of parameters that can be estimated

• E.g., linear discriminant analysis
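As a concrete sketch (an addition, not from the slides), MASS::lda fits such a linear rule; the choice of the first 10 probe sets is purely illustrative and keeps the number of weights well below the sample size:

> library(MASS)
> x <- t(exprs(allSubset)[1:10, ])    ## samples in rows, 10 arbitrary features
> ldaFit <- lda(x, grouping = allSubset$mol.biol)
> table(given = allSubset$mol.biol, predicted = predict(ldaFit, newdata = x)$class)

Note this is resubstitution (predicting the training samples themselves); honest assessment needs the train/test splits and cross-validation discussed below.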

(15)

Machine learning algorithms (continued)

• Non-linear, e.g., neural networks

• Regularized, e.g., support vector machines

• Local, e.g., k-nearest neighbors

• Tree-based, e.g., classification and regression trees (CART)

MLInterfaces

> library(MLInterfaces)

• Unified interface to many machine learning algorithms

• Interface provided for. . .

(16)

Package        Interfaced functions
class          knn1, knn.cv, lvq1, lvq2, lvq3, olvq1
som            SOM
cluster        agnes, clara, diana, fanny, silhouette
e1071          bclust, cmeans, cshell, hclust, lca, naiveBayes, svm
gbm            gbm
ipred          bagging, ipredknn, lda, slda
MASS           isoMDS, qda
nnet           nnet
pamr           cv, knn, pam, pamr
randomForest   randomForest
rpart          rpart

(17)

Developing a machine learning algorithm

• Divide sample into training and test sets

• Identify an a priori classification

• Use training set to develop a specific algorithm

• Use test set to assess algorithm performance

> result <- knnB(allSubset, classifLab = "mol.biol",
+     trainInd = 1:41)

(18)

knnB

• Invokes function knn, provided by package class

• Distance metric: Euclidean

Summarize test classifications with a confusion matrix:

> confuMat(result)

         predicted
given     BCR/ABL NEG
  BCR/ABL       7   8
  NEG          30  25
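The overall test-set accuracy can be read off the matrix (a small addition, not in the slides):

> cm <- confuMat(result)
> sum(diag(cm))/sum(cm)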

(19)

Model assessment with cross-validation

A great diversity of machine learning algorithms

• Which is ‘best’? What is ‘best’?

• Ability to correctly classify new samples?

• Minimizing the uncertainty of each classification?

No free lunch: every model is ‘best’ within the domain of its assumptions

(20)

Assessing model performance

A quandary:

• New samples are not already classified, so how can we know when our algorithm is working?

Solution:

• Divide sample into training and test sets

• Identify an a priori classification

• Use training set to develop a specific algorithm

• Use test set to assess algorithm performance

> result <- knnB(allSubset, classifLab = "mol.biol",
+     trainInd = 1:41)

(21)

Cross-validation

• Repeatedly divide data into training set and test set, and assess algorithm performance

• Several ways to divide data: leave-one-out, leave-out-group, etc.

Leave-one-out cross-validation

• All but 1 sample included in the training set

• Assess performance of trained algorithm based on classification (correct or not) of the remaining sample

• Repeat for all possible training sets: if there are n = 100 samples, then there are n = 100 cross-validations
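Before turning to the xval function on the next slide, a hand-rolled leave-one-out loop with class::knn (an addition, not from the slides; it assumes allSubset as built earlier) shows the mechanics:

> library(class)
> x <- t(exprs(allSubset))    ## samples in rows
> cl <- allSubset$mol.biol
> looPred <- sapply(seq_len(nrow(x)), function(i)
+     as.character(knn(x[-i, ], x[i, , drop = FALSE], cl[-i], k = 1)))
> table(given = cl, predicted = looPred)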

(22)

Cross-validation with xval

> allKnnXval <- xval(allSubset, classLab = "mol.biol",
+     proc = knnB, xvalMethod = "LOO")

> length(allKnnXval)

[1] 111

> allKnnXval[1:4]

[1] "NEG" "NEG" "NEG" "NEG"

• xvalMethod: leave-one-out ("LOO"), but other schemes are possible

• The result is a character vector; each element represents one cross-validation, indicating how the ith individual was classified when it was left out

(23)

Assessing model fit

How well was each sample classified?

> as.character(allSubset$mol.biol[1:4])

[1] "BCR/ABL" "NEG" "BCR/ABL" "NEG"

> allKnnXval[1:4]

[1] "NEG" "NEG" "NEG" "NEG"

> table(given = allSubset$mol.biol, predicted = allKnnXval)

         predicted
given     BCR/ABL NEG
  BCR/ABL      17  20
  NEG          26  48
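The leave-one-out error rate follows directly from the same two vectors (a small addition, not in the slides):

> mean(allKnnXval != as.character(allSubset$mol.biol))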

(24)

Feature selection

• Problem: sample size sets an upper limit on the number of features that can be used in a classification algorithm

• Solution: reduce number of features, without using knowledge of classification ability, to those that are most informative

• Must be applied consistently to each cross-validation

> library(genefilter)

Loading required package: survival
Loading required package: splines

(25)

Implementing feature selection

> allSubset = ALL[, ALL$mol.biol %in% c("BCR/ABL",
+     "NEG")]

> allSubset$mol.biol <- factor(allSubset$mol.biol)

> exprs(allSubset) <- t(apply(exprs(allSubset),
+     1, scale))

> tSelection <- function(data, classifier) {
+     tTests <- rowttests(data, data[[classifier]],
+         tstatOnly = FALSE)
+     abs(tTests$statistic)
+ }

> tStats <- tSelection(allSubset, "mol.biol")

> tTop50 <- order(tStats, decreasing = TRUE)[1:50]
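A quick look at the magnitude of the selected statistics (an addition, not in the slides):

> summary(tStats[tTop50])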

(26)

Implementing feature selection (continued)

Any improvement with a single set of training individuals?

> confuMat(knnB(allSubset[tTop50, ], classifLab = "mol.biol",
+     trainInd = 1:41))

         predicted
given     BCR/ABL NEG
  BCR/ABL      14   1
  NEG           0  55

> confuMat(knnB(allSubset[1:50, ], classifLab = "mol.biol",
+     trainInd = 1:41))

         predicted
given     BCR/ABL NEG
  BCR/ABL       7   8

(27)

Feature selection in each cross-validation

> tTopKnnXval <- xval(allSubset, "mol.biol",
+     knnB, "LOO", group = 0:0, fsFun = tSelection,
+     fsNum = 50)

> table(given = allSubset$mol.biol, predicted = tTopKnnXval[["out"]])

         predicted
given     BCR/ABL NEG
  BCR/ABL      31   6
  NEG           1  73

> table(given = allSubset$mol.biol, predicted = allKnnXval)

         predicted
given     BCR/ABL NEG
  BCR/ABL      17  20
  NEG          26  48

(28)

Recap

• Distance metrics are very important

• Diverse machine learning algorithms available

• Cross-validation assesses algorithm performance

• Feature selection reduces number of features to a (statistically and computationally) reasonable number

(29)

Directions

Machine learning

• Assessing feature importance, e.g., by measuring the consequences of permuting a feature in test sets with several samples (see the sketch after this list)

• edd: use machine learning to choose between different models (e.g., unimodal; bimodal) describing the relationship between features and phenotypes

• . . .
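A rough sketch of the permutation idea (an addition, not from the slides; it assumes the scaled allSubset, the tTop50 selection, and the 1:41 training split used earlier): shuffle one feature across the test samples and see how much accuracy is lost.

> topSet <- allSubset[tTop50, ]
> baseline <- confuMat(knnB(topSet, classifLab = "mol.biol", trainInd = 1:41))
> permuted <- topSet
> exprs(permuted)[1, -(1:41)] <- sample(exprs(permuted)[1, -(1:41)])
> shuffled <- confuMat(knnB(permuted, classifLab = "mol.biol", trainInd = 1:41))
> sum(diag(baseline))/sum(baseline) - sum(diag(shuffled))/sum(shuffled)    ## drop in accuracy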

More generally. . .

• Extensive opportunity for rigorous, creative analysis in Bioconductor (e.g., limma, for linear models) and R
