Logistic Regression
Two Worlds: Probabilistic & Algorithmic
Bayes Classifier
Probabilistic classifier with a generative setup based on class density models
Bayes (Gauss), Naïve Bayes
"Direct" Classifiers
Find the best parameters (e.g. $w$) with respect to a specific loss function measuring misclassification
Perceptron, SVM, Tree, ANN
We know two conceptual approaches to classification:
• Probabilistic: data → class density estimation → classification rule → decision (learning)
• Direct: data → classification function → decision (learning)
Advantages of Both Worlds
• The posterior distribution has advantages over a plain classification label:
• Asymmetric risks: need the classification probability
• Classification certainty: indicator if the decision is unsure
• The algorithmic approach with direct learning has advantages:
• Focus of modelling power on correct classification where it counts
• Easier interpretation of the decision line
• Combination?
Discriminative Probabilistic Classifier
[Figures: Bayes classifier based on class densities $p(x|C_1)$, $p(x|C_2)$; linear classifier $y(\vec{x}) = w^T x + w_0$; discriminative probabilistic classifier combining both. Bishop PRML]
Towards a "Direct" Probabilistic Classifier
• Idea 1: Directly learn a posterior distribution
For classification with the Bayes classifier, the posterior distribution is relevant. We can directly estimate a model of this distribution. We know from Naïve Bayes that we can probably expect a good performance from the posterior model.
• Idea 2: Extend linear classification with a probabilistic interpretation
The linear classifier outputs a distance to the decision plane. We can use this value and interpret it probabilistically: "The further away, the more certain."
Logistic Regression
Logistic regression will implement both ideas: it is a model of a posterior class distribution for classification and can be interpreted as a probabilistic linear classifier. But it is a fully probabilistic model, not only a "post-processing" of a linear classifier.
It extends the hyperplane decision idea to the Bayes world
• Direct model of the posterior for classification
• Probabilistic model (classification according to a probability distribution)
• Discriminative model (models the posterior rather than likelihood and prior)
• Linear model for classification
• Simple and accessible (we can understand that)
• We can study the relation to other linear classifiers, e.g. the SVM
History of Logistic Regression
• Logistic regression is a very "old" method of statistical analysis and in widespread use, especially in the traditional statistics community (not machine learning).
1957/58, Walker, Duncan, Cox
• A method more often used to study and identify explanatory factors rather than to do individual prediction.
Statistical analysis vs. the prediction focus of modern machine learning
Many medical studies of risk factors etc. are based on logistic regression
Statistical Data Models
We do not know $P(x, y)$, but we can assume a certain form → this is called a data model.
The simplest form besides a constant (one prototype) is a linear model:
$$\mathrm{Lin}_w(x) = \sum_{i=1}^{d} w_i x_i + w_0 = \langle w, x \rangle + w_0 = \tilde{w}^T \tilde{x}, \qquad \tilde{x} = \begin{bmatrix} x \\ 1 \end{bmatrix}, \quad \tilde{w} = \begin{bmatrix} w \\ w_0 \end{bmatrix}$$
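As a small illustration (not from the slides; the function names and example values are assumptions), a minimal numpy sketch of the linear model and the augmented-vector trick:

```python
import numpy as np

def lin(w, w0, x):
    """Linear model Lin_w(x) = <w, x> + w0."""
    return np.dot(w, x) + w0

def lin_augmented(w_tilde, x):
    """Same model with augmented vectors x~ = [x, 1], w~ = [w, w0]."""
    x_tilde = np.append(x, 1.0)       # append the constant 1
    return np.dot(w_tilde, x_tilde)   # w~^T x~ = <w, x> + w0

w, w0 = np.array([2.0, -1.0]), 0.5
x = np.array([1.0, 3.0])
assert np.isclose(lin(w, w0, x), lin_augmented(np.append(w, w0), x))
```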
Repetition: Linear Classifier
Linear classification rule:
$$f(x) = w^T x + w_0, \qquad f(x) \ge 0 \Rightarrow C_1, \qquad f(x) < 0 \Rightarrow C_2$$
The decision boundary is a hyperplane.
Repetition: Posterior Distribution
• Classification with the posterior distribution: Bayes
Based on class densities and a prior
$$P(C_1|x) = \frac{p(x|C_1)P(C_1)}{p(x|C_1)P(C_1) + p(x|C_2)P(C_2)}, \qquad P(C_2|x) = \frac{p(x|C_2)P(C_2)}{p(x|C_1)P(C_1) + p(x|C_2)P(C_2)}$$
Bishop PRML
Combination: Discriminative Classifier
Probabilistic interpretation of classification output: ~distance to separation plane
[Figure: posterior of the discriminative classifier with the decision boundary. Bishop PRML]
Notation Changes
• We work with two classes
Data with (numerical) feature vectors $\vec{x}$ and labels $y \in \{0, 1\}$
We do not use the Bayes notation with $C_k$ anymore. We will need the explicit label value of $y$ in our models later.
• Classification goal: infer the best class label $\{0 \text{ or } 1\}$ for a given feature point
$$y^* = \arg\max_{y \in \{0,1\}} P(y \mid \vec{x})$$
• All our modeling focuses only on the posterior of having class 1:
$$P(y = 1 \mid \vec{x})$$
• Obtaining the other is trivial: $P(y = 0 \mid \vec{x}) = 1 - P(y = 1 \mid \vec{x})$
Parametric Posterior Model
We need a model for the posterior distribution, depending on the feature vector (of course) and neatly parameterized.
The linear classifier is a good starting point. We know its parametrization very well:
We thus model the posterior as a function of the linear classifier:
Posterior from the classification result: "scaled distance" to the decision plane
$$P(y = 1 \mid x, \theta) = f(x; \theta)$$
$$\mathrm{Lin}(x; w, w_0) = w^T x + w_0$$
$$P(y = 1 \mid x, w, w_0) = f(w^T x + w_0)$$
Logistic Function
To use the unbounded distance to the decision plane in a probabilistic setup, we need to map it into the interval [0, 1]
This is very similar to what we did in neural nets: the activation function
The logistic function $\sigma(x)$ squashes a value $x \in \mathbb{R}$ to $[0, 1]$
$$\sigma(x) = \frac{1}{1 + e^{-x}}$$
The logistic function is a smooth, soft threshold: $\sigma(x) \to 1$ for $x \to \infty$, $\sigma(x) \to 0$ for $x \to -\infty$, and $\sigma(0) = \tfrac{1}{2}$.
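A minimal numpy sketch (an illustration, not from the slides) of the logistic function and its soft-threshold behaviour:

```python
import numpy as np

def sigmoid(x):
    """Logistic function sigma(x) = 1 / (1 + exp(-x)); squashes R to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))                                       # 0.5
print(sigmoid(np.array([-10.0, -1.0, 0.0, 1.0, 10.0])))   # smooth, soft threshold
```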
The Logistic Function
The Logistic "Regression"
The Logistic Regression Posterior
We model the posterior distribution for classification in a two-class setting by applying the logistic function to the linear classifier:
$$P(y = 1 \mid x) = \sigma(f(x))$$
$$P(y = 1 \mid x, w, w_0) = \sigma(w^T x + w_0) = \frac{1}{1 + e^{-(w^T x + w_0)}}$$
This is a location-dependent model of the posterior distribution, parametrized by a linear hyperplane classifier.
Logistic Regression is a Linear Classifier
The logistic regression posterior leads to a linear classifier:
$$P(y = 1 \mid x, w, w_0) = \frac{1}{1 + \exp\left(-(w^T x + w_0)\right)}, \qquad P(y = 0 \mid x, w, w_0) = 1 - P(y = 1 \mid x, w, w_0)$$
$$P(y = 1 \mid x, w, w_0) > \tfrac{1}{2} \;\Rightarrow\; y = 1 \text{ classification}; \quad y = 0 \text{ otherwise}$$
The classification boundary is at $P(y = 1 \mid x, w, w_0) = \tfrac{1}{2}$, i.e. where $\sigma(w^T x + w_0) = \tfrac{1}{2}$ and thus $w^T x + w_0 = 0$: the same hyperplane as for the linear classifier.
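A short sketch (assumed example values, not from the slides) showing that thresholding the posterior at 1/2 gives exactly the sign-based linear decision:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def posterior(x, w, w0):
    """P(y = 1 | x, w, w0) = sigma(w^T x + w0)."""
    return sigmoid(np.dot(w, x) + w0)

w, w0 = np.array([1.0, -2.0]), 0.5
for x in [np.array([0.0, 0.0]), np.array([2.0, 2.0])]:
    # posterior > 1/2  <=>  w^T x + w0 > 0
    assert (posterior(x, w, w0) > 0.5) == (np.dot(w, x) + w0 > 0)
```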
Interpretation: Logit
Is the choice of the logistic function justified?
• Yes, the logit is a linear function of our data:
Logit: log of the odds ratio: $\ln \frac{p}{1 - p}$
• But other choices are valid, too
They lead to other models than logistic regression, e.g. probit regression
→ Generalized Linear Models (GLM)
$$\ln \frac{P(y = 1 \mid x)}{P(y = 0 \mid x)} = w^T x + w_0$$
The linear function (~distance from the decision plane) directly expresses our classification certainty, measured by the "odds ratio": double the distance → squared odds, e.g. $3:2 \to 9:4$.
$$E[y] = g^{-1}\left(w^T x + w_0\right)$$
(GLM view, with link function $g$; for logistic regression $g^{-1} = \sigma$)
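A tiny numeric illustration (assumed values, not from the slides) of the odds interpretation: doubling the distance to the decision plane squares the odds:

```python
import numpy as np

def odds(t):
    """Odds P(y=1|x) / P(y=0|x) = exp(t) for logit t = w^T x + w0."""
    return np.exp(t)

t = np.log(3.0 / 2.0)      # distance giving odds 3:2
print(odds(t))             # 1.5            (3:2)
print(odds(2.0 * t))       # 2.25 = 1.5**2  (9:4)
```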
The Logistic Regression
• So far we have made no assumption on the data!
• We can get $r(x)$ from a generative model or model it directly as a function of the data (discriminative).
Logistic Regression:
Model: the logit
$$r(x) = \log \frac{P(y = 1 \mid x)}{P(y = 0 \mid x)} = \log \frac{p}{1 - p}$$
is a linear function of the data:
$$r(x) = \log \frac{p}{1 - p} = \sum_{i=1}^{d} w_i x_i + w_0 = \langle w, x \rangle + w_0$$
with
$$P(y = 1 \mid x) = \sigma\left(r(x)\right), \qquad P(y = 0 \mid x) = 1 - \sigma\left(r(x)\right)$$
Training a Posterior Distribution Model
The posterior model for classification requires training. Logistic regression is not just a post-processing of a linear classifier. Learning of good parameter values needs to be done with respect to the probabilistic meaning of the posterior distribution.
• In the probabilistic setting, learning is usually estimation
We now have a slightly different situation than with Bayes: we do not need class densities but a good posterior distribution.
• We will use Maximum Likelihood and Maximum-A-Posteriori estimates of our parameters $w, w_0$
Later: this also corresponds to a cost function for obtaining $w, w_0$
Maximum Likelihood Learning
The Maximum Likelihood principle can be adapted to fit the posterior distribution (discriminative case):
• We choose the parameters $w, w_0$ which maximize the posterior distribution of the training set $X$ with labels $Y$:
$$(w, w_0) = \arg\max_{w, w_0} P(Y \mid X; w, w_0) = \arg\max_{w, w_0} \prod_{i=1}^{N} P(y_i \mid x_i; w, w_0) \quad \text{(i.i.d.)}$$
Logistic Regression: Maximum Likelihood Estimate of w (1)
To simplify the notation we use the augmented $w, x$ instead of $w, w_0$ (the bias is absorbed into the vectors).
The discriminative (log) likelihood function for our data:
$$P(Y \mid X) = \prod_{i=1}^{N} P(y_i \mid x_i)$$
$$P(y \mid x) = P(y = 1 \mid x)^{y} \, P(y = 0 \mid x)^{1 - y} = p^{y} (1 - p)^{1 - y}$$
$$P(Y \mid X) = \prod_{i=1}^{N} p_i^{y_i} (1 - p_i)^{1 - y_i}$$
$$\log P(Y \mid X) = \sum_{i=1}^{N} \left[ y_i \log \frac{p_i}{1 - p_i} + \log(1 - p_i) \right] = \sum_{i=1}^{N} \left[ y_i \log p_i + (1 - y_i) \log(1 - p_i) \right]$$
with $P(y = 1 \mid x) = \sigma(w^T x)$ and $P(y = 0 \mid x) = 1 - \sigma(w^T x)$
→ "cross-entropy" cost function
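A minimal numpy sketch (assumed helper names; augmented notation with the bias absorbed into $w$) of this cross-entropy log-likelihood:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def log_likelihood(w, X, y):
    """log P(Y|X) = sum_i [ y_i log p_i + (1 - y_i) log(1 - p_i) ],  p_i = sigma(w^T x_i).
    X: (N, d) design matrix with a column of ones for the bias, y: (N,) labels in {0, 1}."""
    p = sigmoid(X @ w)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
```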
log-likelihood function continued
$$\log L(Y \mid X, w) \equiv \log P(Y \mid X) = \sum_{i=1}^{N} \left[ y_i \log \frac{p_i}{1 - p_i} + \log(1 - p_i) \right]$$
$$\log L(Y \mid X, w) = \sum_{i=1}^{N} \left[ y_i \, w^T x_i - \log\left(1 + e^{w^T x_i}\right) \right]$$
Maximize the log-likelihood function with respect to $w$.
Maximum Likelihood Estimate of w (2)
Remember
$$p_i = \sigma(w^T x_i) = \frac{1}{1 + e^{-w^T x_i}}$$
and the linear logit
$$\ln \frac{p_i}{1 - p_i} = w^T x_i$$
Then
$$\frac{\partial}{\partial w} \log L(Y \mid X, w) = \frac{\partial}{\partial w} \sum_{i=1}^{N} \left[ y_i \, w^T x_i - \log\left(1 + e^{w^T x_i}\right) \right] = \sum_{i=1}^{N} \left[ y_i \, x_i - \frac{e^{w^T x_i}}{1 + e^{w^T x_i}} \, x_i \right]$$
Maximum Likelihood Estimate of w (3)
Derivative of a Dot Product
Gradient operator:
$$\frac{\partial}{\partial w} = \nabla_w = \left( \frac{\partial}{\partial w_1}, \frac{\partial}{\partial w_2}, \ldots, \frac{\partial}{\partial w_d} \right)$$
$$\frac{\partial}{\partial w} w^T x = \left( \frac{\partial}{\partial w_1} w^T x, \frac{\partial}{\partial w_2} w^T x, \ldots, \frac{\partial}{\partial w_d} w^T x \right)$$
Per component:
$$\frac{\partial}{\partial w_j} w^T x = \frac{\partial}{\partial w_j} \sum_{i=0}^{d} w_i x_i = x_j$$
Setting the gradient to zero:
$$\frac{\partial}{\partial w} \log L(Y \mid X, w) = \frac{\partial}{\partial w} \sum_{i=1}^{N} \left[ y_i \, w^T x_i - \log\left(1 + e^{w^T x_i}\right) \right] \overset{!}{=} 0$$
$$= \sum_{i=1}^{N} \left[ y_i \, x_i - \frac{e^{w^T x_i}}{1 + e^{w^T x_i}} \, x_i \right] = \sum_{i=1}^{N} \left( y_i - \sigma(w^T x_i) \right) x_i$$
using
$$\frac{e^{w^T x_i}}{1 + e^{w^T x_i}} = \frac{1}{1 + e^{-w^T x_i}} = \sigma(w^T x_i)$$
• Non-linear equation in $w$: no closed-form solution.
• The function $\log L$ is concave, therefore a unique maximum exists.
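A minimal gradient-ascent sketch (an illustration with assumed names and step size; the slides instead solve the same maximization with Newton-Raphson/IRLS below), using the gradient $\sum_i (y_i - \sigma(w^T x_i)) x_i$ derived above:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_logreg_ml(X, y, lr=0.5, n_iter=2000):
    """Maximum likelihood estimate of w by gradient ascent on log L(Y|X, w).
    X: (N, d) design matrix with a column of ones for the bias, y: (N,) labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (y - sigmoid(X @ w)) / len(y)   # averaged gradient
        w += lr * grad                               # ascent step
    return w
```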
Iterative Reweighted Least Squares
The concave $\log P(Y \mid X)$ can be maximized iteratively with the Newton-Raphson algorithm: Iterative Reweighted Least Squares (IRLS)
$$w_{n+1} \leftarrow w_n - H^{-1} \frac{\partial}{\partial w} \ln P(Y \mid X; w_n)$$
Derivatives and evaluations are always taken at $w_n$.
Hessian: Concave Likelihood
$$H = \frac{\partial^2}{\partial w \, \partial w^T} \ln P(Y \mid X)$$
$$\frac{\partial}{\partial w} \frac{\partial}{\partial w^T} \ln P(Y \mid X) = -\sum_i x_i x_i^T \, \sigma(w^T x_i)\left(1 - \sigma(w^T x_i)\right) = -X^T S X$$
(with $X$ the design matrix and $S$ diagonal, $S_{ii} = \sigma(w^T x_i)\left(1 - \sigma(w^T x_i)\right)$)
We use an old trick to keep it simple:
$$w \leftarrow \begin{bmatrix} w_0 \\ w \end{bmatrix}, \qquad x \leftarrow \begin{bmatrix} 1 \\ x \end{bmatrix}$$
The Hessian is negative definite:
• The sample covariance matrix $\sum_i x_i x_i^T$ is positive definite
• $\sigma(w^T x_i)\left(1 - \sigma(w^T x_i)\right)$ is always positive
The optimization problem is convex and thus has an optimal solution which can be calculated iteratively.
Iterative Reweighted Least Squares
The method results in an iteration of reweighted least-squares steps:
$$w_{n+1} = \left(X^T S X\right)^{-1} X^T S z, \qquad z = X w_n + S^{-1}\left(Y - p(w_n)\right)$$
• Weighted least-squares step with $z$ as target: $\left(X^T S X\right)^{-1} X^T S z$
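A compact IRLS sketch (assumed function name; the small ridge term is only a numerical safeguard against a singular $X^T S X$, not part of the slides):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_logreg_irls(X, y, n_iter=25, ridge=1e-8):
    """Iterative Reweighted Least Squares (Newton-Raphson) for logistic regression.
    X: (N, d) design matrix with a bias column of ones, y: (N,) labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ w)                        # current posteriors p(w_n)
        s = p * (1.0 - p)                         # diagonal of S
        z = X @ w + (y - p) / s                   # working targets z = X w_n + S^-1 (Y - p)
        A = X.T @ (s[:, None] * X) + ridge * np.eye(X.shape[1])   # X^T S X (+ ridge)
        w = np.linalg.solve(A, X.T @ (s * z))     # weighted least-squares step
    return w
```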
Example: Logistic Regression
[Figure: logistic regression example on 2D data]
Solid line: classification boundary ($p = 0.5$)
Dashed lines: $p = 0.25$ and $p = 0.75$ contours (further out: $p = 0.125$, $p = 0.875$)
Probabilistic result: posterior of the classification everywhere
The posterior probability decays/increases with the distance to the decision boundary
Linearly Separable
• Maximum Likelihood learning is problematic in the linearly separable case: $w$ diverges in length
→ leads to classification with infinite certainty
• The classification is still right, but the posterior estimate is not
Prior Assumptions
• Infinitely certain classification is likely an estimation artefact:
We do not have enough training samples
→ maximum likelihood estimation leads to problematic results
• Solution: MAP estimate with prior assumptions on $w$
$$P(w) = \mathcal{N}\left(w \mid 0, \sigma^2 I\right)$$
$$P(y \mid x, w, w_0) = p^{y} (1 - p)^{1 - y}$$
$$(w, w_0) = \arg\max_{w, w_0} P(Y \mid X; w, w_0) \, P(w) = \arg\max_{w, w_0} P(w) \prod_{i=1}^{N} P(y_i \mid x_i, w, w_0)$$
Smaller $w$ is preferred (shrinkage). The likelihood model is unchanged.
MAP Learning
$$\ln\left[ P(w) \prod_{i=1}^{N} P(y_i \mid x_i, w, w_0) \right] = \sum_i \left[ y_i \left(w^T x_i + w_0\right) - \ln\left(1 + \exp\left(w^T x_i + w_0\right)\right) \right] - \frac{1}{2\sigma^2} \lVert w \rVert^2$$
$$\frac{\partial}{\partial w} \ln P(w \mid Y) = \sum_i \left( y_i - \sigma\left(w^T x_i + w_0\right) \right) x_i - \frac{1}{\sigma^2} w \overset{!}{=} 0$$
We need: $\frac{\partial}{\partial w} \lVert w \rVert^2 = 2w$
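A minimal sketch (assumed names and step size; plain gradient ascent instead of a Newton-style solver) of the MAP estimate with the Gaussian prior, i.e. the gradient above with the extra shrinkage term $-w/\sigma^2$ (for simplicity the bias is regularized as well):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_logreg_map(X, y, sigma2=10.0, lr=0.5, n_iter=2000):
    """MAP estimate of w with prior P(w) = N(w | 0, sigma2 * I).
    X: (N, d) design matrix with a bias column, y: (N,) labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (y - sigmoid(X @ w)) - w / sigma2   # likelihood gradient + shrinkage
        w += lr * grad / len(y)
    return w
```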
Bayesian Logistic Regression
Idea: In the separable case, there are many perfect linear classifiers which all separate the data. Average the classification result and accuracy using all of these classifiers.
• Optimal way to deal with missing knowledge in the Bayes sense
Bishop PRML
Logistic Regression and Neural Nets
• The standard single neuron with the logistic activation is logistic regression if trained with the same cost function (cross-entropy)
But training with least-squares results in a different classifier
• Multiclass logistic regression with soft-max corresponds to what is called a soft-max layer in ANNs. It is the standard multiclass output in most ANN architectures.
$$P(y = 1 \mid x, w, w_0) = \sigma\left(w^T x + w_0\right)$$
[Figure: single neuron with inputs $x_1, x_2, x_3$, weighted sum $\Sigma$, and logistic activation $\sigma$]
Non-Linear Extension
• Logistic regression is often extended to non-linear cases:
Extension through adding additional transformed features
• Combination terms: $x_i x_j$
• Monomial terms: $x_i^2$
Standard procedure in medicine: inspect the resulting $w$ to find important factors and interactions $x_i x_j$ (comes with statistical information).
• Usage of kernels is possible: training and classification can be formulated with dot products of data points. The scalar products can be "replaced" by kernel expansions with the kernel trick.
Example feature expansion: $x \to \left(x, \; x_1 x_2, \; x_2^2\right)$
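A small sketch (assumed helper name, not from the slides) of such a feature expansion for a 2D input; ordinary linear logistic regression on the expanded features then yields a non-linear decision boundary in the original space:

```python
import numpy as np

def expand_features(x):
    """Map x = (x1, x2) to (x1, x2, x1*x2, x1**2, x2**2): combination and monomial terms."""
    x1, x2 = x
    return np.array([x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

print(expand_features(np.array([2.0, 3.0])))   # [2. 3. 6. 4. 9.]
```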
Kernel Logistic Regression
• Equations of logistic regression can be reformulated with dot products:
$$w^T x = \sum_{i=1}^{N} \alpha_i \, x_i^T x \;\;\to\;\; \sum_{i=1}^{N} \alpha_i \, k(x_i, x)$$
• No support vectors: kernel evaluations with all training points
$$P(y = 1 \mid x) = \sigma\!\left( \sum_{i=1}^{N} \alpha_i \, k(x_i, x) \right)$$
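A minimal sketch (assumed names; an RBF kernel chosen purely as an example) of the kernelized posterior, which uses kernel evaluations with all training points:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def rbf_kernel(a, b, gamma=1.0):
    """Example kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_posterior(x, X_train, alpha, gamma=1.0):
    """P(y = 1 | x) = sigma( sum_i alpha_i k(x_i, x) ) -- sums over all training points."""
    k = np.array([rbf_kernel(xi, x, gamma) for xi in X_train])
    return sigmoid(alpha @ k)
```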
Discriminative vs. Generative
Comparison of logistic regression to naïve Bayes
Ng, Andrew Y., and Michael I. Jordan. "On Discriminative vs. Generative
classifiers: A comparison of logistic regression and naive Bayes." Advances in NIPS 14, 2001.
Conclusion:
• Logistic regression has a lower asymptotic error
• Naïve Bayes can reach its (higher) asymptotic error faster
General over-simplification (dangerous!): use a generative model with few data (more knowledge) and a discriminative model with a lot of training data (more learning)
Logistic Regression: Summary
• A probabilistic, linear method for classification!
• Discriminative method (model for the posterior)
• Linear model for the logit
• The posterior probability is given by the logistic function of the logit: $P(y = 1 \mid x) = \sigma(w^T x + w_0)$
• ML estimation of $w$ is unique but non-linear
• Logistic regression is a very often used method