
Regression Exercise

Christopher Nowzohour

09.04.2014

Regression: Line Fitting

$y = X\beta + \varepsilon$

- $y$: $(n \times 1)$-vector of observations of the dependent variable
- $X$: $(n \times p)$-matrix of observations of the independent variables (one column per variable, first column constant)
- $\beta$: $(p \times 1)$-vector of parameters
- $\varepsilon$: $(n \times 1)$-vector of errors

Goals:

1. Prediction: Accurately predict $y$ for new $X$

2. Statistical Inference: How confident are we about the parameter values $\beta$?

3. Causal Inference: Can we change $y$ by changing $X$?
- Careful: extra assumptions are needed to make causal statements (e.g. no hidden variables, known causal direction)
- Otherwise: confounding, Simpson's Paradox, ...
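To make the model concrete, here is a minimal numpy sketch (an addition, not part of the original slides) that generates data from $y = X\beta + \varepsilon$ and recovers $\beta$ by least squares. The sample size, true parameters, and noise level are illustrative assumptions; later sketches reuse `X`, `y`, and `beta_true` from here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and parameters (assumptions, not from the slides)
n, p = 100, 3
beta_true = np.array([2.0, -1.0, 0.5])   # first entry is the intercept

# Design matrix X: first column constant, remaining columns random covariates
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
eps = rng.normal(scale=0.3, size=n)      # (n x 1) error vector
y = X @ beta_true + eps

# A least-squares fit approximately recovers beta_true
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)
```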


Fitting criteria: three examples

What are "good" parameter estimates $\hat\beta$?

1. Small squared residuals ($L_2$ regression / least squares):

$\hat\beta_{L_2} = \arg\min_\beta \|y - X\beta\|_2^2 = \arg\min_\beta \sum_{i=1}^n (y_i - x_i \cdot \beta)^2$

2. Small absolute residuals ($L_1$ regression / robust regression):

$\hat\beta_{L_1} = \arg\min_\beta \|y - X\beta\|_1 = \arg\min_\beta \sum_{i=1}^n |y_i - x_i \cdot \beta|$

3. Maximum likelihood (with $f$ the error density):

$\hat\beta_{ML} = \arg\max_\beta \sum_{i=1}^n \log f(y_i - x_i \cdot \beta)$

(where $x_i$ denotes the $i$-th row of $X$)
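A small sketch of the three criteria as functions of $\beta$, assuming `X`, `y`, and `beta_true` from the simulation above. The known error standard deviation `sigma=0.3` in the Gaussian log-likelihood is an assumption carried over from that simulation.

```python
import numpy as np

def l2(beta):                     # sum of squared residuals
    return np.sum((y - X @ beta) ** 2)

def l1(beta):                     # sum of absolute residuals
    return np.sum(np.abs(y - X @ beta))

def loglik(beta, sigma=0.3):      # Gaussian log-likelihood, sigma assumed known
    r = y - X @ beta
    return -0.5 * np.sum(r ** 2) / sigma ** 2 - len(r) * np.log(sigma * np.sqrt(2 * np.pi))

# The true parameters score better than a perturbed vector under all three criteria
for b in (beta_true, beta_true + 0.5):
    print(f"L2={l2(b):10.2f}  L1={l1(b):8.2f}  loglik={loglik(b):10.2f}")
```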


Finding optimal parameters $\hat\beta$

1. Small squared residuals ($L_2$ regression / least squares):

$\nabla \|y - X\hat\beta_{L_2}\|_2^2 = -2 X^T (y - X\hat\beta_{L_2}) \overset{!}{=} 0$, hence $\hat\beta_{L_2} = (X^T X)^{-1} X^T y$

2. Small absolute residuals ($L_1$ regression / robust regression):
- No analytic solution possible :-(
- But numerical optimization works in practice (e.g. gradient descent)

3. Maximum likelihood:
- If $\varepsilon \sim \mathcal{N}_n(0, \sigma^2 I_{n \times n})$ for some $\sigma > 0$: $\hat\beta_{ML} = \hat\beta_{L_2}$!
- In general: can be difficult ($\to$ numerical optimization)
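A sketch of both routes, assuming `X` and `y` from the simulation above. The $L_2$ solution comes from the normal equations $X^T X \beta = X^T y$ (solved directly rather than by forming the inverse, which is numerically preferable); the $L_1$ fit falls back to a generic numerical optimizer, since no closed form exists.

```python
import numpy as np
from scipy.optimize import minimize

# Closed-form L2 solution via the normal equations X^T X beta = X^T y
beta_l2 = np.linalg.solve(X.T @ X, X.T @ y)

# L1 solution: no analytic expression, so use a numerical optimizer
beta_l1 = minimize(lambda b: np.sum(np.abs(y - X @ b)),
                   x0=beta_l2, method="Nelder-Mead").x

print(beta_l2)
print(beta_l1)
```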


Typical Assumptions

In descending order of importance:

1. Our sample $(X, y)$ is representative of the population

2. $X$ has full column rank ($n \ge p$ and no collinear predictors; see the rank check below)

3. Unbiased errors: $E[\varepsilon_i] = 0 \;\; \forall i$

4. Uncorrelated errors: $E[\varepsilon_i \varepsilon_j] = 0 \;\; \forall i \ne j$

5. Exactly measured (but possibly still random) covariates $X$

6. Constant error variance: $E[\varepsilon_i^2] = \sigma^2 \;\; \forall i$

7. Jointly Gaussian errors: $\varepsilon \sim \mathcal{N}$

Assumptions 3, 4, 6, 7 are often summarized as $\varepsilon \sim \mathcal{N}_n(0, \sigma^2 I_{n \times n})$
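Assumption 2 is the only one checkable mechanically from the design matrix alone; a one-line sketch, assuming `X` from above:

```python
import numpy as np

# Full column rank of X (n >= p and no collinear predictors)
n, p = X.shape
print(n >= p and np.linalg.matrix_rank(X) == p)
```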


Properties of $\hat\beta_{L_2}$

If $\varepsilon \sim \mathcal{N}_n(0, \sigma^2 I_{n \times n})$, then the following hold:

1. Unbiasedness: $E[\hat\beta_{L_2}] = \beta$

2. Minimal variance among all unbiased estimators (Gauss-Markov Theorem)

3. $\hat\beta_{L_2} \sim \mathcal{N}_p(\beta, \sigma^2 (X^T X)^{-1})$, and $\hat\beta_{L_2}$ is independent of $\hat\sigma^2$
- t-tests for components of $\hat\beta_{L_2}$ possible
- F-test for the whole of $\hat\beta_{L_2}$ possible
- Confidence interval for $E[y_0 \mid x_0]$ and prediction interval for $y_0$ possible (where $y_0$ is a new observation at $x_0$); a worked sketch follows below
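A sketch of the resulting component-wise t-tests, assuming `X` and `y` from above. It uses the standard unbiased variance estimate $\hat\sigma^2 = \mathrm{RSS}/(n - p)$ and the estimated covariance $\hat\sigma^2 (X^T X)^{-1}$ from item 3; the two-sided p-value computation via scipy's t distribution is an illustrative addition.

```python
import numpy as np
from scipy import stats

n, p = X.shape
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat

sigma2_hat = resid @ resid / (n - p)               # unbiased estimate of sigma^2
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)     # estimated Cov(beta_hat)
se = np.sqrt(np.diag(cov_beta))

t_stats = beta_hat / se                            # t-statistics for H0: beta_j = 0
p_values = 2 * stats.t.sf(np.abs(t_stats), df=n - p)
print(np.column_stack([beta_hat, se, t_stats, p_values]))
```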


What happens if assumptions fail?

1. Non-representative sample: cannot make inferences about the population

2. $X^T X$ not invertible: cannot compute $\hat\beta_{L_2}$

3. Biased errors:
- $\hat\beta_{L_2}$ will be biased
- Transformations? More predictors?

4. Correlated errors:
- Wrong p-values & confidence intervals
- Estimator less precise (higher variance)
- Generalized Least Squares (a sketch follows after this list)

5. Noisy covariates: $\hat\beta_{L_2}$ will be biased

6. Non-constant error variance:
- Estimator less precise (higher variance)
- Generalized Least Squares, Transformations?

7. Non-normal errors:
- Only weak version of Gauss-Markov Theorem
- $\hat\beta_{L_2}$ is only approximately Gaussian (under weak assumptions on $X$), therefore slightly wrong p-values & confidence intervals
- Transformations?
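Generalized Least Squares is not spelled out on the slide; a minimal sketch of $\hat\beta_{GLS} = (X^T \Sigma^{-1} X)^{-1} X^T \Sigma^{-1} y$, under the assumption of a known, diagonal error covariance $\Sigma$ (i.e. heteroskedastic but uncorrelated errors, where GLS reduces to weighted least squares). `X` and `beta_true` are reused from the simulation above; the per-observation standard deviations are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = X.shape

# Heteroskedastic errors with assumed-known, diagonal covariance Sigma
sigma_i = 0.1 + rng.uniform(size=n)                # per-observation error std dev
y_het = X @ beta_true + rng.normal(scale=sigma_i)

# GLS estimator (X^T Sigma^-1 X)^-1 X^T Sigma^-1 y; diagonal Sigma => weights
w = 1.0 / sigma_i ** 2
beta_gls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_het))
print(beta_gls)
```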


Confidence and Prediction intervals / bands

95%-Confidence band: the area that contains the true regression line $E[y \mid x]$ with 95% probability.

95%-Prediction band: the area that contains a new observation $(x_0, y_0)$ with 95% probability.
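A sketch of the pointwise 95% intervals whose endpoints trace out these bands as $x_0$ varies, assuming `X`, `y`, `beta_hat`, `sigma2_hat`, `n`, and `p` from the inference sketch above. The example covariate row `x0` is hypothetical.

```python
import numpy as np
from scipy import stats

XtX_inv = np.linalg.inv(X.T @ X)
tq = stats.t.ppf(0.975, df=n - p)            # two-sided 95% t-quantile

def intervals(x0):
    """Pointwise 95% confidence interval for E[y0|x0] and prediction interval for y0."""
    fit = x0 @ beta_hat
    h = x0 @ XtX_inv @ x0                    # x0^T (X^T X)^-1 x0
    se_mean = np.sqrt(sigma2_hat * h)        # uncertainty of the regression line
    se_pred = np.sqrt(sigma2_hat * (1 + h))  # adds the new observation's own noise
    return ((fit - tq * se_mean, fit + tq * se_mean),
            (fit - tq * se_pred, fit + tq * se_pred))

x0 = np.array([1.0, 0.5, -0.2])              # hypothetical new row (intercept first)
print(intervals(x0))
```

Note the prediction interval is always wider, since it accounts for the error $\varepsilon_0$ of the new observation on top of the uncertainty in $\hat\beta$.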


Diagnostic Plots

Tukey-Anscombe plot: residuals against fitted values
- Check for bias in errors
- Check for correlated errors
- Check for non-constant error variance

QQ plot: theoretical Gaussian quantiles against empirical quantiles
- Check for non-Gaussian errors
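A sketch producing both plots with matplotlib and scipy, assuming `X`, `y`, and `beta_hat` from above; `scipy.stats.probplot` draws the Gaussian QQ plot.

```python
import matplotlib.pyplot as plt
from scipy import stats

fitted = X @ beta_hat
resid = y - fitted

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Tukey-Anscombe plot: residuals should scatter evenly around zero
ax1.scatter(fitted, resid)
ax1.axhline(0.0, linestyle="--")
ax1.set(xlabel="fitted values", ylabel="residuals", title="Tukey-Anscombe plot")

# Normal QQ plot: points near the line indicate approximately Gaussian errors
stats.probplot(resid, dist="norm", plot=ax2)
ax2.set_title("Normal QQ plot")

plt.tight_layout()
plt.show()
```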
