Correlation and Regression
© 2007 Prentice Hall
1) Overview
2) Product-Moment Correlation
3) Partial Correlation
4) Nonmetric Correlation
5) Regression Analysis
6) Bivariate Regression
7) Statistics Associated with Bivariate Regression Analysis
8) Conducting Bivariate Regression Analysis
   i. Scatter Diagram
   ii. Bivariate Regression Model
   iii. Estimation of Parameters
   iv. Standardized Regression Coefficient
   v. Significance Testing
   vi. Strength and Significance of Association
   vii. Prediction Accuracy
   viii. Assumptions
9) Multiple Regression
10) Statistics Associated with Multiple Regression
11) Conducting Multiple Regression Analysis
   i. Partial Regression Coefficients
   ii. Strength of Association
   iii. Significance Testing
   iv. Examination of Residuals
12) Stepwise Regression
13) Multicollinearity
14) Relative Importance of Predictors
15) Cross-Validation
16) Regression with Dummy Variables
17) Analysis of Variance and Covariance with Regression
18) Summary
The product moment correlation, r, summarizes the strength of association between two metric (interval or ratio scaled) variables, say X and Y.
It is an index used to determine whether a linear or straight-line relationship exists between X and Y.
As it was originally proposed by Karl Pearson, it is also known as the Pearson correlation coefficient . It is also referred to as simple correlation , bivariate correlation , or merely the correlation coefficient .
From a sample of n observations, X and Y, the product moment correlation, r, can be calculated as:

$$ r = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^2}\,\sqrt{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}} $$
Division of the numerator and denominator by (n − 1) gives

$$ r = \frac{\displaystyle\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})\big/(n-1)}{\sqrt{\dfrac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1}}\,\sqrt{\dfrac{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}{n-1}}} = \frac{COV_{xy}}{S_x S_y} $$
r varies between −1.0 and +1.0.
The correlation coefficient between two variables will be the same regardless of their underlying units of measurement.
Table 17.1

Respondent No.   Attitude Toward the City   Duration of Residence   Importance Attached to Weather
 1                6                         10                       3
 2                9                         12                      11
 3                8                         12                       4
 4                3                          4                       1
 5               10                         12                      11
 6                4                          6                       1
 7                5                          8                       7
 8                2                          2                       4
 9               11                         18                       8
10                9                          9                      10
11               10                         17                       8
12                2                          2                       5
The correlation coefficient may be calculated as follows:
$$ \bar{X} = (10 + 12 + 12 + 4 + 12 + 6 + 8 + 2 + 18 + 9 + 17 + 2)/12 = 9.333 $$

$$ \bar{Y} = (6 + 9 + 8 + 3 + 10 + 4 + 5 + 2 + 11 + 9 + 10 + 2)/12 = 6.583 $$
$$
\begin{aligned}
\sum_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y}) &= (10-9.33)(6-6.58) + (12-9.33)(9-6.58) + (12-9.33)(8-6.58) \\
&\quad + (4-9.33)(3-6.58) + (12-9.33)(10-6.58) + (6-9.33)(4-6.58) \\
&\quad + (8-9.33)(5-6.58) + (2-9.33)(2-6.58) + (18-9.33)(11-6.58) \\
&\quad + (9-9.33)(9-6.58) + (17-9.33)(10-6.58) + (2-9.33)(2-6.58) \\
&= -0.3886 + 6.4614 + 3.7914 + 19.0814 + 9.1314 + 8.5914 \\
&\quad + 2.1014 + 33.5714 + 38.3214 - 0.7986 + 26.2314 + 33.5714 \\
&= 179.6668
\end{aligned}
$$
$$
\begin{aligned}
\sum_{i=1}^{n}(X_i-\bar{X})^2 &= (10-9.33)^2 + (12-9.33)^2 + (12-9.33)^2 + (4-9.33)^2 \\
&\quad + (12-9.33)^2 + (6-9.33)^2 + (8-9.33)^2 + (2-9.33)^2 \\
&\quad + (18-9.33)^2 + (9-9.33)^2 + (17-9.33)^2 + (2-9.33)^2 \\
&= 0.4489 + 7.1289 + 7.1289 + 28.4089 + 7.1289 + 11.0889 + 1.7689 \\
&\quad + 53.7289 + 75.1689 + 0.1089 + 58.8289 + 53.7289 \\
&= 304.6668
\end{aligned}
$$

$$
\begin{aligned}
\sum_{i=1}^{n}(Y_i-\bar{Y})^2 &= (6-6.58)^2 + (9-6.58)^2 + (8-6.58)^2 + (3-6.58)^2 \\
&\quad + (10-6.58)^2 + (4-6.58)^2 + (5-6.58)^2 + (2-6.58)^2 \\
&\quad + (11-6.58)^2 + (9-6.58)^2 + (10-6.58)^2 + (2-6.58)^2 \\
&= 0.3364 + 5.8564 + 2.0164 + 12.8164 + 11.6964 + 6.6564 + 2.4964 \\
&\quad + 20.9764 + 19.5364 + 5.8564 + 11.6964 + 20.9764 \\
&= 120.9168
\end{aligned}
$$
Thus,
$$ r = \frac{179.6668}{\sqrt{(304.6668)(120.9168)}} = 0.9361 $$
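The same computation is easy to script as a cross-check. A minimal Python sketch (NumPy assumed available) that reproduces r = 0.9361 from the Table 17.1 data:

```python
# Product moment correlation for Table 17.1, computed from the definition.
import numpy as np

duration = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2])  # X
attitude = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2])     # Y

dx = duration - duration.mean()
dy = attitude - attitude.mean()

# r = sum of cross-products / sqrt(product of the two sums of squares)
r = (dx * dy).sum() / np.sqrt((dx**2).sum() * (dy**2).sum())
print(round(r, 4))                                      # 0.9361
print(round(np.corrcoef(duration, attitude)[0, 1], 4))  # same value from the library
```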
$$ r^2 = \frac{\text{Explained variation}}{\text{Total variation}} = \frac{SS_x}{SS_y} = \frac{\text{Total variation} - \text{Error variation}}{\text{Total variation}} = \frac{SS_y - SS_{error}}{SS_y} $$
When it is computed for a population rather than a sample, the product moment correlation is denoted by ρ, the Greek letter rho. The coefficient r is an estimator of ρ.

The statistical significance of the relationship between two variables measured by r can be conveniently tested. The hypotheses are:

H0: ρ = 0
H1: ρ ≠ 0

The test statistic is:

$$ t = r\left[\frac{n-2}{1-r^2}\right]^{1/2} $$

which has a t distribution with n − 2 degrees of freedom. For the correlation coefficient calculated from the data given in Table 17.1,

$$ t = 0.9361\left[\frac{12-2}{1-(0.9361)^2}\right]^{1/2} = 8.414 $$

and the degrees of freedom = 12 − 2 = 10. From the t distribution table (Table 4 in the Statistical Appendix), the critical value of t for a two-tailed test and α = 0.05 is 2.228. Hence, the null hypothesis of no relationship between X and Y is rejected.
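The test above translates directly into code. A short sketch, assuming SciPy is available:

```python
# t test of H0: rho = 0 for the Table 17.1 correlation.
import numpy as np
from scipy import stats

r, n = 0.9361, 12
t = r * np.sqrt((n - 2) / (1 - r**2))   # t = 8.414 with n - 2 = 10 df
p = 2 * stats.t.sf(abs(t), df=n - 2)    # two-tailed p-value
print(round(t, 3), p)                   # p is far below 0.05, so reject H0
```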
Fig. 17.1
A partial correlation coefficient measures the association between two variables after controlling for, or adjusting for, the effects of one or more additional variables.
$$ r_{xy.z} = \frac{r_{xy} - (r_{xz})(r_{yz})}{\sqrt{1-r_{xz}^2}\,\sqrt{1-r_{yz}^2}} $$
Partial correlations have an order associated with them. The order indicates how many variables are being adjusted or controlled. The simple correlation coefficient, r, has a zero-order, as it does not control for any additional variables while measuring the association between two variables.
The coefficient r_xy.z is a first-order partial correlation coefficient, as it controls for the effect of one additional variable, Z. A second-order partial correlation coefficient controls for the effects of two variables, a third-order for the effects of three variables, and so on. The special case when a partial correlation is larger than its respective zero-order correlation involves a suppressor effect.
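The first-order formula can be implemented in a few lines. A minimal sketch; the three input correlations below are hypothetical, for illustration only:

```python
# First-order partial correlation from three zero-order correlations.
import math

def partial_corr(r_xy: float, r_xz: float, r_yz: float) -> float:
    """Correlation between X and Y, controlling for Z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical zero-order correlations, for illustration only:
print(round(partial_corr(r_xy=0.9361, r_xz=0.28, r_yz=0.37), 4))
```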
The part correlation coefficient represents the correlation between Y and X when the linear effects of the other independent variables have been removed from X but not from Y. The part correlation coefficient, r_y(x.z), is calculated as follows:

$$ r_{y(x.z)} = \frac{r_{xy} - r_{yz}\,r_{xz}}{\sqrt{1-r_{xz}^2}} $$
The partial correlation coefficient is generally viewed as more important than the part correlation coefficient.
If the nonmetric variables are ordinal and numeric, Spearman's rho, ρs, and Kendall's tau, τ, are two measures of nonmetric correlation that can be used to examine the correlation between them. Both measures use rankings rather than the absolute values of the variables, and the basic concepts underlying them are quite similar. Both vary from −1.0 to +1.0 (see Chapter 15). In the absence of ties, Spearman's ρs yields a closer approximation to the Pearson product moment correlation coefficient, ρ, than Kendall's τ. In these cases, the absolute magnitude of τ tends to be smaller than Pearson's ρ. On the other hand, when the data contain a large number of tied ranks, Kendall's τ seems more appropriate.
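Both rank-based measures are available in SciPy (assumed installed); a brief sketch using the attitude and duration columns of Table 17.1:

```python
# Rank-based correlations for ordinal data.
from scipy import stats

x = [10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2]  # duration
y = [6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2]     # attitude

rho, p_rho = stats.spearmanr(x, y)   # Spearman's rho
tau, p_tau = stats.kendalltau(x, y)  # Kendall's tau (handles tied ranks)
print(round(rho, 4), round(tau, 4))  # |tau| is typically smaller than |rho|
```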
Regression analysis examines associative relationships between a metric dependent variable and one or more independent variables in the following ways:

- Determine whether the independent variables explain a significant variation in the dependent variable: whether a relationship exists.
- Determine how much of the variation in the dependent variable can be explained by the independent variables: strength of the relationship.
- Determine the structure or form of the relationship: the mathematical equation relating the independent and dependent variables.
- Predict the values of the dependent variable.
- Control for other independent variables when evaluating the contributions of a specific variable or set of variables.

Regression analysis is concerned with the nature and degree of association between variables and does not imply or assume any causality.
- Bivariate regression model. The basic regression equation is Y_i = β0 + β1X_i + e_i, where Y = dependent or criterion variable, X = independent or predictor variable, β0 = intercept of the line, β1 = slope of the line, and e_i is the error term associated with the i-th observation.
- Coefficient of determination. The strength of association is measured by the coefficient of determination, r². It varies between 0 and 1 and signifies the proportion of the total variation in Y that is accounted for by the variation in X.
- Estimated or predicted value. The estimated or predicted value of Y_i is Ŷ_i = a + bx_i, where Ŷ_i is the predicted value of Y_i, and a and b are estimators of β0 and β1, respectively.
- Regression coefficient. The estimated parameter b is usually referred to as the non-standardized regression coefficient.
- Scattergram. A scatter diagram, or scattergram, is a plot of the values of two variables for all the cases or observations.
- Standard error of estimate. This statistic, SEE, is the standard deviation of the actual Y values from the predicted Ŷ values.
- Standard error. The standard deviation of b, SE_b, is called the standard error.
- Standardized regression coefficient. Also termed the beta coefficient or beta weight, this is the slope obtained by the regression of Y on X when the data are standardized.
- Sum of squared errors. The distances of all the points from the regression line are squared and added together to arrive at the sum of squared errors, which is a measure of total error, Σe_j².
- t statistic. A t statistic with n − 2 degrees of freedom can be used to test the null hypothesis that no linear relationship exists between X and Y, or H0: β1 = 0, where t = b/SE_b.
A scatter diagram, or scattergram, is a plot of the values of two variables for all the cases or observations. The most commonly used technique for fitting a straight line to a scattergram is the least-squares procedure.
In fitting the line, the least-squares procedure minimizes the sum of squared errors, Σe_j².
Fig. 17.2 Steps in conducting bivariate regression analysis: Plot the Scatter Diagram → Formulate the General Model → Estimate the Parameters → Estimate Standardized Regression Coefficients → Test for Significance → Determine the Strength and Significance of Association → Check Prediction Accuracy → Examine the Residuals → Cross-Validate the Model.
In the bivariate regression model, the general form of a straight line is:

$$ Y = \beta_0 + \beta_1 X $$

where Y = dependent or criterion variable, X = independent or predictor variable, β0 = intercept of the line, and β1 = slope of the line.

The regression procedure adds an error term to account for the probabilistic or stochastic nature of the relationship:

$$ Y_i = \beta_0 + \beta_1 X_i + e_i $$

where e_i is the error term associated with the i-th observation.
Fig. 17.3 Scatter diagram of Attitude (y-axis) against Duration of Residence (x-axis).
Fig. 17.4 Four candidate straight lines (Lines 1 through 4) fitted to the scatter diagram of Attitude against Duration of Residence.
Fig. 17.5 Bivariate regression: the fitted line β0 + β1X, with the observed value Y_j, its prediction Ŷ_j, and the error e_j shown at values X1 through X5.
In most cases, β0 and β1 are unknown and are estimated from the sample observations using the equation

$$ \hat{Y}_i = a + b x_i $$

where Ŷ_i is the estimated or predicted value of Y_i, and a and b are estimators of β0 and β1, respectively.

$$ b = \frac{COV_{xy}}{S_x^2} = \frac{\sum_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})}{\sum_{i=1}^{n}(X_i-\bar{X})^2} = \frac{\sum_{i=1}^{n}X_iY_i - n\bar{X}\bar{Y}}{\sum_{i=1}^{n}X_i^2 - n\bar{X}^2} $$
The intercept, a, may then be calculated using:

$$ a = \bar{Y} - b\bar{X} $$

For the data in Table 17.1, the estimation of parameters may be illustrated as follows:

$$ \sum_{i=1}^{12} X_iY_i = (10)(6) + (12)(9) + (12)(8) + (4)(3) + (12)(10) + (6)(4) + (8)(5) + (2)(2) + (18)(11) + (9)(9) + (17)(10) + (2)(2) = 917 $$

$$ \sum_{i=1}^{12} X_i^2 = 10^2 + 12^2 + 12^2 + 4^2 + 12^2 + 6^2 + 8^2 + 2^2 + 18^2 + 9^2 + 17^2 + 2^2 = 1350 $$
It may be recalled from the earlier calculation of the simple correlation that X̄ = 9.333 and Ȳ = 6.583. Given n = 12, b can be calculated as:

$$ b = \frac{917 - (12)(9.333)(6.583)}{1350 - (12)(9.333)^2} = 0.5897 $$

$$ a = \bar{Y} - b\bar{X} = 6.583 - (0.5897)(9.333) = 1.0793 $$
Standardization is the process by which the raw data are transformed into new variables that have a mean of 0 and a variance of 1 (Chapter 14). When the data are standardized, the intercept assumes a value of 0. The term beta coefficient or beta weight is used to denote the standardized regression coefficient. In the bivariate case, B_yx = B_xy = r_xy.
There is a simple relationship between the standardized and non-standardized regression coefficients: B_yx = b_yx (S_x / S_y).
The statistical significance of the linear relationship between X and Y may be tested by examining the hypotheses:

H0: β1 = 0
H1: β1 ≠ 0

A t statistic with n − 2 degrees of freedom can be used, where

$$ t = \frac{b}{SE_b} $$

SE_b denotes the standard deviation of b and is called the standard error.
Using a computer program, the regression of attitude on duration of residence, using the data shown in Table 17.1, yielded the results shown in Table 17.2. The intercept, a, equals 1.0793, and the slope, b, equals 0.5897. Therefore, the estimated equation is:

Attitude (Ŷ) = 1.0793 + 0.5897 (Duration of residence)

The standard error, or standard deviation of b, is estimated as 0.07008, and the value of the t statistic as t = 0.5897/0.07008 = 8.414, with n − 2 = 10 degrees of freedom. From Table 4 in the Statistical Appendix, we see that the critical value of t with 10 degrees of freedom and α = 0.05 is 2.228 for a two-tailed test. Since the calculated value of t is larger than the critical value, the null hypothesis is rejected.
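The same results come out of a standard library routine. A sketch using scipy.stats.linregress (SciPy assumed), which returns the slope, intercept, and standard error directly:

```python
# Cross-checking Table 17.2 with scipy.stats.linregress.
from scipy import stats

duration = [10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2]
attitude = [6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2]

res = stats.linregress(duration, attitude)
t = res.slope / res.stderr      # t statistic for H0: beta1 = 0
print(round(res.intercept, 4))  # ~1.0793
print(round(res.slope, 4), round(res.stderr, 4), round(t, 3))  # ~0.5897, ~0.0701, ~8.414
```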
The total variation, SS_y, may be decomposed into the variation accounted for by the regression line, SS_reg, and the error or residual variation, SS_error or SS_res, as follows:

SS_y = SS_reg + SS_res

where

$$ SS_y = \sum_{i=1}^{n}(Y_i - \bar{Y})^2 $$

$$ SS_{reg} = \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2 $$

$$ SS_{res} = \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2 $$
Fig. 17.6 Decomposition of the total variation: total variation SS_y is split into explained variation SS_reg and residual variation SS_res (shown at X1 through X5).
The strength of association may then be calculated as follows:

$$ r^2 = \frac{SS_{reg}}{SS_y} = \frac{SS_y - SS_{res}}{SS_y} $$
To illustrate the calculation of r², let us consider again the regression of attitude toward the city on the duration of residence. It may be recalled from the earlier calculation of the simple correlation coefficient that:
$$ SS_y = \sum_{i=1}^{n}(Y_i - \bar{Y})^2 = 120.9168 $$
The predicted values (Ŷ) can be calculated using the regression equation: Attitude (Ŷ) = 1.0793 + 0.5897 (Duration of residence). For the first observation in Table 17.1, this value is Ŷ = 1.0793 + 0.5897 × 10 = 6.9763. For each successive observation, the predicted values are, in order, 8.1557, 8.1557, 3.4381, 8.1557, 4.6175, 5.7969, 2.2587, 11.6939, 6.3866, 11.1042, and 2.2587.
Therefore,

$$
\begin{aligned}
SS_{reg} &= \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2 \\
&= (6.9763-6.5833)^2 + (8.1557-6.5833)^2 + (8.1557-6.5833)^2 + (3.4381-6.5833)^2 \\
&\quad + (8.1557-6.5833)^2 + (4.6175-6.5833)^2 + (5.7969-6.5833)^2 + (2.2587-6.5833)^2 \\
&\quad + (11.6939-6.5833)^2 + (6.3866-6.5833)^2 + (11.1042-6.5833)^2 + (2.2587-6.5833)^2 \\
&= 0.1544 + 2.4724 + 2.4724 + 9.8922 + 2.4724 + 3.8643 + 0.6184 + 18.7021 \\
&\quad + 26.1182 + 0.0387 + 20.4385 + 18.7021 \\
&= 105.9524
\end{aligned}
$$
$$
\begin{aligned}
SS_{res} &= \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2 \\
&= (6-6.9763)^2 + (9-8.1557)^2 + (8-8.1557)^2 + (3-3.4381)^2 \\
&\quad + (10-8.1557)^2 + (4-4.6175)^2 + (5-5.7969)^2 + (2-2.2587)^2 \\
&\quad + (11-11.6939)^2 + (9-6.3866)^2 + (10-11.1042)^2 + (2-2.2587)^2 \\
&= 14.9644
\end{aligned}
$$

It can be seen that SS_y = SS_reg + SS_res. Furthermore,

$$ r^2 = \frac{SS_{reg}}{SS_y} = \frac{105.9524}{120.9168} = 0.8762 $$
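The decomposition is easy to verify numerically. A short NumPy sketch (small rounding differences arise because the fitted coefficients are rounded to four decimals):

```python
# Decomposing total variation for the Table 17.1 regression.
import numpy as np

x = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2])
y = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2])

y_hat = 1.0793 + 0.5897 * x                # predicted values from the fitted line

ss_y = np.sum((y - y.mean())**2)           # total variation, ~120.9168
ss_reg = np.sum((y_hat - y.mean())**2)     # explained variation, ~105.95
ss_res = np.sum((y - y_hat)**2)            # residual variation, ~14.96
print(round(ss_reg / ss_y, 4))             # r^2 ~ 0.8762
```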
Another, equivalent test for examining the significance of the linear relationship between X and Y (significance of b) is the test for the significance of the coefficient of determination. The hypotheses in this case are:

H0: R²_pop = 0
H1: R²_pop > 0
The appropriate test statistic is the F statistic:

$$ F = \frac{SS_{reg}}{SS_{res}/(n-2)} $$

which has an F distribution with 1 and n − 2 degrees of freedom. The F test is a generalized form of the t test (see Chapter 15). If a random variable is t distributed with n degrees of freedom, then t² is F distributed with 1 and n degrees of freedom. Hence, the F test for testing the significance of the coefficient of determination is equivalent to testing the following hypotheses:

H0: β1 = 0
H1: β1 ≠ 0

or

H0: ρ = 0
H1: ρ ≠ 0
From Table 17.2, it can be seen that:

r² = 105.9522/(105.9522 + 14.9644) = 0.8762

which is the same as the value calculated earlier. The value of the F statistic is:

F = 105.9522/(14.9644/10) = 70.8027

with 1 and 10 degrees of freedom. The calculated F statistic exceeds the critical value of 4.96 determined from Table 5 in the Statistical Appendix. Therefore, the relationship is significant at α = 0.05, corroborating the results of the t test.
Table 17.2

Multiple R        0.93608
R²                0.87624
Adjusted R²       0.86387
Standard Error    1.22329

ANALYSIS OF VARIANCE
              df    Sum of Squares    Mean Square
Regression     1    105.95222         105.95222
Residual      10     14.96444           1.49644

F = 70.80266    Significance of F = 0.0000

VARIABLES IN THE EQUATION
Variable      b         SE_b      Beta (ß)    T        Significance of T
Duration      0.58972   0.07008   0.93608     8.414    0.0000
(Constant)    1.07932   0.74335               1.452    0.1772
To estimate the accuracy of predicted values, Ŷ, it is useful to calculate the standard error of estimate, SEE.

$$ SEE = \sqrt{\frac{\sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2}{n-2}} $$

or

$$ SEE = \sqrt{\frac{SS_{res}}{n-2}} $$

or, more generally, if there are k independent variables,

$$ SEE = \sqrt{\frac{SS_{res}}{n-k-1}} $$

For the data given in Table 17.2, the SEE is estimated as follows:

$$ SEE = \sqrt{14.9644/(12-2)} = 1.22329 $$
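A one-line numeric check of this value:

```python
# Standard error of estimate for the bivariate fit (k = 1, n = 12).
import math

ss_res, n, k = 14.9644, 12, 1
see = math.sqrt(ss_res / (n - k - 1))
print(round(see, 5))   # 1.22329, matching Table 17.2
```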
- The error term is normally distributed. For each fixed value of X, the distribution of Y is normal.
- The means of all these normal distributions of Y, given X, lie on a straight line with slope b.
- The mean of the error term is 0.
- The variance of the error term is constant. This variance does not depend on the values assumed by X.
- The error terms are uncorrelated. In other words, the observations have been drawn independently.
The general form of the multiple regression model is as follows:

$$ Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \ldots + \beta_k X_k + e $$

which is estimated by the following equation:

$$ \hat{Y} = a + b_1 X_1 + b_2 X_2 + b_3 X_3 + \ldots + b_k X_k $$

As before, the coefficient a represents the intercept, but the b's are now the partial regression coefficients.
- Adjusted R². R², the coefficient of multiple determination, is adjusted for the number of independent variables and the sample size to account for diminishing returns. After the first few variables, additional independent variables do not make much contribution.
- Coefficient of multiple determination. The strength of association in multiple regression is measured by the square of the multiple correlation coefficient, R², which is also called the coefficient of multiple determination.
- F test. The F test is used to test the null hypothesis that the coefficient of multiple determination in the population, R²_pop, is zero. This is equivalent to testing the null hypothesis that all the partial regression coefficients are zero. The test statistic has an F distribution with k and (n − k − 1) degrees of freedom.
- Partial F test. The significance of a partial regression coefficient, βi, of Xi may be tested using an incremental F statistic. The incremental F statistic is based on the increment in the explained sum of squares resulting from the addition of the independent variable Xi to the regression equation after all the other independent variables have been included.
- Partial regression coefficient. The partial regression coefficient, b1, denotes the change in the predicted value, Ŷ, per unit change in X1 when the other independent variables, X2 to Xk, are held constant.
To understand the meaning of a partial regression coefficient, let us consider a case in which there are two independent variables, so that:

$$ \hat{Y} = a + b_1 X_1 + b_2 X_2 $$

First, note that the relative magnitude of the partial regression coefficient of an independent variable is, in general, different from that of its bivariate regression coefficient. The interpretation of the partial regression coefficient, b1, is that it represents the expected change in Y when X1 is changed by one unit but X2 is held constant or otherwise controlled. Likewise, b2 represents the expected change in Y for a unit change in X2, when X1 is held constant. Thus, calling b1 and b2 partial regression coefficients is appropriate.
It can also be seen that the combined effects of X1 and X2 on Y are additive. In other words, if X1 and X2 are each changed by one unit, the expected change in Y would be (b1 + b2). Suppose one were to remove the effect of X2 from X1. This could be done by running a regression of X1 on X2. In other words, one would estimate the equation X̂1 = a + b X2 and calculate the residual X_r = (X1 − X̂1). The partial regression coefficient, b1, is then equal to the bivariate regression coefficient, b_r, obtained from the equation Ŷ = a + b_r X_r.
Extension to the case of k variables is straightforward. The partial regression coefficient, b1, represents the expected change in Y when X1 is changed by one unit and X2 through Xk are held constant. It can also be interpreted as the bivariate regression coefficient, b, for the regression of Y on the residuals of X1, when the effect of X2 through Xk has been removed from X1.
The relationship of the standardized to the non-standardized coefficients remains the same as before:

B1 = b1 (S_x1 / S_y)
...
Bk = bk (S_xk / S_y)
The estimated regression equation is:

Ŷ = 0.33732 + 0.48108 X1 + 0.28865 X2

or

Attitude (Ŷ) = 0.33732 + 0.48108 (Duration of residence) + 0.28865 (Importance attached to weather)
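Table 17.3 can be reproduced with an off-the-shelf OLS routine. A sketch assuming statsmodels is installed:

```python
# Multiple regression of attitude on duration and importance (Table 17.3).
import numpy as np
import statsmodels.api as sm

attitude = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2])     # Y
duration = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2])  # X1
importance = np.array([3, 11, 4, 1, 11, 1, 7, 4, 8, 10, 8, 5])   # X2

X = sm.add_constant(np.column_stack([duration, importance]))
model = sm.OLS(attitude, X).fit()
print(model.params)                        # ~[0.33732, 0.48108, 0.28865]
print(model.rsquared, model.rsquared_adj)  # ~0.94498, ~0.93276
print(model.tvalues)                       # t statistics: constant, X1, X2
print(round(model.fvalue, 5))              # ~77.29364, the overall F test
```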
Table 17.3

Multiple R        0.97210
R²                0.94498
Adjusted R²       0.93276
Standard Error    0.85974

ANALYSIS OF VARIANCE
              df    Sum of Squares    Mean Square
Regression     2    114.26425         57.13213
Residual       9      6.65241          0.73916

F = 77.29364    Significance of F = 0.0000

VARIABLES IN THE EQUATION
Variable      b         SE_b      Beta (ß)    T        Significance of T
IMPORTANCE    0.28865   0.08608   0.31382     3.353    0.0085
DURATION      0.48108   0.05895   0.76363     8.160    0.0000
(Constant)    0.33732   0.56736               0.595    0.5668
SS_y = SS_reg + SS_res

where

$$ SS_y = \sum_{i=1}^{n}(Y_i - \bar{Y})^2 $$

$$ SS_{reg} = \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2 $$

$$ SS_{res} = \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2 $$
The strength of association is measured by the square of the multiple correlation coefficient, R², which is also called the coefficient of multiple determination.

$$ R^2 = \frac{SS_{reg}}{SS_y} $$
R² is adjusted for the number of independent variables and the sample size by using the following formula:

$$ \text{Adjusted } R^2 = R^2 - \frac{k(1 - R^2)}{n - k - 1} $$
H0: R²_pop = 0

This is equivalent to the following null hypothesis:

H0: β1 = β2 = β3 = … = βk = 0

The overall test can be conducted by using an F statistic:

$$ F = \frac{SS_{reg}/k}{SS_{res}/(n-k-1)} = \frac{R^2/k}{(1-R^2)/(n-k-1)} $$

which has an F distribution with k and (n − k − 1) degrees of freedom.
Testing for the significance of the βi's can be done in a manner similar to that in the bivariate case by using t tests. The significance of the partial coefficient for importance attached to weather may be tested by the following equation:

$$ t = \frac{b}{SE_b} $$

which has a t distribution with n − k − 1 degrees of freedom.
A residual is the difference between the observed value of Y_i and the value predicted by the regression equation, Ŷ_i. Scattergrams of the residuals, in which the residuals are plotted against the predicted values, Ŷ_i, time, or predictor variables, provide useful insights in examining the appropriateness of the underlying assumptions and regression model fit. The assumption of a normally distributed error term can be examined by constructing a histogram of the residuals. The assumption of constant variance of the error term can be examined by plotting the residuals against the predicted values of the dependent variable, Ŷ_i.
A plot of residuals against time, or the sequence of observations, will throw some light on the assumption that the error terms are uncorrelated.
Plotting the residuals against the independent variables provides evidence of the appropriateness or inappropriateness of using a linear model. Again, the plot should result in a random pattern.
To examine whether any additional variables should be included in the regression equation, one could run a regression of the residuals on the proposed variables.
If an examination of the residuals indicates that the assumptions underlying linear regression are not met, the researcher can transform the variables in an attempt to satisfy the assumptions.
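The standard diagnostic plots described above can be produced as follows; a sketch assuming NumPy and matplotlib, reusing the bivariate fit from Table 17.2:

```python
# Residual diagnostics for the attitude-on-duration regression.
import numpy as np
import matplotlib.pyplot as plt

x = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2])
y = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2])
y_hat = 1.0793 + 0.5897 * x
residuals = y - y_hat

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(residuals, bins=6)                    # check normality of the error term
ax1.set_title("Histogram of residuals")
ax2.scatter(y_hat, residuals)                  # constant variance: look for random scatter
ax2.axhline(0, linestyle="--", color="gray")
ax2.set_title("Residuals vs. predicted Y")
plt.tight_layout()
plt.show()
```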
Fig. 17.7 Plot of residuals against predicted Ŷ values.

Fig. 17.8 Plot of residuals against time.

Fig. 17.9 Plot of residuals against predicted Ŷ values.
The purpose of stepwise regression is to select, from a large number of predictor variables, a small subset of variables that account for most of the variation in the dependent or criterion variable. In this procedure, the predictor variables enter or are removed from the regression equation one at a time. There are several approaches to stepwise regression.
- Forward inclusion. Initially, there are no predictor variables in the regression equation. Predictor variables are entered one at a time, only if they meet certain criteria specified in terms of the F ratio. The order in which the variables are included is based on their contribution to the explained variance.
- Backward elimination. Initially, all the predictor variables are included in the regression equation. Predictors are then removed one at a time based on the F ratio for removal.
- Stepwise solution. Forward inclusion is combined with the removal of predictors that no longer meet the specified criterion at each step.
Multicollinearity arises when intercorrelations among the predictors are very high. Multicollinearity can result in several problems, including:

- The partial regression coefficients may not be estimated precisely. The standard errors are likely to be high.
- The magnitudes as well as the signs of the partial regression coefficients may change from sample to sample.
- It becomes difficult to assess the relative importance of the independent variables in explaining the variation in the dependent variable.
- Predictor variables may be incorrectly included or removed in stepwise regression.
A simple procedure for adjusting for multicollinearity consists of using only one of the variables in a highly correlated set of variables.
Alternatively, the set of independent variables can be transformed into a new set of predictors that are mutually independent by using techniques such as principal components analysis.
More specialized techniques, such as ridge regression and latent root regression, can also be used.
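As an illustration of the principal components remedy, the sketch below (scikit-learn assumed installed, with synthetic, nearly collinear predictors) produces component scores that are mutually uncorrelated:

```python
# Transforming correlated predictors into uncorrelated principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=100)   # nearly collinear with x1
X = np.column_stack([x1, x2])

components = PCA(n_components=2).fit_transform(X)
# The component scores are uncorrelated, so they can serve as new predictors.
print(np.round(np.corrcoef(components.T), 4))
```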
Unfortunately, because the predictors are correlated, there is no unambiguous measure of the relative importance of the predictors in regression analysis. However, several approaches are commonly used to assess the relative importance of predictor variables.
- Statistical significance. If the partial regression coefficient of a variable is not significant, as determined by an incremental F test, that variable is judged to be unimportant. An exception to this rule is made if there are strong theoretical reasons for believing that the variable is important.
- Square of the simple correlation coefficient. This measure, r², represents the proportion of the variation in the dependent variable explained by the independent variable in a bivariate relationship.
- Square of the partial correlation coefficient. This measure is the coefficient of determination between the dependent variable and the independent variable, controlling for the effects of the other independent variables.
- Square of the part correlation coefficient. This coefficient represents an increase in R² when a variable is entered into a regression equation that already contains the other independent variables.
- Measures based on standardized coefficients or beta weights. The most commonly used measures are the absolute values of the beta weights, |Bi|, or the squared values, Bi².
- Stepwise regression. The order in which the predictors enter or are removed from the regression equation is used to infer their relative importance.
The regression model is estimated using the entire data set.
The available data are split into two parts, the estimation sample and the validation sample. The estimation sample generally contains 50-90% of the total sample.
The regression model is estimated using the data from the estimation sample only. This model is compared to the model estimated on the entire sample to determine the agreement in terms of the signs and magnitudes of the partial regression coefficients.
The estimated model is applied to the data in the validation sample to predict the values of the dependent variable, Y , for the observations in the validation sample.
The observed values, Y_i, and the predicted values, Ŷ_i, in the validation sample are correlated to determine the simple r². This measure, r², is compared to R² for the total sample and to R² for the estimation sample to assess the degree of shrinkage.
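The full cross-validation cycle can be sketched in a few lines; NumPy is assumed, and the data and the 70/30 split fraction below are synthetic and purely illustrative:

```python
# Cross-validation sketch: fit on an estimation sample, score on a validation sample.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=40)
y = 2.0 + 0.6 * x + rng.normal(scale=0.5, size=40)   # synthetic data

idx = rng.permutation(40)
est, val = idx[:28], idx[28:]                        # estimation / validation split

b, a = np.polyfit(x[est], y[est], 1)                 # fit on the estimation sample only
y_pred = a + b * x[val]                              # predict the validation sample

r = np.corrcoef(y[val], y_pred)[0, 1]
print(round(r**2, 4))   # compare this r^2 with R^2 from the estimation sample (shrinkage)
```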
Product Usage Category   Original Variable Code   Dummy Variable Code
                                                  D1    D2    D3
Nonusers                 1                         1     0     0
Light Users              2                         0     1     0
Medium Users             3                         0     0     1
Heavy Users              4                         0     0     0

$$ \hat{Y} = a + b_1 D_1 + b_2 D_2 + b_3 D_3 $$
In this case, "heavy users" has been selected as a reference category and has not been directly included in the regression equation. The coefficient b1 is the difference in predicted Ŷ for nonusers, as compared to heavy users.
In regression with dummy variables, the predicted Ŷ for each category is the mean of Y for that category.

Product Usage Category   Predicted Value   Mean Value
Nonusers                 a + b1            a + b1
Light Users              a + b2            a + b2
Medium Users             a + b3            a + b3
Heavy Users              a                 a
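This equivalence between dummy-variable regression and category means can be demonstrated directly. A sketch with statsmodels and pandas (both assumed installed), using illustrative usage data that is not from the text:

```python
# Dummy-variable regression recovers the category means.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "usage": ["Nonuser", "Light", "Medium", "Heavy"] * 3,
    "y":     [2, 4, 6, 9,  3, 5, 6, 8,  2, 4, 7, 9],
})
dummies = pd.get_dummies(df["usage"], dtype=float).drop(columns="Heavy")  # Heavy = reference
X = sm.add_constant(dummies)
fit = sm.OLS(df["y"], X).fit()

print(fit.params)                       # intercept a = mean of Heavy; b_i = group mean - a
print(df.groupby("usage")["y"].mean())  # matches a and a + b_i per category
```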
Given this equivalence, it is easy to see further relationships between dummy variable regression and one-way ANOVA.

Dummy Variable Regression                  One-Way ANOVA
SS_res = Σ (Y_i − Ŷ_i)²                    = SS_within = SS_error
SS_reg = Σ (Ŷ_i − Ȳ)²                      = SS_between
R²                                         = η² (eta squared)
Overall F test                             = F test
The CORRELATE program computes Pearson product moment correlations and partial correlations with significance levels. Univariate statistics, covariance, and cross-product deviations may also be requested, and significance levels are included in the output. Scatterplots can also be obtained through the SPSS menus. To run the correlation procedure using SPSS for Windows, click:
1. Select ANALYZE from the SPSS menu bar.
2. Click CORRELATE and then BIVARIATE.
3. Move "Attitude[attitude]" into the VARIABLES box. Then move "Duration[duration]" into the VARIABLES box.
4. Check PEARSON under CORRELATION COEFFICIENTS.
5. Check ONE-TAILED under TEST OF SIGNIFICANCE.
6. Check FLAG SIGNIFICANT CORRELATIONS.
7. Click OK.
REGRESSION calculates bivariate and multiple regression equations, associated statistics, and plots. It allows for an easy examination of residuals. This procedure can be run by clicking:

1. Select ANALYZE from the SPSS menu bar.
2. Click REGRESSION and then LINEAR.
3. Move "Attitude[attitude]" into the DEPENDENT box.
4. Move "Duration[duration]" into the INDEPENDENT(S) box.
5. Select ENTER in the METHOD box.
6. Click on STATISTICS and check ESTIMATES under REGRESSION COEFFICIENTS.
7. Check MODEL FIT.
8. Click CONTINUE.
9. Click OK.