Title: Multiple Regression
18.1 Introduction
- In this chapter we extend the simple linear regression model to allow for any number of independent variables.
- We expect to build a model that fits the data better than the simple linear regression model.
- We will use computer printout to
  - Assess the model
    - How well does it fit the data?
    - Is it useful?
    - Are any required conditions violated?
  - Employ the model
    - Interpreting the coefficients
    - Making predictions using the prediction equation
    - Estimating the expected value of the dependent variable
18.2 Model and Required Conditions
- We allow for k independent variables to potentially be related to the dependent variable:

  y = β0 + β1x1 + β2x2 + … + βkxk + ε

  where β0, β1, …, βk are the coefficients and ε is the random error variable.
The simple linear regression model allows for one independent variable x:

  y = β0 + β1x + ε

The multiple linear regression model allows for more than one independent variable:

  y = β0 + β1x1 + β2x2 + ε

[Figure: note how the straight line y = β0 + β1x becomes the plane y = β0 + β1x1 + β2x2 over the (x1, x2) space]
Similarly, with

  y = β0 + β1x1² + β2x2

a parabola becomes a parabolic surface.

[Figure: the parabola y = β0 + β1x² and the corresponding parabolic surface over the (x1, x2) space, with intercept β0]
- Required conditions for the error variable ε:
  - The error ε is normally distributed with mean equal to zero and a constant standard deviation σε (independent of the value of y). σε is unknown.
  - The errors are independent.
- These conditions are required in order to
- estimate the model coefficients,
- assess the resulting model.
18.3 Estimating the Coefficients and Assessing the Model
- The procedure
- Obtain the model coefficients and statistics using statistical computer software.
- Diagnose violations of required conditions. Try
to remedy problems when identified.
- Assess the model fit and usefulness using the
model statistics.
- If the model passes the assessment tests, use it
to interpret the coefficients and generate
predictions.
Example 18.1: Where to locate a new motor inn?
- La Quinta Motor Inns is planning an expansion.
- Management wishes to predict which sites are likely to be profitable.
- Several areas where predictors of profitability can be identified are:
  - Competition
  - Market awareness
  - Demand generators
  - Demographics
  - Physical quality
[Diagram: the measure of profitability (Margin) and its proposed predictors]
- Competition — Rooms: number of hotel/motel rooms within 3 miles of the site.
- Market awareness — Nearest: distance to the nearest La Quinta inn.
- Customers — Office space; College enrollment.
- Community — Income: median household income; Disttwn: distance to downtown.
- Data were collected from 100 randomly selected inns that belong to La Quinta, and the following suggested model was run:

  Margin = β0 + β1Rooms + β2Nearest + β3Office + β4College + β5Income + β6Disttwn + ε
This is the sample regression equation (sometimes called the prediction equation):

  MARGIN = 72.455 - 0.008ROOMS - 1.646NEAREST + 0.02OFFICE + 0.212COLLEGE - 0.413INCOME + 0.225DISTTWN

Let us assess this equation.
- Standard error of estimate
  - We need to estimate the standard error of estimate sε.
  - Compare sε to the mean value of y.
  - From the printout, Standard Error = 5.5121.
  - Comparing this with the mean value of y calculated from the data, it seems sε is not particularly small.
  - Can we conclude the model does not fit the data well?
- Coefficient of determination
  - The definition is R² = 1 - SSE/SS(Total).
  - From the printout, R² = 0.5251.
  - 52.51% of the variation in the measure of profitability is explained by the linear regression model formulated above.
  - When adjusted for degrees of freedom, Adjusted R² = 1 - [SSE/(n-k-1)] / [SS(Total)/(n-1)] = 49.44%.
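The degrees-of-freedom adjustment above can be sketched in a few lines of pure Python, using the slide's n = 100 inns, k = 6 predictors, and R² = 0.5251 (the tiny difference from the slide's 49.44% comes from rounding R² to four digits):

```python
def adjusted_r2(r2, n, k):
    # Adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1),
    # algebraically the same as 1 - [SSE/(n-k-1)] / [SS(Total)/(n-1)]
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

adj = adjusted_r2(0.5251, 100, 6)
print(round(adj * 100, 2))  # close to the slide's 49.44%
```

The adjusted value is always below the raw R² whenever k > 0, which is why it is the fairer yardstick for comparing models with different numbers of predictors.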
- Testing the validity of the model
  - We pose the question: Is there at least one independent variable linearly related to the dependent variable?
  - To answer the question we test the hypotheses:

    H0: β1 = β2 = … = βk = 0
    H1: At least one βi is not equal to zero.

  - If at least one βi is not equal to zero, the model is valid.
- To test these hypotheses we perform an analysis of variance procedure.
- The F test
  - Construct the F statistic: F = MSR/MSE, where MSR = SSR/k and MSE = SSE/(n-k-1).
  - Rejection region: F > Fα,k,n-k-1
- Variation in y = SSR + SSE. A large F results from a large SSR; then much of the variation in y is explained by the regression model, and the null hypothesis should be rejected. Thus, the model is valid.
- Required conditions must be satisfied.
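The construction of the F statistic can be written out directly (the SSR and SSE values below are illustrative, not the La Quinta printout's):

```python
def f_statistic(ssr, sse, n, k):
    msr = ssr / k            # MSR = SSR/k
    mse = sse / (n - k - 1)  # MSE = SSE/(n-k-1)
    return msr / mse         # F = MSR/MSE

# Illustrative sums of squares, chosen arbitrarily for the demo
print(round(f_statistic(300.0, 600.0, 100, 6), 2))  # 7.75
```

The larger SSR is relative to SSE, the larger F becomes, which is exactly the intuition behind the rejection region F > Fα,k,n-k-1.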
Example 18.1 - continued
- Excel provides the following ANOVA results:

  [Excel ANOVA table: SSR, SSE, MSR, MSE, and F = MSR/MSE]
Example 18.1 - continued
- Fα,k,n-k-1 = F0.05,6,100-6-1 = 2.17; F = 17.14 > 2.17.
- Also, the p-value (Significance F) = 3.03382×10^-13. Clearly, α = 0.05 > 3.03382×10^-13, and the null hypothesis is rejected.
- Conclusion: There is sufficient evidence to reject the null hypothesis in favor of the alternative hypothesis. At least one of the βi is not equal to zero. Thus, at least one independent variable is linearly related to y. This linear regression model is valid.
- Let us interpret the coefficients.
- The intercept is the value of y when all the variables take the value zero. Since the data ranges of the independent variables do not cover the value zero, do not interpret the intercept.
- In this model, for each additional 1,000 rooms within 3 miles of the La Quinta inn, the operating margin decreases on average by 7.6% (assuming the other variables are held constant).
- In this model, for each additional mile that the nearest competitor is from the La Quinta inn, the average operating margin decreases by 1.65%.
- For each additional 1,000 sq-ft of office space, the average increase in operating margin will be 0.02%.
- For each additional thousand students, MARGIN increases by 0.21%.
- For each additional $1,000 increase in median household income, MARGIN decreases by 0.41%.
- For each additional mile to the downtown center, MARGIN increases by 0.23% on average.
- Testing the coefficients
- The hypotheses for each βi: H0: βi = 0 vs. H1: βi ≠ 0
- Test statistic: t = (bi - βi)/s_bi with d.f. = n - k - 1
- Excel printout: [coefficient table with standard errors, t statistics, and p-values]
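Under H0: βi = 0, each t statistic on the printout reduces to the coefficient over its standard error; a minimal sketch with made-up numbers (not the La Quinta printout's):

```python
def coeff_t(b, se_b):
    # t = b_i / s_bi when testing H0: beta_i = 0; compare against
    # the t distribution with d.f. = n - k - 1
    return b / se_b

# Hypothetical coefficient and standard error for illustration
print(coeff_t(3.0, 1.5))   # 2.0
print(coeff_t(-1.0, 0.5))  # -2.0
```

A coefficient whose |t| is small relative to the critical value cannot be declared significantly different from zero, which is the situation the multicollinearity example later in this chapter runs into.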
- Using the linear regression equation
- The model can be used by:
  - Producing a prediction interval for a particular value of y, for a given set of values of xi.
  - Producing an interval estimate for the expected value of y, for a given set of values of xi.
- The model can be used to learn about relationships between the independent variables xi and the dependent variable y, by interpreting the coefficients bi.
Example 18.1 - continued. Produce predictions.
- Predict the MARGIN of an inn at a site with the following characteristics:
  - 3815 rooms within 3 miles,
  - closest competitor 3.4 miles away,
  - 476,000 sq-ft of office space,
  - 24,500 college students,
  - $39,000 median household income,
  - 3.6 miles to the downtown center.

  MARGIN = 72.455 - 0.008(3815) - 1.646(3.4) + 0.02(476) + 0.212(24.5) - 0.413(39) + 0.225(3.6) = 37.1
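The substitution above can be reproduced in a few lines of Python. Note that with the coefficients as rounded on the slide the result is about 35.8; the slide's 37.1 presumably comes from the unrounded printout coefficients:

```python
intercept = 72.455
# Coefficients as rounded on the slide
coeffs = {"ROOMS": -0.008, "NEAREST": -1.646, "OFFICE": 0.02,
          "COLLEGE": 0.212, "INCOME": -0.413, "DISTTWN": 0.225}
# Site characteristics, in the same units the model was fitted in
site = {"ROOMS": 3815, "NEAREST": 3.4, "OFFICE": 476,
        "COLLEGE": 24.5, "INCOME": 39, "DISTTWN": 3.6}

margin = intercept + sum(coeffs[v] * site[v] for v in coeffs)
print(round(margin, 2))  # about 35.76 with these rounded coefficients
```

Keeping the units straight matters: office space is entered in thousands of sq-ft (476), students in thousands (24.5), and income in thousands of dollars (39).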
18.4 Regression Diagnostics - II
- The required conditions for the model assessment to apply must be checked.
  - Is the error variable normally distributed? (Draw a histogram of the residuals.)
  - Is the error variance constant?
  - Are the errors independent? (Plot the residuals versus the time periods.)
  - Can we identify outliers?
  - Is multicollinearity a problem?
Example 18.2: House price and multicollinearity
- A real estate agent believes that a house selling price can be predicted using the house size, number of bedrooms, and lot size.
- A random sample of 100 houses was drawn and data recorded.
- Analyze the relationship among the four variables.
- Solution
- The proposed model is PRICE = β0 + β1BEDROOMS + β2H-SIZE + β3LOTSIZE + ε
- Excel solution: [regression printout]
- The model is valid, but no variable is significantly related to the selling price!
- When regressing the price on each independent variable alone, it is found that each variable is strongly related to the selling price.
- Multicollinearity is the source of this problem.
- Multicollinearity causes two kinds of difficulties:
  - The t statistics appear to be too small.
  - The β coefficients cannot be interpreted as slopes.
Remedying violations of the required conditions
- Nonnormality or heteroscedasticity can be remedied using transformations on the y variable.
- The transformations can improve the linear relationship between the dependent variable and the independent variables.
- Many computer software systems allow us to make the transformations easily.
- A brief list of transformations
  - y' = log y (for y > 0)
    - Use when σε increases with y, or
    - use when the error distribution is positively skewed.
  - y' = y²
    - Use when σ²ε is proportional to E(y), or
    - use when the error distribution is negatively skewed.
  - y' = y^(1/2) (for y > 0)
    - Use when σ²ε is proportional to E(y).
  - y' = 1/y
    - Use when σ²ε increases significantly when y increases beyond some value.
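As a concrete instance of the first transformation, here is the log applied to two sample quiz marks, 23 and 18, the same values worked through later in this chapter:

```python
import math

# y' = log_e(y), defined only for y > 0
marks = [23, 18]
log_marks = [math.log(y) for y in marks]
print([round(v, 3) for v in log_marks])  # [3.135, 2.89]
```

The transformed variable y' is then regressed in place of y; any prediction on the transformed scale must be back-transformed (here, by exponentiating) to recover the original units.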
Example 18.3: Analysis, diagnostics, transformations
- A statistics professor wanted to know whether the time limit affects the marks on a quiz.
- A random sample of 100 students was split into 5 groups.
- Each student wrote a quiz, but each group was given a different time limit. See data below.
- Analyze these results, and include diagnostics.
The model tested: MARK = β0 + β1TIME + ε
- The errors seem to be normally distributed.
- This model is useful and provides a good fit.
The standard error of estimate seems to increase with the predicted value of y. Two transformations are used to remedy this problem:
1. y' = log_e y
2. y' = 1/y
Let us see what happens when a transformation is applied.

[Figure: the original data, where Mark is a function of Time, next to the modified data, where LogMark is a function of Time. For example, the point (40, 23) becomes (40, log_e 23) = (40, 3.135), and (40, 18) becomes (40, log_e 18) = (40, 2.89).]
The new regression analysis and the diagnostics are:
- The model tested: LOGMARK = β0 + β1TIME + ε
- Predicted LogMark = 2.1295 + 0.0217Time
- This model is useful and provides a good fit.
- The errors seem to be normally distributed.
- The standard error still changes with the predicted y, but the change is smaller than before.
How do we use the modified model to predict?
Let TIME = 55 minutes. Then LogMark = 2.1295 + 0.0217Time = 2.1295 + 0.0217(55) = 3.323. To find the predicted mark, take the antilog: antilog_e(3.323) = e^3.323 = 27.743.
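The back-transformation above, in code, using the fitted values from the slide:

```python
import math

log_mark = 2.1295 + 0.0217 * 55  # predicted LogMark at TIME = 55
mark = math.exp(log_mark)        # antilog returns to the original mark scale
print(round(log_mark, 3))  # 3.323
print(round(mark, 2))      # about 27.74, the slide's 27.743
```

Predicting on the log scale and then exponentiating is the standard way to use a log-transformed regression; the same pattern (predict, then invert the transformation) applies to the 1/y transformation as well.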
18.5 Regression Diagnostics - III
- The Durbin-Watson test
  - This test detects first-order autocorrelation between consecutive residuals in a time series.
  - If autocorrelation exists, the error variables are not independent.
  - The statistic: d = Σ_{i=2}^{n} (e_i - e_{i-1})² / Σ_{i=1}^{n} e_i², where e_i is the residual at time i.
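The statistic itself is a short computation; a sketch on hand-made residual series (in practice you would feed in the residuals from the regression run):

```python
def durbin_watson(residuals):
    # d = sum_{i=2}^{n} (e_i - e_{i-1})^2 / sum_{i=1}^{n} e_i^2
    num = sum((residuals[i] - residuals[i - 1]) ** 2
              for i in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

# Alternating residuals -> neighbors differ markedly -> d near 4
print(durbin_watson([1, -1, 1, -1]))  # 3.0
# Identical residuals -> neighbors similar -> d = 0
print(durbin_watson([2, 2, 2, 2]))    # 0.0
```

Independent residuals give d near 2, which is why the decision rules on the next slides are framed around how far d falls below or above 2.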
- Positive first-order autocorrelation occurs when consecutive residuals tend to be similar. Then the value of d is small (less than 2).
  [Figure: residuals plotted over time, drifting smoothly around 0]
- Negative first-order autocorrelation occurs when consecutive residuals tend to markedly differ. Then the value of d is large (greater than 2).
  [Figure: residuals plotted over time, alternating in sign around 0]
- One-tail test for positive first-order autocorrelation
  - If d < dL, there is enough evidence to show that positive first-order correlation exists.
  - If d > dU, there is not enough evidence to show that positive first-order correlation exists.
  - If d is between dL and dU, the test is inconclusive.
- One-tail test for negative first-order autocorrelation
  - If d > 4 - dL, negative first-order correlation exists.
  - If d < 4 - dU, negative first-order correlation does not exist.
  - If d falls between 4 - dU and 4 - dL, the test is inconclusive.
- Two-tail test for first-order autocorrelation
  - If d < dL or d > 4 - dL, first-order autocorrelation exists.
  - If d falls between dL and dU, or between 4 - dU and 4 - dL, the test is inconclusive.
  - If d falls between dU and 4 - dU, there is no evidence of first-order autocorrelation.

  [Number line: 0 … dL … dU … 2 … 4 - dU … 4 - dL … 4]
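The three decision rules translate directly into a small function. The dL and dU bounds come from a Durbin-Watson table; the example reuses dL = 1.10, dU = 1.54, the values quoted for n = 20, k = 2 later in this section:

```python
def dw_two_tail(d, d_lower, d_upper):
    # Two-tail Durbin-Watson decision, following the rules above
    if d < d_lower or d > 4 - d_lower:
        return "first-order autocorrelation exists"
    if d_upper <= d <= 4 - d_upper:
        return "no evidence of first-order autocorrelation"
    return "inconclusive"

print(dw_two_tail(0.59, 1.10, 1.54))  # first-order autocorrelation exists
print(dw_two_tail(2.00, 1.10, 1.54))  # no evidence of first-order autocorrelation
print(dw_two_tail(1.30, 1.10, 1.54))  # inconclusive
```

The symmetry around 2 is the whole design: values near 0 signal positive autocorrelation, values near 4 signal negative autocorrelation, and the (dL, dU) gaps are the table's zones of indecision.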
Example 18.4
- How does the weather affect the sales of lift tickets at a ski resort?
- Data on the past 20 years' sales of tickets, along with the total snowfall and the average temperature during Christmas week in each year, were collected.
- The model hypothesized was TICKETS = β0 + β1SNOWFALL + β2TEMPERATURE + ε
- Regression analysis yielded the following results:
The model seems to be very poor:
- The fit is very low (R-square = 0.12).
- It is not valid (Signif. F = 0.33).
- No variable is linearly related to Sales.
Diagnosis of the required conditions resulted in the following findings:
[Figure: residuals vs. predicted y — the error variance is constant]
[Figure: the error distribution — the errors may be normally distributed]
[Figure: residuals over time — the errors are not independent]
Test for positive first-order autocorrelation: n = 20, k = 2. From the Durbin-Watson table we have dL = 1.10, dU = 1.54. The statistic d = 0.59.
Conclusion: Because d < dL, there is sufficient evidence to infer that positive first-order autocorrelation exists.

Using the computer (Excel):
Tools > Data Analysis > Regression (check the residual option, then OK); Tools > Data Analysis Plus > Durbin-Watson Statistic > highlight the range of the residuals from the regression run > OK.
The autocorrelation has occurred over time. Therefore, a time-dependent variable added to the model may correct the problem.
The modified regression model: TICKETS = β0 + β1SNOWFALL + β2TEMPERATURE + β3YEARS + ε
- All the required conditions are met for this model.
- The fit of this model is high: R² = 0.74.
- The model is useful: Significance F = 5.93×10^-5.
- SNOWFALL and YEARS are linearly related to ticket sales.
- TEMPERATURE is not linearly related to ticket sales.