This page shows an example of the **listcoef** command with footnotes explaining the output, using the **elemapi2** data file.

We first load the **elemapi2** data file and then perform a regression analysis with the **beta** option.

```
use http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2
regress api00 ell meals yr_rnd mobility acs_k3 acs_46 full emer enroll, beta

      Source |       SS       df       MS              Number of obs =     395
-------------+------------------------------           F(  9,   385) =  232.41
       Model |  6740702.01     9   748966.89           Prob > F      =  0.0000
    Residual |  1240707.78   385  3222.61761           R-squared     =  0.8446
-------------+------------------------------           Adj R-squared =  0.8409
       Total |  7981409.79   394  20257.3852           Root MSE      =  56.768

------------------------------------------------------------------------------
       api00 |      Coef.   Std. Err.      t    P>|t|                     Beta
-------------+----------------------------------------------------------------
         ell |  -.8600707   .2106317    -4.08   0.000                -.1495771
       meals |  -2.948216   .1703452   -17.31   0.000                -.6607003
      yr_rnd |  -19.88875   9.258442    -2.15   0.032                -.0591404
    mobility |  -1.301352   .4362053    -2.98   0.003                -.0686382
      acs_k3 |     1.3187   2.252683     0.59   0.559                 .0127287
      acs_46 |   2.032456   .7983213     2.55   0.011                 .0549752
        full |    .609715   .4758205     1.28   0.201                 .0637969
        emer |  -.7066192   .6054086    -1.17   0.244                -.0580132
      enroll |   -.012164   .0167921    -0.72   0.469                -.0193554
       _cons |   778.8305   61.68663    12.63   0.000                        .
------------------------------------------------------------------------------
```

The **listcoef** command can then be used after the **regress** command to show several types of standardized regression coefficients.

```
listcoef

regress (N=395^a): Unstandardized and Standardized Estimates

 Observed SD: 142.32844^b
 SD of Error: 56.768104^c

---------------------------------------------------------------------------
 api00^d |      b^e      t^f  P>|t|^f  bStdX^g  bStdY^h bStdXY^i   SDofX^j
---------+-----------------------------------------------------------------
     ell | -0.86007   -4.083   0.000 -21.2891  -0.0060  -0.1496   24.7527
   meals | -2.94822  -17.307   0.000 -94.0364  -0.0207  -0.6607   31.8960
  yr_rnd | -19.88875  -2.148   0.032  -8.4174  -0.1397  -0.0591    0.4232
mobility | -1.30135   -2.983   0.003  -9.7692  -0.0091  -0.0686    7.5069
  acs_k3 |  1.31870    0.585   0.559   1.8117   0.0093   0.0127    1.3738
  acs_46 |  2.03246    2.546   0.011   7.8245   0.0143   0.0550    3.8498
    full |  0.60972    1.281   0.201   9.0801   0.0043   0.0638   14.8924
    emer | -0.70662   -1.167   0.244  -8.2569  -0.0050  -0.0580   11.6851
  enroll | -0.01216   -0.724   0.469  -2.7548  -0.0001  -0.0194  226.4732
---------------------------------------------------------------------------
```

**Footnotes**

**a.** This is the number of observations used in the regression and hence in the calculation of the coefficients given in the **listcoef** output.

**b.** This is the observed standard deviation of the y-variable (also known as the dependent variable), in this case api00.
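This value can be recovered from the regression ANOVA table: it is the square root of the total sum of squares divided by its degrees of freedom. A quick check in Python, using the numbers printed in the regression output above:

```python
import math

# Total SS and its degrees of freedom from the ANOVA table above
total_ss = 7981409.79
total_df = 394  # n - 1 = 395 - 1

observed_sd = math.sqrt(total_ss / total_df)
print(f"{observed_sd:.5f}")  # ≈ 142.32844, the "Observed SD" reported by listcoef
```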

**c.** This is the standard deviation of the error term. You will notice that it is the same as the Root MSE listed in the regression output. This term is also known as the standard error of prediction.
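The equivalence with the Root MSE can be verified from the ANOVA table: the SD of the error is the square root of the residual mean square (residual SS divided by its degrees of freedom). A quick check in Python:

```python
import math

# Residual SS and its degrees of freedom from the ANOVA table above
residual_ss = 1240707.78
residual_df = 385  # n - k - 1 = 395 - 9 - 1

sd_error = math.sqrt(residual_ss / residual_df)
print(f"{sd_error:.6f}")  # ≈ 56.768104, matching both Root MSE and "SD of Error"
```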

**d.** This is the dependent
variable. Listed below it are all of the independent variables in the
model.

**e.** These are the unstandardized regression coefficients. They are the same coefficients listed in the regression output in the column labeled Coef.

**f.** These are the same t-tests and p-values listed in the regression output, and the columns are labeled the same in both outputs. These columns provide the t value and two-tailed p value used in testing the null hypothesis that the coefficient/parameter is 0. With a two-tailed test, you compare each p value to your preselected value of alpha; coefficients with p values less than alpha are significant. For example, if you chose alpha to be 0.05, coefficients with a p value of 0.05 or less would be statistically significant (i.e., you can reject the null hypothesis and say that the coefficient is significantly different from 0). With a one-tailed test (i.e., you predict that the parameter will go in a particular direction), you can divide the p value by 2 before comparing it to your preselected alpha level. With a two-tailed test and alpha of 0.05, you can reject the null hypothesis that the coefficient for ell is equal to 0: the coefficient of -.86 is significantly different from 0. Using a two-tailed test and alpha of 0.01, the p value of 0.000 is smaller than 0.01, so the coefficient for ell would still be significant at the 0.01 level. Had you predicted that this coefficient would be negative (i.e., a one-tailed test), you would be able to divide the p value by 2 before comparing it to alpha. This would yield a one-tailed p value of 0.000, which is less than 0.01, and you could then conclude that this coefficient is less than 0 with a one-tailed alpha of 0.01.

The coefficient for meals is significantly
different from 0 using alpha of 0.05 because its p value of 0.000 is
smaller than 0.05.

The coefficient for yr_rnd (-19.89) is significantly different from 0 using alpha of 0.05 because its p value of 0.032 is smaller than 0.05 (though not smaller than 0.01).

The coefficient for mobility is significantly
different from 0 using alpha of 0.05 because its p value of 0.003 is
smaller than 0.05.

The coefficient for acs_k3 is not significantly different from 0 using alpha of 0.05 because its p value of 0.559 is greater than 0.05.

The coefficient for acs_46 is significantly
different from 0 using alpha of 0.05 because its p value of 0.011 is
smaller than 0.05.

**g.** These are the regression
coefficients with the x-variables (the independent variables) in standard
deviations and the y-variable (the dependent variable) in its original
units.

**h.** These are the regression
coefficients with the x-variables (the independent variables) in original
units and the y-variable (the dependent variable) in standard deviations.

**i.** These are the regression
coefficients with both the x-variable (the independent variable) and the
y-variable (the dependent variable) in standard deviations. You will
notice that these are the same values given in the Beta column of the
regression output.

**j.** This is the standard deviation of the x-variables (the independent variables).
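Footnotes g through j amount to three formulas: bStdX = b × SD(x), bStdY = b / SD(y), and bStdXY = b × SD(x) / SD(y). A quick check in Python using the ell row (all values taken from the output above):

```python
b = -0.8600707        # unstandardized coefficient for ell (footnote e)
sd_x = 24.7527        # SDofX for ell (footnote j)
sd_y = 142.32844      # Observed SD of api00 (footnote b)

b_std_x  = b * sd_x          # x in SD units, y in original units (footnote g)
b_std_y  = b / sd_y          # x in original units, y in SD units (footnote h)
b_std_xy = b * sd_x / sd_y   # both in SD units; the "Beta" column (footnote i)

print(f"{b_std_x:.4f}")   # -21.2891
print(f"{b_std_y:.4f}")   # -0.0060
print(f"{b_std_xy:.4f}")  # -0.1496
```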