Simple Regression in Stata

Jeremy Albright

This tutorial shows how to fit a simple regression model (that is, a linear regression with a single independent variable) using Stata. The details of the underlying calculations can be found in our simple regression tutorial. The data used in this post come from the More Tweets, More Votes: Social Media as a Quantitative Indicator of Political Behavior study by DiGrazia J, McKelvey K, Bollen J, Rojas F (2013), which investigated the relationship between social media mentions of candidates in the 2010 and 2012 US House elections and the actual vote results. The replication data in Stata format can be downloaded from our github repo.

In this example, we will assess the relationship between the percentage of social media posts that mention a Congressional candidate and how well the candidates did in the next election. The variables of interest are:

  • vote_share (dependent variable): The percent of votes for a Republican candidate
  • mshare (independent variable): The percent of social media posts for a Republican candidate

Both variables are measured as percentages ranging from zero to 100.
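
Before moving to the graphs, it helps to load the data and confirm that both variables behave as described. A minimal sketch, assuming the replication file is saved locally as mtmv_data.dta (the actual filename in the repo may differ):

* Load the replication data (filename is illustrative; adjust to match
* the file downloaded from the github repo)
use mtmv_data.dta, clear

* Verify that both variables fall in the expected 0-100 range
summarize vote_share mshare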

Data Visualization

It is always a good idea to begin any statistical modeling with a graphical assessment of the data. This allows you to quickly examine the distributions of the variables and check for possible outliers. The following code returns a histogram for the vote_share variable, our outcome of interest.

label var vote_share "Vote Share"
label var mshare "Tweet Share"

histogram vote_share, freq kdensity 

[Figure: Stata histogram and kernel density plot for vote share]

We start off by labeling our variables so that the figure displays informative labels rather than the raw variable names. The freq option requests that the y-axis show the frequency with which the binned values occur in the data; the default is to show the density. The kdensity option overlays a kernel density line, which is a smoothed version of the histogram. The variable’s values (x-axis) fall within the range we expect, and there is some negative skew in the distribution.
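
If raw counts are less intuitive for your readers, histogram also accepts a percent option in place of freq, which rescales the y-axis to the percentage of observations in each bin:

* Same histogram with the y-axis as percent of observations
histogram vote_share, percent kdensity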

We can do the same thing for our independent variable.

histogram mshare, freq kdensity

[Figure: Stata histogram and kernel density plot for Tweet share]

We again see that the values fall into the range we expect. Note the spikes at zero and 100, which indicate races where a candidate received either none or all of the Tweet share.
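
A quick check confirms this interpretation; count reports how many observations satisfy a condition:

* Races at the boundaries of the Tweet share scale
count if mshare == 0
count if mshare == 100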

It is also helpful to look at the bivariate association between the two variables. This allows us to see whether there is visual evidence of a relationship, which will help us assess whether the regression results we ultimately get make sense given what we see in the data. The syntax to use is the following.

graph twoway (scatter vote_share mshare, msize(vsmall)) ///
    (lfit vote_share mshare)

[Figure: Stata scatterplot with linear fit]

Here we are looking at a scatterplot of our observations, and we’ve also requested the best linear fit (i.e. the regression line) to better see the positive relationship. The msize(vsmall) option makes the dots in the figure very small, which arguably looks a little nicer given the number of observations. The /// continuation allows the command to span multiple lines when run from a do-file. There is a clear, positive association between these variables.
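
If the figure will go into a report, it can be written to disk with graph export (the filename here is arbitrary):

* Save the most recently drawn graph
graph export tweet_vote_scatter.png, replace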

Running the Regression

The following syntax runs the regression.

regress vote_share mshare

This returns:


      Source |       SS           df       MS      Number of obs   =       406
-------------+----------------------------------   F(1, 404)       =    141.17
       Model |  30146.6233         1  30146.6233   Prob > F        =    0.0000
    Residual |  86273.9002       404  213.549258   R-squared       =    0.2589
-------------+----------------------------------   Adj R-squared   =    0.2571
       Total |  116420.523       405  287.458083   Root MSE        =    14.613

------------------------------------------------------------------------------
  vote_share |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      mshare |   .2685346   .0226011    11.88   0.000     .2241041    .3129651
       _cons |   37.04239   1.345062    27.54   0.000     34.39819    39.68658
------------------------------------------------------------------------------

The box at the top left provides us with an ANOVA table that gives 1) the sum of squares (SS) for the model, often called the regression sum of squares, 2) the residual sum of squares, and 3) the total sum of squares. Dividing the SS column by the df (degrees of freedom) column returns the mean squares in the MS column. These values go into calculating the \(F\)-statistic, \(R^2\), adjusted \(R^2\), and Root Mean Square Error shown in the top right of the output.
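
For example, dividing the residual SS by its degrees of freedom reproduces the residual MS, and the ratio of the two mean squares reproduces the \(F\)-statistic:

\[ MS_{\text{residual}} = \frac{86273.9002}{404} = 213.549, \qquad F = \frac{MS_{\text{model}}}{MS_{\text{residual}}} = \frac{30146.6233}{213.549} = 141.17. \]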

Looking at the top right, we see that the number of observations used to fit the model was 406. The \(F\)-statistic tests the null hypothesis that the independent variable does not help explain any variance in the outcome. We clearly reject the null hypothesis with \(p < 0.001\), as seen by Prob > F = 0.0000. The R-squared value tells us that the independent variable explains 25.89% of the variation in the outcome. The adjusted \(R^2\) provides a slightly more conservative estimate of the percentage of variance explained, 25.71%. The Root MSE is the square root of the residual MS from the top left table, \(\sqrt{213.549258} = 14.613\). This value gives a summary of how much the observed values vary around the predicted values, with better models having lower RMSEs. It is also used in the formula for the standard error of the coefficient estimates, shown in the next table.
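
Both \(R^2\) values can be recovered from the ANOVA table. With \(n = 406\) observations and \(k = 1\) predictor,

\[ R^2 = \frac{SS_{\text{model}}}{SS_{\text{total}}} = \frac{30146.6233}{116420.523} = 0.2589, \]

\[ R^2_{\text{adj}} = 1 - (1 - R^2)\,\frac{n-1}{n-k-1} = 1 - (1 - 0.2589)\,\frac{405}{404} = 0.2571. \]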

The final table gives the results of the regression model. For each one-point increase in mshare (one additional percentage point of Tweet share), the expected vote share increases by 0.269 percentage points. The standard error tells us how much sample-to-sample variability we should expect in this estimate. Dividing the coefficient by the standard error gives the \(t\)-statistic used to calculate the \(p\)-value. Here we see that the mshare coefficient estimate is easily significant, \(p < 0.001\). The 95% confidence interval for the coefficient is also presented.
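
Concretely, for the mshare row,

\[ t = \frac{0.2685346}{0.0226011} = 11.88. \]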

The bottom line in the table gives us the estimate for the intercept in the simple regression equation. This is the vote share we expect when Tweet share equals zero. Here we see that the predicted value is 37.04, which coincides with what we saw above in the scatterplot, and has a 95% confidence interval of [34.398, 39.687]. The estimated constant value is significantly different from zero, \(p < 0.001\), though this test is of less interest to us compared to assessing the significance of the regression line slope.
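
After regress, the coefficient estimates stay in memory as _b[_cons] and _b[mshare], so predictions are easy to compute by hand. A minimal sketch (the Tweet share value of 50 is arbitrary):

* Predicted vote share when Tweet share is zero (the intercept)
display _b[_cons]

* Predicted vote share at a Tweet share of 50
display _b[_cons] + _b[mshare]*50

The second call returns roughly 50.47, consistent with the fitted line in the scatterplot above.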

Fun Facts about Simple Regression

In a simple regression only (that is, when there is just a single independent variable), the \(R^2\) is exactly equal to the Pearson correlation between the two variables. To see this, run:

pwcorr vote_share mshare, sig

We use the pwcorr (pairwise correlation) command with the sig option to get a \(p\)-value along with the estimate.

             | vote_s~e   mshare
-------------+------------------
  vote_share |   1.0000 
             |
             |
      mshare |   0.5089   1.0000 
             |   0.0000
             |

The correlation between Tweet share and vote share is 0.5089. If we square this, we get

\[ 0.5089^2 = 0.2589, \]

which is the same as the \(R^2\) value from the regression.
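
You can confirm the equivalence using Stata’s stored results. regress leaves \(R^2\) behind in e(r2); correlate is used below because it stores the estimate in r(rho):

* R-squared from the regression
quietly regress vote_share mshare
display e(r2)

* Squared Pearson correlation
quietly correlate vote_share mshare
display r(rho)^2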

Also in simple regression only, the model \(F\)-test is equivalent to the test for the single independent variable. The square of a \(t\)-statistic with \(k\) degrees of freedom is an \(F\)-statistic with 1 and \(k\) degrees of freedom. When there are no other predictors in the model, the square root of the model \(F\) will therefore equal the \(t\) for our coefficient,

\[ \sqrt{141.17} = 11.88. \]
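
This, too, can be checked from the stored results; regress saves the overall \(F\)-statistic in e(F):

* Compare the square root of the model F to the coefficient t-statistic
quietly regress vote_share mshare
display sqrt(e(F))
display _b[mshare]/_se[mshare]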

For more detailed information on where these numbers come from, consult our simple regression tutorial.