[r] What is the difference between Multiple R-squared and Adjusted R-squared in a single-variate least squares regression?

Could someone explain to the statistically naive what the difference between Multiple R-squared and Adjusted R-squared is? I am doing a single-variate regression analysis as follows:

 v.lm <- lm(epm ~ n_days, data=v)
 print(summary(v.lm))

Results:

Call:
lm(formula = epm ~ n_days, data = v)

Residuals:
    Min      1Q  Median      3Q     Max 
-693.59 -325.79   53.34  302.46  964.95 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  2550.39      92.15  27.677   <2e-16 ***
n_days        -13.12       5.39  -2.433   0.0216 *  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Residual standard error: 410.1 on 28 degrees of freedom
Multiple R-squared: 0.1746,     Adjusted R-squared: 0.1451 
F-statistic: 5.921 on 1 and 28 DF,  p-value: 0.0216 

Tags: r, statistics, regression

Answers:


The usual Adjusted R-squared formula is 1 - (1 - R2)(n - 1)/(n - p - 1), where n is the sample size and p is the number of predictors. Note that, in addition to the number of predictor variables, this formula also adjusts for sample size: a small sample will give a deceptively large R-squared.
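As a quick sanity check using the numbers from the summary above (n = 30, since there are 28 residual degrees of freedom plus 2 estimated coefficients):

R2 <- 0.1746   # Multiple R-squared from the summary
n  <- 30       # sample size: 28 residual df + 2 estimated coefficients
p  <- 1        # number of predictors (n_days)
1 - (1 - R2) * (n - 1) / (n - p - 1)   # ~0.1451, matching the Adjusted R-squared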

Ping Yin & Xitao Fan, "Estimating R-squared shrinkage in multiple regression", Journal of Experimental Education 69(2): 203-224, compare different methods for adjusting R-squared and conclude that the commonly used ones quoted above are not good. They recommend the Olkin & Pratt formula instead.

However, I've seen some indication that sample size has a much larger effect than any of these formulas suggest. I am not convinced that any of them are good enough to let you compare regressions fit on very different sample sizes (e.g., 2,000 vs. 200,000 samples; the standard formulas would make almost no sample-size-based adjustment). I would do some cross-validation to check the R-squared on each sample.
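For example, a minimal k-fold cross-validation sketch, assuming the same data frame v and model formula as in the question:

set.seed(1)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(v)))
press <- 0   # predicted residual sum of squares, accumulated over folds
for (i in 1:k) {
  fit  <- lm(epm ~ n_days, data = v[folds != i, ])
  pred <- predict(fit, newdata = v[folds == i, ])
  press <- press + sum((v$epm[folds == i] - pred)^2)
}
1 - press / sum((v$epm - mean(v$epm))^2)   # out-of-sample R-squared, typically below the in-sample value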


R-squared is not penalized for the number of variables in the model. The adjusted R-squared is.

The adjusted R-squared adds a penalty for including variables in the model that are uncorrelated with the variable you're trying to explain. You can use it to judge whether a variable adds anything to the model (see the example below).

Adjusted R-squared is essentially R-squared with a correction factor added to make it depend on the number of variables in the model.
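A quick sketch of that test (junk is a hypothetical noise predictor, unrelated to epm by construction):

set.seed(1)
v$junk <- rnorm(nrow(v))                                  # pure noise, unrelated to epm
summary(lm(epm ~ n_days, data = v))$r.squared             # baseline R-squared
summary(lm(epm ~ n_days + junk, data = v))$r.squared      # never lower than the baseline
summary(lm(epm ~ n_days + junk, data = v))$adj.r.squared  # usually lower than before adding junk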


The Adjusted R-squared is close to, but different from, R2. Instead of being based on the explained sum of squares SSR and the total sum of squares SSY, it is based on the overall variance (a quantity we do not typically calculate), s2T = SSY/(n - 1), and the error variance MSE (from the ANOVA table), and is worked out like this: adjusted R-squared = (s2T - MSE) / s2T.

This approach provides a better basis for judging the improvement in a fit due to adding an explanatory variable, but it does not have the simple summarizing interpretation that R2 has.

If I haven't made a mistake, you can verify the values of adjusted R-squared and R-squared as follows:

s2T <- sum(anova(v.lm)[[2]]) / sum(anova(v.lm)[[1]])  # SSY / (n - 1): total Sum Sq over total Df
MSE <- anova(v.lm)[[3]][2]                            # residual Mean Sq from the ANOVA table
adj.R2 <- (s2T - MSE) / s2T
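Assuming the v.lm object from the question, this should agree with what summary() reports:

summary(v.lm)$adj.r.squared   # should equal adj.R2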

On the other hand, R2 is SSR/SSY, where SSR = SSY - SSE:

SSE <- deviance(v.lm)                  # residual sum of squares; or sum((v$epm - fitted(v.lm))^2)
SSY <- deviance(lm(epm ~ 1, data=v))   # total sum of squares; or sum((v$epm - mean(v$epm))^2)
SSR <- SSY - SSE                       # explained sum of squares; or sum((fitted(v.lm) - mean(v$epm))^2)
R2 <- SSR / SSY
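And likewise for R2:

summary(v.lm)$r.squared   # should equal R2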
