This function updates the coefficient vector of a multiple linear regression by drawing from its conditional normal posterior distribution.
Arguments
- mu0
The mean vector of the normal prior distribution for the coefficient vector.
- Tau0
The precision matrix (i.e., inverse covariance matrix) of the normal prior distribution for the coefficient vector.
- XSigX
The matrix \(\sum_{n=1}^N X_n'\Sigma^{-1}X_n\). See below for details.
- XSigU
The vector \(\sum_{n=1}^N X_n'\Sigma^{-1}U_n\). See below for details.
Value
A vector, a draw from the normal posterior distribution of the coefficient vector in a multiple linear regression.
Details
This function draws from the posterior distribution of \(\beta\) in the linear utility equation $$U_n = X_n\beta + \epsilon_n,$$ where \(U_n\) is the (latent, but here assumed to be known) utility vector of decider \(n = 1,\dots,N\), \(X_n\) is the design matrix built from the choice characteristics faced by \(n\), \(\beta\) is the unknown coefficient vector (this can be either the fixed coefficient vector \(\alpha\) or the decider-specific coefficient vector \(\beta_n\)), and \(\epsilon_n\) is the error term, assumed to be normally distributed with mean \(0\) and known covariance matrix \(\Sigma\).
A priori, we assume the conjugate normal prior distribution $$\beta \sim N(\mu_0, T_0^{-1})$$ with mean vector \(\mu_0\) and precision matrix (i.e., inverse covariance matrix) \(T_0\), supplied via the arguments mu0 and Tau0. The posterior distribution of \(\beta\) is then normal with covariance matrix $$\Sigma_1 = \left(T_0 + \sum_{n=1}^N X_n'\Sigma^{-1}X_n\right)^{-1}$$ and mean vector $$\mu_1 = \Sigma_1\left(T_0\mu_0 + \sum_{n=1}^N X_n'\Sigma^{-1}U_n\right).$$
Note the analogy between \(\mu_1\) and the generalized least squares estimator $$\hat{\beta}_{GLS} = \left(\sum_{n=1}^N X_n'\Sigma^{-1}X_n\right)^{-1} \sum_{n=1}^N X_n'\Sigma^{-1}U_n,$$ which \(\mu_1\) weights against the prior parameters \(\mu_0\) and \(T_0\), and to which it reduces as the prior precision \(T_0\) approaches zero.
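For intuition, the following is a minimal sketch in R of a draw with these posterior parameters. It is not the package's actual implementation; in particular, the Cholesky-based sampling step is an assumption.
### sketch of a posterior draw (illustration only, not the package implementation)
draw_beta <- function(mu0, Tau0, XSigX, XSigU) {
  Sigma1 <- solve(Tau0 + XSigX)              ### posterior covariance matrix
  mu1 <- Sigma1 %*% (Tau0 %*% mu0 + XSigU)   ### posterior mean vector
  ### draw from N(mu1, Sigma1) via the Cholesky factor of Sigma1
  as.vector(mu1 + t(chol(Sigma1)) %*% rnorm(length(mu1)))
}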
Examples
### true coefficient vector
beta_true <- matrix(c(-1, 1), ncol = 1)
### error term covariance matrix
Sigma <- matrix(c(1, 0.5, 0.2, 0.5, 1, 0.2, 0.2, 0.2, 2), ncol = 3)
### draw data
N <- 100
X <- replicate(N, matrix(rnorm(6), ncol = 2), simplify = FALSE)
eps <- replicate(N, rmvnorm(mu = c(0, 0, 0), Sigma = Sigma), simplify = FALSE)
U <- mapply(function(X, eps) X %*% beta_true + eps, X, eps, SIMPLIFY = FALSE)
### prior parameters for coefficient vector
mu0 <- c(0, 0)
Tau0 <- diag(2)
### draw from posterior of coefficient vector
SigmaInv <- solve(Sigma)
XSigX <- Reduce(`+`, lapply(X, function(X) t(X) %*% SigmaInv %*% X))
XSigU <- Reduce(`+`, mapply(function(X, U) t(X) %*% SigmaInv %*% U, X, U, SIMPLIFY = FALSE))
beta_draws <- replicate(100, update_reg(mu0, Tau0, XSigX, XSigU), simplify = TRUE)
rowMeans(beta_draws)
#> [1] -1.071996 0.986084
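To illustrate the analogy to the generalized least squares estimator noted in the Details, the posterior mean can be compared against the GLS estimator computed from the same summary statistics. This is a sketch using the objects defined above; with the weak prior Tau0 = diag(2), the two should be close.
### posterior mean versus GLS estimator (see Details)
mu1 <- solve(Tau0 + XSigX) %*% (Tau0 %*% mu0 + XSigU)
beta_gls <- solve(XSigX) %*% XSigU
cbind(posterior_mean = mu1, gls = beta_gls)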