Uncertainty in choice modeling
Econometric models of consumer choice have been used to drive engineering optimization models toward profitable designs. However, every model carries uncertainty that must be accounted for. Various statistical methods are available for quantifying the uncertainty in these choice models in order to determine how it affects design decisions. The appropriate method depends on how the econometric model is estimated. For most choice models, the method of maximum likelihood (ML) is used. However, an introduction to the uncertainty associated with simpler ordinary least squares (OLS) methods yields insight into statistical tools such as hypothesis testing and confidence intervals, which are also useful for maximum likelihood estimation.
Ordinary Least Squares Estimation
A simple two-variable linear regression equation is modeled by the parameters \hat{\beta}_1 and \hat{\beta}_2 such that

Y_i = \hat{\beta}_1 + \hat{\beta}_2 X_i + \hat{u}_i

where \hat{u}_i is considered to be the error in the estimate. We seek to find the solution that minimizes the sum of the squared error terms, \sum_i \hat{u}_i^2. In other words, we want to find the values \hat{\beta}_1, \hat{\beta}_2 that satisfy:

\min_{\hat{\beta}_1, \hat{\beta}_2} \sum_i \hat{u}_i^2

which can also be written as:

\min_{\hat{\beta}_1, \hat{\beta}_2} \sum_i \left( Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i \right)^2
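As a minimal sketch of this minimization, the snippet below computes the two-variable OLS estimates from the standard closed-form solution of the least-squares problem; the data values are made up for illustration:

```python
import numpy as np

# Hypothetical data: X is the regressor, Y the observed response.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Closed-form least-squares solution for Y_i = b1 + b2 X_i + u_i:
#   b2 = sum((X - Xbar)(Y - Ybar)) / sum((X - Xbar)^2),  b1 = Ybar - b2 Xbar
x = X - X.mean()
y = Y - Y.mean()
b2 = np.sum(x * y) / np.sum(x * x)   # slope estimate
b1 = Y.mean() - b2 * X.mean()        # intercept estimate
residuals = Y - (b1 + b2 * X)        # estimated errors u_i
print(b1, b2)
```

Any other parameter values give a larger sum of squared residuals, which is what makes this the least-squares fit.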
Because the goal is to create the best model by minimizing the square of the error term, this method is called ordinary least squares (OLS) estimation. The uncertainty in the model is measured in terms of the individual uncertainties of the parameter values found, quantified by their standard errors. For the two-variable linear regression model these are

\mathrm{se}(\hat{\beta}_1) = \hat{\sigma} \sqrt{\frac{\sum X_i^2}{n \sum x_i^2}}, \qquad \mathrm{se}(\hat{\beta}_2) = \frac{\hat{\sigma}}{\sqrt{\sum x_i^2}}

where x_i = X_i - \bar{X} and \hat{\sigma}^2 = \sum \hat{u}_i^2 / (n - 2).
Once the standard errors are found, hypothesis testing can be used to determine the validity of \hat{\beta}_1 and \hat{\beta}_2. The t-statistic is the most widely used means of quantifying the uncertainty in choice modeling, because it is applicable for small sample sizes, where the assumption of normality and the associated z-test is incorrect. Additionally, the t distribution approaches the normal distribution as the number of degrees of freedom increases. For \hat{\beta}_2, the t-statistic is given by:

t = \frac{\hat{\beta}_2 - \beta_2}{\mathrm{se}(\hat{\beta}_2)}
where β2 is the value hypothesized under the null hypothesis. It is common practice to set β2 = 0 and solve for t. This value of t is compared to tα / 2, which comes from the t distribution and is usually looked up in a table of t-statistics based on the desired confidence level (1 - α) and the degrees of freedom in the problem. The number of degrees of freedom equals n - k, where n is the number of data points available and k is the number of parameters used to estimate the model. Since the discussion above is limited to a two-variable model, k = 2 in this case. If | t | > tα / 2, then the null hypothesis is rejected and β2 is considered "statistically significant". If | t | < tα / 2, then the null hypothesis is not rejected; β2 = 0 is a sufficient hypothesis, so β2 is considered "statistically insignificant." The t-test can be used in the same way to measure the validity of each parameter in the model. Hypothesis testing is essentially a way of determining whether the parameter being tested should be included in the model or not.
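The test above can be sketched numerically. Continuing the same made-up two-variable data, the snippet computes the slope's standard error and t-statistic and compares it against the critical value taken from SciPy's t distribution:

```python
import numpy as np
from scipy import stats

# Hypothetical data (same illustrative values as before).
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n, k = len(X), 2                            # sample size, number of parameters

x = X - X.mean()
b2 = np.sum(x * (Y - Y.mean())) / np.sum(x * x)
b1 = Y.mean() - b2 * X.mean()
u = Y - (b1 + b2 * X)                       # residuals
sigma2 = np.sum(u**2) / (n - k)             # estimated error variance
se_b2 = np.sqrt(sigma2 / np.sum(x * x))     # standard error of the slope

t_stat = (b2 - 0.0) / se_b2                 # null hypothesis: beta2 = 0
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - k)  # two-sided 95% critical value
print(t_stat, t_crit, abs(t_stat) > t_crit)
```

With |t| far above the critical value, the slope would be judged statistically significant for this toy data set.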
Another way to quantify the error in a model is to use confidence intervals to determine the range into which the true value of each parameter should fall. Under the assumption that the population error term is normally distributed, the estimated standard errors can be used to create confidence intervals and conduct hypothesis tests about the population parameters β1 and β2. A 100(1 - α)% confidence interval for β is given by:

\hat{\beta} \pm t_{\alpha/2} \, \mathrm{se}(\hat{\beta})
A confidence interval is a measure of the probability that the parameter obtained from a sample distribution, \hat{\beta}, can predict the true value of the parameter, β, within a range specified from the CDF of the t distribution. It is common practice to present the data in the form of a 95% confidence interval. Below is a quick derivation from the calculation of the t statistic to the construction of a 95% confidence interval:

P\left( -t_{0.025} \le \frac{\hat{\beta}_2 - \beta_2}{\mathrm{se}(\hat{\beta}_2)} \le t_{0.025} \right) = 0.95

P\left( \hat{\beta}_2 - t_{0.025} \, \mathrm{se}(\hat{\beta}_2) \le \beta_2 \le \hat{\beta}_2 + t_{0.025} \, \mathrm{se}(\hat{\beta}_2) \right) = 0.95
The width of the confidence interval depends on the standard error of the parameter: the larger the standard error, the wider the confidence interval and the less confident we are in our estimate of the parameter. Hypothesis testing can also be performed with confidence intervals. If the null hypothesis value falls outside the range specified by the confidence interval, it is rejected. As previously mentioned, the null hypothesis is often set equal to zero to determine the statistical significance of a parameter in the model. So, if zero does not fall within the confidence interval, the null hypothesis is rejected and the parameter in question is considered statistically significant to the model.
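This interval-based test can be sketched with the same made-up data: build the 95% interval for the slope and check whether zero lies inside it.

```python
import numpy as np
from scipy import stats

# Hypothetical data (same illustrative values as before).
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n, k = len(X), 2

x = X - X.mean()
b2 = np.sum(x * (Y - Y.mean())) / np.sum(x * x)
b1 = Y.mean() - b2 * X.mean()
u = Y - (b1 + b2 * X)
se_b2 = np.sqrt(np.sum(u**2) / (n - k) / np.sum(x * x))

t_crit = stats.t.ppf(0.975, df=n - k)       # 95% two-sided critical value
ci = (b2 - t_crit * se_b2, b2 + t_crit * se_b2)
print(ci)

# Zero outside the interval means the null hypothesis beta2 = 0 is rejected.
significant = not (ci[0] <= 0.0 <= ci[1])
```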
Maximum Likelihood Estimation
Maximum likelihood estimation (MLE) is a statistical method for fitting an assumed functional form of a probabilistic model to observed data.
The maximum likelihood approach is commonly used to fit simple discrete choice models such as the logit model; however, it can be impractical for fitting discrete choice models with greater complexity, and Bayesian estimation is typically called upon for such cases. Fitting real-world data by maximum likelihood offers a way of tuning the free parameters of the model to provide an optimal fit. The biggest benefit of the MLE method over OLS techniques is that MLE is not limited to linear models. For many discrete choice modeling studies, the MLE method is preferred.
The likelihood equation is given by:

L(\theta) = \prod_{i=1}^{n} f(y_i; \theta)

where θ is the set of parameters used to describe the model and f(y_i; θ) describes the probability density function for a random variable y, conditioned on the set of parameters θ. The likelihood L(θ) is then the joint density of n independent and identically distributed observations, or the product of the individual densities. Because it is mathematically simpler to work with, the log of the likelihood equation is used:

\ln L(\theta) = \sum_{i=1}^{n} \ln f(y_i; \theta)
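As a sketch of this in practice, the snippet below fits a simple binary logit model, the kind of discrete choice model mentioned above, by numerically minimizing the negative log-likelihood. The data and the two-parameter specification are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical binary-choice data: x is a product attribute,
# y the observed choice (1 = chosen, 0 = not chosen).
x = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])

def neg_log_likelihood(theta):
    # Logit choice probability: P(y=1) = 1 / (1 + exp(-(b0 + b1 x)))
    b0, b1 = theta
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    # Negative of sum_i ln f(y_i; theta): ln p where y=1, ln(1-p) where y=0
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Maximize the log-likelihood by minimizing its negative.
result = minimize(neg_log_likelihood, x0=[0.0, 0.0])
print(result.x)   # estimated (b0, b1)
```

Working with the log turns the product of densities into a sum, which is numerically far better behaved for optimization.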
Asymptotic Properties of Maximum Likelihood Estimation
- The MLE is asymptotically unbiased, i.e., its bias tends to zero as the number of samples increases to infinity.
- The MLE is asymptotically efficient, i.e., it achieves the Cramér-Rao lower bound as the number of samples tends to infinity. This means that, asymptotically, no unbiased estimator has lower mean squared error than the MLE.
- The MLE is asymptotically normal: as the number of samples increases, the distribution of the MLE tends to the Gaussian distribution with mean θ and covariance matrix equal to the inverse of the Fisher information matrix.
Uncertainty in Maximum Likelihood Estimation
The Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ upon which the likelihood function of θ, L(θ) = f(X;θ), depends. The likelihood function is the joint probability of the data, the Xs, conditional on the value of θ, viewed as a function of θ. Since the expected value of the score, \partial \ln L(\theta) / \partial \theta, is zero, the variance is simply the second moment of the score, the derivative of the log of the likelihood function with respect to θ. Hence the Fisher information can be written

I(\theta) = E\left[ \left( \frac{\partial \ln L(\theta)}{\partial \theta} \right)^2 \right]

If the regularity condition

\frac{\partial^2}{\partial \theta^2} \int f(X; \theta) \, dX = \int \frac{\partial^2}{\partial \theta^2} f(X; \theta) \, dX

is met, then the information matrix can be written as

I(\theta) = -E\left[ \frac{\partial^2 \ln L(\theta)}{\partial \theta^2} \right]

For N parameters in the model, so that θ is an N×1 vector, the Fisher information takes the form of the N×N matrix

\left[ I(\theta) \right]_{ij} = -E\left[ \frac{\partial^2 \ln L(\theta)}{\partial \theta_i \, \partial \theta_j} \right]
As in the OLS method, uncertainty for MLE is measured by first finding the standard error of each parameter in the model. Due to the asymptotic normality property of MLE, the covariance matrix is equal to the inverse of the Fisher information matrix:

\mathrm{Cov}(\hat{\theta}) = \left[ I(\theta) \right]^{-1}
By pulling the variances from the covariance matrix and taking the square root, the standard errors of the parameters can be found. The variances come from the diagonal of the covariance matrix, such that Var(θi) is the element in the ith row and ith column of the covariance matrix. Once the variances are found, hypothesis testing and confidence intervals can be determined using the standard errors of the parameters as in the case of ordinary least squares regression.
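These steps can be sketched for the same hypothetical binary logit example used earlier. For the logit, the Fisher information has the known closed form X'WX with W = diag(p_i(1 - p_i)), so the standard errors come from the diagonal of its inverse:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical binary-choice data (same illustrative values as before).
x = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])
X = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]

def neg_log_likelihood(theta):
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

theta_hat = minimize(neg_log_likelihood, x0=np.zeros(2)).x

# Fisher information for the logit model: I(theta) = X' W X,
# with W diagonal and W_ii = p_i (1 - p_i).
p = 1.0 / (1.0 + np.exp(-X @ theta_hat))
info = X.T @ (X * (p * (1 - p))[:, None])
cov = np.linalg.inv(info)                   # asymptotic covariance matrix
se = np.sqrt(np.diag(cov))                  # standard errors of (b0, b1)
t_stats = theta_hat / se                    # t-statistics under null of 0
print(se, t_stats)
```

From here, hypothesis tests and confidence intervals proceed exactly as in the OLS section, using these standard errors.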