CALD Seminar

  • Alan Montgomery

Automatic Data Analysis using Model Based Priors for Marketing Datasets

The availability of store-level point-of-sale data for consumer packaged goods has created interest among retailers and manufacturers in using these data to create elasticity-based pricing and promotion strategies. Elasticities estimate the effect of a price or promotional change on sales and are derived from sales forecasting models. The most effective micromarketing and segmented pricing strategies require unique price elasticity estimates for individual products and stores. The problem is that even simple categories contain dozens of products and national chains can have thousands of stores, which means conventional modeling techniques for constructing regression models can be very labor intensive, and conventional estimation techniques at such a micro-level can result in very poor elasticity estimates that vary widely and are incorrectly signed. Previous researchers have made two suggestions: use simpler, more parsimonious models, or improve the estimates of more complex models at a micro-level by "shrinking" them towards an average of the estimates. In this talk I discuss a new estimation technique which combines these approaches. I show that this technique can substantially improve the price elasticity estimates and can be implemented in an automated fashion.

The basic motivation of our technique is to use simple parametric models as the basis of a prior. Instead of enforcing the restrictions implied by the simpler model directly, we use them as the basis of a stochastic prior. This stochastic prior yields a target model towards which our price elasticity estimates are drawn or "shrunk". We employ an adaptive prior that can learn from the data how much shrinkage to perform and can differentially shrink sets of parameters. This technique provides some discipline to the parameter estimates, but if the data do not agree with our simple model, the prior will be ignored.
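The idea of adaptively shrinking noisy micro-level elasticity estimates towards a simple parametric target can be sketched as follows. This is a minimal empirical-Bayes illustration under a normal/normal model on simulated data, not the paper's full hierarchical method; all variable names and the simulation setup are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: per-store price elasticities estimated by OLS
# (noisy) and a common target elasticity from a simple parametric model.
n_stores = 200
true_elasticity = rng.normal(-2.0, 0.4, n_stores)   # unknown truth
se = rng.uniform(0.5, 1.5, n_stores)                # sampling std errors
beta_ols = true_elasticity + rng.normal(0, se)      # volatile micro estimates
beta_target = np.full(n_stores, -2.0)               # simple-model prior mean

# Adaptive step: estimate the prior variance tau^2 from the data
# (method of moments), so the data decide how much shrinkage to apply.
tau2 = max(np.var(beta_ols - beta_target) - np.mean(se**2), 1e-6)

# Posterior mean: precision-weighted average of the micro estimate and
# the parametric target. Precise estimates keep more of their own value.
w = tau2 / (tau2 + se**2)
beta_shrunk = w * beta_ols + (1 - w) * beta_target

# Shrinkage typically reduces estimation error relative to raw OLS.
mse_ols = np.mean((beta_ols - true_elasticity) ** 2)
mse_shrunk = np.mean((beta_shrunk - true_elasticity) ** 2)
```

When the micro estimates are noisy relative to the estimated prior variance, the weights `w` are small and the estimates are pulled strongly towards the parametric target; when the data clearly disagree with the target, the estimated `tau2` grows and the prior is effectively ignored.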
Our approach combines the benefits of economic modeling with statistical shrinkage. Applications of our new methods to simulated and real store scanner data show significant improvements over existing Bayesian and non-Bayesian methods in terms of predictive accuracy, and result in more plausible and reliable price elasticity estimates. This technique may be employed in other problems where complex semi-parametric or nonparametric models are beneficial but a lack of data can lead to overfitting, so that simple parametric models work better. A lesson from this analysis is that the researcher does not need to choose one technique or the other. Instead, the strengths of the simple parametric model can form the basis of a prior for the more complex model. If the analyst places a strong prior on the parametric model being correct, then the results will mirror those of the parametric model. Conversely, a weak prior yields results similar to those of the complex model in the traditional framework. For example, a problem that may occur in decision trees is that few cases may fall in each of the nodes. The small number of cases can lead to quite volatile forecasts. The proposed methodology can reduce the variability of these forecasts by shrinking them towards a logistic regression. The adaptivity of the prior ensures that as the information provided to the system increases, the system can move quickly away from any priors that are not helpful.
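The decision-tree example above can be sketched as shrinking each leaf's estimate towards a logistic-regression prediction, weighting by the number of cases in the leaf. This is an illustrative sketch with a fixed prior weight (the talk describes learning the degree of shrinkage from the data); the data and all names here are hypothetical, and scikit-learn is assumed for the two base models.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical binary-choice data (e.g., purchase / no purchase).
X = rng.normal(size=(300, 3))
p_true = 1 / (1 + np.exp(-(X @ np.array([1.0, -1.0, 0.5]))))
y = rng.binomial(1, p_true)

tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)
logit = LogisticRegression().fit(X, y)

p_tree = tree.predict_proba(X)[:, 1]   # volatile in sparse leaves
p_logit = logit.predict_proba(X)[:, 1]  # smooth parametric target

# Number of training cases in each observation's leaf: leaves with few
# cases get pulled harder towards the logistic prediction.
leaves = tree.apply(X)
counts = np.bincount(leaves)[leaves]

# Shrink leaf estimates towards the logistic regression. The prior
# weight m acts like m pseudo-observations from the parametric model
# (fixed here for illustration; an adaptive prior would learn it).
m = 20.0
p_shrunk = (counts * p_tree + m * p_logit) / (counts + m)
```

Because `p_shrunk` is a convex combination of the two predictions, it always lies between the tree and logistic estimates, and leaves with many cases dominate their own forecasts as the data accumulate.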
For More Information, Please Contact: 
Catherine Copetas,