Why Didn’t My Conjoint Project Get Nice Price Curves?
Two reasons it doesn't work out
First, respondent quality matters. It matters a lot. Poor respondents are typically somewhat less price sensitive than engaged ones. Even if fewer than 2% of the sample are poor respondents, that can be enough to wreck your price curves. It has little impact on the units forecast, and you probably won't even notice a difference in the Price vs Demand curve. But Price vs Revenue ends up looking like a ski jump instead of a rainbow with its peak at a realistic price.
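A small simulation makes the claim concrete. The sketch below is purely illustrative (the logit intercept, price coefficients, and 2% poor-respondent share are hypothetical numbers, not from any real study): mixing in a sliver of nearly price-insensitive respondents barely moves the demand curve, but the revenue curve stops falling at high prices and tips back up.

```python
import numpy as np

# Hypothetical illustration: binary buy/no-buy logit demand, where a
# small share of "poor" respondents is far less price sensitive.
prices = np.linspace(1.0, 30.0, 60)

def share_buying(prices, beta_price, intercept=4.0):
    """Logit purchase probability at each price."""
    utility = intercept + beta_price * prices
    return 1.0 / (1.0 + np.exp(-utility))

n = 1000
n_poor = 20                                      # 2% of the sample
good = share_buying(prices, beta_price=-1.0)     # normal price sensitivity
poor = share_buying(prices, beta_price=-0.05)    # nearly insensitive

demand = ((n - n_poor) * good + n_poor * poor) / n
revenue = prices * demand

# For comparison: the same curves with no poor respondents at all.
demand_clean = good
revenue_clean = prices * demand_clean

# demand vs demand_clean differ by less than 2 share points everywhere,
# yet revenue_clean falls to ~0 at high prices while revenue turns back
# up -- the "ski jump" -- because the insensitive 2% keep buying.
```

The demand curves are visually indistinguishable, which is exactly why the problem hides until you plot revenue.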
Second, poor respondents are brutally difficult to identify. In a choice exercise, respondents tell you which option they would choose, and the data is coded as a number (1-5). Looking at the coded responses with your eyes doesn't help at all.
The image below shows how impossible it is to visually sort the bad respondents from the rest.
What's worse, you might think that if someone consistently answers differently from everyone else, that would be a signal they are not providing useful information. But they could simply have a different preference structure. We hope that people are different, so hunting for outliers is not a viable strategy for labeling respondents as poor.
The solution
The only way to consistently identify bad respondents is with a model-based approach.
Academic researchers provided the key. Howell and Ebbes showed, in their paper on identifying the information content of research subjects, that you can analytically derive (with a statistical model) which respondents are providing information in their choice tasks, using a finite mixture model on the scale factor within the Gibbs sampler estimation. Yes, it is complex.
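To give a feel for the latent-class idea without the full machinery, here is a deliberately simplified sketch (it is not Howell and Ebbes' actual Gibbs sampler, and every name and number in it is hypothetical). A respondent whose scale factor is near zero chooses as if at random, so their choices are no more likely under the fitted choice model than under a uniform model; a two-component mixture posterior separates the groups.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated choice data: 200 respondents, 12 tasks, 4 alternatives,
# 3 attributes. 10% of respondents answer at random (scale ~ 0).
n_resp, n_tasks, n_alts = 200, 12, 4
beta = np.array([1.2, -0.8, 0.5])          # hypothetical part-worths

is_noise = rng.random(n_resp) < 0.10
X = rng.normal(size=(n_resp, n_tasks, n_alts, 3))
util = X @ beta
p_model = np.exp(util) / np.exp(util).sum(axis=-1, keepdims=True)

choices = np.empty((n_resp, n_tasks), dtype=int)
for i in range(n_resp):
    for t in range(n_tasks):
        probs = np.full(n_alts, 1 / n_alts) if is_noise[i] else p_model[i, t]
        choices[i, t] = rng.choice(n_alts, p=probs)

# Per-respondent log-likelihood under the choice model vs uniform chance.
ll_model = np.log(p_model[np.arange(n_resp)[:, None],
                          np.arange(n_tasks)[None, :],
                          choices]).sum(axis=1)
ll_unif = n_tasks * np.log(1 / n_alts)

# Posterior probability of the "no information" component, with an
# assumed prior mixing weight pi = 0.10.
pi = 0.10
post_noise = (pi * np.exp(ll_unif)
              / (pi * np.exp(ll_unif) + (1 - pi) * np.exp(ll_model)))
flagged = post_noise > 0.5
```

In the real approach the mixture sits inside the Bayesian estimation itself, so the model parameters and the respondent classifications are learned jointly rather than in two steps as above.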
Red analytics has developed, and has been using since 2016, a proprietary, enhanced version of the detection algorithm inspired by the work of Howell and Ebbes.