How to Create Online Surveys That Yield More Accurate Results
After spending years in psychology research, I know as well as anyone that survey scales can be really boring, and, more importantly, that they don't have to be.
Personality-based surveys are designed to understand brands and consumer behavior in a way that taps into the unconscious decision-making process through the psychological traits, tastes, preferences, and styles that govern the kinds of things people like. We employ a number of different kinds of surveys to get at this: some measure personality, others measure more concrete tastes (like how eco-conscious someone is), and still others delve into the components of each individual's style in areas such as fashion and music. Each survey is the result of a lengthy research process in which it is designed, tested, and validated. This process ensures that every survey measures what it purports to measure (i.e., it is valid), and does so in a consistent way (i.e., it is reliable).
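Reliability, in the sense used above, is usually quantified. One standard estimate is Cronbach's alpha, which measures how consistently a scale's items hang together. The sketch below shows the calculation on made-up response data; the items, respondents, and scores are all hypothetical, not drawn from the surveys described here.

```python
# Cronbach's alpha: a standard internal-consistency reliability
# estimate for a multi-item scale. All data below are made up.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of numeric item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five hypothetical respondents answering three items on a 1-7 scale.
responses = np.array([
    [5, 6, 5],
    [2, 3, 2],
    [7, 7, 6],
    [4, 4, 5],
    [3, 2, 3],
])
alpha = cronbach_alpha(responses)  # values near 1 indicate high reliability
```

A scale whose items all track the same underlying trait, as in this toy data, yields an alpha close to 1; unrelated items drive it toward 0.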
Our personality scales beat the current state of the art on all of these validity criteria; here I spell out how.*
1. “...the accuracy of answers depends on the clarity of questions...”
Our survey questions are almost entirely constructed of concrete questions about specific things. Most survey items that one would find in the psychological literature tend to be fairly abstract (e.g., "I go straight for the goal", or "I'm good at turning plans into action"). These are good as far as they go, but they would be better if they were more concrete (e.g., "I am quicker than my co-workers at most tasks"). Most of our questions are specific and concrete (e.g., "What would you prefer to eat for breakfast?").
2. “...whether the respondents have a good base in experience for answering the questions...”
The abstract nature of items and responses on typical psychological scales means people often must first abstract away from concrete experiences to answer them. This requires meta-cognition (thinking about thinking, e.g., "Do I typically feel goal-oriented?"), which takes people away from their specific experiences. The mind works best with concrete examples and ideas, and our questions are designed to be concrete rather than abstract, so that people can answer them with a minimal amount of meta-cognition.
3. “...whether the form in which the answers are to be given is appropriate...”
Again, the response formats on typical psychological scales (e.g., rating agreement on a 1-7 scale) are abstract. Items are designed this way because researchers need numerical scales to quantify responses and plug them into their statistical models. We have developed a proprietary technique for survey question development that gives us the same desirable numerical properties from categorical responses (i.e., responses are categories rather than numerical ratings, such as "Prius", "Camry", and "Hummer" as responses to "Which kind of car would you prefer to drive?"). Categorical responses are more concrete than 1-7 rating scales, and thus better suited to the way our minds work.
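The technique itself is proprietary, but the general idea of making categorical answers statistically usable has a simple published form: assign each answer option a numeric weight on the trait of interest, so that categorical choices still yield scores a model can use. The sketch below illustrates only that generic idea, not the actual method; the option names and weights are hypothetical.

```python
# Generic sketch: score categorical survey answers by mapping each
# answer option to a numeric trait weight. The weights here are
# invented for illustration and do not reflect any real scoring key.

CAR_WEIGHTS = {"Prius": 1.0, "Camry": 0.0, "Hummer": -1.0}

def score_response(option_weights: dict[str, float], answer: str) -> float:
    """Map one categorical answer onto a numeric trait score."""
    return option_weights[answer]

def score_survey(answers: list[tuple[dict[str, float], str]]) -> float:
    """Average per-question scores into a single trait estimate."""
    return sum(score_response(w, a) for w, a in answers) / len(answers)

# A hypothetical two-question survey on one trait.
survey = [(CAR_WEIGHTS, "Prius"), (CAR_WEIGHTS, "Camry")]
trait_estimate = score_survey(survey)  # (1.0 + 0.0) / 2 = 0.5
```

In practice the weights would be derived empirically during the validation process described earlier, rather than assigned by hand as here.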
4. “...whether the respondents regard the questions themselves as meriting a serious and thoughtful response...”
Our questions were designed to be engaging because we wanted people to take them voluntarily online. Researchers typically have to pay people to take the abstract scales they devise, yet we have seen over a 90% completion rate across all of our scales, without payment. That alone suggests our questions are more engaging, and that people therefore give them a more serious and thoughtful response.
All of this means our scales are likely to be more valid (i.e., accurately measure the traits we are after) than traditional psychological survey scales.
*From Pace, R. C. (1984). Measuring the quality of college student experiences: An account of the development and use of the college student experiences questionnaire. Los Angeles: Higher Education Research Institute.