Why surveys are the best way to make mistakes

If a flight ticket to Mars cost $100, would you join the journey?

I’m sure most of us would answer “yes” to this question. Does that mean we have now validated a new potential service: $100 flight tickets to Mars?

I guess that, like me, you receive all those startup survey requests. I’m not talking about “after-event” satisfaction surveys or surveys sent to collect data on service-level satisfaction. I’m referring to surveys sent by startups and companies that want to validate an idea or get insight into how to sell their product, what clients want to buy, what employees think about management, and other business validation points. Well, making business decisions based on surveys is not really the best idea. Here is why.

The basic truth is that surveys don’t reveal what people will do in a given situation, only what people say they will do, and those two things are usually different. In other words, when we answer surveys we do two things: we imagine and we lie, sometimes even to ourselves.

Back to my initial question: if a flight ticket to Mars cost $100, most people would say they would join the journey. But how many of them would actually do so after learning about the risks, the duration, the isolation, and the fact that the flight is one-way? Probably none.

This brings me to another problem with surveys: most of them are conducted in the wrong way, and when I say “wrong way,” I mean scientifically wrong. They are usually biased by the opinions of the person compiling them, they often lack full information (as with the flight to Mars), and they tend to include questions that are not comparable or not analytically measurable. Let me expand on each of these:

Bias: Most of us don’t have the scientific training to write surveys, so we compile the questions in a way that reflects what we already think about the subject and how we want it to be addressed. For example, consider a SaaS startup asking about the preferred features of its product:

Leading questions can suggest a desired answer: “Don’t you think our new user interface makes your workflow much easier compared to others?” This question assumes that the new user interface is superior, leading respondents to agree irrespective of their true feelings.

Double-barreled questions can confuse respondents: “Do you agree our software speeds up your workflow and offers excellent customer service?” This question combines two different issues (software speed and customer service quality), which might force respondents to provide a positive or negative response that doesn’t accurately reflect their opinion on both aspects separately.

Questions framed with negative connotations can skew results: “How problematic do you find the frequent updates of our software?” Framing updates as “problematic” predisposes respondents to focus on the negatives, possibly exaggerating issues that might not have been a significant concern.

These examples show how the phrasing of survey questions can significantly influence the data collected, producing biased results and, in turn, misguided business decisions.
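
If you do end up running a survey, one simple way to expose this kind of framing bias is a split test: show half of the respondents a neutral wording and the other half the leading wording, then compare the answers. Below is a minimal Python sketch of that idea; the question texts and scores are invented purely for illustration and are not taken from any real survey.

```python
import random
from statistics import mean

# Hypothetical wordings of the same underlying question (illustration only).
NEUTRAL_WORDING = "How does the new user interface affect your workflow?"
LEADING_WORDING = "Don't you think our new user interface makes your workflow much easier?"

def assign_wording() -> str:
    """Randomly assign each incoming respondent to one of the two wordings."""
    return random.choice([NEUTRAL_WORDING, LEADING_WORDING])

def framing_gap(neutral_scores, leading_scores) -> float:
    """Difference in mean 1-5 ratings between the two groups.

    A large gap suggests the phrasing, rather than the product,
    is driving the answers.
    """
    return mean(leading_scores) - mean(neutral_scores)

# Purely illustrative responses -- not real survey data.
neutral_scores = [3, 4, 2, 3, 3, 4]
leading_scores = [4, 5, 4, 4, 5, 4]
print(f"Framing gap: {framing_gap(neutral_scores, leading_scores):.2f}")
```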

Lack of full information: Consider the following survey question: “Would you like the ability to track changes automatically in your projects?” Respondents might assume the feature simply tracks any change, without understanding what “tracking changes” actually entails. Some will imagine detailed tracking of every edit, others only the final ones, and they will overvalue or undervalue the feature accordingly.

A more informed question would be: “Would you like the ability to track all edits and updates automatically in your projects, with detailed logs of each change, accessible directly within the software interface?” This version clarifies that tracking covers all edits and updates, and that the logs are detailed and available inside the software itself, so respondents can judge the feature based on how it actually works.

Without full information, respondents might agree to the feature based on an incomplete understanding, leading the startup to believe there is high demand for something less comprehensive than what people imagined. With full information, their answers reflect their true interest and needs, giving the startup accurate feedback it can use to refine the product or its marketing. This example demonstrates how providing complete, detailed information in survey questions directly affects the accuracy and usefulness of the responses.

Incomparable questions: Consider a survey evaluating user satisfaction, where one question asks, “On a scale of 1 to 5, how would you rate the ease of use of our software?” and the next asks, “On a scale of 1 to 5, how likely are you to recommend our software to others?” Both questions use a 1-to-5 scale, but they measure fundamentally different dimensions: usability on one hand, and likelihood of recommending on the other, which can be driven by factors unrelated to usability, such as price or customer service. Treating the responses as directly comparable just because they share a numerical scale ignores the differences between what each scale measures, and can lead to misleading conclusions about user preferences, product strengths, and market position. This is why careful question design is necessary to ensure that the data collected are analytically meaningful and actually measure what they are intended to measure.
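
To make the analytical problem concrete, here is a minimal Python sketch, using invented scores, of what goes wrong when the two 1-to-5 questions above are treated as interchangeable: a single combined average hides exactly the signal a product team needs, while reporting each dimension separately preserves it.

```python
from statistics import mean

# Hypothetical 1-5 responses from the same five respondents (illustration only).
ease_of_use     = [5, 5, 4, 5, 4]   # "...how would you rate the ease of use?"
would_recommend = [2, 3, 2, 2, 3]   # "...how likely are you to recommend us?"

# The trap: a single combined score makes the product look merely "average".
combined_score = mean(ease_of_use + would_recommend)

# The better read: kept separate, the two dimensions tell a different story --
# usability is strong, but something else (price? support?) blocks referrals.
per_dimension = {
    "ease_of_use": round(mean(ease_of_use), 2),
    "would_recommend": round(mean(would_recommend), 2),
}

print(f"Combined score: {combined_score:.2f}")   # ~3.50, looks unremarkable
print(f"Per dimension:  {per_dimension}")        # {'ease_of_use': 4.6, 'would_recommend': 2.4}
```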

All the best,

Eran
