I was out to eat with my family a few nights ago. The food was brought to our table right when my daughter started telling a story about something that happened at school. The food runner put our plates down and left, but we wanted to let my daughter finish telling the story before eating. The story ran longer than expected, but eventually we got there.
Just as we all picked up our forks, the server stopped by and asked, “How’s everything tasting so far?” I looked up and said, “Everything is great!” But not one bite of food had been taken yet.
I think I was caught off guard by the question and just blurted out a positive response to avoid awkwardly pointing out the obvious: that there was no way for me to know how everything was tasting because we hadn’t even started yet. It was an understandable slip-up for a server during a busy dinner rush. They were probably on autopilot, stopping by all their tables with the same check-in question. We laughed about it and moved on with our evening.
As a business owner in the world of customer insights, I often relate my personal life to my professional life. This was no exception. After all, I was asked by a brand representative to provide feedback about my experience. So, it’s definitely relatable to the work we do here at MacKenzie, gathering customer feedback about brand experiences. And that back-and-forth with the server underscored a misstep we commonly see with online feedback surveys: the wrong question being asked given the timing and context.
Thinking about it from the server’s perspective, the approach made sense. Wait about 5 minutes after the food is delivered, then stop by to ask how everything is tasting. In our case, my daughter’s story threw off that timeline. But the server couldn’t have known that, and they had already decided what they were going to ask before arriving at our table. So, a question that usually makes sense didn’t align with our specific situation.
In the restaurant, we were able to flag someone down to update our feedback and address any issues. For online feedback collection, that’s not always the case. The brand survey gets delivered as-is, and that’s it. If the questions aren’t timely and relevant to the customer, they likely won’t respond at all. Or, worse yet, they do respond and the resulting insights are inaccurate and unreliable – like me saying the food was great even though I hadn’t tried it.
A good comparison would be a product satisfaction survey. Imagine that it’s been a few weeks since a customer received their purchased item, and the brand wants to know how things are going. So, they send a product satisfaction survey. If they jump right into asking product-related questions, they’re assuming the product has been opened and used enough for the customer to have strong opinions. That’s probably a safe assumption, but they can’t know for sure.
In some cases, the customer’s timeline may have been thrown off for one reason or another, and the product is still sitting in its original package. Or maybe it was a gift, and that person has no experience with the product whatsoever. One way to address this issue is to start a product satisfaction survey by clarifying the respondent’s situation. Ask a simple qualifier question to find out how often, if at all, the product has been used. This helps make sure the ensuing questions are timely and relevant. Then, based on the response to that initial question, the customer can be sent down one of several question tracks that align with their experience.
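The qualifier-and-branch idea above can be sketched as simple routing logic. This is a hypothetical illustration only: the question text, answer options, and function name are my own assumptions, not taken from any particular survey platform.

```python
# A minimal sketch of branching a survey based on a qualifier question.
# All questions, answer options, and names here are illustrative.

def choose_track(usage_response: str) -> list[str]:
    """Route the respondent to a question track that matches how much
    they've actually used the product (the qualifier question's answer)."""
    tracks = {
        "not yet opened": [
            "Is there anything preventing you from trying the product?",
        ],
        "gift for someone else": [
            "Would you like us to follow up with the recipient instead?",
        ],
        "used a few times": [
            "What are your first impressions so far?",
        ],
        "used regularly": [
            "How satisfied are you with the product overall?",
            "How likely are you to recommend it to a friend?",
        ],
    }
    # Fall back to an open-ended question for unexpected answers,
    # rather than assuming a level of product experience.
    return tracks.get(usage_response, ["Tell us about your experience so far."])

print(choose_track("not yet opened"))
```

The point of the design is that no product-specific question is asked until the qualifier answer confirms it will make sense to this respondent.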
Just like the restaurant server, brands can get stuck on autopilot with the feedback questions they ask. Assumptions are made based on what’s worked in the past, and one standard approach is applied in every situation moving forward. But with personalization and relevant interactions being so important to modern customers, it’s wise for brands to build a strategy around a variety of possible scenarios. That way, each interaction (or survey question) will fit the context and align with where customers are along their journey.