Products go to die in the Valley of Meh.
Brands turn into commodities in the Valley of Meh.
Rating polls fail in the Valley of Meh.
Meh is hard to capture.
Meh is hard to put a number on.
Meh is hard to explain.
What was exciting yesterday is “Meh!” today.
What is “Meh!” today may not get a chance to redeem itself tomorrow.
Once you enter the Valley of Meh, rescue may be well nigh impossible.
Whatever you build,
Whatever you write,
Whatever you sell,
Make sure it does not go to die…
In the Valley of Meh.
"The existence of a performance standard, and the association of a numerical scale with a misfit variable, does not mean that the misfit is any more keenly felt in the ensemble when it occurs. There are of course many, many misfits for which we do not have such a scale. Some typical examples are "boredom in an exhibition," "comfort for a kettle handle," "security for a fastener or a lock," "human warmth in a living room," "lack of variety in a park." No one has yet invented a scale for unhappiness or discomfort or uneasiness, and it is therefore not possible to set up performance standards for them. Yet these misfits are among the most critical which occur in design problems. The importance of these nonquantifiable variables is sometimes lost in the effort to be 'scientific.'"
As product and service builders, we often tend to focus on what we can measure immediately at the expense of what's hard to measure and takes a longer time to show up in any significant way. Because what can be measured today can be optimised today.
But that leads to all sorts of problems.
Just yesterday, I was ordering food from Swiggy, and even before the guy delivered the food to me, I got a message saying,
"Your order #137693029052 was delivered superfast in 18 minutes!"
The guy arrived a whole 5 minutes later.
Although this never fails to create anxiety —
"What if he delivered it to someone else?"
"What if I get scammed out of my food?"
"In the best case, what if I get a refund and have to wait another 30 minutes for my food?"
— being familiar with how incentive structures at logistics companies work, and having seen this with other couriers before, I knew that the delivery person's performance was probably being measured on delivery speed, and that the food was still going to come.
I had a similar case with BlueDart, where the parcel arrived a whole hour after I got a message saying it had been delivered, and for a very expensive item at that. So this kind of misreporting is almost the norm in logistics, unless OTP verification is made mandatory at the time of delivery.
But imagine companies talking about good customer experience while creating this level of anxiety for the customer right before delivery, simply because the incentives weren't well thought out and they decided to measure just one quantitative parameter: delivery time.
And this doesn't just happen in logistics. It happens to some degree at every startup.
Let’s say that you want to measure the performance of your team. From an operations perspective, there are a ton of things you could measure that would give you a leading indicator of your team’s performance:
- If you're in sales, you measure number of deals closed per week.
- If you're in engineering, you measure number of tasks completed per week.
- If you're in customer support, you track the number of tickets successfully handled per week.
- If you're in logistics, you track average delivery time per order.
Picking a single quantitative metric like this is a terrible idea.
Consider what happens when your team decides to focus on that metric versus everything else.
- Your sales team optimises for the number of deals closed, and completely ignores retention. They close customers who aren't the right fit and will churn in a month or two.
- Your software engineering team optimises for the number of problems solved, sacrificing code quality and creating even more technical debt and problems in the long run.
- Your customer support team optimises the number of support tickets handled per week, at the expense of delivering a good brand experience to your customers.
- Your delivery team optimises for time taken per delivery, at the expense of breaking traffic rules, rash driving, and misreporting.
The reason this happens is that it's easy for teams to forget the end goal behind a metric and instead act solely to optimise the metric itself. Over time, the original purpose of the measurement gets lost. This is especially true if the metric is tied to individual employee performance bonuses.
And the sad thing is that these side effects show up nowhere in your numbers. Only over a longer time period will you see customer dissatisfaction grow, problems pile up, and your brand take a hit.
How do you solve this problem?
By clubbing a quantitative metric with a qualitative one.
- If you measure the number of deals closed, present it alongside retention.
- If you measure the number of engineering problems solved or bugs fixed, club it with the average number of bugs per release.
- If you measure the number of support tickets handled, pair it with your NPS.
- If you measure average delivery time per order, pair it with the number of traffic violations and customer complaints about mishandled items, and make OTP verification mandatory on delivery so that the metric does not get gamed.
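To make the pairing concrete, here is a minimal sketch of the idea in code. Everything in it is hypothetical (the `SalesReport` structure, the 70% retention threshold, the sample numbers are all made up for illustration): the point is simply that the raw output metric and its counterbalancing metric are reported together, and a divergence between them raises a flag instead of being celebrated.

```python
from dataclasses import dataclass


@dataclass
class SalesReport:
    """Pairs a raw output metric with a counterbalancing quality metric."""
    deals_closed: int          # quantitative: easy to optimise, easy to game
    customers_retained: int    # counterweight: did the deals actually stick?

    @property
    def retention_rate(self) -> float:
        # Fraction of closed deals still active; 0.0 if nothing was closed.
        return self.customers_retained / self.deals_closed if self.deals_closed else 0.0


def looks_gamed(report: SalesReport, min_retention: float = 0.7) -> bool:
    """A rising deal count with falling retention suggests the metric is being gamed."""
    return report.retention_rate < min_retention


# Hypothetical quarters: Q2 doubles the deal count but keeps far fewer customers.
q1 = SalesReport(deals_closed=40, customers_retained=36)  # 90% retention
q2 = SalesReport(deals_closed=80, customers_retained=32)  # 40% retention

print(looks_gamed(q1))  # False: healthy pairing
print(looks_gamed(q2))  # True: deal count up, retention collapsed
```

Viewed on its own, Q2 looks like the better quarter. Viewed as a pair, it is the one that should trigger questions.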
If you don't account for the hard-to-measure qualitative factors that shape long-term product and brand perception, alongside your straightforward quantitative metrics, you risk slipping into the Valley of Meh.
And we all know how hard it is to recover from the Valley of Meh.