What Your Customers Are Telling You That You’re Not Hearing

I have spent the last seventeen years in commercial sales, mostly in electrical wholesale and renewables. In every business I have worked in, there has been a quiet pattern. The owner or general manager believes they have a strong handle on what customers think. The frontline staff believe the boss has no idea. They are usually both partly right, and the gap between them is where good decisions go to die.

This article is about that gap. It is about the difference between collecting customer feedback and actually hearing it. It is also about why most small businesses I have worked with, sold into, or run a P&L for, do a poor job of converting customer signal into action, even when they think they are doing a fine job.

Listening is not the same as hearing

I want to draw a distinction that sounds obvious but matters in practice. Listening is the act of receiving feedback. Hearing is the act of understanding it well enough to decide what to do about it. A business that has a contact form, a Google Reviews page, and an annual customer survey is listening. Most are not hearing.

The reason is structural. Feedback arrives in fragments, from different channels, at different times, in front of different people. A customer leaves a one-star review on Tuesday because their order was late. The same customer mentioned the delay to the despatch coordinator on Monday, who noted it on a sticky note that ended up in the bin. The owner reads the review on Friday and replies politely without ever connecting the two events. The pattern, that the new courier is unreliable on Mondays, never surfaces because no one stitched the fragments together.

Reichheld (2003) framed customer feedback in terms of a single loyalty number, but the operational lesson buried in that work is more useful than the score itself: most companies collect a great deal of customer information and act on very little of it. The data has nowhere to go and nobody whose job it is to do something with it.

Passive feedback is a lagging indicator

Public reviews and end-of-transaction surveys are the two channels most small businesses lean on. Both are lagging indicators. By the time you read a bad review, the customer has decided whether to come back. They have probably already told other people. Bain & Company’s research on customer experience has consistently shown a gap between how companies rate themselves on customer experience and how their customers rate them, and the gap is larger when the only signal flowing back is post-transaction (Allen, Reichheld, Hamilton, & Markey, 2005).

The Australian Competition and Consumer Commission has documented similar patterns in its consumer surveys. Most consumers who have a problem with a product or service do not formally complain. They tell friends and family, they switch suppliers, and they leave the original business with no record that anything went wrong (Australian Competition and Consumer Commission, 2023). If your only window into customer dissatisfaction is the small fraction who write a review or fill in a survey, you are seeing a heavily filtered signal.

In wholesale, I have watched this play out at the account level. A regular trade customer who used to spend twelve thousand dollars a month quietly drops to seven, then to three, then to nothing. No phone call, no complaint, no review. Six months later you ask around and find out they switched to a competitor because of one botched delivery they never bothered to mention. The information was there. It was not in a system that anyone could see.

The frontline information gap

The single biggest insight I have taken from running sales teams is that the people closest to the customer hold information the people making decisions never see. The rep who calls on thirty trade customers a week knows which suppliers are circling, which product lines are losing favour, which new builds are about to be tendered. The despatch coordinator knows which deliveries are running late and why. The branch manager on the counter hears the same complaint about a particular product five times in a fortnight. None of that information reliably reaches the person setting strategy unless the business has built a way to move it.

Hammer (1990) wrote about reengineering work at the start of the nineties, and one of his less-quoted observations was that information asymmetry within an organisation is itself an operational problem, not just a communication one. Decisions made at the top using stale or sanitised information will be worse than decisions made with current frontline signal, regardless of who is making them. More recent work on employee voice in service organisations has reinforced the same idea: when frontline employees believe their input shapes decisions, they capture and pass on more useful customer information; when they believe nothing happens, they stop bothering (Detert & Burris, 2007).

I saw the inverse of this when I led the turnaround at Total Tools Brendale. We took the internal audit score from thirty-five percent to ninety-five percent over two years, and a meaningful share of the work was simply building habits where frontline staff captured what customers were actually saying and bringing it into a regular review. Nothing fancy. Just a structured way for information to travel from the counter to the people who could change something.

Not all feedback needs action

A separate failure mode is the opposite of ignoring feedback: trying to act on all of it. If every customer comment becomes a project, you end up with a backlog that no one believes in and a team that learns to tune out the queue. The skill is triage.

Triage means looking at incoming feedback and asking three questions. Is this a one-off, or part of a pattern? Does it affect retention, referral, or the offer? Is the cost of acting on it justified by the size of the problem? Most feedback fails one of these tests. That is fine. The point is to make the decision deliberately and record it.

In practice, this means having a regular forum (weekly is enough for most small businesses) where someone walks through the captured feedback and assigns each item to one of three buckets: act now, park with reason, or close as no action. Park with reason is the important one. It tells the staff member who logged the feedback that it was seen and considered, even if nothing changes. That is what keeps people logging feedback in the future.

What a structured feedback loop looks like

Let me describe what I have seen work for a business between five and twenty staff, because the dynamics change at scale and I am writing for the size of business I know.

Three components make the loop work. The first is a capture mechanism your staff will actually use: a single mobile-friendly form where any team member can log a customer comment, complaint, idea, or observation in under a minute. Free text plus two or three categorical tags is sufficient. Keep it minimal. Every extra field you add is a reason someone decides not to bother.

The second is a regular review cadence. Weekly works for most businesses. The review is short, twenty to thirty minutes, attended by whoever can make the relevant call. Each item gets a triage decision. Patterns get flagged. Owners or operators get visibility into what is actually coming in from the front, not the version filtered through three layers of summary.

The third is a visible log of what was actioned and what was parked. This is the part most businesses skip and it is the part that determines whether the loop survives past month three. If staff cannot see what happened to the things they logged, they stop logging. The log does not need to be public to customers, although there are arguments for that too. It needs to be visible to the staff who feed the system.
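For a business that runs this loop out of a spreadsheet or a small script rather than a dedicated tool, the three components reduce to a very small data model. The sketch below is illustrative only: the `FeedbackItem` structure, field names, and `triage` helper are my own invention, not a prescription from any particular product.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# The three triage buckets from the weekly review. "Park with reason"
# keeps a note so the person who logged the item can see it was considered.
BUCKETS = ("act_now", "park_with_reason", "no_action")

@dataclass
class FeedbackItem:
    logged_on: date
    logged_by: str                   # staff member who captured it
    source: str                      # counter, phone, email, review, etc.
    text: str                        # free text, under a minute to write
    tags: list = field(default_factory=list)   # two or three tags at most
    bucket: Optional[str] = None               # set at the weekly review
    decision_note: Optional[str] = None        # why it was parked or closed

def triage(item: FeedbackItem, bucket: str, note: str = "") -> FeedbackItem:
    """Record the weekly-review decision on a single item."""
    if bucket not in BUCKETS:
        raise ValueError(f"unknown bucket: {bucket}")
    item.bucket = bucket
    item.decision_note = note
    return item

# One item captured at the counter, then triaged in the weekly review.
item = FeedbackItem(date(2024, 3, 4), "counter staff", "counter",
                    "Another complaint about courier delays on Mondays",
                    tags=["delivery"])
triage(item, "act_now", "Raise with courier account manager")
```

The visible log is then nothing more than the list of triaged items, filterable by bucket, which is exactly what the staff who feed the system need to see.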

The reason I built businessreview360.au was that I could not find a tool that did this in a way that suited a small Australian business. The market is full of enterprise voice-of-customer platforms that are priced and shaped for a customer experience team that does not exist in a ten-person business. The local-feedback corner of the market is mostly review-aggregator tools, which solve a different problem. BR360 is built around the loop I have described: capture from the front, review on a cadence, decide and record, surface patterns over time. It is not the only way to close the gap, but the gap itself is the point.

A practical starting point

If you take nothing else from this article, take this. Pick one channel where customer feedback is currently arriving but going nowhere. It might be the comments your counter staff hear and forget. It might be the support emails that get answered individually but never aggregated. It might be the reviews you read but never connect back to a specific decision. Pick one, and put a thirty-minute weekly habit around it. Capture, review, decide, record. Do that for eight weeks before you change the system.

You will find two things. First, the volume of useful signal is higher than you expected, because most of it was previously evaporating. Second, a small number of patterns will dominate. Acting on those patterns will move the needle more than any survey programme you could buy. Salesforce’s State of the Connected Customer research has reported for several years that customers expect businesses to act on what they share, and they reward those that visibly do (Salesforce Research, 2022). The reward is not abstract: it shows up in retention, referral, and the willingness of staff to keep contributing to the loop.

The gap between what customers are telling you and what you are hearing is closeable. It is not closed by buying a tool. It is closed by building the habit, picking the right cadence, and being honest about what you are willing to act on. The tool is the scaffold. The habit is the work.

References

Allen, J., Reichheld, F. F., Hamilton, B., & Markey, R. (2005). Closing the delivery gap: How to achieve true customer-led growth. Bain & Company.

Australian Competition and Consumer Commission. (2023). Australian Consumer Survey. ACCC.

Detert, J. R., & Burris, E. R. (2007). Leadership behavior and employee voice: Is the door really open? Academy of Management Journal, 50(4), 869-884.

Hammer, M. (1990). Reengineering work: Don’t automate, obliterate. Harvard Business Review, 68(4), 104-112.

Reichheld, F. F. (2003). The one number you need to grow. Harvard Business Review, 81(12), 46-54.

Salesforce Research. (2022). State of the Connected Customer (5th ed.). Salesforce.

FAQ

How often should I review captured customer feedback?

Weekly is the right cadence for most small businesses. Monthly is too long: patterns get cold, staff stop seeing the link between their input and any outcome, and the queue grows large enough to feel intimidating. Daily is too short and turns into busywork. A weekly twenty- to thirty-minute review with the right people in the room covers eighty percent of what a small business needs.

What if my staff are not used to logging customer comments?

Expect a slow start. Most teams have years of habit telling them that feedback either disappears into a void or becomes their problem to solve personally. The fastest way to shift the habit is to act visibly on the first few items logged, and to record the parked-with-reason items as well. Staff watch what happens to early submissions and calibrate their effort accordingly. If the first three items you log produce a visible decision, the fourth one will arrive faster.

How do I tell the difference between a pattern and a one-off?

A pattern needs at least three independent signals from different customers or staff members within a defined window, usually a month. One frustrated customer is a one-off until proven otherwise. Two might be coincidence. Three from different sources is a pattern worth investigating. The rule is rough, but it stops you from chasing every individual complaint as if it were systemic, and it stops you from dismissing real issues as isolated.
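Once captured items carry a tag, a source, and a date, the three-signals rule is simple to check automatically. A rough sketch, assuming feedback items are stored as plain dictionaries (the structure and field names are hypothetical, and the thirty-day window matches the rule of thumb above):

```python
from datetime import date, timedelta
from collections import defaultdict

def find_patterns(items, window_days=30, threshold=3):
    """Flag tags with at least `threshold` signals from distinct sources
    inside the window. Distinct sources matter: three complaints from the
    same customer are one signal repeated, not a pattern."""
    cutoff = date.today() - timedelta(days=window_days)
    sources_by_tag = defaultdict(set)
    for item in items:
        if item["date"] >= cutoff:
            sources_by_tag[item["tag"]].add(item["source"])
    return [tag for tag, sources in sources_by_tag.items()
            if len(sources) >= threshold]

items = [
    {"date": date.today(), "tag": "late delivery", "source": "customer A"},
    {"date": date.today(), "tag": "late delivery", "source": "customer B"},
    {"date": date.today(), "tag": "late delivery", "source": "despatch"},
    {"date": date.today(), "tag": "pricing", "source": "customer A"},
]
print(find_patterns(items))  # ['late delivery']
```

The point of counting distinct sources rather than raw mentions is exactly the rule above: two signals might be coincidence, and repeats from one frustrated customer should not masquerade as a systemic issue.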

Do online reviews count as part of this loop?

Yes, but treat them as one input, not the main one. Reviews are a lagging signal: by the time someone writes one, the experience has already happened and the decision to return or not has usually been made. Capture them, look for patterns alongside other channels, but do not let your feedback loop become a review-response programme. The richer signal is what your staff hear in person and what your customers tell you before they get angry enough to leave a review.

What does this look like for a service business with no physical counter?

The principle does not change. Replace the counter with whatever your customer touchpoint is: support email, sales call, onboarding session, project handover. The staff member who has the conversation logs the relevant feedback, the weekly review triages it, and the decision is recorded. The format is the same. The difference is that you may need to be more deliberate about prompting the capture, because there is no physical moment that reminds the staff member to log it.