Beyond Surveys: The Hidden Reason Audience Surveys Don’t Work
TL;DR: Audience surveys don’t work the way coaches think they do. Not because the questions are bad, but because people edit themselves when they know someone’s watching. A coach sends out a survey and gets responses like “I want more confidence” and “I’d love better work-life balance.” Reasonable answers. Also almost completely useless for creating content that stops someone mid-scroll. The same people, writing anonymously at 2am on Reddit, say things like “I cried in the car park again before going into the office and I don’t know what’s wrong with me.” That’s the version that would make someone stop and think, “this person understands.” This article covers why surveys flatten the truth, what actually happens when someone fills one out, where the real language lives, and what to do instead if you want content that connects.
A coach sent out a perfect survey and got perfectly useless answers
Most coaches have been told to survey their audience. Ask what they’re struggling with, build content around the answers, rinse and repeat. It sounds like good practice. The problem is that audience surveys don’t work for the one thing coaches actually need from them: honest emotional language.
I worked with a career coach last year who’d done everything right. Segmented list. Thoughtful questions. Even offered a small incentive for completing it. She got forty-seven responses.
The top answers, paraphrased across the board: “I want more confidence in interviews.” “I’d like to feel less stuck.” “Work-life balance is important to me.”
She turned those into three months of content. Posts about building interview confidence. Tips for feeling unstuck. Work-life balance strategies. Solid material, well-written, genuinely helpful.
Almost nobody engaged with it.
That same month, I was reading a thread in r/careerguidance. Someone had written: “I practise answers in the mirror until 1am and then sit in the interview room and forget how to be a person.” Twenty-eight comments, people piling in with their own versions of the same terror. One reply: “I rehearse so much that I sound like a robot and they can tell, and that makes it worse.”
The survey said “confidence.” The thread said something much more specific. Something you could feel in your chest reading it. The coach was writing about the survey version. Her audience was living the Reddit version.
That gap is why audience surveys don’t work for the kind of insight coaches actually need.
Why audience surveys don’t work the way you’d expect
They don’t lie, exactly. They edit.
There’s a well-documented phenomenon in research called social desirability bias. People answer surveys the way they want to be seen, not the way they actually feel. It’s not deliberate deception. It’s instinctive. When someone asks you a question and there’s a name attached to your response, you present the tidied-up version.
Harvard Business Review has published extensively on this. When employees are surveyed about job satisfaction, they report higher satisfaction than their actual behaviour suggests. When patients are surveyed about health habits, they overreport exercise and underreport drinking. When coaching clients are surveyed about what they want, they give you the aspirational version.
“I want more confidence” is what someone says when a coach is watching. “I froze in the meeting and wanted to crawl under the table” is what they write when nobody’s watching.
Both are true. But only one tells you what to write about if you want your content to land.
The performance problem
Surveys create a stage. The moment someone knows their answer is going to a professional, the performance starts. Not consciously. Not maliciously. They just do what humans do: present the version of themselves that feels acceptable.
This is especially pronounced in coaching niches because your audience already suspects they should be handling their problems better. A leadership coach asks “What’s your biggest challenge?” and the client writes “delegating effectively.” The honest answer, the one they’d share anonymously, is closer to: “I don’t trust anyone on my team and I know that’s my problem but I can’t stop.”
The survey gets the sanitised headline. The anonymous forum gets the paragraph underneath.
The vocabulary problem
There’s a second issue, and it’s subtler. Your audience doesn’t have your vocabulary yet.
When they fill out your survey, they’re already trying to speak your language. They’ve read your posts, they’re on your list, they know roughly what you do. So they describe their problems using whatever terminology they’ve absorbed. “Boundary setting.” “Imposter syndrome.” “Mindset.”
Those are your words reflected back at you. Not theirs.
The people who haven’t found you yet, who are still sitting with the unnamed problem, describe it in raw, specific, often messy language. And it’s that language, the pre-coach vocabulary, that your content needs to reach them.
This is what I call the Language Gap. Surveys widen it because they train you to write in your audience’s performance vocabulary instead of their honest one.
What a survey actually measures
I want to be fair to surveys. They’re not useless across the board. They measure some things well.
What surveys are good at:
- Demographic data (age, location, industry)
- Multiple-choice preferences (do you prefer morning or evening sessions?)
- Satisfaction scoring (rate this experience 1 to 10)
- Binary decisions (would you attend an in-person event? Yes/no)
What surveys are terrible at:
- Emotional truth
- Problem language in the person’s own words
- The specific moments that push someone from thinking about change to actually seeking help
- The gap between what someone says they want and what they’d actually pay for
When a coach says “I surveyed my audience,” they’re usually after the second list: pain points, motivations, buying triggers. And that’s exactly where surveys fall apart.
A useful way to think about it: surveys measure what people are willing to admit. Honest conversations, the kind that happen in anonymous forums, capture what people actually experience. The distance between those two things is where your best content lives.
Steve Jobs knew this in 1997
There’s a quote that gets thrown around in product circles: “People don’t know what they want until you show it to them.” Jobs said it. He was talking about why Apple didn’t run focus groups. And while I’m not building the next iPhone, the principle transfers directly.
When Ford asked customers what they wanted in the early 1900s, the (possibly apocryphal) answer was “a faster horse.” People can describe the shape of their discomfort. They cannot, on the spot, describe the shape of the solution. That’s your job.
A wellbeing coach sends a survey asking “What topics would you like me to cover?” and gets back “stress management,” “work-life balance,” “mindfulness techniques.” All perfectly predictable. All already covered by a thousand other coaches. The survey has told her nothing she couldn’t have guessed, because it asked people to do the one thing they’re worst at: articulate a solution to a problem they haven’t fully examined yet.
Meanwhile, in r/burnout, someone writes: “I took a week off and felt worse, not better. The thought of going back made me physically sick. But I don’t think I’m depressed? I just hate who I’ve become at work.” That’s a person who’d tick “stress management” on a survey. But the content that would reach them doesn’t look anything like a stress management tips article. It looks like someone naming the exact flavour of dread they can’t put words to.
The survey answer and the forum post are describing the same problem. One gives you a category. The other gives you a sentence that could open your next piece of writing.
The observer effect in audience research
Physics has a concept called the observer effect. Measuring a thing changes the thing. Electrons behave differently when they’re measured.
People do the same.
When your audience knows they’re being observed, being surveyed, being asked for input by a coach they respect, the answers shift. Not dramatically. Not dishonestly. Just enough to sand off the edges that make the response feel too vulnerable, too messy, too real.
A grief coach asks: “What would help you most right now?” Survey answer: “Practical coping strategies and a supportive community.” Honest answer, written at 3am in a bereavement group: “I just want someone to tell me it’s okay that I still talk to her in the kitchen.”
The first answer tells you what to build. The second tells you what to say. Content that converts does both, but it starts with the second.
The gratitude effect
There’s another layer to this that rarely gets mentioned. When someone fills out your survey, they’re often grateful you asked. They want to be helpful. They want to give you something useful. So they think carefully about what would be a “good” answer rather than an honest one.
This is especially true for coaches with warm, engaged communities. The warmer the relationship, the more your audience performs competence for you. They minimise their struggles. They present the version they think you’d be proud of. A fitness coach asks “What’s holding you back?” and her community writes “consistency” and “meal planning.” Nobody writes: “I eat crisps in the car so my family doesn’t see.”
The people most likely to respond to your survey are the ones who already feel connected to you. And the people most connected to you are the most likely to perform for you. You’re polling the group with the strongest incentive to give you the edited version.
What happens when you base content on survey data
I’ve watched this play out enough times to recognise the pattern. A coach surveys their audience. They get back a cluster of aspirational answers. They build three months of content around those answers. And then... nothing.
The content is technically on-topic. It addresses what people said they wanted. But it sounds like it was written for the person who filled out the survey, not the person who was lying awake at 2am with the actual problem.
It sounds like: “Five strategies for building confidence before your next interview.”
It could sound like: “You practise answers until 1am and then freeze anyway. Here’s what’s actually happening.”
The first is content built from survey data. The second is content built from conversation mining, from reading what people actually write when nobody’s asking.
The difference isn’t quality. Both are well-intentioned. The difference is recognition. The first makes someone think, “That’s a useful article.” The second makes someone think, “How did they know?”
That second reaction is what coaching clients actually want from your content. Not education. Recognition.
The compound cost
It gets worse over time. When survey data drives your content, and that content gets low engagement, the natural response is to survey again. Ask better questions. Try different formats. Send it to a different segment. The result is usually the same, because the problem was never the survey design. It was the assumption that people can hand you their honest emotional language when you’re the one asking.
Each survey-and-content cycle moves you further from the raw language and deeper into the echo chamber of your own vocabulary reflected back at you. This is the Guessing Tax in action, except it feels like research. You think you’re gathering data. You’re actually gathering performance.
I’ve seen coaches spend six months in this cycle before someone points out that the content sounds professional but doesn’t sound like anyone’s experience. The audience growth mistakes article covers the broader pattern, but the survey loop is one of the most common entry points.
Where the honest language actually lives
If surveys give you the edited version, where do you find the unedited one?
Anonymous forums
Reddit is the single richest source. Anonymous by default, which removes the performance layer entirely. Search for subreddits related to the problem your audience faces, not the solution you offer. A burnout coach should be reading r/burnout and r/antiwork, not r/coaching.
Sort by “new” rather than “top.” The polished, highly upvoted posts are already edited by the crowd. The raw ones, posted at odd hours with three upvotes, are gold.
Late-night posts and comment sections
Timing matters more than most people realise. Posts made between 11pm and 4am tend to be more honest than midday posts. The social filter is thinner when someone can’t sleep and is looking for somewhere to put what they’re feeling.
The comment sections often contain richer language than the original post. Someone shares a general question, and the replies get specific. “Same. I haven’t told my partner how bad it’s got and I don’t know how to start” is the kind of sentence that shows up in a reply, not a headline.
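If you’d rather script the collection than scroll, here’s a minimal sketch of that filter, assuming the Python praw library and Reddit API credentials. The subreddit names and the hour window are illustrative, and Reddit timestamps are UTC, so treat the window as a rough proxy for the poster’s local 2am rather than an exact one.

```python
# Minimal sketch: pull "new" posts from problem subreddits and keep the
# late-night ones. Assumes the praw library (pip install praw) and Reddit
# API credentials from reddit.com/prefs/apps. Subreddit names are examples.
from datetime import datetime, timezone

import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="conversation-mining-sketch",
)

LATE_HOURS = {23, 0, 1, 2, 3}  # roughly 11pm-4am; Reddit stores UTC only,
                               # so this is a proxy for the poster's local night

for name in ["burnout", "careerguidance"]:   # problem subreddits, not r/coaching
    for post in reddit.subreddit(name).new(limit=100):  # "new", not "top"
        posted = datetime.fromtimestamp(post.created_utc, tz=timezone.utc)
        if posted.hour in LATE_HOURS:
            print(f"[r/{name}] {posted:%H:%M} UTC | {post.title}")
            # post.selftext holds the body; copy exact sentences, not summaries
```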
Book reviews
Amazon reviews for books in your niche are phenomenal. When someone reviews a book about anxiety, they don’t just rate the book. They describe why they bought it. “I picked this up because I’d started having panic attacks in Tesco and I couldn’t tell anyone” is a review opener that tells you more about your audience than any survey will.
The one- and two-star reviews are especially useful. They describe what the book failed to address, which tells you exactly what your audience was hoping to find.
Facebook groups (peer-led ones)
The distinction matters. Coach-led groups create the same performance effect as surveys. Peer-led groups, where people with the problem support each other without a professional moderating, give you much closer to honest language.
For a detailed walkthrough: Conversation Mining: How to Discover What Your Ideal Clients Say When You’re Not in the Room.
The 2am test
I use a simple test now whenever I’m reviewing content before it goes out.
Would the person who wrote that 2am Reddit post recognise themselves in this sentence?
Not the survey respondent. Not the person who politely listed their challenges for a coach. The person who couldn’t sleep, who typed something raw and real into a text box because they had nowhere else to put it.
If the answer is yes, the content will connect. If the answer is “they’d see it’s about their topic but wouldn’t feel seen by it,” the content is based on survey language, not real language.
This is a rough filter. It’s not scientific. But it catches the gap that survey-based content creates, every time.
“But Pat, isn’t some feedback better than no feedback?”
Yes. I’m not saying throw your survey responses in the bin.
Survey data is useful for structural decisions. If forty people say they’d prefer a group programme over one-to-one, that’s useful. If most of your list is in the UK and your webinar’s at 2pm Eastern, that’s worth knowing.
What I am saying is that survey data is terrible for the thing most coaches use it for: understanding what their audience is going through and how to talk about it.
For that, you need the unobserved version. The version that has no audience and no coach and no polite distance between the pain and the words.
If you’re creating content for coaches instead of clients, survey data will keep you there. Because coaches answer surveys articulately. Your actual clients, the ones who haven’t found you yet, don’t.
What to do instead
You’ve got three options, ranging from a weekend’s work to a fully automated system.
Option 1: Manual conversation mining
Spend a weekend reading thirty or more threads where your audience talks honestly. Reddit, Facebook groups, Amazon book reviews, Quora. Copy the exact sentences that make you stop. Don’t paraphrase. The exact words are the point.
The Weekend Audience Research Sprint walks through this step by step, with time estimates for each block.
Option 2: Pain-language mapping
Take the sentences you’ve collected and group them. You’ll find four or five core problems expressed in dozens of ways. The clusters tell you what to write about. The language within each cluster tells you how to write about it.
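Once you’ve collected a few dozen sentences, the grouping step can be rough-cut with code before you refine it by hand. A minimal sketch, assuming scikit-learn; the sentences and cluster count here are illustrative, and the output is a starting point for human judgement, not a finished map.

```python
# Minimal sketch: rough-cut collected sentences into pain clusters.
# Assumes scikit-learn (pip install scikit-learn); sentences are examples.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "I practise answers in the mirror until 1am and then forget how to be a person",
    "I rehearse so much that I sound like a robot and they can tell",
    "I took a week off and felt worse, not better",
    "The thought of going back made me physically sick",
    # ... dozens more, copied verbatim from threads
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(sentences)

k = 2  # start low; raise it until the clusters stop feeling distinct (often 4-5)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)

for cluster in range(k):
    print(f"\nCluster {cluster}:")
    for sentence, label in zip(sentences, labels):
        if label == cluster:
            print(" -", sentence)
```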
Pain-Language Mapping covers the full extraction process.
Option 3: Automated research
This is why I built Pain Point Pulse. It pulls language from online sources, maps pain points, and gives you a report of what your audience actually says when they’re not performing for a coach. The 2am posts you’d never find yourself, surfaced and sorted.
It doesn’t replace the understanding you get from reading threads yourself. But it finds patterns across hundreds of conversations in the time it would take you to read ten.
For a comparison of approaches: Manual vs Automated Audience Research.
The approach that actually builds content people respond to
Whichever option you choose, the underlying shift is the same. You stop asking your audience to describe their problems to you, and you start observing how they describe their problems to each other.
That’s it. That’s the whole thing. The research method changes, but the principle is consistent: unobserved language is more honest than observed language. Content built from honest language outperforms content built from polished language. Not because it’s more manipulative. Because it’s more accurate.
A client once told me that reading through a Reddit thread about her niche felt like eavesdropping. She meant it uncomfortably. I told her it wasn’t eavesdropping, it was listening. Those people posted publicly because they wanted to be heard. She’s just the first person in her industry who actually paid attention.
The full guide to building your research approach from scratch is in the Complete Guide to Audience Research.
The bigger picture: research that respects the truth
I think the reason audience surveys don’t work for coaches comes down to something simple. Surveys assume people know what they want and can articulate it clearly. Both assumptions are shaky.
Most people don’t know what they want until they see it. They know something’s wrong. They know they’re not okay. They can describe the symptoms but not the cause. Asking them to summarise it in a text box while a coach is watching is asking them to do the work that your research is supposed to do for them.
The better approach is to go where the honesty already is. It’s already being written, every night, in forums and threads and comment sections. You don’t need to ask anyone anything. You just need to read what’s already there and take it seriously.
That’s audience research that starts with respect for the mess, the 2am honesty, the unsanitised version of what people actually go through.
It’s also, not coincidentally, the research that produces content people actually respond to.
Frequently asked questions
Can I still use surveys if I change how I write the questions?
Better questions help, but they don’t solve the core issue. Open-ended questions (“Tell me about a moment when…”) get closer to honest language than multiple choice. But the performance effect is still there. The person knows you’re watching. For emotional truth and raw language, pair any survey with conversation mining. Use the survey for logistics and preferences. Use the mined language for content.
What if I’ve already built my content plan around survey results?
Don’t scrap it. Check it. Take your top three content topics and search for them on Reddit. Compare the language in your posts to the language in the threads. Where they match, you’re fine. Where they don’t, you’ve found the gap. Often the topics are right but the framing is wrong. “Interview confidence” might be the right territory, but “I practise until 1am and still freeze” is the right entry point.
Is conversation mining ethical?
Yes, with boundaries. You’re reading public posts and learning how your audience talks. You’re not quoting individuals, screenshotting posts, or infiltrating private spaces under false pretences. You’re using language patterns, not personal stories. The line is clear: use the language, respect the person. Conversation Mining covers the ethical considerations in detail.
What about asking questions in my own Facebook group or community?
It’s better than a survey because the format is more conversational, but the observer effect still applies. People in your group know you’re reading. For the unguarded version, you still need to go where they talk to each other, not to you. Your group is great for testing ideas and building relationships. It’s just not where you’ll find the 2am language.
How many anonymous posts do I need to read before patterns emerge?
For a single coaching niche, thirty threads is about the minimum; that’s where the same phrases and emotional states start repeating. By fifty you’ll have a solid language map. After that, five to ten threads per week keeps it current. The Weekend Audience Research Sprint covers how to do the initial burst efficiently.
My survey did get some really honest responses. Doesn’t that disprove this?
Some people are honest on surveys. Particularly the long, detailed, clearly-written-at-midnight responses. Those are valuable. The problem is that they’re the minority, and they get averaged into the rest. When you read survey results as a batch, the honest outliers get flattened by the majority of polished, aspirational responses. If you get one of those raw answers, treat it like the gold it is. But don’t design your entire research process around the hope that people will be that honest when they know you’re reading.
There’s a version of audience research where you ask people what they want and build from what they tell you. And there’s a version where you go and read what they’ve already told someone else, at 2am, when they weren’t editing.
I think we both know which one gives you the better answer.
Pat Kelman. Come and look at this.
Image: Photo by Tima Miroshnichenko on Pexels