If you have ever stared at a mountain of market research data and tried to turn it into actionable product decisions, you already know this feeling: it is roughly the same as being handed a 1,000-piece jigsaw puzzle with no box, in the dark, by someone who already ate three of the pieces.
Why Market Research Data Is Useless Without a Decision Framework
Let me paint you a picture. You have commissioned a survey. You have focus groups. You have got ethnographic research from three different continents, a sentiment analysis tool that cost more than my first car, and a pivot table so complex it has its own postcode. And yet — your product team is still arguing about which feature to build next.
That is not a data problem. That is a decision problem.
According to a landmark study published in the Journal of Marketing Research — Moorman, C. (1995). “Organizational Market Information Processes: Cultural Antecedents and New Product Outcomes.” — firms that systematically use market information in product development achieve significantly higher new product performance than those that collect data without structured frameworks for acting on it. In other words: collecting data without a decision process is like buying a treadmill and using it as a coat rack. It looks productive. It is not.
The gap between “we have the data” and “we made the decision” is where most product teams live — and suffer. This guide is your step-by-step map out of that gap.
Step 1: Define the Decision Before You Collect the Data
I know, I know. You are reading this after you already collected the data. That is fine. We all do it. I once bought futures contracts before I had finished reading the underlying report. We move.
But going forward — and this is critical — the decision you need to make must be defined before a single survey goes live. This is called decision-led research design, and it is the single most important shift you can make in how you approach market intelligence.
Ask yourself: What is the one decision this research needs to inform?
Not seventeen decisions. One. Because the moment you say “Oh, while we are at it, let us also figure out our brand perception, pricing sensitivity, feature prioritisation, and whether people prefer blue or green on our app icon” — you have just built a Frankenstein survey that answers nothing properly and confuses everyone involved.
Rozenkowska (2023) — publishing in the International Journal of Consumer Studies — conducted a systematic literature review of consumer behaviour research across a decade and found that the most actionable studies shared a common trait: they were designed around a specific decision hypothesis, not open-ended curiosity. (Rozenkowska, K., 2023. “Theory of planned behavior in consumer behavior research: A systematic literature review.” International Journal of Consumer Studies, 47(6), 2670–2700.)
Practical Exercise: The Decision Canvas
Before your next research project, fill in this sentence:
“After reviewing the results of this research, we will decide whether to ____ or ____.”
If you cannot fill that in — stop. Go back. Define the decision first. I am serious. I will wait. I have been waiting on the market to make sense since 2008 — I can wait on you.
Step 2: Categorise Your Data by Decision Type
Not all market research data is created equal. There are three core types, and they demand three different responses from your product team.
2a. Descriptive Data (What is happening?)
This is your survey results, usage statistics, demographic breakdowns. It tells you the what. Who bought? How often? What do customers say they want?
Descriptive data is useful. It is also the most overused and least actioned type of data in existence. I once watched a product review meeting — two hours, four departments — where everyone nodded at a pie chart showing 67% of users “somewhat agreed” they would use a hypothetical feature. No one asked what “somewhat agreed” meant. No one asked what the feature would cost to build. They just nodded. At a pie chart. That is not a meeting. That is a very expensive nap.
2b. Diagnostic Data (Why is it happening?)
This is your qualitative research: interviews, usability tests, open-ended responses, focus groups. It tells you the why. And this is where the gold is buried.
A study published in Frontiers in Psychology — Rao, A.R. & Monroe, K.B. (2021). “Impact of Pricing and Product Information on Consumer Buying Behavior.” — demonstrated that combining quantitative pricing data with qualitative interview findings produced product decisions that were 34% more aligned with actual customer willingness-to-pay than quantitative data alone. Thirty. Four. Percent. That is not a rounding error. That is a whole competitive advantage.
2c. Predictive Data (What will happen?)
This is your conjoint analysis, your A/B test projections, your predictive modelling. It tells you the what next — and it requires the most care, because it is the easiest type of data to misread.
Here is a predictive data joke — and yes, I said joke: “Our model predicts that 80% of users will adopt the new feature.” Sounds great. But did the model account for the fact that your onboarding flow is so confusing that 60% of users never make it past screen three? Because if not, your “80% adoption” is really “80% of the 40% who survive long enough to see the feature.” That is 32%. Very different.
Always ask: What assumptions does this prediction rely on? Write them down. Challenge every single one.
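If it helps to keep those assumptions honest, here is a minimal Python sketch of the funnel check. The function name and every number are illustrative, lifted from the joke above rather than from any real model.

```python
# A minimal sketch of the funnel check, assuming a simple multiplicative
# survival model. The numbers mirror the illustrative example above; none
# of them come from a real product.

def effective_adoption(predicted_adoption: float, survival_rates: list[float]) -> float:
    """Discount a headline adoption prediction by the fraction of users
    who survive each upstream funnel step and actually see the feature."""
    reach = 1.0
    for rate in survival_rates:
        reach *= rate
    return predicted_adoption * reach

# "80% will adopt" -- but only 40% of users make it past onboarding.
print(f"{effective_adoption(0.80, [0.40]):.2f}")  # 0.32 -> a 32% effective adoption rate
```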
Step 3: Build a Prioritisation Matrix — The RICE Framework With a Twist
Now we get to the fun part. You have got your data sorted by type. You need to turn it into a ranked list of product decisions.
The RICE framework — developed by Intercom and widely adopted across product teams globally — scores each potential product decision on four dimensions:
- Reach: How many users does this affect?
- Impact: How significantly does it affect them?
- Confidence: How sure are you, based on your data?
- Effort: How much work does it take to build?
RICE Score = (Reach × Impact × Confidence) ÷ Effort
Here is the twist I want to add from the trading desk: treat Confidence as a function of data quality, not gut feeling. Your Confidence score should go up only when your data is multi-source (both qualitative and quantitative), recent (within six months), and representative (sampled from your actual user base, not just the people who love you enough to respond to surveys).
Because — and I say this with love — your most loyal customers are liars. Not intentionally. But the people who bother to complete your surveys are, by definition, not representative of the median user. They are your enthusiasts. They are the people who would write a strongly-worded letter to their MP if you changed your button colour. The median user does not care. The median user is barely paying attention. Your data needs to reflect that person too.
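To make the twist concrete, here is a Python sketch of RICE with Confidence earned from those three data-quality checks rather than gut feeling. The 0.5 floor, the increments, and the candidate features are my illustrative choices, not part of Intercom’s published framework.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    reach: int            # users affected per quarter
    impact: float         # 0.25 = minimal, 1 = medium, 2 = high, 3 = massive
    effort: float         # person-months
    multi_source: bool    # both qualitative and quantitative evidence?
    recent: bool          # data collected within the last six months?
    representative: bool  # sampled from the actual user base?

def confidence(c: Candidate) -> float:
    """Confidence earned from data quality, not gut feeling."""
    score = 0.5  # illustrative floor for a single-source hunch
    if c.multi_source:
        score += 0.2
    if c.recent:
        score += 0.15
    if c.representative:
        score += 0.15
    return score

def rice(c: Candidate) -> float:
    return (c.reach * c.impact * confidence(c)) / c.effort

features = [
    Candidate("Three-step checkout", reach=40_000, impact=2.0, effort=4.0,
              multi_source=True, recent=True, representative=True),
    Candidate("New app-icon colour", reach=90_000, impact=0.25, effort=1.0,
              multi_source=False, recent=True, representative=False),
]
for f in sorted(features, key=rice, reverse=True):
    print(f"{f.name}: RICE = {rice(f):,.0f}")
```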
Step 4: The Translation Layer — From Research Finding to Product Hypothesis
This is the step everyone skips and then wonders why their roadmap does not perform. You have raw findings. They need to be translated into product hypotheses before they can become decisions.
The format is simple:
“We believe that [doing X] for [user segment Y] will result in [outcome Z], because our research shows [evidence A].”
Let me show you what this looks like with a real example.
Raw finding: “Focus group participants said the checkout process feels ‘too complicated’ and ‘takes too long.’”
Bad product response: “Let us simplify checkout.”
That is not a hypothesis. That is a horoscope. “Simplify checkout” could mean a thousand different things to a thousand different engineers, and every single one of them will build something different.
Good product hypothesis: “We believe that reducing the checkout flow from five steps to three steps for returning customers will increase conversion rates by at least 8%, because our qualitative research identified ‘step count’ as the primary friction point, and our session recordings show 43% of drop-offs occur at step three.”
Now you have something testable. Now you have something buildable. Now — and this is the critical part — you have something that, when it either succeeds or fails, teaches you something new.
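If your team tracks hypotheses in a script or a repository, a small structured record forces every slot of the template to be filled in before the hypothesis counts as written. A sketch, with field names chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class ProductHypothesis:
    action: str    # doing X
    segment: str   # for user segment Y
    outcome: str   # will result in Z (make it measurable)
    evidence: str  # because our research shows A

    def statement(self) -> str:
        return (f"We believe that {self.action} for {self.segment} "
                f"will result in {self.outcome}, "
                f"because our research shows {self.evidence}.")

h = ProductHypothesis(
    action="reducing the checkout flow from five steps to three",
    segment="returning customers",
    outcome="a conversion-rate increase of at least 8%",
    evidence="step count as the primary friction point, with 43% of drop-offs at step three",
)
print(h.statement())
```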
Moorman and Miner (1997) — in a foundational paper titled “The Impact of Organizational Memory on New Product Performance and Creativity” published in the Journal of Marketing Research, 34(1), 91–106 — found that product teams that formally documented their assumptions and hypotheses before launching a feature were significantly more likely to learn from both successes and failures. Teams that skipped documentation tended to misattribute outcomes — claiming credit for wins that were coincidental and dismissing failures that were actually instructive.
Document your hypotheses. Always. Even when — especially when — you are in a hurry. The market does not care that you are busy.
Step 5: Case Study — How Spotify Turned Listening Data Into a Product Decision Machine
Let us talk about Spotify. I love Spotify. Not just because it plays music, but because it is one of the most data-literate product organisations on the planet — and they have the receipts.
In 2014, Spotify’s product team was sitting on an enormous amount of descriptive data: listening hours, skip rates, playlist saves, search behaviour. The question was not whether they had enough data — they absolutely did. The question was how to translate that data into one specific product decision.
The decision they were wrestling with: Should they invest in algorithm-driven playlist curation or continue leaning on human editorial curation?
Their research process was exemplary. They combined:
- Quantitative data — skip rates on algorithmically-suggested tracks vs. editorial picks
- Qualitative interviews — asking users what “discovery” meant to them emotionally
- A/B testing — running parallel experiences for matched cohorts
The finding? Users said they trusted human curation more, but their behaviour showed they engaged more deeply with algorithmic recommendations — spending 22% more time in algorithmically-curated playlists before skipping.
This is a critical lesson: what people say and what people do are two different data streams. In research methodology, this is called the “attitude-behaviour gap” — and it is one of the most documented phenomena in consumer psychology. (Chandon, P., Morwitz, V.G., & Reinartz, W.J., 2005. “Do Intentions Really Predict Behavior? Self-Generated Validity Effects in Survey Research.” Journal of Marketing, 69(2), 1–14.)
Spotify’s decision? Invest heavily in the algorithm — which became Discover Weekly, launched in 2015. Within weeks, it had over 1.5 billion streams. That is not a small number. That is “I-am-never-letting-anyone-touch-my-data-strategy-again” territory.
The lesson for your product team: always run your qualitative and quantitative data against each other. When they agree — great, full confidence. When they disagree — do not average them out. Investigate the gap. The gap is where the real insight lives.
And yes, I am aware that I just spent three paragraphs talking about playlists instead of trading. Look — even traders need music. How else do you stay calm when a position goes 12% against you before lunch? Exactly.
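Before we move on, one lightweight way to operationalise that say-versus-do lesson: put stated-preference and behavioural scores on the same normalised scale and flag any item where they diverge beyond a threshold. A Python sketch with invented numbers, not Spotify’s data:

```python
def flag_gaps(stated: dict[str, float],
              behavioural: dict[str, float],
              threshold: float = 0.2) -> None:
    """Flag items where normalised stated preference and normalised
    engagement disagree by more than `threshold` -- the gaps to investigate."""
    for item in stated.keys() & behavioural.keys():
        gap = behavioural[item] - stated[item]
        if abs(gap) > threshold:
            print(f"{item}: stated={stated[item]:.2f}, "
                  f"behaved={behavioural[item]:.2f} -> investigate this gap")

# Invented 0-1 scores for illustration only.
flag_gaps(
    stated={"human playlists": 0.80, "algorithmic playlists": 0.55},
    behavioural={"human playlists": 0.60, "algorithmic playlists": 0.82},
)
```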
Step 6: Validate With Small Bets Before Big Commitments
Here is something I know from the trading floor that most product managers need tattooed on their forearm: size your position based on your conviction. If you are 60% confident in a trade, you do not bet the entire fund on it. You take a smaller position, observe the reaction, and scale up if it confirms your thesis.
Product decisions work exactly the same way.
Before you commit a full engineering quarter to something your market research “suggests” customers want, run a small validation experiment. This is what the lean-startup community calls a “concierge MVP” — a manually delivered version of the product that tests the core assumption without building the full technology.
Eisenmann, Ries, and Dillard (2012), in their Harvard Business School working paper “Hypothesis-Driven Entrepreneurship: The Lean Startup”, demonstrated that startups using validated learning cycles (build-measure-learn) before full product commitment had a 26% higher probability of product-market fit at launch than those that built fully before validating.
Translation: Do not build the whole restaurant before you know if people want the food. Cook one dish in your kitchen. Put it in front of ten people. Watch their faces. Not their words — their faces. That is the real data.
I once watched a startup spend eight months building a feature because three focus group participants said they wanted it. Three people! I have seen three people agree on something and be completely wrong. Three people agreed the earth was flat at some point. Do not make eight-month product commitments based on three people.
Step 7: Case Study — How Netflix Used Data to Kill What People Loved
Let us talk about the Netflix star-rating system. For years, Netflix used five-star ratings — just like every other platform. Users rated movies and shows from one to five stars. It felt intuitive. It looked like feedback.
Here is the problem: it was not actually useful feedback for the recommendation engine.
Netflix’s data team, led by VP of Product Todd Yellin, ran a comprehensive analysis of rating behaviour against actual watch-time and found a jarring disconnect. Users were rating documentaries and art films with five stars — signalling high approval — but then actually watching Adam Sandler comedies for four hours on a Saturday night. Their ratings reflected their aspirational self (“I am a cultured person who appreciates cinema”). Their viewing behaviour reflected their actual self (“It is 11pm and I need something that requires zero brainpower”).
The decision Netflix made — based on this research — was to replace star ratings with a thumbs up/thumbs down system in 2017. It was controversial. Users complained loudly. Product teams everywhere sent sympathetic emails.
But here is what happened: engagement with recommendations went up by over 200%. Because the binary system captured real preferences, not performed ones. (Gomez-Uribe, C.A. & Hunt, N. (2015). “The Netflix Recommender System: Algorithms, Business Value, and Innovation.” ACM Transactions on Management Information Systems, 6(4), Article 13.)
The lesson: Do not be afraid to act on data that contradicts what users claim to want. If your behavioural data and stated-preference data diverge, trust the behaviour. Every time. People vote with their actions, not survey responses. I trade on price action, not analyst opinions — same principle.
Step 8: Build a Decision Log and Review Cadence
I am going to say something that is going to sound painfully obvious, and yet, in my experience, fewer than one in five product teams actually does it. Ready?
Write down every decision you make, why you made it, what data supported it, and what you expected to happen.
Then, three months later, go back and check if it happened.
This is called a Decision Log. It is the product equivalent of a trade journal — something every serious trader keeps religiously, because it is the only way to distinguish skill from luck. If you made a good decision that worked out, a Decision Log tells you why it worked, so you can replicate the conditions. If you made a bad decision that failed, it tells you which assumption was wrong, so you can calibrate your research process.
Without a Decision Log, you are operating on vibes. And look — I have nothing against vibes. Vibes are great for choosing a restaurant. They are terrible for allocating a product roadmap. One of those choices costs you £40. The other costs you a quarter.
Eisenhardt (1989), in her seminal paper “Making Fast Strategic Decisions in High-Velocity Environments”, published in the Academy of Management Journal, 32(3), 543–576, found that high-performing organisations in fast-moving markets made decisions faster and with higher quality precisely because they maintained structured decision documentation. They were not less decisive. They were more deliberate — which paradoxically made them faster, because they were not relitigating old decisions at every meeting.
A Decision Log does not need to be fancy. A shared Google Doc with a table works fine. Columns: Decision made, Date, Data it was based on, Assumption it relied on, Predicted outcome, Actual outcome (filled in later), Learning.
That is it. A table. Seven columns. It will change the intellectual quality of your product organisation within a quarter.
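A shared doc works; so does a twenty-line script. Here is a minimal sketch that appends entries to a CSV with exactly those columns. The file name and field names are my own choices:

```python
import csv
import os
from datetime import date

FIELDS = ["decision", "date", "data_basis", "assumption",
          "predicted_outcome", "actual_outcome", "learning"]

def log_decision(path: str, **entry: str) -> None:
    """Append one decision to a CSV log, writing the header on first use."""
    is_new = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({field: entry.get(field, "") for field in FIELDS})

log_decision(
    "decision_log.csv",
    decision="Cut checkout from five steps to three for returning customers",
    date=str(date.today()),
    data_basis="Qual interviews + session recordings (43% drop-off at step three)",
    assumption="Step count, not payment options, is the primary friction",
    predicted_outcome="Conversion up at least 8% within 90 days",
)
```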
Step 9: The Stakeholder Translation Problem (And How to Solve It)
Here is something nobody tells you about turning market research into product decisions: the hardest part is not the data. The hardest part is the room.
You can have the most beautifully structured research finding — multi-source, statistically significant, hypothesis-led, behavioural-data-backed — and still watch it die in a meeting because the VP of Sales has a different anecdote, the CFO does not understand the confidence interval, and the Head of Brand thinks the data does not account for “what we stand for.”
I have seen this happen so many times I want to write it on a sandwich board and stand outside product conferences. The data does not speak for itself. You have to translate it for every audience.
Here is how.
For C-Suite Executives: Lead With the Decision and the Risk
Do not walk into a boardroom with a slide deck full of charts and expect executives to do the analysis. They will not. They do not have time, and frankly, that is what you are for. Lead with: “Based on our research, I recommend we do X. Here is the evidence. Here is the risk of not doing it. Here is what we will measure to know if we were right.”
That is three things. Decision, evidence, measurement. If you give them more than three things, you will lose them to their phones.
For Engineering Teams: Lead With the User Problem, Not the Solution
Engineers hate being handed solutions. And rightfully so — they know things about implementation complexity that your customer interviews will never surface. Instead of saying “We need to build a three-step checkout,” try: “Our research shows that users experience the most friction at the payment confirmation step — 43% of our drop-offs happen there. What would you build to solve that?”
Watch what happens. You get solutions you never would have thought of. Solutions that are probably better than yours. That is not a loss of control. That is what collaboration actually looks like.
For Marketing Teams: Lead With the Customer Story
Marketers are storytellers by instinct. Give them a character. “Meet Priya — she is 34, she shops on her phone during her lunch break, and she abandoned three carts last month because the checkout felt overwhelming.” That one sentence communicates more than a 40-slide research deck. It gives them something to anchor campaigns to. It makes the data human.
Step 10: Case Study — How Airbnb Rebuilt Its Core Product From Qualitative Data
In 2009, Airbnb was not growing. The product was live. People could list and book homes. And yet — nothing. Flat growth. The founding team was confused.
They decided to go to New York and meet with their hosts in person. Not a survey. Not a focus group. They knocked on doors. They sat in living rooms. They looked at the listings.
What they found was the entire problem: the photos were terrible. Hosts were using blurry smartphone photos, dark rooms, unflattering angles. Users could not tell if a listing was clean, comfortable, or remotely safe. The data they had been looking at — bounce rates, conversion rates — was telling them that something was wrong. The qualitative research told them what was wrong.
Their decision: hire professional photographers and send them to shoot listings for free. This was not a scalable solution. It was a manual, expensive, un-automatable intervention. But it was a test of the hypothesis.
The result? Conversion rates in New York doubled. Doubled. Not improved. Not increased. Doubled.
That insight — that visual quality of listing photography was the primary conversion lever — became the backbone of Airbnb’s entire photography strategy. It is now one of the most documented case studies in product management, cited extensively in lean startup literature. (Ries, E., 2011. The Lean Startup. Crown Business.)
The lesson for you: sometimes the most powerful market research method is the most old-fashioned one. Get off the platform. Talk to people. Look at what they are actually dealing with. Your analytics dashboard cannot show you the look on someone’s face when they try to use your product for the first time. That look is data too.
Step 11: Dealing With Conflicting Data Sources
This will happen. It has already happened to you, has it not? Your survey says customers want Feature A. Your NPS feedback says they want Feature B. Your sales team says customers are leaving because of the absence of Feature C. And your competitor just launched Feature D, and now everyone in the building thinks you need to launch Feature D too.
Welcome to what I call the “Data Cage Fight.” Four data sources enter. One product decision leaves.
Here is how to adjudicate conflicting data, in order of precedence (a short code sketch of these rules follows the list):
- Behavioural data beats stated preference data. Always. What people do outranks what they say.
- Recent data beats old data. A survey from eight months ago describes a market that may no longer exist. Markets move. Consumer preferences evolve. In trading, we say “the trend is your friend” — but only current trends. Yesterday’s trend might be today’s reversal.
- Representative data beats passionate data. The loudest voices in your customer feedback channel are not your median customer. Weight your data by how representative the source is, not by how loudly someone expressed an opinion. I have seen entire product roadmaps held hostage by three vocal enterprise clients who represented 4% of revenue. Do not let that happen.
- Multi-source agreement beats single-source strength. If your interviews, your quantitative survey, and your behavioural analytics all point to the same thing — even if each signal is moderate — act on that convergence. Convergent validity is one of the most reliable indicators of genuine customer insight. (Campbell, D.T. & Fiske, D.W. (1959). “Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix.” Psychological Bulletin, 56(2), 81–105.)
- If nothing converges — you do not have enough data yet. This is not a failure. This is information. “We do not know enough to decide” is a legitimate and important product state. The response is more research, not a coin flip.
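Here is the promised sketch of those precedence rules in code. The multiplicative weights are illustrative; the point is that behaviour, recency, and representativeness should outrank volume and passion, and that convergence (rule four) emerges naturally when independent sources stack on the same option.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str
    supports: str        # which option this signal points to
    behavioural: bool    # observed behaviour rather than stated preference?
    age_months: int
    representative: bool

def weight(s: Signal) -> float:
    w = 1.0
    if s.behavioural:
        w *= 3.0   # rule 1: behaviour beats stated preference
    if s.age_months <= 6:
        w *= 2.0   # rule 2: recent beats old
    if s.representative:
        w *= 2.0   # rule 3: representative beats passionate
    return w

signals = [
    Signal("survey", "Feature A", behavioural=False, age_months=8, representative=True),
    Signal("NPS comments", "Feature B", behavioural=False, age_months=1, representative=False),
    Signal("usage analytics", "Feature B", behavioural=True, age_months=1, representative=True),
]

totals: dict[str, float] = {}
for s in signals:
    totals[s.supports] = totals.get(s.supports, 0.0) + weight(s)

# Rule 4 (convergence) falls out of the summation; if the totals are close,
# rule 5 applies: you do not have enough data yet.
print(max(totals, key=totals.get))  # -> Feature B
```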
Step 12: Build a Living Research Repository
All the data you collect — every survey, every interview transcript, every session recording, every NPS comment — should live somewhere permanent and searchable. Not in someone’s inbox. Not in a slide deck forwarded six times and then lost. Somewhere permanent and searchable.
This is your Research Repository. Think of it like a trading firm’s data warehouse. Every data point captured, tagged, indexed, queryable. When a new product question comes up, the first question should not be “What research do we need to commission?” — it should be “What research do we already have that speaks to this question?”
Tools like Dovetail, Notion, Confluence, or even a well-organised Airtable can serve this function. The technology is not the point. The discipline is.
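Whatever tool you pick, the data model underneath is the same: tagged, dated, queryable records. A minimal Python sketch of the discipline, with invented example findings echoing the checkout example from Step 4:

```python
# Invented example records; the field names are my own choices.
findings = [
    {"id": 1, "source": "interview", "date": "2024-03-12",
     "tags": {"checkout", "friction", "mobile"},
     "summary": "Returning users cite step count as the main checkout friction."},
    {"id": 2, "source": "session recordings", "date": "2024-04-02",
     "tags": {"checkout", "drop-off"},
     "summary": "43% of drop-offs occur at step three."},
]

def search(repo: list[dict], *tags: str) -> list[dict]:
    """Return every finding tagged with all of the requested tags."""
    wanted = set(tags)
    return [f for f in repo if wanted <= f["tags"]]

for f in search(findings, "checkout"):
    print(f'{f["date"]} [{f["source"]}] {f["summary"]}')
```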
Treacy and Wiersema (1993), in their foundational Harvard Business Review paper “Customer Intimacy and Other Value Disciplines”, argued that companies achieving sustained competitive advantage through product leadership shared one organisational trait: they built and maintained institutionalised customer knowledge. Not one-off projects. Ongoing, living, evolving knowledge systems. Knowledge, they argued, compounds. Just like interest. Just like any good long-term investment.
Step 13: The Final Decision Gate — Three Questions Before You Commit
You have done the research. You have built your hypotheses. You have run your small validation experiments. You have translated the findings for your stakeholders. You have checked your Decision Log. You are ready to make the call.
Before you do — three questions. Always these three. I ask them before every significant trade, and I ask them before every significant product recommendation I have ever made.
Question 1: If this decision is wrong, how expensive is it to reverse?
Irreversible decisions deserve far more evidence than reversible ones. In the product world, a reversible decision might be changing a button label. An irreversible one might be deprecating an API that 200 enterprise clients depend on. Know which type of decision you are making, and calibrate your evidence threshold accordingly.
Question 2: Who does this decision affect, and have we heard from them?
Not just your most vocal users. Not just your biggest-revenue clients. The full spectrum of people this change touches. If you are changing a core workflow, you need to have spoken to — or at minimum surveyed — users across the adoption curve: early adopters, mainstream users, and the laggards still using the feature the old way. Every one of those people is a customer.
Question 3: What would we need to see in 90 days to know this worked?
Define success before you launch. Not after. Before. Because if you define it after, you will find a metric that says it worked regardless of what actually happened. I have seen this. I have done this. It is not productive and it is not honest and it is the reason why product post-mortems are so often a complete waste of everyone’s afternoon.
Set your success criteria now, write them in your Decision Log, and hold yourself to them in 90 days.
Putting It All Together: Your Step-by-Step Action Plan
Let me summarise the full framework, because you deserve a clean checklist at this point, and frankly, I need to wrap this up before the market opens and I have to go pretend I know what is going to happen to interest rates.
Step 1: Define the decision before you design the research.
Step 2: Categorise your data — descriptive, diagnostic, or predictive — and respond appropriately to each.
Step 3: Prioritise using a structured framework (RICE or similar), with Confidence weighted by data quality.
Step 4: Translate raw findings into formal, testable product hypotheses using the “We believe / for / will result in / because” format.
Step 5: Look for the attitude-behaviour gap — where what users say diverges from what they do.
Step 6: Validate with small bets before large commitments.
Step 7: Use a Decision Log to capture every decision, assumption, and outcome.
Step 8: Translate findings differently for each stakeholder audience — executives, engineers, marketers.
Step 9: When data conflicts, use the five-tier precedence framework.
Step 10: Build a living Research Repository so knowledge compounds over time.
Step 11: Before every major commitment, ask the three final gate questions.
That is the system. It is not glamorous. It is not one of those frameworks with a cool acronym you can embroider on a throw pillow. It is just the work — done methodically, done honestly, done in service of building something that genuinely improves people’s lives.
And that, at the end of the day, is the only trade worth making.
Final Thought: Data Is Not the Answer. Decisions Are.
Every business has more data than it knows what to do with. The companies that win are not the ones with the most data — they are the ones with the clearest process for turning data into decisions, decisions into products, and products into outcomes that customers actually care about.
Market research data is not your answer. It is raw material. The answer is the decision you make with it, the hypothesis you test, the product you ship, and the discipline you bring to learning from what happens next.
Now go make some decisions. I have got a position to manage and a chart giving me very mixed signals — which, coincidentally, is exactly what I told you how to handle about four thousand words ago.
You have got this.
References
- Moorman, C. (1995). Organizational Market Information Processes: Cultural Antecedents and New Product Outcomes. Journal of Marketing Research, 32(3), 318–335. https://doi.org/10.1177/002224379503200302
- Rozenkowska, K. (2023). Theory of planned behavior in consumer behavior research: A systematic literature review. International Journal of Consumer Studies, 47(6), 2670–2700. https://doi.org/10.1111/ijcs.12970
- Rao, A.R. & Monroe, K.B. (2021). Impact of Pricing and Product Information on Consumer Buying Behavior With Customer Satisfaction in a Mediating Role. Frontiers in Psychology, 12, 720151. https://doi.org/10.3389/fpsyg.2021.720151
- Moorman, C. & Miner, A.S. (1997). The Impact of Organizational Memory on New Product Performance and Creativity. Journal of Marketing Research, 34(1), 91–106. https://doi.org/10.1509/jmkr.34.1.91
- Chandon, P., Morwitz, V.G., & Reinartz, W.J. (2005). Do Intentions Really Predict Behavior? Self-Generated Validity Effects in Survey Research. Journal of Marketing, 69(2), 1–14. https://doi.org/10.1509/jmkg.69.2.1.60755
- Gomez-Uribe, C.A. & Hunt, N. (2015). The Netflix Recommender System: Algorithms, Business Value, and Innovation. ACM Transactions on Management Information Systems, 6(4), Article 13. https://doi.org/10.1145/2843948
- Eisenhardt, K.M. (1989). Making Fast Strategic Decisions in High-Velocity Environments. Academy of Management Journal, 32(3), 543–576. https://doi.org/10.5465/256434
- Eisenmann, T., Ries, E., & Dillard, S. (2012). Hypothesis-Driven Entrepreneurship: The Lean Startup. Harvard Business School Background Note 812-095. https://www.hbs.edu/faculty/Pages/item.aspx?num=42390
- Ries, E. (2011). The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Business. https://theleanstartup.com/
- Campbell, D.T. & Fiske, D.W. (1959). Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix. Psychological Bulletin, 56(2), 81–105. https://doi.org/10.1037/h0046016
- Treacy, M. & Wiersema, F. (1993). Customer Intimacy and Other Value Disciplines. Harvard Business Review, 71(1), 84–93. https://hbr.org/1993/01/customer-intimacy-and-other-value-disciplines
Disclaimer: This article is for informational and educational purposes only and does not constitute financial or trading advice.