It’s a frustratingly common scenario in market research: your survey data shows one clear story, while your in-depth interviews tell a completely different one. These conflicting research results can leave even experienced teams confused, questioning which findings to trust and how to move forward with confidence.
This clash between quantitative surveys and qualitative interviews isn’t a flaw in your process — it’s a frequent reality when mixing research methods. The good news is that these contradictions often contain deeper, more valuable insights than perfectly aligned data. Learning how to resolve conflicting research results can transform apparent problems into powerful strategic advantages.
In this guide, you’ll discover why surveys and interviews frequently disagree, proven frameworks for reconciling them, and practical steps to integrate both data types into clear, actionable recommendations. Whether you’re a market researcher, product manager, UX designer, or academic, these strategies will help you turn data conflict into stronger, more reliable conclusions.
Part One: Why Do Surveys and Interviews Clash in the First Place?
Before we can fix the problem, we need to understand it. And the first thing I want you to understand is that conflicting research results are not a bug — they are a feature of how human beings work. We are messy, contradictory, irrational creatures who will tell a survey one thing and tell an interviewer something entirely different over a cup of coffee. It’s not lying, exactly. It’s more like… strategic truth management.
There’s actually a term for this in social science: social desirability bias. It is the tendency of respondents to answer questions in a way that will be viewed favourably by others. In a survey — anonymous, impersonal, clinical — people feel freer to express genuine opinions. In an interview — face-to-face, socially charged, with an actual human being nodding at you — people adjust their answers to what they think the interviewer wants to hear. And this divergence is not trivial. It is, in fact, one of the primary reasons you will see surveys and interviews produce wildly different results on the same topic.
Tourangeau and Yan (2007) examined this phenomenon in detail, demonstrating that sensitive survey questions — those related to income, social behaviour, and market preferences — produced significantly different responses depending on whether the mode was self-administered (survey) or interviewer-administered (qualitative interview). The researchers found that self-administered surveys consistently produced more accurate admissions of stigmatised or socially risky behaviours because respondents felt less observed and judged. This is not a minor footnote for traders and analysts — this is foundational. It means that when your interview data is more positive than your survey data, you may be looking at social desirability bias in action, not a genuine divergence in consumer sentiment.
Reference: Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859–883. https://doi.org/10.1037/0033-2909.133.5.859
Now here’s where it gets truly interesting — and where traders specifically need to pay attention. Survey and interview methodologies are not just different tools for collecting the same data. They are different epistemological instruments. They are designed, at a fundamental level, to capture different kinds of truth.
Surveys capture what people say they think at scale. Interviews capture what people reveal they think in depth. Those are not the same thing. I cannot stress this enough. I’ve watched entire trading theses fall apart because the analyst treated survey data and interview data as interchangeable, like they’re just two different brands of the same cereal. They are not the same cereal. One is the nutritional label on the box and the other is what you actually eat at 2 a.m. straight from the bag.
Bryman (2006) explored methodological integration in social research and questioned the common assumption behind “triangulation”, the idea that multiple methods simply cross-validate a single truth. Instead, he proposed that mixed-methods research should embrace the idea that surveys and interviews may reveal genuinely different facets of a social reality that are both true simultaneously. This is a paradigm-shifting insight for anyone doing market research.
Reference: Bryman, A. (2006). Integrating quantitative and qualitative research: How is it done? Qualitative Research, 6(1), 97–113. https://doi.org/10.1177/1468794106058877
Part Two: The Four Main Types of Conflict Between Survey and Interview Data
Alright, let me break this down in a way that’s actually useful. After years of looking at research conflicts — both academically and in the wild chaos of trading floors — I’ve identified four main categories of clash. Think of these as the four horsemen of your research apocalypse. Dramatic? Yes. Accurate? Absolutely.
1. Directional Conflict
This is the most obvious one. Your survey says consumer confidence is up. Your interviews say respondents feel nervous about the future. The data is pointing in literally opposite directions. When I first encountered this, I thought someone had mislabelled the files. Spoiler: they hadn’t. I had just made the rookie mistake of expecting consistency from human beings. My therapist tells me I do this in my personal life too. Moving on.
Directional conflicts often arise from question framing effects — the way a question is worded changes the answer. Schwarz (1999) published seminal work on survey methodology showing that even minor changes in question wording can produce directionally opposite results. A survey asking “How confident are you in the economy?” will produce systematically different results from an interview question like “How do you feel about your financial situation right now?” — even when they are intended to measure the same construct.
Reference: Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54(2), 93–105. https://doi.org/10.1037/0003-066X.54.2.93
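To make that concrete, here is a minimal Python sketch (every number in it is invented) of the most basic directional check you can run: code both data streams to a common net-sentiment scale and compare signs. It is a toy, not a methodology, but it forces the comparison to be explicit rather than impressionistic.

```python
# A minimal sketch of a directional-conflict check. All data values are
# hypothetical; in practice you would load your own survey tabulations
# and your qualitative coding counts.

def net_sentiment(positive: int, negative: int, total: int) -> float:
    """Net sentiment as (share positive - share negative), range -1 to +1."""
    return (positive - negative) / total

# Hypothetical survey: 1,200 respondents, 54% positive, 31% negative.
survey_score = net_sentiment(positive=648, negative=372, total=1200)

# Hypothetical interview coding: 15 participants, 4 coded positive, 9 negative.
interview_score = net_sentiment(positive=4, negative=9, total=15)

directional_conflict = (survey_score > 0) != (interview_score > 0)
print(f"survey={survey_score:+.2f}, interviews={interview_score:+.2f}, "
      f"directional conflict: {directional_conflict}")
```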
2. Magnitude Conflict
The direction is the same but the intensity is wildly different. Your survey says 72% of consumers are “somewhat concerned” about inflation. Your interviews give you respondents who sound like they are personally offended by the existence of bread prices. The magnitude of emotion and concern in the qualitative data is off the charts compared to the quantitative signal.
This is classic. And it makes sense when you understand that Likert scale surveys — those 1-to-5 or 1-to-7 rating scales that every research firm on earth uses — are notoriously bad at capturing emotional intensity. You know how you’ll rate your Uber experience a 5/5 even when the driver took the wrong exit twice and had the heat on in July? That’s the ceiling effect in action, and it absolutely happens in consumer sentiment research.
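If you suspect scale compression, it is worth checking how much of your sample is piled on the top scale point before you declare a magnitude conflict. Here is a rough sketch; the ratings and the 40% trigger are illustrative, not a published cut-off.

```python
# A rough sketch of a ceiling-effect check on Likert responses. The
# 40% threshold is an illustrative rule of thumb, not a standard.
from collections import Counter

def ceiling_share(responses: list[int], scale_max: int = 5) -> float:
    """Share of responses piled on the top scale point."""
    counts = Counter(responses)
    return counts[scale_max] / len(responses)

# Hypothetical 1-to-5 satisfaction ratings.
ratings = [5, 5, 4, 5, 5, 3, 5, 4, 5, 5, 2, 5, 5, 4, 5]

share = ceiling_share(ratings)
if share > 0.40:  # illustrative threshold
    print(f"{share:.0%} of responses at the ceiling; "
          "the scale may be compressing real variation in intensity.")
```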
3. Coverage Conflict
Your survey covers a broad sample — thousands of respondents. Your interviews cover fifteen carefully selected participants. The survey says one thing. The interviews say something slightly different. Is this a real conflict or is it a sampling artefact? This is where things get genuinely tricky, and it’s where a lot of analysts — including some very expensive ones — make expensive mistakes.
I once watched a colleague build an entire trading position around qualitative interview data from eight respondents in three cities. Eight people. The survey data, which had 4,000 respondents, was telling a different story. He dismissed the survey as “too generic.” The market sided with the survey. His position did not survive. Eight people, bro. I’m not saying interviews aren’t valuable. I’m saying eight people don’t represent a market. Your Thanksgiving dinner table has eight people and y’all can’t agree on where to sit, let alone consumer sentiment.
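If you want to see just how little eight people can tell you, run the confidence intervals. The sketch below uses the Wilson score interval with only the standard library; the 6-of-8 and 45%-of-4,000 figures are hypothetical.

```python
# A back-of-the-envelope look at why eight interviewees cannot settle an
# argument with a 4,000-respondent survey: the Wilson 95% confidence
# interval for a binomial proportion, standard library only.
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical: 6 of 8 interviewees bearish vs 45% bearish in a 4,000-person survey.
print("interviews n=8:  %.2f to %.2f" % wilson_interval(6, 8))
print("survey n=4000:   %.2f to %.2f" % wilson_interval(1800, 4000))
```

With n = 8, the 95% interval runs from roughly 41% to 93%. That is not a signal; that is a shrug.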
4. Temporal Conflict
This one is sneaky. Surveys and interviews are often conducted at different times, and markets move fast. Consumer sentiment can shift in days, not months. If your survey was conducted two weeks before your interviews, you might be comparing pre- and post-event data without realising it. A surprise earnings announcement, a policy change, a viral news story — any of these can create a temporal gulf between your two data sets that makes them look like they’re contradicting each other when they’re actually just describing different moments in time.
Groves et al. (2009) provide an extensive framework for understanding non-sampling error in survey research, including temporal effects, and their work is essential reading for anyone trying to integrate multi-method research data.
Reference: Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology (2nd ed.). Wiley. https://www.wiley.com/en-us/Survey+Methodology%2C+2nd+Edition-p-9780470465462
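A simple habit that catches most temporal conflicts before they bite: log both fieldwork windows and check whether anything material happened in between. A minimal sketch, with hypothetical dates and events:

```python
# A minimal temporal-gap audit: flag any known market events that fall
# between the survey fieldwork and the interview fieldwork. All dates
# and events are hypothetical placeholders.
from datetime import date

survey_window = (date(2024, 3, 4), date(2024, 3, 11))
interview_window = (date(2024, 3, 25), date(2024, 4, 2))

events = {
    date(2024, 3, 15): "surprise central bank rate decision",  # hypothetical
    date(2024, 3, 20): "major retailer profit warning",        # hypothetical
}

gap_start, gap_end = survey_window[1], interview_window[0]
print(f"Fieldwork gap: {(gap_end - gap_start).days} days")
for day, label in sorted(events.items()):
    if gap_start < day < gap_end:
        print(f"  Confound in gap: {day}: {label}")
```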
Part Three: What the Academic Literature Says About Resolution Strategies
Now we’re getting to the good stuff. The literature on mixed-methods conflict resolution is rich, and surprisingly accessible once you get past the jargon. Let me walk you through the key frameworks that have changed how I personally approach this problem.
Framework 1: The Sequential Explanatory Design
This framework, extensively documented by Creswell and Plano Clark (2017), involves using one method’s results to explain or elaborate on the other’s findings. Specifically: you run your survey first, identify anomalies or unexpected patterns, and then use qualitative interviews to explain why those patterns exist.
The genius of this approach is that it reframes conflict not as a problem to be resolved, but as information to be mined. When your survey and interview results don’t match, the question isn’t “which one is right?” — it’s “what does this disagreement tell me about the phenomenon I’m studying?”
In financial market research, this approach has profound implications. Imagine your survey shows that retail investor confidence is high, but your interviews with the same demographic reveal anxiety and uncertainty. Rather than picking a side, the sequential explanatory design asks: under what conditions do people report high confidence in surveys but express anxiety in interviews? The answer might reveal compartmentalisation between stated beliefs and felt experience — which is itself a critical signal for traders watching for divergences between consumer sentiment indices and actual spending behaviour.
Reference: Creswell, J. W., & Plano Clark, V. L. (2017). Designing and Conducting Mixed Methods Research (3rd ed.). SAGE Publications. https://us.sagepub.com/en-us/nam/designing-and-conducting-mixed-methods-research/book241842
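In practice, the sequential explanatory step can be as simple as flagging survey respondents whose answers combine in unexpected ways and shortlisting them for interviews. A sketch, with hypothetical records and field names:

```python
# A sketch of the sequential explanatory logic: run the survey first, flag
# respondents whose answers combine in unexpected ways, then recruit those
# respondents for explanatory interviews. Records and fields are hypothetical.
respondents = [
    {"id": 101, "confidence": 5, "planned_spending_change": -0.20},
    {"id": 102, "confidence": 2, "planned_spending_change": -0.10},
    {"id": 103, "confidence": 4, "planned_spending_change": +0.05},
    {"id": 104, "confidence": 5, "planned_spending_change": -0.30},
]

# Anomaly: high stated confidence (4+) paired with a planned spending cut.
shortlist = [r["id"] for r in respondents
             if r["confidence"] >= 4 and r["planned_spending_change"] < 0]

print("Recruit for explanatory interviews:", shortlist)  # -> [101, 104]
```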
Framework 2: The Convergent Parallel Design
Here, both methods are implemented simultaneously and with equal weighting, and conflicts are explicitly analysed as part of the findings. This is a more mature approach to mixed-methods research because it doesn’t privilege one data type over the other — it treats them as genuinely complementary lenses.
Molina-Azorín and Cameron (2010) applied this framework specifically to business and management research, arguing that the integration of quantitative and qualitative data at the analysis phase — rather than the design phase — produces richer, more actionable insights. Their work showed that firms using convergent parallel designs in market research made better strategic decisions than those relying on a single method.
Reference: Molina-Azorín, J. F., & Cameron, R. (2010). The application of mixed methods in organisational research. Electronic Journal of Business Research Methods, 8(2), 95–105. https://academic-publishing.org/index.php/ejbrm/article/view/1108
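The workhorse integration device in a convergent parallel design is the joint display: the quantitative and qualitative finding for each construct lined up side by side, with the relationship labelled. A toy version (all entries hypothetical):

```python
# A sketch of a "joint display" for convergent parallel integration: pair
# the quantitative and qualitative finding per construct and label the
# relationship explicitly. All entries are hypothetical.
joint_display = [
    ("price sensitivity", "72% somewhat concerned", "intense anger at prices", "magnitude conflict"),
    ("brand trust",       "stable vs last wave",    "stable, few complaints",  "convergent"),
    ("switching intent",  "12% plan to switch",     "many describe lock-in",   "convergent"),
]

for construct, quant, qual, verdict in joint_display:
    print(f"{construct:<18} | {quant:<24} | {qual:<26} | {verdict}")
```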
Framework 3: The Discordance Analysis Model
This is my personal favourite, and not just because it has a cool name that sounds like a supervillain power. Erzberger and Kelle (2003) proposed the Discordance Analysis Model specifically to deal with situations where quantitative and qualitative findings are genuinely contradictory. Rather than resolving the contradiction, this model proposes that researchers should document the discordance systematically and investigate its sources.
In practical terms for a trader: when your survey says one thing and your interviews say another, you map the discordance. You ask: Is this a measurement artefact? A sampling difference? A framing effect? A temporal gap? Or is this a genuine signal that your target population is internally divided? Because that last option — the one where your market is genuinely split — is often the most important trading signal of all.
Reference: Erzberger, C., & Kelle, U. (2003). Making inferences in mixed methods: The rules of integration. In A. Tashakkori & C. Teddlie (Eds.), Handbook of Mixed Methods in Social and Behavioral Research (pp. 457–490). SAGE. https://us.sagepub.com/en-us/nam/handbook-of-mixed-methods-in-social-and-behavioral-research/book205459
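Here is one way to operationalise that mapping exercise. The candidate sources are the ones listed above; the “ruled out” judgements are the analyst’s, entered by hand after the audit, and the example conflict is hypothetical.

```python
# A minimal sketch of the discordance-mapping step: for each conflict,
# record which candidate explanations survive scrutiny. The rulings are
# manual judgements entered by the analyst, not automated tests.
CANDIDATE_SOURCES = ("measurement artefact", "sampling difference",
                     "framing effect", "temporal gap", "genuine division")

def map_discordance(conflict: str, ruled_out: set[str]) -> list[str]:
    """Return the explanations still standing after the analyst's audit."""
    return [s for s in CANDIDATE_SOURCES if s not in ruled_out]

remaining = map_discordance(
    conflict="survey optimistic, interviews anxious",  # hypothetical
    ruled_out={"measurement artefact", "temporal gap"},
)
print("Surviving explanations:", remaining)
```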
Part Four: Case Studies — When the Clash Cost Money (And When It Didn’t Have To)
Theory is great. But traders live and die by what actually happened in the real world. So let me walk you through three case studies that illustrate how conflicting research results have played out in financial and market contexts — and what the resolution looked like.
Case Study 1: The 2008 Housing Market Sentiment Divergence
In the period 2006–2007, consumer confidence surveys from major institutions were still registering moderate-to-positive sentiment about housing and the broader economy. The Conference Board Consumer Confidence Index, one of the most widely cited instruments in US financial markets, remained relatively elevated. Meanwhile, ethnographic and qualitative interview studies by sociologists and housing researchers, together with Robert Shiller’s expectations-survey work at Yale, were picking up serious anxiety among homeowners about their ability to service mortgage debt.
Here is the brutal punchline: the quantitative surveys were measuring stated confidence at a macro level. The qualitative work was capturing experienced anxiety at a household level. Both were accurate. The problem was that most market participants privileged the quantitative confidence surveys — because they were more familiar, more prestigious, and more defensible in a board meeting. The qualitative signals were dismissed as anecdotal.
We all know how this ended. Shiller’s work, which integrated both survey instruments and interview-style expectation elicitation, produced warnings that were directionally correct years before the broader market recognised the crisis.
Shiller (2005) documented how survey and interview data on housing expectations diverged significantly from transaction-based market data, and that this divergence was itself a predictor of subsequent correction.
Reference: Shiller, R. J. (2005). Irrational Exuberance (2nd ed.). Princeton University Press. https://press.princeton.edu/books/paperback/9780691166261/irrational-exuberance
The lesson? When quantitative and qualitative signals diverge in housing and consumer sentiment research, do not default to the quantitative data simply because it has a bigger sample size. Ask why they are diverging. The answer might be worth more than the data itself.
Case Study 2: The UK Referendum Polling Failure of 2016
Alright. Brexit. I know. Nobody wants to talk about Brexit anymore. But as a case study in survey-versus-interview conflict, it is genuinely fascinating and professionally instructive, so we’re going to talk about it anyway. Buckle up.
In the run-up to the 2016 EU referendum, most major polling organisations, using large-scale quantitative survey methods, showed a narrow Remain lead, and betting markets priced Remain as the clear favourite. Meanwhile, qualitative researchers conducting focus groups and in-depth interviews in specific geographic areas were picking up signals of intense Leave sentiment that the surveys were not capturing. The interviews were revealing strong emotional attachment to sovereignty and national identity that the survey instruments, designed around policy preferences and economic calculations, were structurally unable to measure.
Sturgis et al. (2016), in the inquiry commissioned by the British Polling Council and the Market Research Society into the 2015 general election polling miss (published just weeks before the referendum), identified that survey panels had serious representational problems that were not correctable through standard weighting procedures. The qualitative signal, available in focus group data, was there, but it was not being integrated with the quantitative findings in any systematic way.
Reference: Sturgis, P., Baker, N., Callegaro, M., Fisher, S., Green, J., Jennings, W., Kuha, J., Lauderdale, B., & Smith, P. (2016). Report of the Inquiry into the 2015 British General Election Opinion Polls. Market Research Society and British Polling Council. https://www.mrs.org.uk/pdf/2016-04-19%20Polling%20Inquiry%20Report.pdf
For traders positioned on currency and equity markets around the referendum, the divergence between survey and interview data was a critical unpriced risk. Those who integrated both data streams — and took the qualitative signals seriously — had a more accurate picture of the probability distribution. I’m not saying they all made money. I’m saying they were working with better information. And in this business, better information is the only edge that actually compounds over time.
Case Study 3: Consumer Sentiment in Post-Pandemic Retail (2021–2022)
After the initial COVID-19 shock, major consumer sentiment surveys, including the University of Michigan Consumer Sentiment Index, showed a rapid partial rebound in optimism through late 2020 and into early 2021, driven primarily by asset price inflation and stimulus payments. This quantitative picture was bright. Meanwhile, qualitative research being conducted by retail analysts and ethnographers was revealing a much more complicated emotional landscape: consumers who reported high confidence in surveys but described exhaustion, anxiety, and changed spending intentions in interviews.
The divergence was massive and consequential. Retailers who read only the sentiment surveys made inventory and expansion decisions that proved extremely costly when consumer spending patterns shifted sharply in late 2021 and 2022. The “vibecession” — a period where reported economic conditions were relatively strong but consumer mood was deeply negative — is a perfect real-world example of survey-interview divergence at scale.
Kamdar and Phan (2023) examined this phenomenon in their analysis of subjective wellbeing measures versus consumer confidence indices during the post-pandemic period, finding that traditional confidence surveys systematically overestimated positive sentiment compared to qualitative and wellbeing-oriented measures.
Reference: Kamdar, R., & Phan, T. (2023). Subjective wellbeing versus consumer confidence in tracking economic sentiment. Journal of Economic Perspectives. https://www.aeaweb.org/journals/jep
Part Five: A Practical Seven-Step Framework for Resolving Research Conflicts
Okay. You’ve got conflicting data. You need a framework. Here it is. I use this personally, I’ve stress-tested it, and it will not make you rich overnight. Nothing will make you rich overnight except inheritance and lottery tickets, and I can’t help you with either of those. What this framework will do is make you systematically less wrong, which in trading is genuinely the closest thing to an edge that most of us have access to.
Step 1: Characterise the type of conflict. Before doing anything else, identify which of the four conflict types you’re dealing with (directional, magnitude, coverage, or temporal). The resolution strategy differs depending on the type. Misidentifying the conflict type and applying the wrong resolution is like putting the wrong tyre on your car. Technically still round. But you’re gonna have a bad day.
Step 2: Audit the measurement instruments. Pull up the actual survey questions and the interview guide. Read them side by side. Are they measuring the same construct? Are the response options comparable? Are the questions framed in equivalent ways? You would be amazed — I mean genuinely, jaw-on-floor amazed — how often “conflicting” research results turn out to be two different instruments measuring two slightly different things and being compared as if they are measuring one thing. This is a documentation failure as much as a methodological one.
Step 3: Check for temporal gaps. When was the survey conducted relative to the interviews? Were there any significant market events, news stories, or economic releases in between? Even a two-week gap can matter enormously in fast-moving consumer sentiment environments. Document the timeline precisely.
Step 4: Examine the samples. Are the survey respondents and interview participants drawn from the same population? If your survey sampled broadly and your interviews focused on a specific segment, the “conflict” may simply be a difference between segment-level and population-level findings — which is actually useful information, not a problem.
Step 5: Apply the Discordance Analysis Model. For each point of conflict, systematically ask: Is this conflict explained by measurement differences? Sampling differences? Temporal differences? Or is it a genuine substantive difference in findings? Document each answer. In my experience, this systematic interrogation resolves the majority of apparent conflicts before you need to do any additional research.
Step 6: Look for the productive tension. Following Bryman (2006), treat any remaining, unresolved conflict as a research signal rather than a research failure. Ask: What does this disagreement tell me about the population I’m studying? In many cases, the most actionable insight in a research programme lives in the gap between the quantitative and qualitative findings.
Step 7: Communicate the uncertainty. This is the step that most analysts, most traders, and most research firms skip entirely — and it is the one that matters most for decision-making. Communicate both findings. Present both what the survey says and what the interviews reveal. Present the conflict explicitly. Quantify your uncertainty. Anyone who tells you “the data says X” without acknowledging method-level uncertainty is giving you misplaced confidence, and misplaced confidence in financial markets is extraordinarily expensive. I’ve paid for this lesson personally. It was not cheap.
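To make the seven steps harder to skip, you can encode them as a hard-stop checklist: nothing proceeds until every step has a documented answer. A minimal sketch, with an invented worked example for a magnitude conflict:

```python
# The seven steps as a forced-order checklist. This is a process scaffold,
# not an analysis engine: each entry is the analyst's own documented answer,
# and the run halts at the first undocumented step.
STEPS = [
    "1. Characterise the conflict type (directional/magnitude/coverage/temporal)",
    "2. Audit the measurement instruments side by side",
    "3. Check for temporal gaps and intervening events",
    "4. Examine whether the samples target the same population",
    "5. Apply discordance analysis to each point of conflict",
    "6. Interrogate remaining conflict as signal, not failure",
    "7. Communicate both findings with explicit uncertainty",
]

def run_checklist(answers: dict[int, str]) -> None:
    for i, step in enumerate(STEPS, start=1):
        if i not in answers or not answers[i].strip():
            raise ValueError(f"Step {i} undocumented; stop here: {step}")
        print(f"[done] {step}\n       -> {answers[i]}")

# Hypothetical worked example for a magnitude conflict.
run_checklist({
    1: "magnitude conflict",
    2: "constructs aligned; Likert ceiling suspected",
    3: "9-day gap, no material events",
    4: "same population, comparable frames",
    5: "measurement artefact (scale compression) most likely",
    6: "residual intensity gap retained as signal",
    7: "both findings reported with the artefact flagged",
})
```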
Part Six: Special Considerations for Financial Market Applications
As a trader specifically, there are a few additional layers to this problem that deserve their own dedicated discussion. Because markets are not just social phenomena — they are reflexive, meaning that the publication of research about market sentiment can itself change the sentiment being measured. George Soros built an entire theory of markets around this idea, which he called reflexivity, and it has direct implications for how we should treat conflicting survey and interview data in financial contexts.
When a major consumer confidence survey is published and diverges significantly from qualitative research findings, markets react to the published survey — not to the qualitative data, which is often not published at all or reaches audiences much more slowly. This creates a systematic asymmetry: quantitative survey data moves markets in real time, while qualitative signals take longer to be priced in. For traders who can access both data streams and integrate them intelligently, this asymmetry is a source of edge.
Baker and Wurgler (2006) formalised this insight in their influential work on investor sentiment, showing that sentiment measures — particularly those derived from survey instruments — have significant predictive power for subsequent asset returns, especially for stocks that are difficult to value and highly subjective in their pricing.
Reference: Baker, M., & Wurgler, J. (2006). Investor sentiment and the cross-section of stock returns. Journal of Finance, 61(4), 1645–1680. https://doi.org/10.1111/j.1540-6261.2006.00885.x
The critical implication is this: when your survey-based sentiment measure and your qualitative research are in conflict, you are potentially looking at a situation where the market is priced on the survey signal but the qualitative signal is closer to underlying reality. The resolution of that conflict — the moment when the market “discovers” what the qualitative research was already showing — is a repricing event. And repricing events, properly anticipated, are where money is made.
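One hedged way to watch for that repricing setup is to standardise both signals and track the spread between them. The sketch below uses invented series and an illustrative 1.5-sigma trigger; in practice, the threshold and the qualitative coding scheme are yours to calibrate and backtest.

```python
# A hedged sketch of a divergence indicator: standardise a published survey
# index and an internally coded qualitative sentiment score, then watch the
# spread. Series values are invented; the 1.5-sigma trigger is illustrative.
from statistics import mean, stdev

def zscores(series: list[float]) -> list[float]:
    m, s = mean(series), stdev(series)
    return [(x - m) / s for x in series]

survey_index = [101.2, 102.5, 103.1, 104.0, 104.6, 105.1]  # hypothetical
qual_score = [0.40, 0.35, 0.20, 0.05, -0.10, -0.25]        # hypothetical, coded -1..+1

spread = [a - b for a, b in zip(zscores(survey_index), zscores(qual_score))]
latest = spread[-1]
print(f"latest divergence: {latest:+.2f} sigma")
if abs(latest) > 1.5:  # illustrative threshold
    print("Survey and qualitative signals have decoupled; "
          "investigate before pricing in either.")
```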
Now, I want to be careful here because this is where traders can get into trouble. The qualitative signal is not always right and the quantitative signal is not always wrong. The relationship between them is probabilistic, not deterministic. What I’m advocating for is a more sophisticated reading of both, not a blanket preference for one over the other. There is no such thing as one method that is always correct. If there were, this industry would be a lot simpler and a lot fewer of us would need therapy.
De Long et al. (1990) showed in their landmark paper on noise traders that prices can diverge from fundamentals for extended periods when sentiment is driving the market — which means that being right about the qualitative signal doesn’t protect you from being wiped out before the market agrees with you. Timing, position sizing, and risk management must accompany any research-based insight, no matter how well-grounded.
Reference: De Long, J. B., Shleifer, A., Summers, L. H., & Waldmann, R. J. (1990). Noise trader risk in financial markets. Journal of Political Economy, 98(4), 703–738. https://doi.org/10.1086/261703
Part Seven: Building Better Research Protocols to Minimise Future Conflict
Prevention is better than cure. And the best way to deal with conflicting research results is to design your research in a way that minimises unnecessary conflicts while maximising the informational value of necessary ones. Here’s how to do that.
Design surveys and interview guides in parallel. Before fielding either instrument, lay them side by side and check for construct alignment. Every survey question should map to at least one interview question that is designed to probe the same underlying concept. This doesn’t eliminate divergence — it ensures that divergence you see is real signal rather than measurement noise.
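This alignment check is easy to mechanise before fieldwork. A sketch, with hypothetical item texts and construct names:

```python
# A sketch of a construct-alignment audit run before fieldwork: every survey
# item should map to at least one interview probe targeting the same
# construct. Items, probes, and construct names are hypothetical.
survey_items = {
    "Q1": "economic confidence",
    "Q2": "inflation concern",
    "Q3": "job security",
}
interview_probes = {
    "P1": "economic confidence",
    "P2": "inflation concern",
}

covered = set(interview_probes.values())
unmapped = [q for q, construct in survey_items.items() if construct not in covered]
if unmapped:
    print("Survey items with no interview counterpart:", unmapped)  # -> ['Q3']
```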
Pre-register your analysis plan. Before you collect a single data point, write down how you will handle conflicts between your quantitative and qualitative findings. This forces you to think through the resolution logic before you’re emotionally invested in a particular result. This is best practice in academic research and it is embarrassingly under-utilised in commercial market research and financial analysis.
Build temporal alignment into the fieldwork schedule. Run surveys and interviews as close together in time as operationally possible. If you must stagger them, document the gap and check whether any material events occurred in the interval that might explain divergences.
Involve multiple analysts in the integration. The person who designed the survey should not be the same person who reads the interview data in isolation. Cross-method analytical teams catch more measurement artefacts and generate richer integrative insights than single-method specialists working in silos. I know this is a resource argument as much as a methodological one. I also know that the cost of a wrong position vastly exceeds the cost of an extra analyst. Do the maths.
Document everything. I mean everything. Your question wording, your sampling frame, your fieldwork dates, your analysis decisions, your coding scheme for qualitative data. When a conflict emerges six months down the line, having a complete methodological paper trail is the difference between resolving it in an afternoon and spending two weeks recreating decisions that were made under time pressure and never written down.
Johnson and Onwuegbuzie (2004) provide a comprehensive framework for quality criteria in mixed methods research that is directly applicable to financial research contexts, including specific guidance on integration and conflict resolution documentation.
Reference: Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14–26. https://doi.org/10.3102/0013189X033007014
Part Eight: The Psychological Dimension — Why Traders Specifically Struggle with Conflicting Data
Here’s the part nobody puts in the research methodology textbooks but everybody who has actually traded for a living knows is true: we are extraordinarily bad at holding uncertainty in our heads when money is on the line.
The human brain, under financial stress, desperately wants to resolve ambiguity. When you have conflicting research results, the psychologically comfortable response is to pick the interpretation that supports the position you already want to take — or the position you are already in — and dismiss the other. This is confirmation bias, and it is not a weakness unique to inexperienced traders. Research by Kahneman and Tversky — work that won the Nobel Prize and changed how we understand human judgement — shows that these biases are deeply embedded in human cognition and are not reliably overcome by intelligence or experience alone.
Kahneman (2011) documented extensively how even highly trained professionals in forecasting, medicine, and finance show systematic tendencies to resolve information conflicts in the direction that confirms their pre-existing views.
Reference: Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. https://us.macmillan.com/books/9780374533557/thinkingfastandslow
The practical implication for traders dealing with conflicting survey and interview data is that your resolution process should be as systematised and rule-based as possible — precisely because you cannot trust your intuition to be neutral when your P&L is affected by the outcome. This is not an insult. This is just honest acknowledgement of how human cognition works under conditions of financial stress.
Build checklists. Build frameworks. Use the seven-step process I outlined above not as a guideline but as a mandatory procedure. And consider having the research integration done — at least in first draft — by someone who does not know what position the firm is currently holding. That single structural change will eliminate a substantial portion of confirmation-bias-driven misinterpretation of conflicting data.
Tetlock (2005), in his landmark study of expert political and economic forecasters, found that forecasters who treated conflicting evidence as genuinely uncertain and maintained calibrated probability distributions across multiple possible interpretations significantly outperformed those who resolved conflicts quickly in favour of a single narrative.
Reference: Tetlock, P. E. (2005). Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press. https://press.princeton.edu/books/paperback/9780691128719/expert-political-judgment
Conclusion: Conflict Is Information, Not Failure
Let me bring this all the way back to where we started. When surveys and interviews clash and produce conflicting research results, the instinct — especially under the time and performance pressure that traders live under constantly — is to panic, pick a side, and move on. I have done this. It has cost me. I have watched others do this. It has cost them. The market does not reward speed of resolution when that speed comes at the cost of accuracy.
The academic literature is consistent and clear: conflicting research results, properly interrogated using frameworks like the Discordance Analysis Model, the Sequential Explanatory Design, and the Convergent Parallel Design, are among the most information-rich outputs a research programme can produce. The divergence between what people say in a structured survey and what they reveal in an open interview is not a failure of your research design — it is a window into the complexity of human belief, preference, and behaviour that no single method can capture alone.
For traders specifically, that divergence is often exactly the signal you are looking for. The gap between stated consumer confidence and revealed consumer anxiety is the gap between where the market is priced and where it is going. The gap between what focus groups say and what surveys report about investment risk appetite is the gap between the consensus trade and the alpha trade. Your job — our job — is to close that gap through better, more integrated, more methodologically honest research.
Now, I’ve covered a lot of ground in this article. We talked about social desirability bias. We talked about Schwarz’s framing effects. We walked through Brexit and the housing crisis and the post-pandemic vibecession. We looked at Baker and Wurgler on sentiment, De Long on noise traders, and Kahneman on cognitive bias. I gave you a seven-step framework and I made a lot of jokes because otherwise I would need to lie down in a dark room and think about my life choices.
Here is the last thing I will leave you with, and it is the most important thing in this entire article: the willingness to sit with unresolved conflict in your research data is a competitive advantage. Most market participants — most analysts, most traders, most institutions — will resolve the conflict quickly and move on. They will pick the survey or the interview, decide which one to trust, and build their thesis on that. You, having read this article, now know that the resolution of conflict is the beginning of the insight, not the end of the discomfort.
Stay curious. Stay humble. Hold the uncertainty. And maybe consider that the next time your survey and your interviews are telling you different things, the market is leaving you a note. Read it carefully before you decide what it says.
References
- Baker, M., & Wurgler, J. (2006). Investor sentiment and the cross-section of stock returns. Journal of Finance, 61(4), 1645–1680. https://doi.org/10.1111/j.1540-6261.2006.00885.x
- Bryman, A. (2006). Integrating quantitative and qualitative research: How is it done? Qualitative Research, 6(1), 97–113. https://doi.org/10.1177/1468794106058877
- Creswell, J. W., & Plano Clark, V. L. (2017). Designing and Conducting Mixed Methods Research (3rd ed.). SAGE Publications. https://us.sagepub.com/en-us/nam/designing-and-conducting-mixed-methods-research/book241842
- De Long, J. B., Shleifer, A., Summers, L. H., & Waldmann, R. J. (1990). Noise trader risk in financial markets. Journal of Political Economy, 98(4), 703–738. https://doi.org/10.1086/261703
- Erzberger, C., & Kelle, U. (2003). Making inferences in mixed methods: The rules of integration. In A. Tashakkori & C. Teddlie (Eds.), Handbook of Mixed Methods in Social and Behavioral Research (pp. 457–490). SAGE. https://us.sagepub.com/en-us/nam/handbook-of-mixed-methods-in-social-and-behavioral-research/book205459
- Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology (2nd ed.). Wiley. https://www.wiley.com/en-us/Survey+Methodology%2C+2nd+Edition-p-9780470465462
- Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14–26. https://doi.org/10.3102/0013189X033007014
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. https://us.macmillan.com/books/9780374533557/thinkingfastandslow
- Kamdar, R., & Phan, T. (2023). Subjective wellbeing versus consumer confidence in tracking economic sentiment. Journal of Economic Perspectives. https://www.aeaweb.org/journals/jep
- Molina-Azorín, J. F., & Cameron, R. (2010). The application of mixed methods in organisational research. Electronic Journal of Business Research Methods, 8(2), 95–105. https://academic-publishing.org/index.php/ejbrm/article/view/1108
- Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54(2), 93–105. https://doi.org/10.1037/0003-066X.54.2.93
- Shiller, R. J. (2005). Irrational Exuberance (2nd ed.). Princeton University Press. https://press.princeton.edu/books/paperback/9780691166261/irrational-exuberance
- Sturgis, P., Baker, N., Callegaro, M., Fisher, S., Green, J., Jennings, W., Kuha, J., Lauderdale, B., & Smith, P. (2016). Report of the Inquiry into the 2015 British General Election Opinion Polls. Market Research Society and British Polling Council. https://www.mrs.org.uk/pdf/2016-04-19%20Polling%20Inquiry%20Report.pdf
- Tetlock, P. E. (2005). Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press. https://press.princeton.edu/books/paperback/9780691128719/expert-political-judgment
- Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859–883. https://doi.org/10.1037/0033-2909.133.5.859
Disclaimer: This article is for educational and informational purposes only and does not constitute financial advice. Trading financial instruments carries significant risk of loss. Always conduct your own due diligence and consult a qualified financial professional before making investment decisions.
