The fact that consumers lie in surveys is one of the biggest challenges facing market researchers and marketers today, and it directly creates the persistent intent-action gap that undermines so much consumer research.

People confidently state they’ll choose eco-friendly products, pay more for ethical brands, or adopt healthier habits. Yet their actual purchasing behaviour often tells a very different story. This mismatch between what consumers say in surveys and what they do in reality — commonly known as the intent-action gap, say-do gap, or attitude-behaviour gap — leads to flawed insights, misguided strategies, and expensive business mistakes.

The reasons behind this phenomenon go far beyond simple dishonesty. Psychological factors play a major role, chief among them social desirability bias (the pressure to look good) and our inability to accurately predict our own future behaviour.

In this post, we explore the real reasons why consumers lie in surveys and, more importantly, share practical, proven techniques to close the intent-action gap. From better survey design and indirect questioning methods to hybrid research approaches that bridge stated intent and actual action, you’ll discover how to collect more honest, reliable, and actionable consumer insights.

If you’re tired of survey data that doesn’t match real-world behavior, keep reading to learn how to fix the intent-action gap in your research.


Part One: The Scope of the Problem

What Is the Intent-Action Gap?

The intent-action gap — also called the intention-behaviour gap in academic literature — is the documented discrepancy between what consumers say they will do in a survey and what they actually do in real life. It is one of the most robust, replicated, and stubbornly persistent findings in consumer psychology.

Think about it from your own life. Have you ever told a restaurant you were “definitely coming back”? Told a brand rep you’d “consider switching”? Filled out a survey saying you’d pay a premium for sustainable packaging? And then… carried on exactly as before? Exactly. You didn’t lie to be malicious. You believed yourself in that moment. That’s what makes this so sneaky.

Research published in Frontiers in Psychology by Mazar et al. (2022) reviewed the extensive literature on intention-behaviour relationships and found that intentions typically explain only 18–23% of variance in actual behaviour across a wide range of behavioural categories. Let that sink in. The survey tool that marketing teams rely on most? It’s capturing less than a quarter of what actually drives action. The other 77-82% is elsewhere — in mood, in context, in price, in the particular Tuesday afternoon vibe when someone walks past your competitor’s window display.

The researchers also found something even more sobering: in experimental studies where researchers deliberately changed people’s intentions through interventions, only small-to-medium changes in behaviour followed, even when intentions shifted significantly. In other words, even if you succeed in changing what someone says they’ll do, real behaviour barely moves. That’s like pushing a car uphill with both hands and the handbrake still on.

I feel like a trader who bought a position based on analyst forecasts, only to discover the analysts were just asking people on the street what they thought. Actually — that’s not far from the truth.


Part Two: Why Consumers Lie (But Don’t Mean To)

Reason #1: Social Desirability Bias — The “Good Person” Problem

The most well-documented culprit is social desirability bias — the tendency for survey respondents to answer in ways that make them look good rather than reflecting their true behaviour or intentions.

Fisher (1993), in a landmark study published in the Journal of Consumer Research, demonstrated that indirect questioning techniques significantly reduce social desirability bias on variables subject to social influence. The study examined structured projective questioning versus direct personal questioning across three separate studies, finding that people consistently present an idealised version of themselves when asked directly.

Imagine asking someone, “Do you recycle regularly?” Most people will say yes. Of course they will. Nobody wants to look like they’re personally responsible for polar bears losing their homes. But when you check actual recycling behaviour? Very different story. Survey says hero. Reality says… they just threw a plastic bottle in the general waste because the bins were confusing.

This is why Larson (2019), in the International Journal of Market Research, demonstrated through national panel data that controlling for social desirability bias can fundamentally change which demographic variables are statistically significant — and can have important effects on the size of coefficients researchers use to build strategy. In plain English: when you don’t account for this bias, you’re building your targeting model on a foundation of polite fiction.

And here’s where I start sounding a little too much like a disappointed parent: companies still don’t systematically correct for this. They run the survey. They get the nice-looking data. They publish a press release about how “85% of consumers prefer environmentally responsible brands.” And then they wonder why environmentally responsible packaging doesn’t move product off shelves any faster than the regular stuff.

Now, I know what you’re thinking: “We screen our survey respondents carefully. We use validated scales. We have a proper methodology.” And I respect that. I genuinely do. But here’s the thing about social desirability bias — it doesn’t care about your methodology. It is woven into the social fabric of what it means to answer a stranger’s question about yourself. The moment someone is asked to represent themselves in a survey, the self-presentation machinery switches on automatically. You are no longer getting their true preference. You are getting their managed preference. The version they would post on LinkedIn.


Reason #2: Hypothetical Bias — “In Theory I Would, But…”

The second massive driver of the intent-action gap is hypothetical bias — the well-documented tendency for people’s stated willingness to pay (WTP) or stated future behaviour in a survey scenario to be systematically higher than their real-world behaviour.

A meta-analysis published in the Journal of the Academy of Marketing Science by Miller et al. (2019/2020) synthesised findings from 77 studies in 47 papers, encompassing 24,347 observations for hypothetical WTP and 20,656 for real WTP. The conclusion was unambiguous: hypothetically measured willingness to pay consistently exceeds real willingness to pay, and the gap varies depending on the measurement method used.

In the mobile communications sector specifically, a 2025 study by Heide and colleagues with German consumers found that hypothetical WTP values were generally higher than current actual expenditure, demonstrating that hypothetical bias significantly distorts pricing strategy research for subscription-based and digital products.

Here’s my trading analogy for hypothetical bias: it’s the equivalent of someone saying they’d definitely buy a stock if it dropped 10% — and then the stock drops 10%, and they don’t buy it because now they’re nervous and they’ve read three negative headlines. What people say they’d do in a hypothetical scenario, and what they do when facing the real costs and consequences, are fundamentally different cognitive operations.

Survey respondents aren’t being dishonest. They’re thinking aspirationally. They’re playing a mental simulation of their best self. And their best self is honestly a lot more decisive, ethical, and financially rational than the tired, slightly distracted, slightly price-sensitive version of themselves who actually shows up at the checkout.

It’s basically like telling your friends you’re going to meal prep every Sunday. Are you going to meal prep every Sunday? Pal, be honest with yourself.


Reason #3: The Mere Measurement Effect — Asking Changes the Answer

Here’s one that absolutely no one tells you about in the boardroom: the very act of asking someone about their intentions can change those intentions — temporarily — creating an artificial signal.

Morwitz, Johnson, and Schmittlein (1993), in the Journal of Consumer Research, investigated whether measuring intent actually changes behaviour. Their study found that asking people about their intentions to purchase a product — for example, a car — actually increased the likelihood of purchase among those surveyed, purely through the act of measurement. The survey didn’t just capture reality; it altered reality.

This creates a profound methodological problem. When you survey your target audience about a new product and 72% say they’re likely to buy it, you can’t know how much of that interest existed before you asked. You may have just created enthusiasm through the act of investigation. Your market research is functioning as unintentional advertising. The sample that looks most interested has been partially created by your curiosity about them.

This is also called the question-behaviour effect in psychology literature, and it’s especially pronounced for socially desirable behaviours (healthy eating, exercise, charitable giving, environmentally friendly purchasing) — exactly the categories where modern brand research is most frequently conducted.


Reason #4: Context Collapse — Lab Rats Don’t Shop at Tesco

Respondents answer surveys in a neutral, considered context — maybe sitting at their desk, maybe on their phone with a cup of tea, maybe in an online panel. The real purchasing decision happens in a totally different context: standing in a supermarket aisle with a trolley full of stuff, a toddler making some very poor decisions at the biscuit shelf, and about forty-five seconds to pick up dinner before a 6pm meeting.

Research on the intention-behaviour gap reviewed by Radiance Insights highlights that surveys draw out responses steeped in optimism and framed by an idealised version of the future. We are, as a species, relentlessly aspirational creatures. We answer surveys from the perspective of our future self — the one who has time, attention, sufficient budget, and excellent decision-making capacity. Then our present self shows up and runs on autopilot, choosing the familiar brand, the cheapest option, or simply nothing at all.


Part Three: Case Studies in Spectacular Survey Failure

Case Study 1 — The Green Product Paradox

Across dozens of consumer research surveys, sustainable and ethical products consistently attract high stated purchase intentions. Consumers report willingness to pay significant premiums for eco-friendly options. Green products test well in concept. Brand teams get excited.

Then the products launch and sales are underwhelming.

Auger and Devinney (2007), in the Journal of Business Ethics, published a definitive study titled “Do What Consumers Say Matter? The Misalignment of Preferences with Unconstrained Ethical Intentions.” Their research found systematic and significant misalignment between what consumers claim to prefer when asked in unconstrained survey scenarios and what they actually demonstrate through revealed preferences when real purchasing trade-offs are introduced.

The review of ethical consumption by Papaoikonomou, Ryan, and Valverde (2011), published in the Journal of Macromarketing, documents that the gap between intention and behaviour in ethical consumption contexts is both statistically robust and practically massive — across fair trade, organic, and recycling behaviours in multiple national contexts.

Moral of the story: when a consumer tells you they’ll pay more for sustainability, they mean it… in that survey. In the shop, facing an actual price differential and a perfectly fine non-sustainable alternative sitting right next to it, something different happens. Something that could be called, diplomatically, “reassessing their values in light of their bank account.”


Case Study 2 — New Product Concept Testing

Morwitz, Steckel, and Gupta (2007), in the International Journal of Forecasting, examined across multiple settings when purchase intentions actually predict sales. Their key findings were instructive and humbling for the industry: purchase intentions are better predictors for existing products than new products, better at brand level than category level, and meaningfully weaker when the product being tested is novel or high-involvement.

This is a crushing finding for the standard practice of concept testing. The research methodology companies rely on most heavily to decide which new products to develop is, by design, least reliable for the products most dependent on that research. It’s like building your financial model using the indicator that works least well for the asset class you’re trading.

Think about that the next time a brand manager waves a concept test result at you like a golden ticket. “Ninety percent of respondents said they’d consider buying it.” Friend, that number has about the same predictive relationship to actual sales as your horoscope.


Case Study 3 — The Net Promoter Score and Actual Referrals

The Net Promoter Score (NPS) is possibly the most universally deployed — and most misunderstood — stated-intention metric in business history. “How likely are you to recommend us to a friend or colleague?” Respondents give a 9 or 10. They’re classified as Promoters. The company builds its entire customer loyalty strategy around that data.

Studies examining the relationship between stated NPS and actual referral behaviour consistently find the correlation to be modest. The overwhelming majority of people who give a 9 or 10 never actually make a referral. They answered the question from their aspirational self’s perspective. Their actual self is busy and doesn’t particularly go around talking about their insurance provider at dinner parties.

There is nothing technically wrong with the NPS framework — except that it measures intent, which is not the same as behaviour. If you built your financial forecasts on the assumption that every “9 or 10” customer would generate two referrals this year, your model would be spectacularly wrong.


Part Four: The Science of Fixing It

So we’ve established the problem. Consumers aren’t lying — they’re just human, aspirational, contextually unmoored from their survey responses, and operating under a battery of cognitive biases that standard survey design does nothing to correct. Now let’s talk solutions, because that’s where I actually make you money.

Fix #1 — Stated vs. Revealed Preference Research

The gold standard fix is supplementing stated preference data (what people say) with revealed preference data (what people actually do). In practice, this means triangulating your survey findings against:

  • Behavioural data from actual purchases, clickstreams, or usage logs
  • Observational research conducted in natural purchase contexts
  • Field experiments with real economic stakes

The Journal of Marketing Analytics (2026) review of big data in consumer behaviour — analysing 127 peer-reviewed articles from 2012 to 2023 — concluded that contextual triggers captured via digital footprints and IoT data often override stated preferences, and that revealed preference data consistently exposes the limitations of traditional survey-based frameworks.

This doesn’t mean surveys are useless — it means they should never stand alone. Think of surveys as your hypothesis generator and behavioural data as your validation mechanism. Ask people what they think they want in the survey. Then watch what they actually click on, buy, and return. The distance between those two datasets is your intent-action gap, measured empirically. That’s where your real research begins.
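Here’s a minimal sketch of what that empirical measurement can look like in practice (Python with pandas; the customer_id join key and all column names are illustrative assumptions, not a standard schema):

```python
import pandas as pd

# Hypothetical inputs: survey responses and purchase records that share a
# customer_id. All column names here are illustrative, not a standard schema.
survey = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "stated_intent": [1, 1, 1, 0, 1],   # 1 = said "likely / very likely to buy"
})
purchases = pd.DataFrame({
    "customer_id": [1, 3],              # respondents who actually bought
    "bought": [1, 1],
})

# Join what they said to what they did
merged = survey.merge(purchases, on="customer_id", how="left")
merged["bought"] = merged["bought"].fillna(0).astype(int)

stated_rate = merged["stated_intent"].mean()   # stated preference
actual_rate = merged["bought"].mean()          # revealed preference
gap = stated_rate - actual_rate                # the intent-action gap, measured

print(f"Stated: {stated_rate:.0%}  Actual: {actual_rate:.0%}  Gap: {gap:.0%}")
```

Run the same calculation by category or segment and you have the raw material for the calibration layer described in Part Five.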


Fix #2 — Indirect and Projective Questioning

If you must rely on surveys, design them smarter. The research is clear that indirect questioning reduces social desirability bias on variables subject to social influence.

Fisher’s 1993 research in the Journal of Consumer Research demonstrated this through structured projective techniques — asking respondents what they think other people would do or feel, rather than asking about their own intentions directly. The psychological mechanism is neat: people project their own actual preferences onto hypothetical others, but without the self-presentational pressure to answer in a socially acceptable way.

So instead of asking “Would you buy a premium sustainable version of this product?”, you ask “What do you think most people in your demographic would do when presented with this product?” The answer is often far more honest, because the respondent has no personal image stake in what those hypothetical other people do.

Think of it this way: if I asked you directly, “Are you financially responsible?”, you’d say yes. If I asked you, “What do you think most people your age do when they get their tax return?”, you’d tell me the truth. Same information. Very different question design.
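If you want to check whether the framing itself is doing the work, one option is a split-sample test: half the panel gets the direct question, half gets the projective version, and you compare the two answer distributions. A rough sketch, using simulated 1-to-7 scale data purely for illustration:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Simulated 1-7 agreement scores from a split-sample test: half the panel got
# the direct question ("Would YOU pay a premium?"), half got the projective
# version ("Would most people like you pay a premium?").
rng = np.random.default_rng(42)
direct = rng.normal(5.8, 1.0, 200).clip(1, 7)     # flattered self-report
indirect = rng.normal(4.6, 1.2, 200).clip(1, 7)   # projected onto "others"

# A distribution-free test of whether the two framings elicit different answers
stat, p = mannwhitneyu(direct, indirect, alternative="two-sided")
print(f"direct mean={direct.mean():.2f}, indirect mean={indirect.mean():.2f}, p={p:.4f}")

# A large, significant gap between the framings is itself a warning that the
# direct question is picking up self-presentation, not preference.
```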


Fix #3 — Certainty Calibration and Willingness-to-Pay Correction

For pricing research and new product development, the Certainty Approach is a well-validated method for correcting hypothetical bias in WTP measurements.

Research by Heide et al. (2025) applied this approach to mobile communications products and found it generated corrected WTP estimates that more accurately reflected actual expenditure patterns. The basic mechanism involves asking respondents to rate their certainty about their stated willingness to pay, and then applying a correction factor that discounts responses where certainty is lower.

This alone can meaningfully reduce the systematic overestimation of WTP that plagues new product pricing research. It won’t make your data perfect — nothing will — but it will move your pricing models considerably closer to reality than “just believe whatever number people write down.”
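A minimal sketch of one way to apply that correction, assuming you’ve collected a 1-to-10 certainty rating alongside each stated WTP; the cut-off and the weighting scheme below are illustrative choices, not the exact procedure from the Heide et al. study:

```python
import pandas as pd

# Hypothetical responses: stated WTP (in EUR) plus a 1-10 self-rated certainty
# ("How sure are you that you would actually pay this?").
df = pd.DataFrame({
    "stated_wtp": [25.0, 40.0, 15.0, 30.0, 50.0, 20.0],
    "certainty":  [9,    4,    8,    6,    3,    10],
})

# Illustrative corrections: keep only answers above a certainty cut-off,
# or weight each stated WTP by its certainty rating.
CUTOFF = 7
screened = df.loc[df["certainty"] >= CUTOFF, "stated_wtp"]

naive_wtp = df["stated_wtp"].mean()
screened_wtp = screened.mean()
weighted_wtp = (df["stated_wtp"] * df["certainty"]).sum() / df["certainty"].sum()

print(f"Naive mean WTP: {naive_wtp:.2f}")
print(f"Certainty-screened WTP (>= {CUTOFF}/10): {screened_wtp:.2f}")
print(f"Certainty-weighted WTP: {weighted_wtp:.2f}")
```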

This is the research equivalent of your friend saying “I’ll definitely be there at 7pm — actually, probably 7:30 to be safe.” The certainty qualifier did real work. Build it into your survey.


Fix #4 — Behaviour-Specific Social Desirability Measures

Rather than using generic social desirability scales applied uniformly across a survey, emerging research advocates for behaviour-specific social desirability controls — essentially, measuring the pressure to appear socially desirable for each particular behaviour being studied, not as a global trait.

A 2024 paper published in Tourism Management found that behaviour-specific approaches are significantly better at detecting, and alerting researchers to, the risk of socially desirable responding. In practical terms, this means designing dedicated questions for each high-risk behavioural topic in your survey — ones that probe respondents’ awareness of social norms around that specific behaviour, allowing statistical correction afterwards.
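In analysis terms, the simplest version of that correction is adding the behaviour-specific desirability score as a control variable in whatever model you’re already running. A sketch with simulated data (statsmodels; every column name here is hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "income": rng.normal(35_000, 10_000, n),
})
# Topic-specific desirability pressure, simulated here to vary with age so
# that leaving it out visibly biases the demographic coefficient.
df["sdb_green"] = 0.03 * df["age"] + rng.normal(0, 1, n)
# Stated intent to buy a sustainable product, partly driven by that pressure
df["stated_intent"] = (
    0.005 * df["age"] + 0.00001 * df["income"] + 0.8 * df["sdb_green"]
    + rng.normal(0, 1, n)
)

# Model 1: the naive model most survey reports are built on
naive = smf.ols("stated_intent ~ age + income", data=df).fit()
# Model 2: the same model with the behaviour-specific SDB control added
controlled = smf.ols("stated_intent ~ age + income + sdb_green", data=df).fit()

print(naive.params, controlled.params, sep="\n")
# In the naive model the age effect is inflated because it absorbs the
# desirability pressure; once sdb_green is partialled out the coefficient
# shrinks -- essentially Larson's (2019) point about shifting coefficients.
```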

This is more expensive and time-consuming. It is also, quite simply, better research. The choice to save money on methodology and then spend it on marketing a product that fails because the research was flawed is a spectacular false economy. I’ve seen traders do the same thing — save on the data subscription, then lose multiples of that cost on bad trades made with incomplete information.


Fix #5 — Temporal Specificity in Question Design

One of the most actionable (and cheapest) fixes is adjusting how you frame your behavioural intention questions. Vague future-tense questions generate aspirational answers. Temporally specific, contextually grounded questions generate more realistic ones.

Research reviewed by Radiance Insights cites Morwitz et al.’s (1993) guidance: if you’re measuring renewal behaviour, ask specifically “How likely are you to renew your membership online in the next seven days?” rather than “How likely are you to renew?” The specificity forces the respondent out of their aspirational future-self fantasy and into a more concrete evaluation of their actual current situation.

This is the survey equivalent of the difference between “Do you want to be fit?” and “Are you going to go for a run tomorrow at 6am before work?” Everyone says yes to the first. Far fewer say yes to the second — and those who do are actually far more likely to follow through.


Part Five: Building a Research Framework That Actually Works

So here is a practical framework for closing the intent-action gap in your organisation’s research programme. I’m giving you this as a trader who has seen the downstream consequences of bad data flowing into pricing decisions, product development budgets, and marketing allocation models. This is worth more than most consultancies will charge you for.

The Four-Layer Research Stack

Layer 1 — Attitudinal Survey Data (Hypothesis Generation)

Use surveys as hypothesis generators, not decision tools. They are excellent at surfacing themes, generating language, and identifying broad attitude clusters. They are terrible at predicting specific behaviours or exact price thresholds. Design them with indirect questioning, certainty calibration, and behaviour-specific desirability controls.

Layer 2 — Behavioural Data Integration (Reality Check)

Every significant attitudinal finding should be stress-tested against available behavioural data. If your survey says 68% of customers value next-day delivery, check whether customers who actually receive next-day delivery show higher retention rates in your CRM. If the correlation isn’t there, the stated preference isn’t real — or at least, not real enough to price into your strategy.
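That reality check can be as simple as a group comparison on your CRM export. A sketch, assuming you can flag which customers actually received next-day delivery and whether they were retained (the field names are illustrative):

```python
import pandas as pd

# Hypothetical CRM export: did the customer actually receive next-day
# delivery, and were they still active six months later?
crm = pd.DataFrame({
    "customer_id": range(1, 9),
    "got_next_day": [1, 1, 1, 0, 0, 0, 1, 0],
    "retained_6m":  [1, 0, 1, 1, 0, 0, 1, 0],
})

# Compare retention between customers who did and did not get the benefit
# the survey claimed they valued.
retention_by_group = crm.groupby("got_next_day")["retained_6m"].mean()
print(retention_by_group)
# If the retention lift isn't there, the stated preference didn't survive
# contact with behaviour -- and shouldn't drive pricing or logistics spend.
```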

Layer 3 — Field Experiments (Validation)

For high-stakes decisions (new product launches, pricing changes, new market entry), run real-world experiments before committing budget. This means A/B testing actual pricing on real customers, piloting in limited markets with actual transactions, or using incentivised research where respondents make real choices with real consequences.

Research on the intention-behaviour gap consistently shows that experimental studies report considerably smaller intention-behaviour relationships than correlational surveys. The closer your research design gets to actual behaviour, the lower your apparent enthusiasm numbers will look — and the more accurate your forecasts will be.

The discomfort of seeing your “74% purchase intent” survey number shrink to “16% actual trial rate” in a real field experiment is not the research failing. That is the research working. The concept test was a polite first date. The field experiment is month three, when you’ve seen them eat cereal for dinner and argue with their broadband provider on the phone for forty minutes. Now you know who you’re actually dealing with.

Layer 4 — Longitudinal Tracking (Correction Over Time)

Finally, build a feedback loop. Track stated intentions at time of survey. Track actual behaviour over the following 3, 6, and 12 months. Calculate your organisation’s specific intent-action ratio for different product categories, customer segments, and purchase types. This calibration data is one of the most valuable assets a research-mature organisation can build. It tells you, empirically, how much to discount stated intentions in your specific context.
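A sketch of what that calibration looks like as code, assuming you can link each survey response to a purchase flag in the follow-up window (category labels and column names are illustrative):

```python
import pandas as pd

# Hypothetical tracking data: one row per surveyed customer, with their stated
# intent at survey time and whether they actually purchased within six months.
track = pd.DataFrame({
    "category":      ["snacks", "snacks", "premium", "premium", "premium", "snacks"],
    "stated_intent": [1, 1, 1, 1, 0, 1],
    "bought_6m":     [1, 0, 0, 1, 0, 1],
})

# Intent-action ratio per category: of those who said they'd buy, how many did?
calib = (
    track[track["stated_intent"] == 1]
    .groupby("category")["bought_6m"]
    .agg(stated="count", converted="sum")
)
calib["intent_action_ratio"] = calib["converted"] / calib["stated"]
print(calib)
# These ratios become the discount factors you apply to the next wave of
# stated-intent numbers for each category.
```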


Part Six: The Broader Implications for Market Research

The intent-action gap isn’t just a methodological inconvenience. It’s a fundamental challenge to the epistemological foundations of how modern organisations understand their customers. When your core intelligence-gathering mechanism systematically overestimates enthusiasm, willingness to pay, and likelihood of behaviour change, every downstream decision is distorted.

Product development teams build roadmaps around features that test well but don’t drive retention. Pricing teams set premiums that concept tests support but markets reject. Marketing teams target segments that survey data says are ready to convert, but who were really just telling a stranger what sounded reasonable at the time.

The meta-analytic review in the Journal of the Academy of Marketing Science (2019/2020) is particularly instructive here: across 77 studies, the hypothetical bias in willingness to pay is not merely present — it is systematically present, affecting different product categories, methodologies, and respondent types consistently and predictably. This is not random noise. It is a structural bias built into the standard survey instrument.

That means every research department using standard survey methodology without bias correction is operating a factory that reliably produces overoptimistic intelligence. And in business, overoptimism is expensive. It’s not as dramatic as catastrophic misjudgement. It just costs you a little bit every time, steadily, on every product that underperforms concept test expectations, every pricing band that doesn’t hold, every campaign that delivers half the conversion rate the attitudinal data suggested it should.

The Organisational Incentives That Make This Worse

Here’s something the academic literature doesn’t spend enough time on, but anyone who has worked inside a large organisation already knows: the incentive structure of corporate research systematically rewards optimistic findings and discourages methodological rigour that produces lower numbers.

Think about it. The research team that comes back with “74% stated purchase intent” gets a warm reception, an approving nod from the CMO, and green lights for the next phase. The research team that comes back and says “actually, once we correct for hypothetical bias and social desirability, you’re looking at closer to 28-35% realistic purchase intent, and here’s our uncertainty range” — that team gets asked if they ran the study correctly.

Nobody intends to build these incentives into the organisation. It just happens, organically, because decisiveness is rewarded and uncertainty is uncomfortable. The result is a slow, silent pressure towards research that confirms rather than challenges, that produces the numbers that feel good to report rather than the numbers that are most likely to be true.

Breaking this cultural pattern requires explicit organisational commitment — senior leadership that actively rewards research teams for surfacing uncomfortable truths, and explicit policies about the role of stated preference data versus validated behavioural measures in investment decisions. Without cultural alignment, even the best methodology improvements get quietly deprioritised because nobody wants to be the team that made the board presentation awkward.

I’ve seen it in trading rooms too. The analyst who consistently delivers measured, uncertain, well-calibrated forecasts gets quietly overshadowed by the analyst who makes bold calls with confidence — even if, over time, the calibrated analyst is more reliably correct. Certainty is more emotionally appealing than accuracy. Fixing the intent-action gap means accepting that good research often looks less impressive than bad research — and building organisations that know the difference.


Part Seven: A Message to the Traders in the Room

I work in trading, and the parallels to market research are not subtle. In financial markets, we are extremely disciplined about the distinction between analyst forecasts (stated expectations) and actual earnings (revealed reality). We have entire research disciplines built around understanding why analyst consensus systematically overpredicts revenue, and we build models that correct for those tendencies.

Consumer research needs the same epistemological rigour. The survey is the analyst forecast. The purchase decision is the earnings number. And just like earnings seasons have a way of humbling even the most confident analyst, real consumer behaviour has a way of humbling even the most carefully designed concept test.

I’ve sat in meetings where someone waves a survey showing 74% purchase intent and presents it as if that number will translate, even partially, into actual sales. It won’t. It never does at face value. The industry knows this, and it continues to use the number anyway because it’s the number they have, and having a number feels better than having uncertainty.

But this is precisely backwards. A wrong number delivered with confidence is more dangerous than honest uncertainty. At least uncertainty prompts due diligence. A confident wrong number prompts budget allocation.

The good news, as I keep reminding teams: the gap is measurable, it is partially correctable, and organisations that build systematic correction mechanisms into their research stack gain a durable competitive advantage over those that don’t. Not because they get perfect data — no one gets perfect data — but because they get calibrated data, which is the only kind worth strategising around.


Conclusion: Close the Gap or Pay for It

The intent-action gap in consumer research is not a rounding error. It is a structural bias, documented extensively in peer-reviewed literature across marketing science, psychology, and behavioural economics, that systematically inflates stated intentions above actual behaviour.

Consumers lie in surveys not out of malice but out of aspiration, social pressure, hypothetical reasoning, and the very human tendency to answer questions from the perspective of who they wish they were rather than who they are on a Tuesday afternoon.

Here’s where we land, and I’m going to be straight with you the same way I am when I’m looking at a trade that looks good on paper but smells wrong in the market: the data you have is not the data you need. The stated preference survey has been the industry standard for decades not because it’s accurate — the research definitively shows it isn’t — but because it’s cheap, scalable, and produces numbers that look decisive in a deck. Nobody ever got fired for showing a board “72% purchase intent.” They just quietly struggled to explain why only 8% of the target market actually bought the product in Q3.

Fixing it requires a combination of smarter survey design (indirect questioning, certainty calibration, temporal specificity), richer data triangulation (stated plus revealed preference), and organisational discipline to treat survey data as hypothesis rather than conclusion.

The businesses that internalise this — that build research stacks designed around the limits of stated-preference data — will make better product decisions, set more accurate price points, and allocate marketing budgets with greater precision than competitors still running standard surveys and then staring, baffled, at their conversion data.

And here’s the final thing I’ll leave you with, because it’s the lesson that took me the longest to accept in my trading career: the goal is never to eliminate uncertainty. You can’t. The market doesn’t care about your confidence interval and neither does your customer. The goal is to make better-calibrated bets — to know that your stated-preference data overpredicts by roughly 40% for new premium products in your category, and to adjust accordingly. To stop being surprised by the gap and start pricing it in.

The gap between what people say and what they do has always existed. It always will. The question is whether your organisation is measuring it, correcting for it, and pricing it into your decisions — or whether you’re still trusting the forecast over the earnings number.

The survey respondents of the world are not going to suddenly become perfect predictors of their own behaviour. They never were. They can’t be. The aspiration machine runs too hot, the self-presentation instinct is too strong, and the context of a survey is too far removed from the context of a purchase decision for perfect alignment to be possible.

But you can build systems that acknowledge these truths, account for them, and transform imperfect data into better-than-average intelligence. And in a world where everyone is working with the same imperfect survey tools, slightly better calibration is all the edge you need.

Close the gap. Or budget to be surprised.


References

  1. Mazar, A., Wood, W., & Verplanken, B. (2022). Understanding the intention-behavior gap: The role of intention strength. Frontiers in Psychology, 13, 923464. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2022.923464/full
  2. Fisher, R. J. (1993). Social desirability bias and the validity of indirect questioning. Journal of Consumer Research, 20(2), 303–315. https://academic.oup.com/jcr/article-abstract/20/2/303/1793106
  3. Larson, R. B. (2019). Controlling social desirability bias. International Journal of Market Research, 61(5), 534–547. https://journals.sagepub.com/doi/10.1177/1470785318805305
  4. Miller, K. M., Hofstetter, R., Krohmer, H., & Zhang, Z. J. (2019/2020). Accurately measuring willingness to pay for consumer goods: A meta-analysis of the hypothetical bias. Journal of the Academy of Marketing Science, 48(3), 499–518. https://link.springer.com/article/10.1007/s11747-019-00666-6
  5. Heide, A., Neuert, J., & Gröne, B. (2025). Reducing the hypothetical bias in measuring willingness to pay for mobile communication products. Journal of Theoretical and Applied Electronic Commerce Research, 20(2), 122. https://www.mdpi.com/0718-1876/20/2/122
  6. Morwitz, V. G., Johnson, E. J., & Schmittlein, D. C. (1993). Does measuring intent change behavior? Journal of Consumer Research, 20(1), 46–61.
  7. Morwitz, V. G., Steckel, J. H., & Gupta, A. (2007). When do purchase intentions predict sales? International Journal of Forecasting, 23(3), 347–364. https://www.sciencedirect.com/science/article/abs/pii/S0169207007000799
  8. Auger, P., & Devinney, T. M. (2007). Do what consumers say matter? The misalignment of preferences with unconstrained ethical intentions. Journal of Business Ethics, 76(4), 361–383.
  9. Casais, B., & Faria, J. (2022). The intention-behavior gap in ethical consumption: Mediators, moderators and consumer profiles. Journal of Consumer Affairs, 56(1). https://journals.sagepub.com/doi/10.1177/02761467211054836
  10. Zhu, O. Y., & Greene, D. (2024). Should the risk of social desirability bias in survey studies be assessed at the level of each pro-environmental behaviour? Tourism Management, 105. https://www.sciencedirect.com/science/article/pii/S0261517724000529
  11. Hensher, D. A. (2010). Hypothetical bias, choice experiments and willingness to pay. Transportation Research Part B: Methodological, 44(6), 735–752. https://ideas.repec.org/a/eee/transb/v44y2010i6p735-752.html
  12. Sharma, A., & Kumar, V. (2026). Big data in consumer behavior research: A systematic review of data sources, analytical methods, and research questions. Journal of Marketing Analytics. https://link.springer.com/article/10.1057/s41270-026-00470-6
  13. Radiance Insights / Membership Innovation. (2020). Closing the intention-action gap. https://www.membershipinnovation.com/insights/closing-the-intention-action-gap

Disclaimer: This article is for educational and informational purposes only and does not constitute financial advice. Trading financial instruments carries significant risk of loss. Always conduct your own due diligence and consult a qualified financial professional before making investment decisions.