Customer feedback vs. expert opinion in product development is not just an academic debate — it is a question that has shaped the fate of billion-dollar companies, sunk legendary brands, and occasionally made some absolute genius look like a complete fool on national television. We are talking about the fundamental tension at the heart of every new product launch: do you listen to the people who use the thing, or the people who understand the thing?

And here is the truth that nobody puts on a PowerPoint slide: both camps have been spectacularly, embarrassingly wrong. Not just a little wrong — like, company-collapsing, stock-price-destroying, have-a-meeting-about-this-for-the-next-decade wrong.

I am going to walk you through the research, the evidence, the case studies, and the jokes. Because if you cannot laugh at a $400 million product failure, frankly, you are not going to make it in this business.


Part One: Setting the Stage — What Are We Actually Arguing About?

Before we get into who is right and who is looking for a new job, let us define our terms.

Expert feedback in product development refers to input from professionals, specialists, analysts, engineers, designers, and industry consultants who understand the technical, competitive, and strategic landscape of a product category. These are the people with the credentials, the research, the institutional knowledge, and the LinkedIn profiles with seventeen endorsements for “Strategic Thinking.”

Customer feedback refers to input from the actual end users of a product — the people whose money the company is chasing, whose habits the product must fit into, and whose loyalty determines whether the business still exists in five years. These are the people who do not care about your value proposition framework. They just want the thing to work.

The debate between these two sources of information has been active in management literature for decades. Bosch-Sijtsema and Bosch, writing in the Journal of Product Innovation Management (2014), identified that user involvement throughout the innovation process in high-tech industries produces measurably different outcomes depending on when and how that involvement occurs. Their key finding was not that one type of feedback is universally superior — it was that the timing and structure of feedback integration is what separates winning products from catastrophic launches. (Bosch-Sijtsema & Bosch, 2014, JPIM)

In other words: it is not just whose voice you listen to. It is when you listen, how you listen, and whether you actually do anything with what you hear. Which, by the way, is the same principle that applies to marriages, partnerships, and most trader-client relationships. But let us not go there.


Part Two: The Case for Expert Feedback — When the Nerds Are Right

Let me give experts their flowers first, because they deserve it. There are entire product categories that exist because experts identified a need before consumers even knew they had it. Nobody was walking around in 1991 asking for the internet. Nobody in 2006 said, “You know what I really need? A pocket computer that is also a phone, a camera, a GPS, and a way to argue with strangers at 2 AM.”

Experts, particularly in technologically complex industries, possess what the academic literature refers to as “sticky knowledge” — information about capabilities, constraints, and market dynamics that is embedded in professional experience and is not easily transferred to lay users. Schoenherr and Wagner (2021), writing in the International Journal of Production Research, demonstrated that the source and concentration of knowledge inputs in inter-organisational new product development projects directly affect product design quality and, ultimately, market performance.

This is why pharmaceutical companies have medical advisory boards. This is why aerospace firms use test pilots rather than holding focus groups with frequent flyers. This is why financial products — and yes, as a trader you will appreciate this — are structured by quants and risk managers, not by retail investors who just want higher returns with zero risk. (We have all met that client. We have all smiled and nodded.)

Experts are most valuable when:

  • The product operates in a technically complex domain where users cannot accurately evaluate what they need
  • The product requires predictive knowledge about regulatory, market, or technological shifts
  • The user base lacks the context to articulate latent or future needs
  • The failure mode of getting it wrong is catastrophic (safety, medical, financial products)

Intellectual capital — the accumulated expertise residing in your human, structural, and relational capital — has been empirically linked to new product development performance. Farzaneh et al. (2022) found that firms leveraging high levels of intellectual capital show significantly stronger innovation success rates, particularly in knowledge-intensive industries.

So the experts are not just gatekeeping for fun. They are protecting products from the well-documented phenomenon where customers confidently tell you exactly what they want — and then do not buy it when you build it. I once had a client who told me for three years he wanted low-volatility, steady returns. The first time a meme stock went up 400% and he missed it, you would have thought I personally robbed his house.


Part Three: The Case for Customer Feedback — When the People Are Right

Now here is where it gets delicious. Because for every example of experts correctly predicting a product need, there is a graveyard of products that experts loved and customers absolutely refused to touch.

Customer feedback is the closest thing to ground truth in product development. Shah and Rai (2022) demonstrated that the strategic use of customer feedback enables sustainable business success by driving service quality improvements and strengthening customer satisfaction, loyalty, and brand equity. Their research emphasised that robust feedback mechanisms are not optional nice-to-haves — they are structural necessities for businesses that want to survive long-term.

Hudson (2008) reinforced this, framing customer feedback as a strategic tool for developing innovative solutions aligned with user needs — not just a customer service function, but a core product development input. When companies establish what the literature calls “feedback loops,” they create dynamic systems that continuously calibrate product direction against real-world usage.

Here is the thing about customers that experts sometimes forget: customers know their own lives. They know what frustrates them in the morning. They know what takes too long. They know what makes them feel respected or ignored. They do not always know how to fix it — that is what engineers are for — but they absolutely know that something is broken.

Naeem and Di Maria (2022), writing in the European Journal of Innovation Management, found that firms deploying Industry 4.0 technologies to enable customer participation in new product design and production processes showed significantly better product outcomes, particularly in product design quality. The key insight: customer involvement is most powerful in product design, and that effect becomes even stronger as digital tools lower the friction of gathering and acting on that input. (Naeem & Di Maria, 2022, EJIM)

The psychological dimension matters too. Fuchs and colleagues identified what they called an “empowerment–product demand” effect: customers who participate in product development develop psychological ownership of the resulting product and are significantly more likely to purchase it and advocate for it. (Fuchs et al., 2010; see also Maier et al., 2024, JPIM)

Translation: when you let customers help build the thing, they become personally invested in its success. That is free marketing. That is brand loyalty you cannot manufacture. That is the kind of thing a good trader would call an asymmetric return on investment — you put in customer engagement, you get back advocacy and repeat purchase. Somebody write that on a whiteboard.


Part Four: Case Study One — The New Coke Catastrophe (Or: What Happens When Experts Override Customers)

Let us talk about one of the most spectacular product failures in corporate history, because it is the single best illustration of what happens when technical experts override customer sentiment in a domain where customer attachment transcends rational preference.

In 1985, Coca-Cola launched “New Coke” — a reformulated version of their flagship product, developed after their own research suggested consumers were drifting toward Pepsi. The company’s scientists conducted taste tests on approximately 190,000 consumers. The results were statistically clear: people preferred the sweeter, reformulated taste. The experts — food scientists, marketers, data analysts — were united in their recommendation. Launch it.

What followed was, in the words of the company’s own leadership, a nightmare.

Within weeks of launch, Coca-Cola was receiving 5,000 angry phone calls per day. By June, that number had climbed to 8,000 calls daily — forcing the company to hire additional operators just to manage the complaints. Protest groups formed. One man, Gay Mullins, spent $30,000 of his own money founding the Old Cola Drinkers of America, which claimed 100,000 members. CBS News framed the backlash as “the people against the corporation.” Pepsi granted its employees a day off and took out full-page newspaper ads declaring victory. Coca-Cola’s stock declined while Pepsi’s rose.

Just 79 days after launch, Coca-Cola brought back the original formula as “Coca-Cola Classic.” (The Branding Journal, 2025; Learning People, 2024)

The problem was not that the experts were technically wrong about the taste. The blind taste tests were accurate. People did prefer the sweeter formula in a sip test. The problem is that a sip test does not capture what Coke meant to people. The product was not just a beverage. It was a cultural institution. It was the taste of childhood, of baseball games, of American identity. The experts measured flavour preference. They forgot to measure emotional attachment, nostalgia, and the psychological significance of a brand that had existed for 99 years.

Now, as a trader, you understand this perfectly. You know that price is not just a number — it carries sentiment, momentum, psychology, and the weight of previous positions. Any analyst who tells you that price is purely rational has never sat through a flash crash. The New Coke disaster is the product development equivalent of a quant model that perfectly predicts earnings but forgets that the CEO just got caught in a scandal. The fundamentals were correct. The humans were furious.

Key lesson: Expert knowledge is not equipped to measure emotional utility. And emotional utility, in consumer goods, is often the primary utility.


Part Five: Case Study Two — Apple and the iPod (Or: When Experts and Customers Aligned at Exactly the Right Moment)

Now for the other side of the coin. In the early 2000s, the music industry was in chaos. Consumers were pirating music en masse through Napster and its successors. The expert consensus in the music business was that digital distribution would destroy revenue. The consumer behaviour data showed that people wanted digital music — they just did not want to pay for it the way the industry was demanding.

Apple, under Steve Jobs, made a move that combined expert product vision with deep customer insight. The iPod was not designed by asking customers what they wanted — Jobs famously had little patience for focus groups, believing that customers could not articulate needs for products they had never seen. But Apple’s product development was intensely informed by observing how customers behaved — what frustrated them, what they actually did with music, how they navigated the clunky digital music players that already existed.

The iTunes Store married expert-level industry negotiation (getting the major labels to agree to 99-cent track pricing was a feat of commercial diplomacy that required deep industry knowledge) with a customer experience that was intuitive, fast, and emotionally satisfying. The result: over 1 million songs sold in the first week. By 2010, iTunes was the world’s largest music retailer.

This is the integration model — not experts OR customers, but expert knowledge applied in service of customer behaviour insights. The experts understood the technology and the industry dynamics. The customer observation data shaped the experience layer. Strip either element out, and you do not have the iPod. You have either a technically elegant device nobody knows how to use, or a consumer-friendly concept with no technical execution behind it.

Verganti, Vendraminelli, and Iansiti (2020), in their influential paper in the Journal of Product Innovation Management, argued that in the age of artificial intelligence and platform-driven innovation, the most successful product development combines deep expert knowledge of technological possibility with granular understanding of user meaning-making. In other words: know what the technology can do, and know what the human needs it to do for their life. (JPIM, 2020)

Apple did not ask customers what they wanted. Apple watched what customers did, understood what they were trying to accomplish, and built something that matched the emotional and functional need with technical precision. That is not expert override. That is expert synthesis with customer insight.


Part Six: Case Study Three — The Financial Product Graveyard (Or: When Traders Get Cute)

Now we are in your territory. Let us talk about product development in financial services, because this industry has produced some of the most confidently wrong expert opinions in the history of human commerce.

Mortgage-backed securities. Structured investment vehicles. Variable annuities with seventeen fee layers. Products engineered by the brightest minds at the best institutions, reviewed by teams of risk analysts, blessed by compliance departments, and subsequently purchased by retail customers who had absolutely no idea what they were buying.

The 2008 financial crisis is, at its core, a product development failure of catastrophic proportions. The experts built products of extraordinary complexity. The customers — both institutional and retail — did not understand what they owned. When the feedback mechanism that would have flagged this mismatch (rising default rates, deteriorating underwriting standards, customer confusion) was finally activated, it was too late to recall the product.

The parallel to New Coke is striking: in both cases, expert measurement (taste tests / credit ratings) captured one dimension of the product experience (flavour / default probability) while completely missing the more important dimension (emotional attachment / systemic risk and opacity). In both cases, by the time the customer feedback arrived in force, the damage was done.

The lesson for product development — in any industry — is that expert models must be stress-tested against customer reality, not just against other expert models. When your product development process involves experts talking only to other experts, you have created an echo chamber that will eventually produce something that looks great in a presentation and fails catastrophically in the market.


Part Seven: The Research Verdict — It Is Not Either/Or, And It Never Was

By now you might be expecting me to hand down a verdict. Experts win! Customers win! One team, one dream! But the research is more honest than that, and frankly so am I.

The peer-reviewed literature is remarkably consistent on this point: the question is not which source of feedback matters more — it is how to integrate both sources at the right stages of the product development process.

Kabbedijk, Brinkkemper, Jansen, and van der Veldt, in research presented at the Requirements Engineering Conference, examined customer involvement in requirements management for mass-market software development. Their finding: customer involvement is most valuable in the requirements and definition stages, but expert knowledge becomes critical in the technical design and implementation stages. Neither group can substitute for the other. (Kabbedijk et al., 2009)

This maps neatly onto what practitioners call the “double diamond” model of product development — diverging to gather broad input (customer and expert), then converging to define the problem, then diverging again to generate solutions, then converging again to build. Customer feedback is most powerful in the divergent phases; expert knowledge is most powerful in the convergent phases. Conflate them — relying on expert opinion alone to discover what the problem is, or on customer opinion to make technical design decisions — and you get bad outcomes.

Olsson and Bosch (2014) extended this thinking with their data-driven R&D framework, arguing that the future of product development is not qualitative expert opinion OR qualitative customer feedback, but continuous, data-driven feedback loops that capture quantitative customer behaviour signals in real time and feed them back into development decisions. The goal is not to replace expert judgment — it is to ground expert judgment in empirical customer behaviour data rather than assumption or inference.
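The loop is easier to see in code than in prose. Below is a minimal Python sketch of the idea; every name, metric, and threshold is hypothetical (mine, not Olsson and Bosch’s). It measures observed engagement per feature, compares it against the expert panel’s prior conviction, and flags the gaps worth a human conversation.

```python
from dataclasses import dataclass

@dataclass
class FeatureSignal:
    """One feature's behavioural metrics. All field names are hypothetical."""
    name: str
    sessions: int        # sessions in which the feature was shown
    engaged: int         # sessions with meaningful engagement
    expert_score: float  # expert panel's prior conviction, 0..1

def engagement_rate(sig: FeatureSignal) -> float:
    return sig.engaged / sig.sessions if sig.sessions else 0.0

def review_queue(signals, divergence_threshold=0.3):
    """Flag features where expert conviction and observed behaviour diverge.

    The gap can run either way: experts over-rating a feature customers
    ignore, or under-rating one customers love. Both gaps matter.
    """
    flagged = []
    for sig in signals:
        gap = round(sig.expert_score - engagement_rate(sig), 2)
        if abs(gap) >= divergence_threshold:
            flagged.append((sig.name, gap))
    return flagged

signals = [
    FeatureSignal("one_click_reorder", sessions=1000, engaged=820, expert_score=0.55),
    FeatureSignal("ai_assistant",      sessions=1000, engaged=90,  expert_score=0.80),
    FeatureSignal("dark_mode",         sessions=1000, engaged=400, expert_score=0.45),
]
print(review_queue(signals))  # → [('ai_assistant', 0.71)]
```

The useful output is the flagged list: the places where expert assumption and observed customer behaviour disagree. That divergence, not either number on its own, is what a data-driven feedback loop exists to surface.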

That is a sophisticated framework. It requires investment in data infrastructure, in customer relationship management, in feedback capture systems. It requires organisations to be humble enough to let data challenge expert assumptions. That last part, if we are honest, is where most organisations fail.

Because look — and I say this with love — experts have egos. I have never met a senior consultant who said, “You know what, the survey data from our customers should completely override my twenty years of industry experience.” That conversation does not happen naturally. You have to build systems that force it to happen. You have to create product development cultures where customer feedback is structurally empowered to challenge expert consensus, not just politely acknowledged in a presentation and then ignored.


Part Eight: The Trader’s Framework — How to Actually Apply This

Let us get practical, because this is where articles usually go soft and start recommending “robust feedback ecosystems” and “agile customer-centricity frameworks” without telling you what to actually do on Monday morning.

As a trader and product thinker, here is how I frame the expert vs. customer feedback question in practical terms:

Think of expert feedback as your fundamental analysis. It tells you what should be true about the market. Experts understand the sector, the technology, the competitive dynamics, the regulatory environment. They can predict structural shifts. They are your primary tool for identifying what is possible and what is sustainable.

Think of customer feedback as your price action. It tells you what is true right now. It does not care about your models or your frameworks. It is the market voting with its feet, its wallets, and increasingly its social media accounts. Customer feedback is a leading indicator. Expert analysis is often a lagging indicator — it explains why something happened after customers have already moved.

The job of a great product developer — like the job of a great trader — is to synthesise both. To use fundamental analysis (expert knowledge) to establish a position thesis, and to use price action (customer feedback) to time the execution and manage the risk.

When they align, you move with conviction. When they diverge — when experts say the product is brilliant but customers are not engaging — that divergence is your most important signal. It means something is wrong with either your expert model or your customer measurement, and you need to find out which before you commit more capital.

Ignoring the divergence because you trust the experts is how you end up holding a position all the way to zero. Ignoring the divergence because you trust the customers is how you miss breakthrough innovations that customers could not articulate until they experienced them. The skill is learning to read when the experts have information the market has not priced in yet, and when the market is telling you that the experts have missed something important.


Part Nine: The Structural Problem — Organisational Incentives and Why Companies Get This Wrong

Here is something the academic literature discusses in careful, measured language, and I am going to say in plain English: most organisations are structurally incentivised to listen to experts over customers, and that bias has nothing to do with what actually produces better products.

Experts are in the room. Customers are not. Experts speak the language of the organisation — they use the same frameworks, attend the same conferences, respond well to the same PowerPoint formats. Their feedback is delivered in forms that are easy to process, cite, and present upward. Customer feedback, particularly qualitative customer feedback, is messy. It is emotional. It uses words like “confusing” and “annoying” and “why does it do that?” rather than “insufficient value proposition differentiation” and “inadequate UX affordance mapping.”

Experts also have institutional authority. When a senior analyst says a product concept is strong, that view carries organisational weight that a hundred customers saying “I do not understand what this does” often cannot overcome. This is a structural failure, not an individual failure. It is built into how most organisations make decisions.

The research on customer empowerment (Fuchs et al., 2010; Maier et al., 2024) shows that empowering customers in the product development process requires deliberate structural choices — not just a commitment to “listening to customers” but actual mechanisms that give customer feedback decision-making authority at key gates in the product development process.

That means product teams with customer advocates. That means feedback data presented in the same format as technical specifications. That means launch gate criteria that include customer validation metrics alongside technical and financial metrics. It means building an organisation where a customer saying “I do not get it” can stop a product launch, not just generate a task in the backlog.

This is uncomfortable. It is uncomfortable because it distributes power in a way that challenges the natural hierarchy of expertise. But every organisation that has survived long enough to become a case study in business schools has eventually learned it.


Part Ten: The Digital Transformation — How Technology Is Changing the Balance

There is a development in this space that deserves serious attention, and it is reshaping the entire expert-vs-customer debate in real time.

The rise of digital platforms, behavioural analytics, and AI-driven feedback systems is fundamentally changing the information asymmetry that historically gave experts their advantage. The reason experts were so valuable in product development was partly because they had information that customers and companies could not easily access — industry trends, technical benchmarks, competitive intelligence, regulatory trajectories. That information moat is eroding.

When a product team can see, in real time, exactly how thousands of users are navigating their application — where they drop off, what they click repeatedly, what search terms bring them to the product, what language they use in reviews — the need to infer customer needs from expert opinion is dramatically reduced. The customer is speaking continuously, in behaviour, and the tools to hear that speech have never been more accessible.
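As a toy illustration of what hearing that behavioural speech looks like in practice, here is a hedged Python sketch. The funnel steps and event shapes are invented for the example; any real analytics stack has its own export format. It turns a raw event log into a per-step drop-off report, answering “where do users give up?” from behaviour rather than opinion.

```python
from collections import Counter

# Hypothetical ordered funnel for a sign-up flow.
FUNNEL = ["landing", "signup_form", "email_verified", "first_action"]

def dropoff_report(events):
    """events: iterable of (user_id, step) tuples from an analytics export.

    Returns (step, users_reached, share_lost_vs_previous_step) per step.
    Repeat events by the same user are de-duplicated first.
    """
    reached = Counter()
    for _, step in set(events):
        reached[step] += 1
    report, prev = [], None
    for step in FUNNEL:
        n = reached[step]
        lost = 0.0 if prev in (None, 0) else round(1 - n / prev, 2)
        report.append((step, n, lost))
        prev = n
    return report

events = [
    ("u1", "landing"), ("u2", "landing"), ("u3", "landing"),
    ("u1", "signup_form"), ("u2", "signup_form"),
    ("u1", "email_verified"),
    ("u1", "first_action"), ("u1", "first_action"),  # duplicate, de-duplicated
]
print(dropoff_report(events))
# → [('landing', 3, 0.0), ('signup_form', 2, 0.33),
#    ('email_verified', 1, 0.5), ('first_action', 1, 0.0)]
```

Notice that the report says nothing about why half the users who sign up never verify their email. It only tells you, continuously and without being asked, that they do not. Interpreting that is where expert judgment re-enters the picture.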

Verganti et al. (2020), in the Journal of Product Innovation Management, argue that artificial intelligence does not replace expert judgment — it transforms it. AI can process customer behavioural signals at a scale and speed that human analysts cannot match, surfacing patterns that would be invisible to either expert intuition or traditional customer research. The expert’s role shifts from generating insight to interpreting and acting on AI-surfaced insight. That is a meaningful shift in the value chain of product knowledge. (JPIM, 2020)

For product developers in any industry — including financial products — this means the question of “experts vs. customers” is becoming less binary. The best organisations are building continuous feedback infrastructure that treats customer behaviour as a primary data stream, with expert judgment applied to interpret and act on that data rather than to substitute for it.

The companies that have not made this shift are the ones still running annual customer satisfaction surveys, sharing the results in a quarterly all-hands presentation, and then continuing to build exactly what they were building before. You know who you are.


Part Eleven: Practical Recommendations — Building a Feedback Architecture That Actually Works

Drawing on the research, the case studies, and a healthy dose of hard-won commercial pragmatism, here is a framework for structuring the expert-customer feedback balance in product development:

Stage 1: Market Definition (Weight toward Expert Input — 70/30)

At this stage, you are defining the problem space, identifying unmet needs, and scoping competitive positioning. Expert knowledge of industry dynamics, technological possibilities, and strategic context is most valuable here. Customer input at this stage is useful for broad directional validation — are we solving a real problem? — but should not drive technical direction, which customers are poorly positioned to specify.

Stage 2: Concept Development (Weight toward Customer Input — 40/60)

Once you have a defined problem space, customer insight becomes primary. Qualitative research — interviews, ethnographic observation, co-creation workshops — should drive concept direction. The question is not “what technology can we build?” but “what does the customer need to accomplish, and what experience do they need to have?” Expert knowledge serves as a constraint layer — validating that customer-desired outcomes are technically and commercially feasible.

Stage 3: Prototype and Testing (Customer Primary — 25/75)

At this stage, the product should be put in front of real users as quickly and frequently as possible. Agile development methodologies, minimum viable products, and rapid iteration cycles are the operational expression of this principle. Expert review continues for technical quality, but customer behaviour data — what users actually do, not what they say they prefer — should drive iteration decisions.

Stage 4: Launch and Post-Launch (Integrated — 50/50, moving toward customer data dominance)

Post-launch, the balance shifts dramatically toward customer data. Sales performance, retention, usage patterns, support queries, and NPS scores are your primary feedback instruments. Expert analysis helps interpret that data in strategic context, but the data should not be subordinated to expert narrative.

This is not a precise formula — context matters enormously, and different industries will weight these stages differently. But the structural principle holds: customer voice should grow in authority as you move from abstract to concrete, and any organisation where expert opinion consistently overrides customer feedback in the final stages of development is building products for the wrong audience.
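For readers who like their frameworks explicit, the stage weights above can be sketched in a few lines of Python. To be clear, these ratios are illustrative defaults to argue about, not a formula the research prescribes.

```python
# Expert/customer weight per stage, mirroring the illustrative ratios above.
# Defaults only; tune per industry and per failure-mode severity.
STAGE_WEIGHTS = {
    "market_definition": (0.70, 0.30),
    "concept":           (0.40, 0.60),
    "prototype":         (0.25, 0.75),
    "post_launch":       (0.50, 0.50),
}

def gate_score(stage, expert_score, customer_score):
    """Blend expert and customer evidence (each 0..1) at a stage gate."""
    w_expert, w_customer = STAGE_WEIGHTS[stage]
    return round(w_expert * expert_score + w_customer * customer_score, 3)

# Same evidence, different verdicts by stage:
# experts love the concept (0.9), customers are lukewarm (0.3).
print(gate_score("market_definition", 0.9, 0.3))  # 0.72: early on, expert conviction carries it
print(gate_score("prototype", 0.9, 0.3))          # 0.45: late on, customer lukewarmness dominates
```

Note how identical evidence produces different verdicts at different stages. That is the structural principle in one function: customer voice gains authority as the product moves from abstract to concrete.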


Part Twelve: The Uncomfortable Truth About “Visionary” Products

I want to address one more argument before we conclude, because it comes up constantly in these discussions and it needs to be handled carefully.

The argument goes like this: “Steve Jobs did not use customer feedback and he built Apple. Henry Ford said customers would have asked for a faster horse, not a car. True innovation requires vision that transcends customer feedback.”

This argument is made so often, and so confidently, that it deserves a full deconstruction.

First, it is empirically inaccurate. There is no record that Ford ever said the “faster horse” line; the quote is almost certainly apocryphal. And Apple under Jobs was obsessively focused on the customer experience — just not on what customers said they wanted, but on what they did and what the resulting friction told product designers about unmet needs. Jobs did not ignore customers. He ignored customer verbal responses to hypothetical products and instead built rich intuitions from observing customer behaviour with existing products. That is a sophisticated form of customer research, not an absence of it.

Second, the cases where “visionary expert override” has worked — and they do exist — are almost always cases where the expert had deep, long-term exposure to customer behaviour that had not yet been formalised into feedback data. The “vision” was not conjured from nowhere. It was built from years of watching customers struggle with things they could not articulate.

Third, for every Steve Jobs, there are ten product leaders who ignored customer feedback because they believed they had superior vision, and who produced legendary disasters. The survivorship bias in the “visionary override” argument is staggering. We remember the hits. We do not run conferences about the misses.

The research is clear on this: radical innovation and customer-centricity are not mutually exclusive. The Journal of Product Innovation Management has published consistent evidence that firms balancing technological vision with customer insight outperform firms that rely exclusively on either. The framing of expert vision versus customer feedback is a false dichotomy that persists largely because it gives confident, expert people a theoretical justification for not listening. That justification is convenient. It is not well-supported.


Conclusion: Who Wins? The Answer Is the Question

So here we are. You came in expecting a verdict. Experts vs. customers — whose feedback matters more for product development? And the honest answer, the one backed by the literature, the case studies, and a genuinely exhausting amount of experience watching both sides get it wrong, is this:

The question itself is your first mistake.

Products fail when organisations treat expert feedback and customer feedback as competing sources rather than complementary ones. They fail when expert knowledge is used to bypass customer validation. They fail when customer stated preferences are used to override expert knowledge of technical and commercial constraints. They fail when the feedback processes are structurally disconnected from the decisions they are supposed to inform.

The companies that win — consistently, across categories, across economic cycles — are the ones that have built the organisational infrastructure to hold both types of feedback in tension. That means investing in customer research that captures behaviour, not just stated preference. It means building expert teams that are humble enough to let data challenge their assumptions. It means creating product development processes where the question “what does the customer need?” is never treated as settled until you have actual evidence from actual customers.

As a trader, you already live with this tension every day. Your models tell you what should happen. The market tells you what is happening. Your job is to respect both, to know when to trust the model and when to trust the signal, and to have the discipline not to fall so in love with either that you stop listening to the other.

Product development is the same discipline. Expert knowledge is your model. Customer feedback is your market signal. And the market, as we all know, can stay irrational longer than you can stay solvent.

Listen to both. Weight them appropriately for the stage you are at. Build systems that make listening structural, not aspirational. And when your experts and your customers are telling you completely different things — that is not a problem to resolve by choosing one voice over the other. That is a signal that you have not yet found the truth. Keep looking.

The best products in history — and the most profitable ones — were built by people who understood that distinction.


References

  1. Bosch-Sijtsema, P., & Bosch, J. (2014). User involvement throughout the innovation process in high-tech industries. Journal of Product Innovation Management. https://onlinelibrary.wiley.com/journal/15405885
  2. Verganti, R., Vendraminelli, L., & Iansiti, M. (2020). Innovation and design in the age of artificial intelligence. Journal of Product Innovation Management, 37(3), 212–227. https://onlinelibrary.wiley.com/journal/15405885
  3. Maier, E. et al. (2024). The psychological and behavioral consequences of customer empowerment in new product development. Journal of Product Innovation Management. https://onlinelibrary.wiley.com/doi/10.1111/jpim.12734
  4. Naeem, H. M., & Di Maria, E. (2022). Customer participation in new product development: An Industry 4.0 perspective. European Journal of Innovation Management, 25(6), 637. https://www.emerald.com/ejim/article/25/6/637/275412/Customer-participation-in-new-product-development
  5. Schoenherr, T., & Wagner, S. M. (2021). Performance implications of knowledge inputs in inter-organisational new product development projects. International Journal of Production Research. https://www.tandfonline.com/doi/full/10.1080/00207543.2021.1978576
  6. Shah, R., & Rai, A. (2022). The strategic role of customer feedback in enabling sustainable business success. ResearchGate. https://www.researchgate.net/publication/361916247_A_Research_Paper_on_the_Effects_of_Customer_Feedback_on_Business
  7. Farzaneh, M. et al. (2022). Driving new product development performance: Intellectual capital antecedents. Journal of Business Research. https://www.sciencedirect.com/science/article/pii/S2444569X24000428
  8. Kabbedijk, J., Brinkkemper, S., Jansen, S., & van der Veldt, B. (2009). Customer involvement in requirements management: Lessons from mass market software development. IEEE International Requirements Engineering Conference (RE ’09).
  9. Olsson, H. H., & Bosch, J. (2014). From opinions to data-driven software R&D. Euromicro Conference on Software Engineering. In: Springer. https://link.springer.com/chapter/10.1007/978-3-319-19593-3_12
  10. The Branding Journal. (2025). New Coke: A classic branding case study on a major product change failure. https://www.thebrandingjournal.com/2025/02/new-coke/
  11. Learning People. (2024). Project post-mortems: Lessons from the New Coke failure. https://www.learningpeople.com/au/resources/blog/project-postmortems-new-coke/
  12. Cognitive Market Research. (2025). The fall of a major soft drink brand: How a leading beverage company misread its loyal audience. https://www.cognitivemarketresearch.com/blog/the-fall-of-new-coke-how-coca-cola-misread-its-loyal-audience

Disclaimer: This article is for educational and informational purposes only and does not constitute financial advice. Trading financial instruments carries significant risk of loss. Always conduct your own due diligence and consult a qualified financial professional before making investment decisions.