Choosing the right sample size for small or niche audiences, without a statistician on call, is a practical challenge many researchers, marketers, and product teams face when working with limited populations. Unlike massive consumer studies that can draw on thousands of respondents, niche audiences — such as specialized hobbyists, rare disease patients, boutique customer segments, or small professional communities — often number in the low hundreds or fewer, which makes traditional sample size formulas feel intimidating or irrelevant. This guide simplifies the process by focusing on actionable rules of thumb, clear trade-offs between confidence levels and feasible sample sizes, and straightforward methods you can apply immediately using a basic spreadsheet or a free online calculator, no advanced statistical expertise required.


Why Sample Size Is the Most Underestimated Decision in Research

Here’s the thing nobody tells you at the beginning of a research project: choosing the wrong sample size is like putting the wrong fuel in your car. You might drive a few metres before the whole thing breaks down in the middle of the motorway. You’re sitting there, hazard lights on, data on fire, and someone from the marketing team is texting you asking why the insights aren’t ready yet.

Sample size is not just a number. It is a strategic decision that determines whether your findings are worth the paper — or the dashboard — they’re printed on. Too small, and you’re drawing conclusions from vibes. Too large, and you’ve burned budget, time, and energy collecting data points that add nothing to your precision.

The peer-reviewed literature is unambiguous on this. As O’Neill (2022) demonstrates in PLOS ONE, a sample that is too small renders inferences from statistical studies practically worthless, while a sample that is too large produces precision far beyond what is practically necessary — at significant cost [1].

Now, for large general populations — say, “UK adults aged 18–65 who drink coffee” — this is relatively straightforward. You use a standard formula, hit your confidence intervals, and move on. But what happens when your audience is tiny? What happens when you’re researching retired female horse trainers in Yorkshire, or CFOs at mid-market fintech firms, or vegan competitive bodybuilders? What then?

That’s where most people panic. And that’s exactly where this article earns its keep.


Understanding the Core Variables Before You Touch a Calculator

Before you start Googling “sample size calculator” and typing numbers in with the energy of someone who definitely knows what they’re doing, you need to understand the four variables that drive every sample size decision. Think of these as the four ingredients of a recipe. Miss one and the whole dish falls apart. I’ve served a few terrible dishes in my time. Statistically speaking, obviously.

1. Population Size

Your population is the total group you want to understand. In niche research, this is often surprisingly small. If you’re studying, say, independent financial advisers in Scotland who specialise in ESG portfolios, your population might be fewer than 500 people. That matters enormously, because sample size requirements shrink as population size shrinks — a concept known as the finite population correction (FPC).

For very large populations (over 100,000), population size barely affects your required sample. But for populations under 10,000 — common in B2B and specialist research — the FPC can dramatically reduce the number of respondents you actually need. According to CleverX’s research guidance (2026), smaller populations require proportionally fewer respondents to achieve the same statistical precision [2].

So step one is simple: define your total population as precisely as you can. Not “marketing professionals” — that’s everyone. “Marketing directors at UK SaaS companies with 50–250 employees.” That’s a population. Now we’re cooking with gas.

2. Confidence Level

Your confidence level is the probability that your results reflect the true population value if you repeated the study many times. The industry standard is 95% confidence, meaning that if you ran the same survey 100 times, 95 of those runs would produce results within your stated margin of error.

Some researchers drop to 90% to reduce required sample size. Some go to 99% for high-stakes decisions. For most niche audience research — where you’re trying to make a commercial or strategic decision, not launch a satellite — 95% is the sweet spot.
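If you want to see where 1.96 comes from, or derive the Z-score for any other confidence level, Python’s standard library can do it in a couple of lines. A quick sketch (the z_score helper is my own name, not a built-in):

```python
from statistics import NormalDist

def z_score(confidence: float) -> float:
    """Two-tailed Z-score for a confidence level, e.g. 0.95 -> ~1.96."""
    # Split the leftover probability equally between the two tails
    # of the standard normal curve, then invert the CDF.
    return NormalDist().inv_cdf(1 - (1 - confidence) / 2)

for level in (0.90, 0.95, 0.99):
    print(f"{level:.0%}: Z = {z_score(level):.3f}")
# 90%: Z = 1.645, 95%: Z = 1.960, 99%: Z = 2.576
```

Those three outputs are the Z values you will see in every sample size formula and calculator.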

3. Margin of Error

The margin of error (also called the confidence interval) is the wiggle room in your results. A ±5% margin means that if 60% of your sample says “yes,” you can be confident the true population value is somewhere between 55% and 65%.

For niche studies where recruitment is difficult and expensive, a ±10% margin is often acceptable — especially for directional insights. You’re not trying to predict an election to within a percentage point. You’re trying to know whether your audience cares more about price or quality. A ±10% margin is perfectly adequate for that, and it dramatically reduces your required sample.

4. Expected Variability (Standard Deviation / Population Proportion)

This is where it gets slightly spicy. If you have no prior data about your audience, you assume maximum variability — in proportion-based surveys, this means using p = 0.5 (50/50 split), which produces the largest required sample size. If you have prior data suggesting the distribution is less variable — say, 80% of your audience tends to answer a certain way — you can use that estimate and reduce your required sample.

For niche audiences you’ve never studied before, assume maximum variability. Trust me. I’ve made assumptions in markets with inadequate data. Let’s just say I learned that lesson and it wasn’t cheap.


The Formula (Don’t Panic, We’re Doing This Together)

Here is the standard formula for sample size with a large or infinite population:

n = (Z² × p × (1-p)) / E²

Where:

  • n = required sample size
  • Z = Z-score for your confidence level (1.96 for 95% confidence)
  • p = expected proportion / variability (0.5 for maximum)
  • E = margin of error (0.05 for ±5%, 0.10 for ±10%)

Let’s run it with standard inputs:

At 95% confidence, ±5% margin, p = 0.5: n = (1.96² × 0.5 × 0.5) / 0.05² = (3.8416 × 0.25) / 0.0025 = 0.9604 / 0.0025 = 384.16, which rounds up to 385 respondents

That’s why you keep seeing “385” as the magic number in research literature. It’s not magic. It’s maths. And maths doesn’t care about your feelings.

Now at 95% confidence, ±10% margin, p = 0.5: n = (1.96² × 0.5 × 0.5) / 0.10² = 0.9604 / 0.01 = 96.04, or roughly 96 respondents

Double your margin of error and your required sample size drops by 75%. That’s not cheating — that’s calibration. For a niche audience of 800 CFOs, collecting 96 usable responses might be ambitious. Collecting 384 might be impossible. Now you have a strategy.
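Both worked examples can be reproduced with a short Python helper (sample_size is a name of my choosing, not a library function). Strict practice rounds the raw figure up, which is how 384.16 becomes the textbook 385:

```python
import math

def sample_size(z: float = 1.96, p: float = 0.5, margin: float = 0.05) -> float:
    """Raw required sample size for a proportion, large or infinite
    population: n = Z^2 * p * (1 - p) / E^2."""
    return (z ** 2) * p * (1 - p) / margin ** 2

n5 = sample_size(margin=0.05)   # 384.16 -> round up to 385
n10 = sample_size(margin=0.10)  # 96.04  -> ~96 (strictly 97 if you always round up)
print(math.ceil(n5), round(n10))
```

Swap in z_score values for 90% or 99% confidence, or a prior estimate for p, and the same function covers every variant discussed in this section.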

Applying the Finite Population Correction

For small populations, adjust using:

n_adjusted = n / (1 + (n-1)/N)

Where N is the total population size.

So if your population is N = 500 and your initial n = 384:

n_adjusted = 384 / (1 + 383/500) = 384 / 1.766 = 217.4, which rounds up to 218 respondents

You just saved 166 interviews. You’re welcome.
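The correction is one line of code. A minimal sketch (fpc_adjust is a hypothetical helper name):

```python
import math

def fpc_adjust(n: float, population: int) -> float:
    """Finite population correction: n_adjusted = n / (1 + (n - 1) / N)."""
    return n / (1 + (n - 1) / population)

print(fpc_adjust(384, 500))                # ~217.4; round up for the final figure
print(math.ceil(fpc_adjust(96.04, 2000)))  # 92, for a ±10% study of a 2,000-person niche
```

The second line anticipates the craft brewery case study later in this article, where a ±10% study of a local population of roughly 2,000 needs about 92 respondents.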


The Problem with Niche Audiences That Nobody Talks About

Here’s the part where I get serious for a moment — and believe me, serious is not my natural habitat. When your audience is niche, the maths is only half the problem. The other half is the hard reality of recruitment.

Let me put it this way: knowing you need 96 CFOs doesn’t help if you can only find 40 who will talk to you. Niche audiences are hard to recruit. They’re busy, they’re protective of their time, and they’ve been surveyed by seventeen competitors already this quarter. They’ve got survey fatigue worse than I’ve got market fatigue after a bad week.

This is a real methodological tension documented in the research literature. Althubaiti (2023) in the Journal of General and Family Medicine notes that in specialised research contexts, small sample sizes are often unavoidable — particularly when studying rare or difficult-to-reach populations — but that too small a sample makes results hard to reproduce and can undermine the scientific impact of the research [3].

So what do you do when you can’t hit your ideal sample size? You have options. And those options have a name: mixed methods, sequential research, and strategic qualitative anchoring. More on those shortly.


The Niche Audience Threshold: What’s the Minimum Defensible Sample?

Let me give you the number that most people are actually looking for when they read articles like this: what is the absolute minimum sample I can get away with and still have results worth presenting?

Sapio Research’s methodology guidance suggests that for niche or senior-level audiences with fewer required subgroup comparisons, a sample of 200 provides robust directional insights. For highly focused studies with a single audience segment and no comparative analysis, 100 respondents gives a ±10% margin at 95% confidence — which, for most commercial decisions, is entirely adequate [4].

But here’s the caveat — and it’s a big one, so pay attention. If you need to analyse subgroups, the minimum applies to each subgroup, not to the total. If you want to compare responses across junior, mid-level, and senior employees in a B2B survey, you need at least 100 responses per group: 100 in group A, 100 in group B, 100 in group C, for a total of 300. Not 100 spread across all three. That’s 33 per group, and that’s not research, that’s guesswork dressed up in a nice PowerPoint deck.


Case Study 1: The Craft Brewery Market Research Disaster

Let me tell you about a real situation that rhymes with a lot of things I’ve seen in financial markets. A craft brewery — let’s call them Hops & Hubris — wanted to understand whether their target audience of “craft beer enthusiasts aged 25–45 in the South West of England” would pay a premium for a limited-edition barrel-aged stout.

They surveyed 47 people. Forty-seven. All recruited via their own Instagram page. Their Instagram followers are — brace yourself — already customers who already love the product. They got back 89% positive responses and immediately commissioned a production run worth £40,000.

Now I’m going to need you to sit with that for a second.

They asked people who already bought their beer whether they’d like more of their beer. And they were surprised. Shocked, even. Like a trader who only reads bullish analysis and wonders why the short came in.

What went wrong:

  1. Sample too small — 47 is nowhere near the minimum for reliable inference, even with FPC applied to a local niche market
  2. Non-representative sample — recruiting from existing customers introduces severe selection bias
  3. No control for social desirability bias — people tend to say yes when they know the brand is asking

What they should have done:

Using the standard formula at ±10% margin (appropriate for a directional commercial decision), and assuming a local craft beer enthusiast population of approximately 2,000 in the region, they needed just 92 respondents — provided those respondents were recruited neutrally through local events, beer festivals, and panel providers. With proper sampling, they would have discovered that only 51% of the broader target market expressed genuine willingness-to-pay at the premium price point — not 89%.

The difference between 89% and 51% is the difference between a profitable product launch and a £40,000 lesson in the importance of sample selection.


When Qualitative Research is Your Best Friend

Now listen, I need to talk to you about something that a lot of quantitative-first researchers treat like the uninvited guest at the dinner party: qualitative research. This is important. Don’t scroll past this section.

For niche audiences — particularly when total population size is small (under 200), when you’re exploring new territory with no prior data, or when the complexity of behaviour defies a multiple-choice question — qualitative methods can outperform a larger quantitative sample in terms of actionable insight per research pound spent.

A series of 10–15 in-depth interviews with carefully selected niche audience members can surface insights, motivations, and language that no survey of 200 people would ever reveal. You’ll hear things like: “The reason I switched providers wasn’t the price — it was that their support team made me feel stupid.” No tick-box survey surfaces that. No rating scale captures that. But an experienced interviewer in a 45-minute conversation? They get it every time.

Beresford Research notes that for niche studies, a smaller targeted and statistically significant sample size may often suffice — particularly when balanced with qualitative validation [5].

This is the mixed methods approach: run a small qualitative phase first to understand the landscape, then follow with a focused quantitative phase with a realistic sample size. Your quant survey is better because it’s informed by real language and real concerns. Your qualitative phase gives you the “why” behind the numbers. Together, they’re formidable.

Think of it like this: qualitative is the trader reading the tape — watching the flow, feeling the market. Quantitative is the backtest — confirming statistically what the instinct already suggested. Neither is enough on its own. Together, you have a strategy.


Case Study 2: B2B Fintech — Getting It Right

Now let me show you the other side of the coin. A mid-size fintech firm wanted to understand the payment workflow challenges faced by finance directors at UK SMEs with 50–250 employees. Total addressable population: approximately 3,500 FDs in that segment.

They were smart about this. Here’s what they did:

Phase 1 — Qualitative (10 interviews): They recruited 10 finance directors through a specialist B2B panel, offering a £75 incentive per 45-minute interview. The interviews were semi-structured and explored workflow pain points openly. They discovered three key themes they had not anticipated: approval chain delays, reconciliation with legacy accounting software, and the hidden cost of manual FX conversion on international invoices.

Phase 2 — Quantitative (n = 180): For a population of 3,500 at 95% confidence and a ±7% margin, the standard formula gives approximately 196 respondents, which the FPC trims to roughly 186. They achieved 180 — just below ideal, but they declared this openly, noting the ±7.3% margin of error in their research report. The survey was built using language and framing drawn directly from Phase 1 interviews.
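A quick check of the Phase 2 arithmetic, under the usual assumptions (p = 0.5, Z = 1.96). The raw figure before correction is 196; the FPC trims it to about 186; and the 180 they actually achieved carries roughly a ±7.3% margin:

```python
from math import ceil, sqrt

z, p = 1.96, 0.5
raw = z**2 * p * (1 - p) / 0.07**2          # ~196 at ±7%, before any correction
adjusted = raw / (1 + (raw - 1) / 3500)     # FPC for a population of 3,500 -> ~186
achieved_moe = z * sqrt(p * (1 - p) / 180)  # margin actually achieved with n = 180
print(round(raw), ceil(adjusted), f"±{achieved_moe:.1%}")
```

Declaring the achieved margin, rather than the planned one, is exactly the kind of methodological honesty this case study illustrates.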

Outcome: Their findings revealed that 67% of FDs were dissatisfied with their current payment workflow, and 58% cited reconciliation with legacy software as the primary pain point — directly validating Phase 1 and giving the product team clear, statistically defensible priorities for the next development sprint.

This is what it looks like when you do it right. Not 47 people from your Instagram page. Not 400 people recruited because the number felt impressive. A considered, appropriately sized, methodologically honest research design that matched the real-world constraints of a niche B2B audience.


The Trader’s Framework for Sample Size Decisions

As a trader, I make decisions under uncertainty every single day. Sound familiar? Research is no different. Here’s the framework I use — and it works just as well for research decisions as it does for position sizing.

Step 1: Define Your Risk Tolerance

In trading, risk tolerance determines position size. In research, it determines your required confidence level and margin of error. If the decision you’re informing is low-stakes (testing a new email subject line), a 90% confidence level and ±10% margin is fine. If the decision is high-stakes (a £500,000 product launch), you want 95% confidence and ±5% margin. Match precision to stakes.

Step 2: Know Your Population

You wouldn’t trade a market you hadn’t scoped. You shouldn’t size a sample without knowing the total population. Estimate it as carefully as you can. Use industry databases, trade association membership figures, LinkedIn audience estimates, or CRM data. The more precisely you define the population, the more accurately you can apply the FPC and avoid over-sampling.

Step 3: Decide Whether You’re Running Subgroup Analysis

This is the single most common reason sample sizes are underestimated. If you’re running subgroup analysis — and most research projects are, whether you know it yet or not — multiply your minimum sample by the number of subgroups you’ll need to compare. Plan this upfront. Not as an afterthought two weeks before the debrief when someone asks “but what do the over-45s think?”

Step 4: Be Honest About Your Constraints

You have a budget. You have a timeline. You have a recruitment pool. Be honest about what’s achievable and set your precision accordingly. A ±10% margin at 95% confidence with n = 96 is better than a survey of 30 people presented as if it were statistically representative. The former is defensible. The latter is a liability.

Step 5: Document Everything

Always — always — report your sample size, confidence level, margin of error, and population estimate in your research findings. Not in a footnote. In the headline. Readers deserve to know the precision of what they’re reading. Markets punish undisclosed risk. Research audiences should do the same.


Advanced Tools: Online Calculators and When to Use Them

You don’t need a statistician, but you do need a decent calculator. Here are the tools that are actually worth your time:

Evan Miller’s Sample Size Calculator (https://www.evanmiller.org/ab-testing/sample-size.html) is widely regarded as the best free tool for A/B testing and proportion-based sample size calculation. CleverX (2026) specifically recommends it for practical research applications [2].

SurveyMonkey’s Sample Size Calculator (https://www.surveymonkey.com/mp/sample-size-calculator/) is the most accessible for non-technical users. Input your population, confidence level, and margin of error, and it does the rest.

Raosoft’s Calculator (http://www.raosoft.com/samplesize.html) is particularly good for small populations, automatically applying the FPC.

Sapio’s Significant Difference Calculator (https://sapioresearch.com/tutorials/what-makes-a-good-robust-and-useable-research-sample/) helps you understand whether differences between subgroups in your data are statistically significant — which is just as important as the initial sample size decision.

One critical note: these calculators assume random sampling. If your sample is not random — if you’re recruiting through your own newsletter, social media channels, or pre-existing customer list — your margin of error is meaningless, no matter what the calculator says. Garbage in, garbage out. I’ve seen traders chase signals from biased data sources and lose a lot of money. The research world has the same problem, it just loses credibility instead of capital.


Common Mistakes That Will Make a Statistician Weep (And Possibly Leave the Room)

Let me give you the shortlist of errors I see most often in niche audience research. Consider this the part where I’m your brutally honest friend, not the polite colleague who lets you walk into the meeting with spinach in your teeth.

Mistake 1: Confusing “responses” with “usable responses.” You sent 300 surveys. You got 200 responses. You present this as n = 200. But 40 of those surveys were incomplete, 15 failed quality checks, and 12 were from people who don’t actually match your target audience. Your real n is 133. That changes your margin of error from ±6.9% to ±8.5%. Declare the real number.
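You can recompute the margin of error from your usable n in one line. A sketch (margin_of_error is my own helper name), reproducing the ±6.9% and ±8.5% figures above:

```python
from math import sqrt

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Worst-case (p = 0.5) margin of error for n responses
    from a large population, at the given confidence Z."""
    return z * sqrt(p * (1 - p) / n)

print(f"n=200: ±{margin_of_error(200):.1%}")  # ±6.9%
print(f"n=133: ±{margin_of_error(133):.1%}")  # ±8.5%
```

Run it on your usable n, not your raw response count, and report that number.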

Mistake 2: Presenting percentages from tiny samples. “66% of respondents said they preferred Option A.” Sounds impressive. But if n = 9, that’s six people. Six people is not a finding. Six people is a Tuesday afternoon conversation. Do not present percentages when your n is below 30. Use frequencies. “6 out of 9 respondents preferred Option A” is honest. “66%” implies a precision that does not exist.

Mistake 3: Ignoring non-response bias. The people who respond to your survey are different from the people who don’t. In niche audiences, this effect is amplified. The respondents tend to be more engaged, more opinionated, or more dissatisfied than the silent majority. Always ask: who didn’t respond? What might they think differently?

Mistake 4: Treating a convenience sample as representative. Your LinkedIn network, your newsletter subscribers, your conference attendees — these are convenience samples. They are useful for exploration, not for inference. You cannot generalise from them to the broader population with any statistical confidence. Use them to generate hypotheses. Use a properly recruited sample to test them.

Mistake 5: Using the same sample size for every project. “We always do 200 respondents” is not a research methodology. It is a default. Sometimes 200 is too many. Sometimes 200 is nowhere near enough. The required sample size depends on the population, the precision required, and the analysis plan. There is no universal number. Anyone who tells you otherwise is selling something.


Case Study 3: The Luxury Watch Retailer and the Power of Sequential Research

A luxury watch retailer in London wanted to understand why high-net-worth millennials — individuals aged 28–42 with investable assets over £500,000 — were not converting on their premium watch financing product. This audience is, by definition, tiny. The total addressable population in the UK is estimated at approximately 180,000 individuals, but the subset who are both watch enthusiasts and millennials narrows that considerably.

The research team initially proposed a quantitative survey of n = 384, which would have been statistically robust for the population. The problem? Getting 384 HNW millennials to complete a survey about why they didn’t buy something is roughly as easy as getting cats to walk in formation. This demographic does not sit surveys. They get phone calls or nothing.

The solution was sequential research:

Round 1 — 8 in-depth telephone interviews with existing non-converting leads from the CRM. These revealed that the primary barrier was not price, not product design, and not competitor offering. It was the perception that financing a luxury item signalled financial weakness. Cultural psychology, not economics.

Round 2 — n = 85 quantitative telephone surveys with a specialist HNW panel, using validated questions based on Round 1 themes. At a ±10.6% margin of error for this population, this was declared upfront as directional research.

Findings: 71% of non-converting prospects cited “perception of financing as a status signal” as their primary barrier. The retailer repositioned their financing product as a “wealth management tool” (parallel to leasing a premium asset), retrained the sales team on language, and saw a 34% increase in financing uptake within six months.

Total sample: 93 individuals. Not 384. Not 500. Ninety-three. But designed properly, declared honestly, and triangulated across methods. That’s not a small sample. That’s a precise instrument.


The Ethics of Sample Size: What You Owe Your Audience

Here’s something that doesn’t get discussed enough: there is an ethical dimension to sample size decisions, especially in research that informs public policy, product development, or commercial decisions that affect people.

Nayak (2010), writing in the Indian Journal of Ophthalmology, puts it plainly: studies performed on samples too small to support valid inference can produce conclusions that are misleading at best and harmful at worst — particularly when those conclusions influence decisions that affect the population being studied [6]. In a research context where the audience is small and often under-represented (niche professional communities, minority consumer segments, rare-condition patient groups), the stakes of bad sample decisions are particularly high.

Overselling underpowered research is not just a methodological error. It’s a breach of trust with the client, the audience, and the decision-makers who act on your findings.

Be honest about what your sample can and cannot support. Use confidence intervals. Report margin of error. Distinguish between directional findings and statistically robust findings. Your research is only as valuable as your honesty about its limitations.


Practical Quick-Reference: Sample Size by Research Type

Let me give you a summary table you can screenshot, print, stick on your wall, tattoo on your forearm — whatever works for you.

| Research Type | Recommended Min. Sample | Margin of Error | Notes |
|---|---|---|---|
| Large general population survey | 385 | ±5% @ 95% conf. | Standard benchmark |
| Niche population (>10,000) | 200–385 | ±5–7% | Apply FPC if possible |
| Niche population (1,000–10,000) | 100–200 | ±7–10% | FPC reduces requirement |
| Niche population (<1,000) | 50–150 | ±8–12% | Declare limitations |
| B2B senior/executive audience | 50–100 | ±10% | Supplement with qualitative |
| Qualitative in-depth interviews | 8–15 | N/A | Not statistically representative |
| Mixed methods (niche) | 10–15 qual + 80–120 quant | ±9–11% quant | Best practice for small pops |
| Subgroup analysis (per group) | 100 | ±10% per group | Each group needs its own n |

This table is a guide, not a rulebook. The right answer always depends on your specific population, precision needs, and analysis plan. But if you’re a non-statistician trying to make a defensible decision quickly, this is your starting point.


The Agile Research Approach: Smaller Bites, Smarter Insights

One of the most underused strategies in niche audience research is borrowed directly from the software development world: agile methodology. Instead of designing one large survey that has to answer every question at once, you break your research into iterative rounds.

Round one might be a small exploratory survey of 40–60 respondents to identify which two or three hypotheses are worth pursuing. Round two is a focused confirmatory survey of 80–120 respondents aimed squarely at those hypotheses. Round three might be a short pulse check of 30–50 respondents to validate whether a proposed response to the findings resonates with the audience.

This approach is endorsed by OnePulse (2022), which advocates for agile research as a way to apply what you learn from one survey to the next — using smaller samples and getting valuable insights in less time, while avoiding the need for a single massive omnibus survey [7].

In practice, this means your total research investment across three rounds might be 150–200 respondents — roughly the same as a single mid-sized survey — but the intelligence you extract is richer, more iterative, and more practically useful, because each round builds on the last. For niche audiences where respondent access is scarce and expensive, this is not just smart. It’s essential.

Think of it like dollar-cost averaging into a position. Rather than deploying all your capital at once into an uncertain trade, you enter in stages, adjusting as the picture becomes clearer. Same principle. Different spreadsheet.


When You Genuinely Cannot Hit the Minimum: What Next?

Sometimes reality just doesn’t care about your sample size requirements. You’ve got a population of 120 people, you’ve contacted every single one of them, and 47 have agreed to participate. No amount of incentives, reminders, or charm is going to get you to 100.

Here is what to do — and what not to do — in that situation.

Do:

  • Report exactly what you have: n = 47 from a population of 120, margin of error ±14.3% at 95% confidence (about ±11% once the finite population correction is applied)
  • Present findings as directional and hypothesis-generating, not statistically definitive
  • Use qualitative triangulation — follow up with 5–8 interviews to explore key quantitative themes
  • Run descriptive statistics only — frequencies, central tendencies, distributions
  • Be explicit in the executive summary: “This study is exploratory. These findings should inform further research before major decisions are taken.”

Don’t:

  • Present percentage findings as representative of the population
  • Run significance tests on subgroups
  • Compare group differences without declaring that those differences may not be statistically meaningful
  • Let the client or stakeholder believe the findings carry more weight than they do

Small data, handled honestly and intelligently, has value. Small data presented as large data is a liability. Understand the difference.
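One small consolation for the n = 47 scenario above: because the population is only 120, the finite population correction works in your favour, pulling the naive ±14.3% margin down to roughly ±11%. A sketch of both figures (moe is a hypothetical helper name):

```python
from math import sqrt

def moe(n, population=None, z=1.96, p=0.5):
    """Margin of error at confidence Z; applies the finite population
    correction when a total population size is given."""
    m = z * sqrt(p * (1 - p) / n)
    if population is not None:
        m *= sqrt((population - n) / (population - 1))  # FPC shrinks the margin
    return m

print(f"naive: ±{moe(47):.1%}")          # ±14.3%
print(f"with FPC: ±{moe(47, 120):.1%}")  # ±11.2%
```

Report whichever figure you use, and say which convention it is.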


A Final Word From the Trading Floor (Sort of)

I want to leave you with something I’ve learned from years of making decisions with imperfect information under time pressure, which is basically what research — and trading — is all about.

The goal is not certainty. Certainty is a luxury nobody can afford. The goal is calibrated confidence — knowing exactly how much you know, how much you don’t know, and making the best possible decision given both. A well-sized sample for a niche audience, declared honestly with appropriate margins of error and methodological caveats, is worth infinitely more than a poorly sized sample presented with false confidence.

The formula isn’t complicated. The variables are manageable. The tools are free. The ethics are clear. The only thing standing between you and good research is the willingness to be honest — about your population, your constraints, your precision, and your limitations.

You don’t need a statistician to do this well. You need clear thinking, a basic formula, a decent calculator, and the professional courage to say: “Here’s what we know, here’s how confident we are, and here’s what we’d need to know more.”

Now go build something worth believing in.


References

  1. O’Neill, B. (2022). Sample size determination with a pilot study. PLOS ONE. https://doi.org/10.1371/journal.pone.0262804
  2. CleverX (2026). How to calculate research sample size: A practical guide for user and market research. https://cleverx.com/blog/how-to-calculate-research-sample-size-a-practical-guide-for-user-and-market-research/
  3. Althubaiti, A. (2023). Sample size determination: A practical guide for health researchers. Journal of General and Family Medicine, 24, 72–78. https://doi.org/10.1002/jgf2.600
  4. Sapio Research (2026). What makes a good, robust, and useable research sample? https://sapioresearch.com/tutorials/what-makes-a-good-robust-and-useable-research-sample/
  5. Beresford Research (2024). How to determine sample size in market research. https://www.beresfordresearch.com/determine-sample-size-in-market-research/
  6. Nayak, B.K. (2010). Understanding the relevance of sample size calculation. Indian Journal of Ophthalmology, 58(6), 469–470. DOI: 10.4103/0301-4738.71673
  7. Dattalo, P. (2009). A review of software for sample size determination. Evaluation & the Health Professions, 32(3), 229–248. https://journals.sagepub.com/doi/abs/10.1177/0163278709338556
  8. TRC Market Research (2025). Sample size matters in marketing research. https://trcmarketresearch.com/whitepaper/sample-size-matters/
  9. QuestMindshare (2024). Mastering niche audience research: Quality, representativeness and beyond. https://questmindshare.com/mastering-niche-audience-research-quality-representativeness-and-beyond/
  10. Wikipedia (2026). Sample size determination. https://en.wikipedia.org/wiki/Sample_size_determination

Disclaimer: This article is provided for informational and educational purposes. Always consult a qualified research methodologist for high-stakes or policy-relevant research design.