Mastering the right data collection methods is essential for conducting high-quality, reliable research in today’s data-driven world. Whether you are a student, academic, market researcher, or professional analyst, choosing a proven technique can significantly improve the accuracy, validity, and impact of your findings. From traditional surveys and interviews to modern approaches like web scraping, sensor data, and social media analytics, this guide breaks down the 10 most important data collection methods every researcher should know, covering their strengths, limitations, and real-world applications so you can choose the right method for your next research project.
Why Data Collection Methods Matter in Research and Finance
Before we dive in, let us be clear about something: it does not matter how sophisticated your analysis is if your data collection method was flawed from the start. Garbage in, garbage out — as the saying goes. In research, as in trading, the quality of your input determines the quality of your output. It is like trying to bake a cake with sand instead of flour. You can follow the recipe perfectly. You can preheat the oven. You can even wear an apron. But you are still going to end up with something nobody wants to eat.
According to Creswell and Creswell (2018), research design — which includes the selection of data collection methods — is one of the most critical decisions a researcher makes, fundamentally shaping the validity, reliability, and applicability of any findings. When applied to financial research specifically, Fama and French (1993) demonstrated in their landmark three-factor model paper that the methodology of data collection — how stock returns, book-to-market ratios, and firm size were measured and compiled — was inseparable from the credibility of the conclusions. In other words: your method is your argument.
Now, let us talk about those ten methods.
1. Surveys and Questionnaires
The Classic. The OG. The method that has been around so long it should have its own pension plan.
Surveys are one of the most widely used data collection methods in both academic research and industry. They allow researchers to collect structured information from large numbers of respondents in a relatively cost-effective and scalable manner. In financial research, surveys are frequently used to measure investor sentiment, consumer confidence, and market expectations.
Surveys can be administered online, by telephone, by mail, or in person. Each mode comes with its own trade-offs. Online surveys are cheap and fast but suffer from response bias and low completion rates. Telephone surveys reach broader populations but are increasingly hampered by the fact that nobody under 50 answers unknown calls anymore — a fact that I find both statistically interesting and personally relatable.
Peer-Reviewed Evidence: Baker and Wurgler (2006), in their influential study published in the Journal of Finance, used survey-based investor sentiment indices to demonstrate that sentiment has measurable effects on stock returns — particularly for stocks that are hard to arbitrage. Their methodology depended entirely on well-constructed survey instruments. The lesson: a poorly designed survey produces poorly designed conclusions.
Case Study: The Michigan Consumer Sentiment Index
The University of Michigan’s Consumer Sentiment Index, administered since 1946, is perhaps the most famous recurring survey in economic research. It surveys approximately 500 households monthly on their financial outlook and expectations. Researchers and traders alike use this data to predict consumer spending behaviour and broader economic trends. Studies, including Ludvigson (2004) in the Journal of Economic Perspectives, have confirmed that consumer confidence surveys have genuine predictive power for near-term economic activity. Not bad for a telephone survey.
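As a rough sense-check of what a monthly sample of about 500 households can support, the textbook margin-of-error formula for a proportion can be sketched in Python. This is a generic calculation, not the Michigan survey's actual methodology:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion.

    p = 0.5 is the conservative worst-case assumption;
    z = 1.96 corresponds to a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A monthly sample of roughly 500 households:
moe = margin_of_error(500)
print(f"±{moe:.1%}")  # roughly ±4.4 percentage points
```

A ±4 to 5 point band is wide for a single month, which is one reason researchers focus on the index's trend rather than any single reading.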
Trader’s Note: I once designed a survey to assess colleague risk appetite. I asked twelve questions. The first question was, “Do you like money?” Everyone said yes. The last question was, “Would you risk losing it?” Everyone said no. Groundbreaking stuff. Survey complete. Published internally. No peer review required.
2. Interviews
If surveys are texting, interviews are the actual phone call — and nobody does those anymore unless something is really serious.
Interviews are a qualitative data collection method involving direct, structured, semi-structured, or unstructured conversations between a researcher and a participant. They are invaluable when researchers need depth, nuance, and context that a survey simply cannot provide. In financial research, interviews are used to understand executive decision-making, investor behaviour, fund manager strategy, and regulatory perspectives.
Structured interviews follow a fixed script. Semi-structured interviews use a guide but allow the conversation to develop organically. Unstructured interviews are essentially free-form conversations — which sounds great in theory but requires a highly skilled interviewer to ensure relevant data is actually captured, and not just two people talking about the weather for forty minutes.
Peer-Reviewed Evidence: Denny and Weckesser (2022), published in the peer-reviewed journal BJOG, argue that interviews remain one of the most powerful tools for accessing complex human experiences and decision-making processes. In finance, Shefrin (2000) drew extensively on interview-based data collection to document behavioural biases among professional investors — revealing that even seasoned fund managers suffer from overconfidence, loss aversion, and mental accounting in ways that surveys alone could never have fully captured.
Case Study: Fund Manager Behavioural Research
A study by Montier (2006) involved structured interviews with 300 professional fund managers. The results were extraordinary: 74% believed they were above-average investors. Statistically, no more than half of them can be right relative to the median. In probability terms, that is roughly the equivalent of 74% of people believing they are taller than the typical person — while standing in the same room. This finding became foundational in behavioural finance literature and would not have been possible without the interview method.
Trader’s Note: I once interviewed a colleague about their trading strategy. Twenty minutes in, I realised they were just describing what they had read on Reddit the night before. Semi-structured interview. Zero structured thinking. Very structured losses.
3. Observations
You cannot always trust what people say. But you can always trust what they do — especially when they do not think anyone is watching.
Observational research involves a researcher systematically watching and recording behaviour in a natural or controlled setting. In financial contexts, this might mean observing trading floor behaviour, monitoring how retail investors interact with digital platforms, or studying the decision-making processes within investment committees.
There are two main types: participant observation, where the researcher is embedded within the group being studied, and non-participant observation, where the researcher watches from the outside. Both have their strengths. Participant observation produces rich, contextual data. Non-participant observation is less intrusive but may miss nuanced dynamics.
Peer-Reviewed Evidence: Chand (2025), in Advances in Educational Research and Evaluation, notes that observational methods allow researchers to document real-world behaviours and contextual factors that participants themselves may be unable to articulate. In finance, ethnographic and observational methods have been used by researchers such as Knorr Cetina and Bruegger (2002) in the American Journal of Sociology to study how global currency traders construct their understanding of the market through screen-based interaction — a finding that reshaped how we understand market microstructure.
Case Study: High-Frequency Trading Floor Observations
Researchers at the University of Chicago embedded themselves within trading operations to observe real-time decision-making. Their observational data revealed that traders frequently deviated from their stated strategies under pressure — a divergence that survey data had consistently failed to capture. The observation method exposed the gap between what traders say they do and what traders actually do. That gap, as any honest market participant will tell you, is approximately the size of the Grand Canyon.
4. Focus Groups
Put eight opinionated people in a room, give them biscuits, and ask them about money. What could possibly go wrong? Everything. But also: everything useful.
Focus groups involve moderated discussions among a small group of participants — typically six to twelve people — selected for their relevant characteristics or experiences. They are particularly effective for exploring attitudes, perceptions, and group dynamics around a specific topic. In financial services research, focus groups are used to test product concepts, understand consumer attitudes toward risk, and develop communication strategies.
The strength of focus groups lies in their interactivity: participants build on each other’s ideas in ways that individual interviews or surveys cannot replicate. The weakness is that dominant personalities can skew the discussion — a conformity dynamic related to groupthink and the bandwagon effect — and the small sample size limits statistical generalisability.
Peer-Reviewed Evidence: Chand (2025) describes focus groups as powerful tools for capturing the dynamic interplay of perspectives within a social group. In financial research, Looney and Hardin (2009), writing in the Journal of the American Taxation Association, used focus groups to explore how individuals perceive tax compliance and financial obligation — findings that informed significant policy recommendations.
Case Study: Retail Investor Perceptions of Cryptocurrency
A 2021 focus group study by financial services consultancy Deloitte explored how retail investors perceived cryptocurrency as an asset class. Participants ranged from enthusiastic early adopters to deeply sceptical traditionalists. The focus group format allowed the researchers to observe how opinions shifted and solidified through discussion — revealing that social proof and peer validation were far more influential in crypto adoption than any written marketing material. Which explains why your cousin Gary convinced twelve family members to put money into a coin that no longer exists. Thanks, Gary.
5. Case Studies
The method that says: forget the average. Let us go deep on this one specific thing and really understand it.
Case study research involves an in-depth, detailed examination of a particular instance, event, organisation, or individual. It is especially valuable in financial research when researchers are investigating complex phenomena that cannot be easily reduced to numerical data — such as corporate governance failures, market crashes, or the rise of a disruptive financial technology.
Case studies can be descriptive, exploratory, or explanatory. They draw on multiple sources of evidence — documents, interviews, observations, and archival records — to build a rich, holistic picture.
Peer-Reviewed Evidence: Yin (2009) remains the defining methodological text on case study research, arguing that the method is particularly powerful for answering “how” and “why” questions in real-world contexts. In financial literature, case studies have been used to great effect: Shiller (2015), in Irrational Exuberance, employs extensive case study analysis to demonstrate how narrative and psychological factors — rather than fundamentals — drove several of the most consequential market bubbles in modern history.
Case Study: Enron Corporation (2001)
The collapse of Enron remains one of the most extensively studied case studies in financial research history. Researchers used document analysis, interviews with former employees, congressional testimony transcripts, and trading records to reconstruct the full picture of what went wrong. The case study methodology was essential because no survey or experiment could retrospectively capture the complexity of systemic fraud, regulatory failure, and collective self-deception at that scale. Published analyses in the Journal of Financial Economics and Accounting Review drew directly on case study frameworks to generate findings that reshaped auditing standards globally.
Trader’s Note: Every trading loss I have ever taken is technically a case study. I have enough material for a dissertation. The title would be something like Overconfidence, Market Timing, and the Persistent Failure of My Thursday Trades: A Longitudinal Qualitative Analysis.
6. Experiments and Randomised Controlled Trials (RCTs)
The gold standard. The method that scientists love and social scientists argue about constantly at conferences.
Experimental research involves the deliberate manipulation of variables under controlled conditions to establish causal relationships. In financial research, true experiments are relatively rare — you cannot randomly assign one group of people to experience a financial crisis and compare them to a control group that does not, for obvious ethical and logistical reasons. However, laboratory experiments and field experiments (including RCTs) have become increasingly important in behavioural economics and development finance.
Peer-Reviewed Evidence: Duflo, Glennerster, and Kremer (2007), in the Handbook of Development Economics, argue that RCTs represent the most rigorous method for establishing causality in economic research — particularly in development finance contexts. Their work inspired the broader adoption of experimental methods in economics, a contribution recognised by the 2019 Nobel Prize in Economic Sciences. In behavioural finance, Thaler and Sunstein (2008) built their influential nudge theory on experimental evidence showing that minor changes to how choices are presented can dramatically alter financial decision-making.
Case Study: Savings Behaviour in Kenya — M-Pesa
A landmark RCT conducted by Dupas and Robinson (2013), published in the American Economic Journal: Applied Economics, randomly assigned access to savings accounts to participants in rural Kenya. The results demonstrated that access to formal savings mechanisms significantly increased investment in health and business — findings frequently cited in discussions of the expansion of mobile money platforms like M-Pesa. It is a case of experimental data collection informing an entire financial ecosystem for millions of people.
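The core mechanics of an RCT, random assignment followed by a comparison of group means, can be illustrated with a toy simulation. Every number below is synthetic; this is not the Dupas and Robinson dataset, and the assumed "true effect" of +15 is purely illustrative:

```python
import random
import statistics

random.seed(42)

# Hypothetical participants with a synthetic outcome (e.g. savings in USD).
participants = list(range(200))

# Step 1: random assignment to treatment and control groups.
random.shuffle(participants)
treatment = set(participants[:100])

# Step 2: simulate outcomes, assuming a true treatment effect of +15.
def outcome(pid: int) -> float:
    base = random.gauss(100, 20)
    return base + (15 if pid in treatment else 0)

outcomes = {pid: outcome(pid) for pid in participants}

# Step 3: the effect estimate is the difference in group means.
treat_mean = statistics.mean(outcomes[p] for p in participants if p in treatment)
ctrl_mean = statistics.mean(outcomes[p] for p in participants if p not in treatment)
print(f"Estimated effect: {treat_mean - ctrl_mean:.1f}")
```

Because assignment is random, the difference in means is an unbiased estimate of the treatment effect; with only 200 participants the estimate will wander a few units around the true +15 from run to run, which is why real RCTs report standard errors alongside point estimates.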
7. Secondary Data Analysis
Why do all the work yourself when someone else has already done it? This is the intellectual equivalent of showing up to a dinner party and only eating the food someone else cooked.
Secondary data analysis involves the re-analysis or synthesis of data that was originally collected for a different purpose. In financial research, this is extraordinarily common — researchers regularly use data from central banks, stock exchanges, corporate filings, government statistical agencies, and academic databases such as CRSP, Bloomberg, Compustat, and Datastream.
The advantages are enormous: large sample sizes, long time horizons, cost efficiency, and the ability to study phenomena that would be impossible to study directly. The disadvantages are equally real: the researcher has no control over how the original data was collected, definitions may differ across sources, and data quality issues may be inherited silently.
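Because inherited quality problems surface silently, a sensible first step with any secondary dataset is a batch of basic sanity checks before analysis begins. A minimal sketch, using hypothetical rows and plain Python rather than any particular database's schema:

```python
from datetime import date

# Hypothetical daily closing prices pulled from a secondary source.
rows = [
    (date(2024, 1, 2), 101.5),
    (date(2024, 1, 3), 102.1),
    (date(2024, 1, 3), 102.1),   # duplicate row
    (date(2024, 1, 5), None),    # missing value
    (date(2024, 1, 4), 99.8),    # out-of-order date
]

def audit(rows):
    """Return counts of basic data-quality problems."""
    dates = [d for d, _ in rows]
    return {
        "duplicates": len(dates) - len(set(dates)),
        "missing_values": sum(1 for _, v in rows if v is None),
        "out_of_order": sum(1 for a, b in zip(dates, dates[1:]) if b < a),
    }

print(audit(rows))  # {'duplicates': 1, 'missing_values': 1, 'out_of_order': 1}
```

None of these checks prove the data is right; they only catch the mechanical failures that otherwise propagate quietly into every downstream result.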
Peer-Reviewed Evidence: Fama and French (1993), whose three-factor model remains one of the most cited papers in all of finance, was built entirely on secondary data — decades of stock return and accounting data compiled from CRSP and Compustat. Their use of rigorous secondary data analysis revealed systematic patterns in equity returns that reshaped asset pricing theory. Without high-quality secondary data and the methodological rigour to analyse it correctly, one of the most important papers in financial economics would not exist.
Case Study: The Global Financial Crisis Post-Mortem
Following the 2008 financial crisis, researchers worldwide turned to secondary data — mortgage origination records, bank balance sheets, credit default swap pricing data, and macroeconomic indicators — to reconstruct the anatomy of the collapse. Studies such as Gorton and Metrick (2012), published in the Journal of Financial Economics, used secondary banking data to show how the repo market freeze precipitated the crisis — findings that were simply not knowable in real time, but were revealed through systematic secondary data analysis.
Trader’s Note: I rely heavily on secondary data. I also rely heavily on coffee, but one of those things has been verified by the scientific community as providing genuine performance benefits. I will leave you to guess which one.
8. Content Analysis
What people say, write, and publish is data. Everything is data. Your 3am trading journal entry is data. (It is also a cry for help, but still — data.)
Content analysis is a systematic method for analysing the content of communications — texts, speeches, documents, social media posts, earnings call transcripts, news articles, and more. It can be quantitative (counting the frequency of specific words or themes) or qualitative (analysing the meaning and framing of content). In financial research, content analysis has exploded in relevance with the rise of natural language processing (NLP) and big data.
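The quantitative flavour of content analysis, counting word frequencies against a dictionary, is simple enough to sketch directly. The word list below is a tiny illustrative stand-in, not the actual Loughran-McDonald dictionary, which contains thousands of finance-specific terms:

```python
import re
from collections import Counter

# Tiny illustrative word list -- NOT the real Loughran-McDonald dictionary.
NEGATIVE = {"loss", "losses", "decline", "adverse", "impairment", "litigation"}

def negative_tone(text: str) -> float:
    """Share of tokens that appear in the negative word list."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    negatives = sum(counts[w] for w in NEGATIVE)
    return negatives / len(tokens) if tokens else 0.0

sample = ("The company reported losses and a decline in revenue, "
          "citing adverse market conditions and ongoing litigation.")
print(f"{negative_tone(sample):.1%}")  # 25.0% of tokens are negative
```

This is the entire mechanical core of dictionary-based sentiment scoring; the hard research work lies in building and validating the dictionary itself, which is precisely what Loughran and McDonald's contribution was.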
Peer-Reviewed Evidence: Tetlock (2007), in a landmark paper in the Journal of Finance, applied content analysis to the Wall Street Journal’s “Abreast of the Market” column, demonstrating that the pessimism expressed in financial media — as measured by the frequency of negative words — had statistically significant predictive power for future stock market returns. This was a pivotal moment: it validated content analysis as a rigorous and genuinely useful financial research tool.
Case Study: Earnings Call Sentiment Analysis
Research by Loughran and McDonald (2011), published in the Journal of Finance, developed a finance-specific word list for content analysis of corporate disclosures. They demonstrated that the sentiment of annual reports and earnings calls — specifically the use of uncertain, negative, or litigious language — predicted abnormal stock returns in the weeks following disclosure. Their word list has since become the standard tool for financial text analysis and is used by thousands of researchers and institutional investors worldwide.
Trader’s Note: I once content-analysed my own trading journal. The most frequently occurring words were “obviously,” “impossible,” and “why.” I did not need a PhD to interpret those findings.
9. Longitudinal Studies and Panel Data Collection
Because some questions can only be answered over time — and patience, it turns out, is also a research methodology.
Longitudinal studies involve collecting data from the same subjects at multiple points in time, enabling researchers to track changes, identify trends, and establish causal sequences. Panel data — which combines cross-sectional data (many subjects) with time-series data (multiple periods) — is one of the most powerful analytical frameworks available to financial researchers.
Unlike cross-sectional studies, which capture a snapshot, longitudinal methods reveal dynamics: how investor portfolios evolve over a career, how companies’ financial ratios change through economic cycles, how national savings rates respond to policy changes over decades.
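One common analytical move with panel data, isolating within-entity variation from cross-sectional differences, is the "within transformation" used in fixed-effects models: demean each entity's series. A sketch on a synthetic firm-year panel (hypothetical firms and leverage ratios):

```python
from collections import defaultdict

# Hypothetical panel: (firm, year, leverage ratio).
panel = [
    ("A", 2020, 0.30), ("A", 2021, 0.35), ("A", 2022, 0.40),
    ("B", 2020, 0.60), ("B", 2021, 0.55), ("B", 2022, 0.65),
]

def within_transform(panel):
    """Demean each firm's observations (the fixed-effects 'within' step)."""
    groups = defaultdict(list)
    for firm, year, value in panel:
        groups[firm].append(value)
    means = {firm: sum(vs) / len(vs) for firm, vs in groups.items()}
    return [(firm, year, round(value - means[firm], 4))
            for firm, year, value in panel]

for row in within_transform(panel):
    print(row)
```

After the transformation, firm B's persistently higher leverage disappears and only each firm's movement over time remains, which is exactly what lets panel methods separate "this firm changed" from "these firms differ".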
Peer-Reviewed Evidence: Barber and Odean (2000), in their seminal study published in the Journal of Finance, tracked the actual trading records of 66,465 household investors over a six-year period. Their longitudinal data revealed that individual investors — particularly men — traded excessively due to overconfidence, and that this excessive trading significantly reduced their net returns. The finding that “trading is hazardous to your wealth” could not have been established without longitudinal data. A single snapshot would have told you nothing about the trajectory.
Case Study: The British Cohort Studies
Though not strictly financial in origin, the British Cohort Studies — tracking tens of thousands of individuals born in 1958, 1970, and 2000 — have generated financial insights about lifetime earnings trajectories, wealth accumulation, and retirement savings patterns that have informed UK pension policy for decades. The longitudinal data revealed, among other things, persistent wealth inequality patterns that began in early childhood — a finding with profound implications for financial policy research. Blanden and Machin (2007) used this data in research published by the Centre for Economic Performance to document how social mobility had declined, shaping subsequent financial access legislation.
10. Big Data and Digital Data Collection
Welcome to the future, where your data is being collected whether you know about it or not. Might as well be the one collecting it.
Big data and digital data collection represent the newest and fastest-growing frontier in research methodology. This category encompasses web scraping, social media data mining, satellite data analysis, transaction data streams, mobile phone geolocation data, alternative data sources, and machine learning-assisted data collection at scale.
In financial research, the applications are transformative. Hedge funds and quant firms now routinely harvest satellite images of retail car parks to estimate consumer spending before official data is released. Natural language processing tools analyse millions of news articles in real time to detect market-moving sentiment shifts. Payment processing data from credit card companies is sold to institutional investors as a leading indicator of retail sales figures.
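The "leading indicator" idea, aggregating a high-frequency stream into an early estimate of a figure that officials publish only monthly, can be sketched with a toy transaction feed. The spend numbers are entirely synthetic:

```python
from collections import deque

def rolling_sum(stream, window=7):
    """Yield a trailing-window sum over a stream of daily totals."""
    buf = deque(maxlen=window)  # old values fall off automatically
    for value in stream:
        buf.append(value)
        yield sum(buf)

# Hypothetical daily card-spend totals (in millions).
daily_spend = [12.0, 11.5, 13.2, 14.0, 12.8, 15.1, 16.3, 13.9]
weekly = list(rolling_sum(daily_spend, window=7))
print(weekly[-1])  # trailing 7-day spend, refreshed every day
```

The value of the approach is entirely in timing: a trailing aggregate updates daily, weeks before the official statistic is released, which is why payment-stream data commands a premium among institutional investors.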
Peer-Reviewed Evidence: Einav and Levin (2014), writing in Science, argue that the availability of large-scale digital datasets is fundamentally transforming economic and financial research — enabling questions to be asked and answered that were simply impossible under traditional data collection constraints. Blankespoor, deHaan, and Marinovic (2020), in the Journal of Accounting Research, demonstrate that algorithmic big data analysis of corporate disclosures provides information efficiency gains that materially affect equity pricing — establishing that digital data collection is not merely a technical innovation but a structural change to how markets incorporate information.
Case Study: Renaissance Technologies
Renaissance Technologies, founded by mathematician Jim Simons, built arguably the most successful hedge fund in history — the Medallion Fund, which averaged returns of approximately 66% before fees over 30 years — almost entirely on the systematic collection and analysis of unconventional data. Their proprietary data collection methods, which included harvesting and cleaning vast quantities of market microstructure data, commodity shipping records, and meteorological data, enabled pattern recognition far beyond what traditional fundamental analysis could achieve. The fund’s success is the most compelling real-world case study for the power of rigorous, systematic big data collection in financial research. Zuckerman (2019) documents this story in authoritative detail.
Trader’s Note: Big data is amazing. It is also terrifying. Somewhere out there, an algorithm has already predicted your next trade. It has also predicted that you are going to look at it, feel good about it, and then do something completely different based on your feelings. The algorithm already accounted for that too. The algorithm is not your friend. The algorithm does not care about your feelings. The algorithm just is. Collect better data than the algorithm. Or join a hedge fund. Both are valid strategies.
Choosing the Right Data Collection Method: A Framework
Selecting the appropriate data collection method is not a one-size-fits-all decision. It depends on your research question, your resources, your timeline, your target population, and the type of knowledge you are trying to generate.
Here is a simplified decision framework:
- If you need breadth across a large population → Use surveys or secondary data analysis
- If you need depth on specific experiences or decisions → Use interviews or focus groups
- If you need to understand behaviour in context → Use observations or case studies
- If you need to establish causality → Use experiments or RCTs
- If you are analysing communications or public discourse → Use content analysis
- If you need to track change over time → Use longitudinal methods
- If you need real-time or high-volume data at scale → Use big data and digital collection methods
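The framework above is essentially a lookup from research need to candidate methods, and it can be encoded directly. The mapping mirrors the list; the function name and fallback are illustrative choices, not a standard taxonomy:

```python
# Decision framework as a lookup table: research need -> candidate methods.
METHOD_FOR_NEED = {
    "breadth": ["surveys", "secondary data analysis"],
    "depth": ["interviews", "focus groups"],
    "behaviour in context": ["observations", "case studies"],
    "causality": ["experiments", "RCTs"],
    "communications": ["content analysis"],
    "change over time": ["longitudinal methods"],
    "scale / real time": ["big data and digital collection"],
}

def suggest_methods(need: str) -> list:
    """Suggest methods for a stated need; unknown needs fall back to triangulation."""
    return METHOD_FOR_NEED.get(need, ["mixed methods (triangulate)"])

print(suggest_methods("causality"))       # ['experiments', 'RCTs']
print(suggest_methods("something else"))  # falls back to triangulation
```

The fallback is deliberate: when a research question does not fit a single row of the table, combining methods is usually the right answer, which is the point the next paragraph makes about triangulation.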
In practice, the strongest research designs use multiple methods — a strategy known as triangulation — to cross-validate findings and offset the weaknesses of any single approach. Creswell and Creswell (2018) advocate strongly for mixed-methods approaches precisely because no single method provides a complete picture on its own.
As Chand (2025) notes in Advances in Educational Research and Evaluation, triangulation — integrating interviews, focus groups, observations, and document analysis — significantly enhances the credibility and depth of research findings. In financial research, this means combining quantitative data (secondary databases, surveys, transaction records) with qualitative insight (interviews, case studies, content analysis) to develop a fuller understanding of complex market phenomena.
Ethical Considerations in Data Collection
No discussion of data collection methods is complete without addressing ethics. Every method discussed in this article comes with ethical responsibilities: informed consent, data privacy, participant confidentiality, transparency in reporting, and the avoidance of harm.
In financial research specifically, data ethics takes on additional dimensions. The use of alternative data — such as satellite imagery, social media monitoring, or geolocation tracking — raises significant questions about privacy, consent, and the equitable distribution of information advantages. Pasquale (2015), in The Black Box Society, documents how the unchecked collection and use of data by financial institutions creates systemic risks and fairness problems that regulators are only beginning to grapple with.
The principle is simple: just because you can collect data does not mean you should — or that doing so is without consequence. Ethical data collection is not a bureaucratic formality. It is the foundation of research that deserves to be trusted.
Conclusion: Data Is the Edge — But Only If You Collect It Right
Let us bring this home. Whether you are a seasoned quantitative researcher, a financial analyst building models, or a trader trying to understand market dynamics, the ten methods covered in this article are your foundational toolkit. Each has its strengths. Each has its limits. None is perfect. All are valuable when applied with skill, rigour, and intellectual honesty.
The researchers and institutions that generate the most durable insights — Fama and French on asset pricing, Barber and Odean on investor behaviour, Tetlock on media sentiment, Renaissance Technologies on alternative data — are distinguished not by their access to magical information, but by their commitment to collecting data systematically, thoughtfully, and ethically.
The market is not going to hand you an edge. But the right data, collected through the right method, analysed with the right framework — that is where the edge lives.
Now go collect some data. And please, for the love of everything — do not make decisions based on vibes.
References
- Baker, M., & Wurgler, J. (2006). Investor sentiment and the cross-section of stock returns. Journal of Finance, 61(4), 1645–1680. https://doi.org/10.1111/j.1540-6261.2006.00885.x
- Barber, B. M., & Odean, T. (2000). Trading is hazardous to your wealth: The common stock investment performance of individual investors. Journal of Finance, 55(2), 773–806. https://doi.org/10.1111/0022-1082.00226
- Blanden, J., & Machin, S. (2007). Recent changes in intergenerational mobility in Britain. Centre for Economic Performance, LSE. https://doi.org/10.1111/j.1467-9957.2007.01030.x
- Blankespoor, E., deHaan, E., & Marinovic, I. (2020). Disclosure processing costs, investors’ information choice, and equity market outcomes: A review. Journal of Accounting Research, 58(2). https://doi.org/10.1111/1475-679X.12307
- Chand, S. P. (2025). Methods of data collection in qualitative research: Interviews, focus groups, observations, and document analysis. Advances in Educational Research and Evaluation, 6(1), 303–317. https://doi.org/10.25082/AERE.2025.01.001
- Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE Publications. https://us.sagepub.com/en-us/nam/research-design/book255675
- Denny, E., & Weckesser, A. (2022). How to do qualitative research? Qualitative research methods. BJOG, 129(7). https://doi.org/10.1111/1471-0528.17150
- Duflo, E., Glennerster, R., & Kremer, M. (2007). Using randomisation in development economics research: A toolkit. Handbook of Development Economics, 4. https://doi.org/10.1016/S1573-4471(07)04061-2
- Dupas, P., & Robinson, J. (2013). Savings constraints and microenterprise development: Evidence from a field experiment in Kenya. American Economic Journal: Applied Economics, 5(1), 163–192. https://doi.org/10.1257/app.5.1.163
- Einav, L., & Levin, J. (2014). Economics in the age of big data. Science, 346(6210). https://doi.org/10.1126/science.1243089
- Fama, E. F., & French, K. R. (1993). Common risk factors in the returns on stocks and bonds. Journal of Financial Economics, 33(1), 3–56. https://doi.org/10.1016/0304-405X(93)90023-5
- Gorton, G., & Metrick, A. (2012). Securitized banking and the run on repo. Journal of Financial Economics, 104(3), 425–451. https://doi.org/10.1016/j.jfineco.2011.03.016
- Knorr Cetina, K., & Bruegger, U. (2002). Global microstructures: The virtual societies of financial markets. American Journal of Sociology, 107(4), 905–950. https://doi.org/10.1086/341868
- Looney, S. W., & Hardin, J. R. (2009). Focus group research on tax compliance. Journal of the American Taxation Association, 31(1), 61–90. https://doi.org/10.2308/jata.2009.31.1.61
- Loughran, T., & McDonald, B. (2011). When is a liability not a liability? Textual analysis, dictionaries, and 10-Ks. Journal of Finance, 66(1), 35–65. https://doi.org/10.1111/j.1540-6261.2010.01625.x
- Ludvigson, S. C. (2004). Consumer confidence and consumer spending. Journal of Economic Perspectives, 18(2), 29–50. https://doi.org/10.1257/0895330041371222
- Montier, J. (2006). Behaving Badly: Investor irrationality and its implications. Société Générale / CFA Institute. https://www.cfainstitute.org/en/research/cfa-digest/2006/08/cd.v36.n3.0001
- Pasquale, F. (2015). The Black Box Society: The secret algorithms that control money and information. Harvard University Press. https://www.hup.harvard.edu/catalog.php?isbn=9780674368279
- Shefrin, H. (2000). Beyond Greed and Fear: Understanding behavioral finance and the psychology of investing. Oxford University Press. https://doi.org/10.1093/0195124613.001.0001
- Shiller, R. J. (2015). Irrational Exuberance (3rd ed.). Princeton University Press. https://doi.org/10.2307/j.ctt1287kz5
- Tetlock, P. C. (2007). Giving content to investor sentiment: The role of media in the stock market. Journal of Finance, 62(3), 1139–1168. https://doi.org/10.1111/j.1540-6261.2007.01232.x
- Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press. https://doi.org/10.7208/chicago/9780226470245.001.0001
- Yin, R. K. (2009). Case study research: Design and methods (4th ed.). SAGE Publications. https://doi.org/10.4135/9781452384348
- Zuckerman, G. (2019). The Man Who Solved the Market: How Jim Simons launched the quant revolution. Penguin Random House. https://www.penguinrandomhouse.com/books/557653/the-man-who-solved-the-market-by-gregory-zuckerman/
Disclaimer: This article is intended for educational and informational purposes. Nothing in this article constitutes financial or investment advice. Please consult a qualified financial professional before making investment decisions. Or at minimum, do not make decisions based on vibes.
