In today’s data-driven world, distinguishing genuine market trends from statistical noise is more critical than ever. Before you act on any exciting result, ask the uncomfortable question: is your data a fluke?

A sudden spike in sales, website traffic, conversions, or stock prices can feel exciting, but it often turns out to be nothing more than random variation or a temporary blip. Mistaking statistical noise for a genuine market trend can lead to costly decisions, wasted marketing budgets, and missed growth opportunities.

If you’re tired of chasing false signals in your metrics, you’re not alone. Business owners, marketers, analysts, and traders frequently struggle to separate real patterns from random fluctuations caused by seasonality, outliers, or pure chance.

In this post, you’ll discover 5 practical ways to tell market trends from statistical noise. You’ll learn how to use sample-size checks, multiple-testing corrections, out-of-sample validation, time-horizon matching, and economic-logic tests to confidently identify what’s actually driving your results, so you can make smarter, evidence-based decisions.

Stop reacting to illusions in your data. Start focusing on the real trends that matter.


What Is Statistical Noise and Why Should You Care?

Before we dive into the five ways, let’s be very clear about what we mean by statistical noise, because this is the foundation of everything.

In financial markets, statistical noise refers to random price fluctuations that carry no predictive information whatsoever. They are the market equivalent of your uncle’s unsolicited stock tips at Thanksgiving — loud, confident, and ultimately meaningless. A market trend, on the other hand, is a persistent, directional movement in prices that reflects genuine shifts in supply and demand, investor sentiment, fundamentals, or macroeconomic forces.

The problem is that the human brain is spectacularly bad at telling the two apart. We are pattern-recognition machines. We evolved to see a tiger in the bushes even when it’s just leaves rustling. In the markets, that same evolutionary gift becomes our greatest enemy. You see three red candles in a row and your brain says, “That’s a downtrend!” Meanwhile, the market is just… breathing.

Researchers have known about this confusion for decades. In their foundational 1988 paper, Andrew Lo and A. Craig MacKinlay demonstrated in “Stock Market Prices Do Not Follow Random Walks: Evidence from a Simple Specification Test” that weekly stock returns over the period 1962–1985 showed statistically significant departures from a pure random walk — meaning there are real patterns in markets, but they are wrapped in an enormous amount of noise that makes them incredibly hard to isolate. The entire challenge of trading is to find the signal inside that noise — and not to confuse the noise itself for a signal.

Shall we? Let’s go.


Way #1 — Check Your Sample Size (Because Five Data Points Is NOT a Trend)

Let me be real with you right now. And I’m going to be real in a way that might sting a little.

If you are looking at five candles on a chart and declaring a trend, you are not a trader. You are an optimist with a brokerage account. There’s a difference. A big one. The difference has a lot of zeroes — and they’re all in the negative.

One of the most common ways traders mistake noise for signal is by working with sample sizes that are laughably small. I’ve seen traders build entire strategies around six months of backtested data. Six months! My grandma has been making jerk chicken for longer than that and she still doesn’t call it a “proven technique.”

The statistical reality is brutal. For any trading signal to be considered statistically robust, you need a sufficient number of independent observations — and in financial markets, that number is far higher than most people assume. Due to what statisticians call autocorrelation — where today’s returns are somewhat related to yesterday’s returns — the effective sample size of financial time series is considerably smaller than the raw number of data points suggests.

Let me put it another way. Say you’re backtesting a daily stock strategy with two years of data. That’s about 504 trading days. Sounds like a lot, right? Sounds reasonable? Here’s the problem: if your strategy only generates a signal once a week, you actually have about 100 observations — barely enough to determine whether your local kebab shop is profitable, let alone whether you’ve discovered a market inefficiency. And if those observations are during a single extended bull market? Congratulations, you’ve discovered that stocks go up when stocks are going up. This is not an edge. This is an observation available to anyone with a window.
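To see how autocorrelation shrinks your real evidence, here is a minimal Python sketch. The AR(1) coefficient of 0.3 and the two-year sample length are illustrative assumptions, and the adjustment formula used is a standard first-order approximation, not a law:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 504 daily returns with mild positive autocorrelation (AR(1), phi = 0.3).
# Both phi and the sample length are illustrative, not estimates from real data.
phi, n = 0.3, 504
noise = rng.normal(0, 0.01, n)
returns = np.empty(n)
returns[0] = noise[0]
for t in range(1, n):
    returns[t] = phi * returns[t - 1] + noise[t]

# Lag-1 autocorrelation of the simulated series
rho = np.corrcoef(returns[:-1], returns[1:])[0, 1]

# Rough AR(1) adjustment: effective N = N * (1 - rho) / (1 + rho)
n_eff = n * (1 - rho) / (1 + rho)
print(f"raw observations: {n}, lag-1 autocorr: {rho:.2f}, effective sample size: {n_eff:.0f}")
```

With even modest autocorrelation, two years of “daily data” carries meaningfully fewer independent observations than the raw count suggests.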

The discipline you need here is cold, sober, unsentimental statistics. You need to ask: how many independent signals has this strategy generated? Over how many different market environments? The answer needs to be measured in hundreds, ideally thousands, and it needs to span conditions including crashes, melt-ups, sideways markets, high volatility regimes, and low volatility regimes. If your strategy only ever got tested in the 2020–2021 post-COVID bull market, I don’t care how good the Sharpe ratio looks. That tells me nothing about how your strategy performs when the music stops.

A landmark piece of research by Harvey, Liu, and Zhu (2016), “…and the Cross-Section of Expected Returns” published in the Review of Financial Studies, argued that given the sheer number of factors being tested by researchers in financial markets, the traditional t-statistic threshold of 2.0 for significance was far too low. They suggested a threshold closer to 3.0 should be used to account for the multiple-testing problem — meaning you need far more data and far stronger signals than most traders ever collect before declaring something “real.”

Here’s the practical rule: if your strategy hasn’t been tested across at least one complete market cycle (which typically spans 5–10 years and includes both bull and bear phases), across multiple asset classes or instruments, and with statistical significance above that 3.0 t-stat threshold, you don’t have a trend. You have a coincidence with ambitions.
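As a rough illustration of that t-stat bar, here is a hypothetical sketch. The trade statistics are invented for the example, and `strategy_t_stat` is my own helper, not a library function:

```python
import numpy as np

def strategy_t_stat(trade_returns):
    """t-statistic of the mean trade return against zero."""
    r = np.asarray(trade_returns, dtype=float)
    return r.mean() / (r.std(ddof=1) / np.sqrt(len(r)))

# Hypothetical example: 100 trades averaging 0.1% with 2% per-trade volatility
rng = np.random.default_rng(0)
trades = rng.normal(0.001, 0.02, 100)
t = strategy_t_stat(trades)
print(f"t-stat: {t:.2f}  (Harvey-Liu-Zhu bar: 3.0)")
```

A strategy like this one, however promising it feels, is nowhere near the 3.0 threshold on 100 trades; it needs either a far larger edge or far more independent observations.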

Case Study: The Monday Effect

Back in the 1980s, researchers identified what became known as the “Monday Effect” — the empirical finding that stock returns on Mondays were systematically lower than returns on other days of the week. Traders got excited. Strategies were built. Papers were published.

And then? It slowly faded away. As Schwert (2003) documented in “Anomalies and Market Efficiency” in the Handbook of the Economics of Finance, many calendar anomalies that appeared statistically significant in initial samples either weakened dramatically or disappeared entirely when tested out-of-sample. Why? Because the original samples weren’t large enough, and the effect itself was often an artifact of data mining.

The market saw everyone trying to exploit Monday, shrugged, and changed the game.

Practical Takeaway: Before declaring a trend, ask yourself: How many independent observations does this signal have? Would this pattern survive a walk-forward test on data I haven’t seen yet? If the answer makes you uncomfortable, congratulations — you just saved yourself some very expensive tuition.

Also ask yourself this: would you stake your mortgage on this pattern if someone told you it was built on 12 data points? No? Then don’t stake your trading account on it either. The size of your emotional conviction has absolutely no bearing on the statistical validity of your sample. The market doesn’t grade on enthusiasm.


Way #2 — Beware the P-Hacking Trap (The Market’s Version of a Setup)

Oh, this one. This one right here is the one that gets the sophisticated traders. The ones who think they’re being rigorous. The ones with spreadsheets and Python scripts and notebooks full of backtests.

P-hacking is when you run so many different tests, try so many different parameter combinations, and tweak so many variables that eventually — purely by chance — something looks statistically significant. You’ve essentially rolled a hundred dice and then bragged about the six that came up sixes.

Let me paint you a picture. Say you’re testing 100 different trading strategies. Even if none of them have any real predictive power, at the 5% significance level, you’d expect roughly five of them to appear significant purely by chance. Five strategies that look like winners. Five strategies that will get you absolutely cooked when you trade them live.
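You can watch this happen in a few lines of simulation. The sketch below generates 100 strategies with no edge by construction and counts how many clear the conventional 5% bar anyway; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# 100 "strategies", each one year of pure-noise daily returns (no real edge exists)
n_strategies, n_days = 100, 252
returns = rng.normal(0, 0.01, size=(n_strategies, n_days))

# t-stat of each strategy's mean daily return
t_stats = returns.mean(axis=1) / (returns.std(axis=1, ddof=1) / np.sqrt(n_days))

# At the 5% level (|t| > 1.96), pure chance should flag roughly five of them
false_winners = int(np.sum(np.abs(t_stats) > 1.96))
print(f"'significant' strategies out of {n_strategies}: {false_winners}")
```

Every one of those flagged strategies would look like a winner in a backtest report, and every one of them is noise by construction.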

And here’s the thing that really gets me about this — the people who fall hardest for p-hacking are not lazy traders. They’re the hardest working traders. They’re the ones who stayed up until 3 AM running backtests. They’re the ones with the colour-coded spreadsheets, the custom Python indicators, the multi-monitor setups. They did the work. They genuinely believe in the process. And the process still lied to them — because the process was asking too many questions of too little data.

I’ve been there personally. I once spent three weeks optimising a mean-reversion strategy across 47 different parameter combinations. I found a beautiful set of parameters: a 14-period RSI, a specific threshold, a custom exit rule, and a volatility filter. My in-sample Sharpe ratio was 2.3. It looked incredible. I was ready. I had the position sizing worked out. I was mentally spending the profits.

Out-of-sample? Sharpe ratio of 0.4. Barely above random. Three weeks of work and the market looked at my strategy like I’d just told a joke that nobody laughed at. Complete silence. Just the sound of commissions eating my account.

This is not a hypothetical problem. This is an epidemic in quantitative finance.

Hou, Xue, and Zhang, in their devastating study “Replicating Anomalies” (published in the Review of Financial Studies in 2020, after circulating as NBER Working Paper 23394), tested 452 published anomalies from the academic finance literature and found that more than half could not be replicated using consistent methodology. More than half. These were published, peer-reviewed, widely-cited findings. Gone. Poof. Statistical noise wearing a graduation cap.

And if that’s happening in academia — where researchers have every incentive to be rigorous — imagine what’s happening in the wild west of retail trading forums and social media where a guy with a laptop and a YouTube channel is “revealing the secret strategy the banks don’t want you to know.”

The Multiple-Testing Correction

The fix for p-hacking is applying what’s known as the Bonferroni correction or similar multiple-testing adjustments. If you’re testing 20 different strategies simultaneously, you don’t use a p-value threshold of 0.05 — you divide that threshold by 20, giving you a much more demanding threshold of 0.0025. This means you need results so strong they couldn’t possibly be a fluke.
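A minimal sketch of the correction itself; the function name is mine, not a library API:

```python
def bonferroni_threshold(alpha, n_tests):
    """Per-test p-value threshold after a Bonferroni correction:
    divide the family-wise significance level by the number of tests."""
    return alpha / n_tests

# Testing 20 strategies at a family-wise 5% level:
threshold = bonferroni_threshold(0.05, 20)
print(f"per-test threshold: {threshold}")

# A strategy only survives if its individual p-value beats the corrected bar
p_values = [0.04, 0.012, 0.001, 0.3]          # hypothetical backtest p-values
survivors = [p for p in p_values if p < threshold]
print(f"survivors: {survivors}")
```

Note how a p-value of 0.012, which looks comfortably “significant” in isolation, dies under the corrected threshold once you admit you ran 20 tests.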

In practice for traders: if you’ve tweaked more than two or three parameters in your strategy, you’ve likely already entered p-hacking territory. Every additional parameter combination you test is another roll of those hundred dice. And the market? The market is watching. It always wins when you’re rolling dice.

Case Study: The Factor Zoo

Cochrane (2011), in his American Finance Association presidential address, famously described the state of empirical asset pricing as a “factor zoo” — hundreds of variables that purportedly predict returns, the vast majority of which were almost certainly false positives from data mining. By 2017, researchers had documented over 300 “significant” factors in the cross-section of stock returns. Three hundred. That is not research anymore. That is a salad bar.

The lesson: more tests, more tweaks, more optimisation equals more noise masquerading as signal.

Practical Takeaway: Before trusting any pattern you’ve found, ask: How many combinations did I test to find this? Did I decide on my rules before looking at the data, or did I work backwards from the result? If you worked backwards — that’s not analysis. That’s storytelling with a chart.


Way #3 — Separate In-Sample from Out-of-Sample Performance (Train On History, Don’t Live In It)

Alright, this is where it gets real. This is the difference between a trader who survives and one who becomes a cautionary tale told over drinks at a trading conference.

In-sample data is the historical data you used to build and test your strategy. Out-of-sample data is data your strategy has never seen — data from a different time period, or a held-out portion of your dataset. The single most important question you can ask about any trading signal is: Does it work on data that didn’t exist when I built the strategy?

This matters because financial models are almost infinitely flexible. Given enough parameters, you can fit a curve to any historical dataset perfectly. This is called overfitting — your model has learned the noise in the historical data, not the underlying signal. It’s like memorising the specific questions from last year’s exam rather than learning the material. Come exam day with new questions? Complete disaster.

Schmidhuber (2020), in “Trends, Reversion, and Critical Phenomena in Financial Markets”, found using 30 years of futures data across equities, interest rates, currencies, and commodities that genuine trends tend to revert before they become statistically obvious. In other words, by the time a trend is visible enough for most traders to act on it confidently, it is already late-stage. This has a direct implication: strategies built purely on historical trend identification will perpetually be one step behind.

Walk-Forward Testing: The Gold Standard

The best practice in quantitative trading is walk-forward analysis — a technique where you:

  1. Train your model on a defined period of historical data
  2. Test it on the period immediately following, without any adjustment
  3. Move forward in time and repeat
  4. Aggregate the out-of-sample results
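The four steps above can be sketched as a generic loop. Everything here, the function, the toy “strategy”, and the scoring rule, is a hypothetical illustration, not a production backtester:

```python
import numpy as np

def walk_forward(series, train_len, test_len, build, evaluate):
    """Generic walk-forward loop: fit on one window, score on the next, roll on.

    `build(train)` returns a fitted strategy; `evaluate(strategy, test)`
    returns its out-of-sample score. Both are supplied by the caller.
    """
    scores = []
    start = 0
    while start + train_len + test_len <= len(series):
        train = series[start : start + train_len]
        test = series[start + train_len : start + train_len + test_len]
        strategy = build(train)                  # step 1: train in-sample
        scores.append(evaluate(strategy, test))  # step 2: score out-of-sample, no refitting
        start += test_len                        # step 3: roll the window forward
    return np.array(scores)                      # step 4: aggregate out-of-sample results

# Toy usage: the "strategy" is just the training-window mean return,
# scored by whether its sign agrees with the next quarter's mean return.
rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0005, 0.01, 1000)   # stand-in for real data
out = walk_forward(daily_returns, train_len=252, test_len=63,
                   build=lambda tr: np.mean(tr),
                   evaluate=lambda s, te: float(np.sign(s) == np.sign(np.mean(te))))
print(f"out-of-sample windows: {len(out)}, agreement rate: {out.mean():.2f}")
```

The crucial property is that each test window is scored before the model ever sees it, and the model is never adjusted in response to test results.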

If your strategy performs well in-sample but falls apart out-of-sample, you have noise. You have a very expensive, beautifully designed noise detector. If it performs consistently across multiple out-of-sample windows, you may be onto something — though even this is not foolproof.

Case Study: The LTCM Collapse

Long-Term Capital Management was managed by two Nobel Prize-winning economists and some of the smartest minds in finance. Their models were built on massive amounts of historical data and showed extraordinary in-sample performance. They were generating returns of 40%+ annually in the mid-1990s.

Then 1998 happened. Russia defaulted on its debt. Correlations across asset classes that had never historically moved together suddenly moved in lockstep. The out-of-sample world looked nothing like the in-sample world. LTCM lost $4.6 billion in less than four months and had to be bailed out by a consortium of major banks to prevent a global financial meltdown.

The models had confused historical pattern with immutable law. The out-of-sample world disagreed.

Subsequent research on Bayesian forecasting and regime change in financial markets confirms what LTCM learned the catastrophic way: models that fail to account for structural breaks, sudden changes in the underlying data-generating process, will produce spectacular false signals precisely when you can least afford them.

Practical Takeaway: Divide your data. Always. Reserve at least 30% as an out-of-sample test set that you do not touch until you have completely finalised your strategy. Treat that holdout set like it’s locked in a vault. The moment you start peeking at it to refine your model, it becomes in-sample data and the whole exercise is worthless.


Way #4 — Apply the Momentum and Mean-Reversion Framework (Because Not All Trends Are Created Equal)

Now we’re getting into the sophisticated territory. Buckle up.

Here’s a truth that will reorganise how you think about markets: real trends are persistent in the medium term but mean-reverting in the short and long term. This is not a philosophical statement. It is one of the most empirically robust findings in financial market research.

In 1993, Narasimhan Jegadeesh and Sheridan Titman published what has become one of the most cited papers in finance history: “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency” in the Journal of Finance. Their finding: stocks that performed well over the previous 3–12 months tended to continue performing well over the following 3–12 months. This is the momentum effect, and it is one of the most durable anomalies in all of finance.

But — and this is the critical but — this momentum effect exists specifically in that medium-term window. Over shorter time horizons (days to weeks), markets exhibit mean reversion — prices tend to snap back from extreme moves. Over longer time horizons (multiple years), momentum fades and value factors dominate.

More recently, Schmidhuber (2025) in “Trends and Reversion in Financial Markets on Time Scales from Minutes to Decades” confirmed this layered structure across 30 years of futures data: markets trend in the hours-to-years range, but revert on very short (intraday) and very long (decades) timescales. The implication is that whether you’re looking at a “trend” or “noise” depends critically on your time horizon.

The Three-Tier Framework

Here’s a practical framework every trader should engrave on their trading desk:

Short-term (minutes to days): Markets are primarily noisy with mean-reverting tendencies. Any pattern you see here is overwhelmingly likely to be statistical noise. High-frequency traders with superior technology and data might extract signal here, but for most traders, this is the noise zone. Stay out or play with extreme caution.

Medium-term (weeks to months): This is where genuine momentum trends live. Jegadeesh and Titman’s evidence is robust and has been replicated across international markets by Rouwenhorst (1998) and across asset classes by Moskowitz et al. (2012). If a trend exists in the medium-term, it has a reasonable chance of persisting a little longer — though remember, trends tend to revert before they become obvious (see Way #3).

Long-term (years to decades): Value factors dominate here. Mean reversion is the law. The best long-term trend you can follow is not a price trend at all — it’s the long-term upward drift of earnings and economic growth.
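As one concrete example of a medium-term signal, here is a sketch of the classic “12-1” momentum measure common in the Jegadeesh-Titman literature. The skip-month convention sidesteps the short-term mean reversion described above; the price series is invented for illustration:

```python
import numpy as np

def momentum_12_1(monthly_prices):
    """12-1 momentum: the return over months t-12 through t-1,
    skipping the most recent month to avoid short-term mean reversion."""
    p = np.asarray(monthly_prices, dtype=float)
    if len(p) < 13:
        raise ValueError("need at least 13 monthly prices")
    return p[-2] / p[-13] - 1.0  # 12-month return, excluding the last month

# Hypothetical price path: steady medium-term uptrend with a recent pullback
prices = [100, 102, 104, 107, 109, 112, 115, 117, 120, 123, 126, 130, 128]
print(f"12-1 momentum: {momentum_12_1(prices):.1%}")  # 30.0%
```

Note that the recent pullback from 130 to 128 does not enter the signal at all; that is exactly the short-term noise the skip month is designed to exclude.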

Case Study: The 2017 Cryptocurrency Bubble

In 2017, Bitcoin went from roughly $1,000 at the start of the year to nearly $20,000 by December. At every stage of that run, traders using simple moving average crossovers and momentum strategies were declaring trend confirmed. And technically, in the medium-term Jegadeesh-Titman sense, they were right — momentum was real.

But by December 2017, the trend had become so statistically and visually obvious that everyone saw it. And as Schmidhuber’s research notes, by the time a trend is statistically unmistakable, it is typically near reversal. Bitcoin crashed 84% over the following 12 months. Those who bought the “obvious trend” in December 2017 waited nearly three years to get back to breakeven.

The lesson: real trends exist, but they are most exploitable in the medium-term before they become obvious. Once the taxi driver is giving you stock tips at the airport, you are no longer reading a trend. You are reading a newspaper headline about a trend that is already dying.

Practical Takeaway: Before trading any signal, identify your time horizon and ask whether your signal type (momentum, mean-reversion, value) is appropriate for that horizon. Applying a mean-reversion strategy on a genuine long-term trend — or a momentum strategy on noise — is how accounts die.


Way #5 — Demand an Economically Logical Explanation (If You Can’t Explain Why It Works, It Probably Doesn’t)

This is my favourite one. This is the one that separates traders who last from traders who last exactly one bull market.

Here is a question that will save you an enormous amount of money: Why does this pattern work? What is the economic mechanism that causes this signal to have predictive power?

If your answer is, “I don’t know, it just does, look at the backtest,” then congratulations, you’ve found noise. Statistical patterns without economic rationale are almost certainly false positives — artefacts of data mining, sample-specific coincidences, or overfitting.

This principle is what separates genuinely durable anomalies from the factor zoo. The momentum effect, for example, has legitimate economic explanations: investor underreaction to new information, herding behaviour, and the gradual diffusion of information through the market. These are behavioural mechanisms that are both empirically observable and theoretically coherent. When the mechanism makes sense, the signal is more likely to be real.

By contrast, consider some of the more exotic backtested signals that have circulated in trading communities: the Super Bowl indicator (which NFL conference wins the Super Bowl predicts stock market performance for the year), the hemline indicator (skirt lengths predict bull and bear markets), and — my personal favourite — the butter production in Bangladesh predicting S&P 500 returns. David Leinweber famously demonstrated in the 1990s that butter production in Bangladesh “explained” 75% of S&P 500 returns from 1983–1993. A perfect example of a statistically significant signal with absolutely zero economic mechanism — and therefore, pure noise.

The Economic Logic Checklist

When evaluating any trading signal, demand satisfactory answers to all of these:

  1. What is the supply and demand mechanism that drives this pattern? Who is on the other side of the trade and why?
  2. Why hasn’t this been arbitraged away? If the signal is real and exploitable, why hasn’t capital already flowed in to eliminate it?
  3. What are the transaction costs? Many statistically significant patterns disappear entirely once realistic bid-ask spreads and market impact costs are applied.
  4. Does it work across different markets and instruments? A real economic mechanism should generate similar patterns wherever similar conditions exist.
  5. Has it survived multiple market regimes? A genuine pattern should persist through both rising and falling rate environments, bull and bear markets, and periods of high and low volatility.
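Checklist item 3 can be made concrete with simple arithmetic. This sketch nets round-trip costs out of a hypothetical per-trade edge; every figure is invented for illustration:

```python
def net_edge_per_trade(gross_edge, spread, commission=0.0):
    """Gross expected return per trade minus round-trip trading costs.

    All figures are fractions of notional; the numbers used below are
    illustrative, not estimates for any real market.
    """
    return gross_edge - spread - commission

# A statistically "significant" 8bp edge, against a 5bp spread and 4bp commission
edge = net_edge_per_trade(gross_edge=0.0008, spread=0.0005, commission=0.0004)
print(f"net edge per trade: {edge:.4%}")  # negative: the pattern cannot be traded
```

A pattern can be perfectly real in the statistical sense and still be worthless once the bid-ask spread and commissions have taken their cut.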

Case Study: Value Investing — A Signal With a Mechanism

Value investing — buying stocks trading at low multiples of earnings, book value, or cash flows — is perhaps the best example of a signal that has both statistical durability and a clear economic rationale. Fama and French (1992), in their landmark paper “The Cross-Section of Expected Stock Returns” in the Journal of Finance, documented that value stocks (high book-to-market ratio) consistently outperformed growth stocks over long periods.

Why? Because value stocks are fundamentally riskier and/or because investors systematically overestimate the prospects of glamorous growth companies and underestimate distressed value companies. There’s a mechanism. The returns are compensation for risk, behavioural error, or both. This is a signal you can reason about, stress-test, and apply with confidence — even knowing that there will be extended periods where it underperforms (as value investors painfully experienced from 2007 to 2020).

Case Study: The “Secret” Indicator That Wasn’t

In 2014, a viral trading blog post claimed to have found a “proprietary technical indicator” that had achieved 87% accuracy over five years of backtesting. The creator sold subscriptions. Hundreds of traders bought in. The indicator involved a complex combination of 11 different parameters — all optimised on the same five-year dataset.

When independent researchers applied it out-of-sample to the following two years, accuracy dropped to 49% — essentially a coin flip. When pressed for the economic mechanism, the creator could offer nothing except, “It works because it’s based on market psychology.” That is not a mechanism. That is a sentence shaped like an explanation.

The economic logic test would have caught this immediately. With 11 parameters optimised on five years of data, you have almost unlimited flexibility to find apparent patterns. Without a mechanism, you have noise. A very expensive, subscription-priced package of noise.

Practical Takeaway: Write down, in plain English, why your signal should work. Describe the specific market participants whose behaviour creates the pattern, why they behave that way consistently, and why that behaviour is unlikely to be immediately arbitraged away. If you can’t write three coherent paragraphs, you don’t have a signal. You have a hypothesis — and hypotheses need rigorous out-of-sample testing before you commit a single dollar of real capital to trading them.


Bringing It All Together: The Five-Point Noise Filter

Let’s recap what we’ve covered. Because if you’ve made it this far, you deserve a clean summary you can actually use at the trading desk.

1. Check Your Sample Size. Demand statistically significant results across a full market cycle, with enough independent observations that chance alone cannot explain the pattern. Apply the higher t-stat threshold recommended by Harvey, Liu, and Zhu (2016) — aim for t > 3.0, not just t > 2.0.

2. Beware P-Hacking. If you tested more than a handful of parameter combinations to get to your strategy, apply multiple-testing corrections. Prespecify your strategy rules before looking at the data. If you worked backwards from the result, you have a backfit story, not a signal.

3. Out-of-Sample Testing is Non-Negotiable. Walk-forward testing, held-out datasets, and paper trading in real-time are your immune system against overfitting. A strategy that only works in-sample is a strategy that works nowhere real.

4. Match Your Signal to Your Time Horizon. Momentum works in the medium-term, mean-reversion in the short-term, and value in the long-term. Applying the wrong signal type to the wrong time horizon is not just a mistake — it is reliably backwards.

5. Demand Economic Logic. Every pattern you trade needs a reason to exist that goes beyond “the backtest says so.” The butter-production-in-Bangladesh signal has a great backtest. It will not make you money.


A Final Word From the Trading Desk

I want to leave you with something that took me longer to learn than it should have.

The market is the most sophisticated signal-detection challenge human beings have ever devised. It is a system populated by millions of intelligent participants, all simultaneously searching for exploitable patterns, all simultaneously trying to profit from each other’s mistakes. Every pattern that becomes widely known gets arbitraged toward elimination. Every easy signal gets crowded until it stops working.

Think about that for a moment. The very act of you reading this article — and the fact that thousands of other traders are reading similar material — means that any signal mentioned in widely circulated research is already in the process of being competed away. Jegadeesh and Titman published the momentum effect in 1993. By the time you read about it on a trading forum in 2025, roughly a trillion dollars of capital has been pointed at that anomaly. The signal still exists — barely — but the easy money is long gone. The people who got rich on pure, clean momentum stopped bragging about it publicly sometime around the late 1990s.

This is why statistical rigour is not a luxury for traders — it is survival equipment. The traders who survive long-term are not necessarily the smartest or the most creative. They are the most disciplined about what they believe and why they believe it. They demand evidence. They test ruthlessly. They update when the data says they’re wrong.

And here’s the hard truth that most traders never fully accept: being wrong is the default condition in this business. The market’s job is to take money from overconfident people and redistribute it to disciplined ones. The overconfident ones are the ones who saw five candles and called it a trend. The overconfident ones are the ones who ran 200 backtests and presented the best result as if it were their only backtest. The overconfident ones are the ones who bought the “obvious” trend in December 2017 and rode it straight into a bear market.

The disciplined ones? They know their sample sizes. They test out-of-sample. They apply the right signal to the right timeframe. They demand economic logic. And when they can’t find evidence that a signal is real, they do something radical and underrated: they sit on their hands. They wait. They protect their capital until a genuine edge presents itself.

Lo and MacKinlay showed us in 1988 that markets are not perfectly random — there is signal in the noise. Jegadeesh and Titman showed us in 1993 that momentum is real. Fama and French showed us that value is real. These are the landmarks. These are the signals with weight, mechanism, and decades of replication.

Everything else? You need to earn the right to believe in it. You earn that right through sample size, through out-of-sample testing, through economic logic, through multiple-testing corrections, and through the humility to accept that your brilliant, beautiful, backtest-confirmed strategy might be — just might be — a fluke.

The market has absolutely no interest in your feelings about the matter. It never has. And the sooner you stop arguing with it and start listening to it on its own terms — statistical, rigorous, evidence-based terms — the longer, more profitable, and significantly less stressful your trading career will be.

Test everything. Trust sparingly. Trade accordingly. And remember: the market has been here longer than you, and it will be here long after you. Respect the data.


References

  1. Lo, A.W. and MacKinlay, A.C. (1988). Stock Market Prices Do Not Follow Random Walks: Evidence from a Simple Specification Test. The Review of Financial Studies, 1(1), 41–66. https://doi.org/10.1093/rfs/1.1.41
  2. Jegadeesh, N. and Titman, S. (1993). Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency. The Journal of Finance, 48(1), 65–91. https://doi.org/10.1111/j.1540-6261.1993.tb04702.x
  3. Harvey, C.R., Liu, Y. and Zhu, H. (2016). …and the Cross-Section of Expected Returns. The Review of Financial Studies, 29(1), 5–68. https://doi.org/10.1093/rfs/hhv059
  4. Fama, E.F. and French, K.R. (1992). The Cross-Section of Expected Stock Returns. The Journal of Finance, 47(2), 427–465. https://doi.org/10.1111/j.1540-6261.1992.tb04398.x
  5. Hou, K., Xue, C. and Zhang, L. (2020). Replicating Anomalies. The Review of Financial Studies, 33(5), 2019–2133. NBER Working Paper 23394. https://www.nber.org/papers/w23394
  6. Schwert, G.W. (2003). Anomalies and Market Efficiency. Handbook of the Economics of Finance, Vol. 1B, Chapter 15. https://doi.org/10.1016/S1574-0102(03)01024-0
  7. Schmidhuber, C. (2020). Trends, Reversion, and Critical Phenomena in Financial Markets. arXiv:2006.07847. https://arxiv.org/pdf/2006.07847
  8. Schmidhuber, C. (2025). Trends and Reversion in Financial Markets on Time Scales from Minutes to Decades. arXiv:2501.16772. https://arxiv.org/pdf/2501.16772
  9. Moskowitz, T.J., Ooi, Y.H. and Pedersen, L.H. (2012). Time Series Momentum. Journal of Financial Economics, 104(2), 228–250. https://doi.org/10.1016/j.jfineco.2011.11.003
  10. Rouwenhorst, K.G. (1998). International Momentum Strategies. The Journal of Finance, 53(1), 267–284. https://doi.org/10.1111/0022-1082.95722

Disclaimer: This article is for educational and informational purposes only and does not constitute financial advice. Trading financial instruments carries significant risk of loss. Always conduct your own due diligence and consult a qualified financial professional before making investment decisions.