Enter a ticker, simulate thousands of possible futures, then backtest the model against 10 years of real data to see how often it would have been right.
In production quant systems, Monte Carlo simulations typically run 1 to 10 million paths to price exotic derivatives, compute CVA/DVA, or stress-test portfolios. That scale is necessary when the payoff depends on multiple correlated factors (multi-asset, path-dependent options, etc.) and when you need precision down to a few basis points.
Here we're estimating a single-asset price distribution under GBM — a much simpler problem. 5,000 paths give stable percentiles (P5/P50/P95 converge within ~1%). For this use case, going from 10K to 1M paths barely moves the needle. But if you were pricing a basket autocallable or running XVA on a portfolio, you'd absolutely need millions.
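The convergence claim is easy to check yourself. A minimal sketch, using illustrative values for S0, mu, and sigma (not fitted to any real ticker): since the 252 daily GBM shocks sum to one Gaussian, the terminal price needs only a single normal draw per path, which keeps even a million paths cheap.

```python
import numpy as np

# Illustrative assumptions: starting price 100, 8% annual drift, 20% annual vol.
rng = np.random.default_rng(42)
S0, mu_ann, sigma_ann = 100.0, 0.08, 0.20

def terminal_percentiles(n_paths):
    # One draw per path: the 252 daily shocks collapse to one annual Gaussian.
    z = rng.standard_normal(n_paths)
    finals = S0 * np.exp((mu_ann - 0.5 * sigma_ann**2) + sigma_ann * z)
    return np.percentile(finals, [5, 50, 95])

small = terminal_percentiles(5_000)
large = terminal_percentiles(1_000_000)
gap = np.abs(small / large - 1)  # relative difference, 5K vs. 1M paths
print(small.round(1), large.round(1), gap.round(4))
```

The relative gap between the 5K-path and 1M-path percentiles typically lands around the ~1% mark quoted above, which is why more paths barely move the needle here.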
We go back 10 years. Every month, we pretend we're on that date: compute mu and sigma from the prior year, run Monte Carlo forward, then check if the actual price landed inside our predicted range. This gives you a real accuracy score.
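The rolling backtest can be sketched as below. Assumptions to note: a synthetic random-walk series stands in for the real downloaded closes, and the forecast horizon is taken as 21 trading days (one month); neither is specified by the app itself.

```python
import numpy as np

# Synthetic stand-in for 10 years of daily closes (an assumption, not real data).
rng = np.random.default_rng(0)
true_mu, true_sigma = 0.10 / 252, 0.25 / np.sqrt(252)
prices = 100 * np.exp(np.cumsum(true_mu - 0.5 * true_sigma**2
                                + true_sigma * rng.standard_normal(2520)))

lookback, horizon, n_paths = 252, 21, 5_000  # 1y fit window, 1m forecast
hits = trials = 0
for t in range(lookback, len(prices) - horizon, horizon):
    # Pretend we're standing at day t: fit mu/sigma on the prior year only.
    window = np.diff(np.log(prices[t - lookback:t]))
    mu, sigma = window.mean(), window.std()
    # Simulate the horizon forward and take the P5-P95 band of terminal prices.
    z = rng.standard_normal((n_paths, horizon))
    finals = prices[t] * np.exp(((mu - 0.5 * sigma**2) + sigma * z).sum(axis=1))
    lo, hi = np.percentile(finals, [5, 95])
    hits += lo <= prices[t + horizon] <= hi  # did reality land in the band?
    trials += 1

print(f"coverage: {hits / trials:.0%}")
```

For a P5-P95 band the coverage should sit near 90% when the model fits; a large shortfall in the real backtest is exactly the "accuracy score" the section describes.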
Pull 1 year of daily closes from the market.
Average return (drift) and daily bounce (volatility) from log returns.
Each day: trend + random shock. Repeat thousands of times.
Median = center. P5/P95 = edges. That's your probability range.
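The four steps above fit in a short sketch. A synthetic random-walk array stands in for the downloaded closes (an assumption; any array of daily closes works the same way).

```python
import numpy as np

# Step 1 stand-in: synthetic daily closes instead of a live market download.
rng = np.random.default_rng(7)
closes = 100 * np.exp(np.cumsum(0.0003 + 0.012 * rng.standard_normal(252)))

# Step 2: drift and volatility from daily log returns.
log_ret = np.diff(np.log(closes))
mu, sigma = log_ret.mean(), log_ret.std()

# Step 3: each day adds trend plus a random shock; repeat 5,000 times.
horizon, n_paths = 252, 5_000
z = rng.standard_normal((n_paths, horizon))
finals = closes[-1] * np.exp(((mu - 0.5 * sigma**2) + sigma * z).sum(axis=1))

# Step 4: median is the center, P5/P95 are the edges of the range.
p5, p50, p95 = np.percentile(finals, [5, 50, 95])
print(f"P5={p5:.1f}  P50={p50:.1f}  P95={p95:.1f}")
```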
Each run draws fresh random numbers. More simulations = more stable. Lock the seed for identical results across runs.
Price tomorrow = price today × exp(trend + random shock). Multiplicative, so the price can never go negative.
+50% then -50% ≠ 0% (100→150→75, a net -25%). The -σ²/2 drift term corrects this asymmetry, known as volatility drag. More volatile = bigger correction.
Assumes constant volatility and log-normal returns. Both wrong in practice. Underestimates tail risk. Use as a starting point, not a crystal ball. Run the backtest to see exactly how wrong.