Alerts on unusual intraday price moves – with automatic follow-up explanation

Vilhelm Niklasson, PhD

Published: 9 January 2026
Last updated: 15 January 2026

What this is: how Quanor detects unusual intraday price movements (statistical anomalies) and how we then analyze why the price moved, producing a short, conservative, source-backed summary.

Who this is for: people using the Quanor app who want a clearer, more technical understanding of how intraday triggers work and how the follow-up explanation is produced.

What this is not: trading advice, and not a copy-and-paste template with all the details.


Why most “% move” alerts are noisy

Markets move all the time. A static “±2%” rule fires constantly in volatile regimes and still misses the moves that matter: earnings surprises, guidance changes, exchange notices, liquidity shocks, or idiosyncratic risk events.

A 0.7% move in 20 seconds can be far more interesting than a 2% move over six hours. A pure percent rule cannot tell the difference — but our approach can, because it looks at both the size and the speed of the move.

A better question is:

Is this move unusual for this stock right now, and unusual compared to the market at the same moment?

That framing turns “alerts” into an anomaly-detection problem with explicit controls for volatility, liquidity, and microstructure.

Design goals

  • Statistical grounding: “rare under a reasonable model,” not “looks big.”
  • Robustness: handle outliers, thin trading, and regime shifts.
  • Liquidity awareness: avoid false positives driven by tiny turnover.
  • Explainability: every trigger is paired with a conservative “what happened?” summary.

Why we don’t rely on classic TA rules in production

Bollinger bands, breakouts, and chart patterns are intuitive, but they are not designed as calibrated, regime-stable anomaly detectors.

What we explicitly want (and most classic TA does not guarantee):

  • Cross-sectional context: a move can be “large” yet totally normal if the whole market/sector is moving.
  • Microstructure awareness: stale prints, bid–ask bounce, and bursty ticks matter intraday.
  • False-positive control: thresholds tied to volatility and robust statistics, not arbitrary percent bands.
  • Limited parametrization: fewer knobs reduce data-snooping risk.

We borrow a good idea from TA — normalize moves — but we anchor it in realized-volatility-style statistics and robust cross-sectional outlier detection.


The detector: two complementary lenses

1) Time-series lens: moves in “volatility units”

A 1% move can be huge for one stock and routine for another. We therefore normalize moves by an estimate of recent intraday variability.

We work with log returns (standard in finance):

r = log(P_now / P_prev)

Conceptually, we track surprise scores such as:

  • z_step = r_step / σ_step
  • z_cum = r_cum / σ_cum

where σ_step and σ_cum are intraday realized-volatility-style estimates computed over rolling windows.

Why this helps: volatility changes across names and across regimes. A volatility-normalized move is closer to “how rare is this?” than a raw percent change.
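As a minimal sketch of the idea (function names and window values are illustrative, not the production code), a step-level surprise score can be computed like this:

```python
import math

def log_return(p_now: float, p_prev: float) -> float:
    """Log return r = log(P_now / P_prev)."""
    return math.log(p_now / p_prev)

def rolling_sigma(recent_returns: list[float]) -> float:
    """Realized-volatility-style scale: root mean square of recent step returns."""
    return math.sqrt(sum(r * r for r in recent_returns) / len(recent_returns))

def surprise(r: float, sigma: float) -> float:
    """Move expressed in volatility units (the z_step / z_cum idea above)."""
    return r / sigma if sigma > 0 else 0.0

# A ~0.5% step when recent steps have been ~0.1% is a roughly 5-sigma surprise.
recent = [0.001, -0.0012, 0.0008, -0.0009, 0.0011]
z_step = surprise(log_return(100.5, 100.0), rolling_sigma(recent))
```

The same small move would score near zero for a name whose recent steps are routinely around 1%, which is exactly the point of normalizing.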


2) Cross-sectional lens: “is this name an outlier right now?”

A stock down 4% on a day when its sector is down 3–4% is usually less interesting than a stock down 4% in a flat market.

So we ask: relative to the exchange (or peer universe) at this moment, is this name an outlier?

Robust cross-section (median & MAD):
For each detection run, we compute a robust center and scale of the cross-sectional moves using:

  • Median as the location estimate
  • MAD (median absolute deviation) as the scale estimate (often scaled to behave roughly like a standard deviation under mild assumptions)

Each stock receives a cross-sectional outlier score z_cs based on how far its move sits from the robust center relative to the robust scale.

Why robust stats: the cross-section often contains extreme prints, halts, and genuine jumps. Median/MAD are far less sensitive to a few wild observations than mean/standard deviation.
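The robust score itself is only a few lines; here is a sketch using Python's `statistics` module (the 1.4826 factor is the usual constant that makes MAD comparable to a standard deviation when the bulk of the cross-section is roughly Gaussian):

```python
import statistics

def cross_sectional_z(moves: list[float]) -> list[float]:
    """Robust outlier score per name: distance from the median in MAD units."""
    med = statistics.median(moves)
    mad = statistics.median(abs(m - med) for m in moves)
    scale = 1.4826 * mad  # Gaussian-consistency scaling of the MAD
    if scale == 0:
        return [0.0 for _ in moves]
    return [(m - med) / scale for m in moves]

# Broad ~-1% day with one name down 6%: only that name scores as an outlier,
# and the -6% print barely perturbs the median/MAD center and scale.
moves = [-0.010, -0.012, -0.009, -0.011, -0.060]
z_cs = cross_sectional_z(moves)
```

With mean/standard deviation instead, the single -6% print would both shift the center and inflate the scale, masking itself and flagging normal names.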


Putting the lenses together

The detector combines both views:

  • Time-series surprise (unusual for the stock given its current volatility)
  • Cross-sectional surprise (unusual relative to peers/market at that moment)

In broad macro moves, cross-sectional context suppresses spam. In true single-name shocks, the time-series score dominates.

A practical rule of thumb:

  • TS + CS both extreme → very likely interesting
  • CS extreme but TS mild → often drift/rotation
  • TS extreme but CS noisy → can still be a real single-name event
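That rule of thumb can be written down directly; the thresholds below are purely illustrative, not the production values:

```python
def classify(z_ts: float, z_cs: float, ts_hi: float = 4.0, cs_hi: float = 4.0) -> str:
    """Combine time-series and cross-sectional surprise (thresholds illustrative)."""
    ts_extreme = abs(z_ts) >= ts_hi
    cs_extreme = abs(z_cs) >= cs_hi
    if ts_extreme and cs_extreme:
        return "likely interesting"
    if cs_extreme:
        return "possible drift/rotation"
    if ts_extreme:
        return "possible single-name event"
    return "no trigger"
```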

Practical robustness: avoiding false positives that look dramatic

Intraday anomaly detection fails in predictable ways unless you defend against them.

Liquidity gating

Many “spectacular” moves happen on tiny turnover. We require a minimum effective notional activity before a move is eligible to trigger, and we treat missing or low-quality liquidity signals conservatively.

Effect: fewer “−20% on nothing traded” alerts.

Time-of-day awareness

Open and close are structurally different. We account for intraday seasonality with simple session-aware adjustments so the detector doesn’t overreact to early noise or late bursts.
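One simple session-aware adjustment is to deflate each return by a per-minute volatility multiplier, so the open and close are not "anomalous" by construction (the profile values here are toy numbers):

```python
def session_adjusted(r: float, minute_of_session: int,
                     vol_profile: list[float]) -> float:
    """Deflate a return by the expected volatility multiplier for that
    minute of the session (profile values illustrative)."""
    m = min(minute_of_session, len(vol_profile) - 1)
    factor = vol_profile[m]
    return r / factor if factor > 0 else 0.0

# Toy profile: the first 30 minutes are ~2x as volatile as midday.
profile = [2.0] * 30 + [1.0] * 360
```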

Stale-print guards

A quote that arrives long after the last real trade may be valid data, but it’s not a good basis for a real-time “something just happened” trigger. We cap quote age for eligibility.
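The guard itself is a one-liner; the cap value below is an illustrative assumption:

```python
def is_fresh(print_ts: float, now: float, max_age_seconds: float = 120.0) -> bool:
    """Eligibility check: a print older than the cap is still valid data,
    but it can't drive a real-time 'something just happened' trigger."""
    return now - print_ts <= max_age_seconds
```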

Microstructure realism (downsampling for volatility estimation)

Sampling too frequently can inflate realized-variance estimates due to bid–ask bounce and other microstructure effects. For volatility estimation we downsample to a coarser grid (e.g. minute-ish buckets), keeping detection responsive while the volatility baseline stays stable.
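To see why, consider a price that does nothing but bounce between bid and ask: tick-level realized variance is large, while minute-level realized variance is near zero. A sketch (bucket size illustrative):

```python
import math

def bucket_last(ticks: list[tuple[float, float]],
                bucket_seconds: float = 60.0) -> list[float]:
    """Downsample (timestamp, price) ticks to the last price per time bucket."""
    out: dict[int, float] = {}
    for ts, px in ticks:
        out[int(ts // bucket_seconds)] = px  # later ticks overwrite earlier ones
    return [out[k] for k in sorted(out)]

def realized_vol(prices: list[float]) -> float:
    """Square root of the realized variance of log returns on the given grid."""
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    return math.sqrt(sum(r * r for r in rets))

# Pure bid-ask bounce, one tick per second for three minutes:
ticks = [(float(t), 100.0 if t % 2 == 0 else 100.05) for t in range(180)]
coarse = bucket_last(ticks)            # one price per minute
fine = [px for _, px in ticks]         # every tick
```

Here `realized_vol(fine)` is inflated by the bounce, while `realized_vol(coarse)` correctly reflects that nothing happened.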

Re-arm / hysteresis

Once a trigger fires, we suppress rapid re-triggers unless the name moves meaningfully further (in volatility units) or enough time has passed. This prevents “machine-gun alerts” during oscillations while still capturing genuinely new information.
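A minimal re-arm state machine looks roughly like this (the further-move and cooldown values are illustrative):

```python
class ReArm:
    """Suppress rapid re-triggers: fire again only after a further move of
    `delta_z` volatility units, or once `cooldown` seconds have passed."""

    def __init__(self, delta_z: float = 2.0, cooldown: float = 900.0):
        self.delta_z = delta_z
        self.cooldown = cooldown
        self.last_z: float | None = None
        self.last_ts: float | None = None

    def should_fire(self, z: float, ts: float) -> bool:
        if self.last_z is None:  # first trigger always fires
            fire = True
        else:
            fire = (abs(z - self.last_z) >= self.delta_z
                    or ts - self.last_ts >= self.cooldown)
        if fire:
            self.last_z, self.last_ts = z, ts
        return fire
```

An oscillation around the trigger level produces one alert; a genuinely larger move, or the same level much later, produces a new one.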


After a trigger: how we analyze why the price moved

Detection answers: “this move is unusual.”
Users immediately ask: “what happened?”

Our analysis layer is designed to be conservative, evidence-led, and readable in under a minute — without pretending certainty where none exists.

1) Build the market context first

Before reading headlines, we assemble a compact context frame:

  • the intraday path around the trigger (step vs cumulative move)
  • cross-sectional behavior (is the sector/index moving similarly?)
  • basic regime hints (high-vol day vs calm day)
  • nearby discontinuities (halts, gaps, sudden volume bursts if available)

This reduces “story-first” explanations that ignore the tape.
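Conceptually, the frame is just a small structured record handed to the analysis step; the field names and values here are hypothetical:

```python
# Hypothetical context frame assembled before any headline search.
context = {
    "step_move_z": 5.2,      # surprise of the triggering step
    "cum_move_z": 3.1,       # cumulative intraday surprise
    "sector_move": -0.002,   # is the peer group moving similarly?
    "regime": "calm",        # high-vol day vs calm day hint
    "halts_nearby": False,   # discontinuities around the trigger
}
```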

2) Generate hypotheses and tie them to sources

We search recent public information relevant to the company and its ecosystem, typically including:

  • company news and press releases
  • earnings/guidance items
  • filings and exchange communications when applicable
  • sector peers and sympathy moves
  • macro headlines only when they plausibly explain the cross-section

The output is constrained to a structured format (title + short explanation + uncertainty flags), with explicit linkage to supporting sources.
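In spirit, the constrained output is a small record like the following; the field names are illustrative, not the actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    """Structured, conservative output of the follow-up analysis."""
    title: str
    summary: str                                          # readable in under a minute
    sources: list[str] = field(default_factory=list)      # links backing each claim
    uncertainty: list[str] = field(default_factory=list)  # explicit caveats

note = Explanation(
    title="No clear public catalyst visible yet",
    summary="Single-name move; sector flat; no filing or headline in the first pass.",
    uncertainty=["may update if a filing appears later"],
)
```

Constraining the format makes "no catalyst found" a first-class, honest answer rather than an awkward gap.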

3) Be explicit about uncertainty

If no credible company-specific catalyst is found, we say so. In that case we may cite:

  • sector rotation / risk-off moves
  • macro drivers (rates, commodities, FX) only if the cross-section supports it
  • “no clear public catalyst visible yet” as a first-class outcome

This matters because markets sometimes move before the headline is broadly visible — or for reasons that never become public.

4) Follow-ups when information arrives later

When the initial pass finds no strong catalyst, we may perform limited follow-ups after a delay. If a credible filing or headline appears later, the explanation updates with continuity (“initially unclear; later confirmed by …”).

5) Sanity checks (don’t narrate around bad data)

For outsized moves, we add extra guardrails: if the observed move looks inconsistent with independent reference points or is otherwise suspicious, we mark the explanation accordingly rather than forcing a narrative.


Selected references

  1. Andersen, T.G., Bollerslev, T., Diebold, F.X., & Labys, P. (2003). Modeling and Forecasting Realized Volatility. Econometrica.
  2. Barndorff-Nielsen, O.E., & Shephard, N. (2002). Econometric Analysis of Realised Volatility… Journal of the Royal Statistical Society: Series B.
  3. Hansen, P.R., & Lunde, A. (2006). Realized Variance and Market Microstructure Noise. Journal of Business & Economic Statistics.
  4. Aït-Sahalia, Y., Mykland, P.A., & Zhang, L. (2005). How Often to Sample a Continuous-Time Process in the Presence of Market Microstructure Noise. Review of Financial Studies.
  5. Huber, P.J., & Ronchetti, E.M. (2009). Robust Statistics (2nd ed.). Wiley.
  6. Rousseeuw, P.J., & Croux, C. (1993). Alternatives to the Median Absolute Deviation. Journal of the American Statistical Association.
  7. Sullivan, R., Timmermann, A., & White, H. (1999). Data-Snooping, Technical Trading Rule Performance, and the Bootstrap. Journal of Finance.

Disclosure: This article describes how a production system works today. It is not investment advice.