
What is Jensen–Shannon Divergence?

Jensen–Shannon Divergence in practical quant workflows

When evaluating trading models, profit alone rarely tells the full story. Two models might generate similar returns, yet behave completely differently under the surface. One may produce smooth, consistent performance, while another jumps between gains and losses in a far less stable way.

This is where metrics like Jensen–Shannon Divergence (JSD) become useful. Instead of focusing only on the size of returns, it helps measure how much the statistical behaviour of a model changes.

In practice, it compares probability distributions. If the distribution of outcomes remains stable over time, the divergence stays small. If behaviour changes significantly, the divergence increases.

This makes it a useful signal for detecting behavioural drift in trading models.
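As a minimal sketch of this idea, the snippet below bins two hypothetical windows of model returns into histograms on a shared grid and compares them. The return series, bin choices, and the JSD implementation are illustrative assumptions, not part of any specific product.

```python
import numpy as np

def jensen_shannon_divergence(p, q):
    """JSD between two discrete distributions (log base 2, so bounded in [0, 1])."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # terms with a_i = 0 contribute nothing to KL
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical daily returns: a stable window vs. a shifted, more volatile one.
rng = np.random.default_rng(42)
window_a = rng.normal(0.001, 0.010, 500)
window_b = rng.normal(-0.002, 0.030, 500)

# Bin both windows on the same grid so the histograms are comparable.
edges = np.linspace(-0.1, 0.1, 41)
p, _ = np.histogram(window_a, bins=edges)
q, _ = np.histogram(window_b, bins=edges)

score = jensen_shannon_divergence(p, q)
print(f"JSD between the two windows: {score:.3f}")
```

Because the second window is both shifted and noisier, its histogram differs visibly from the first, and the divergence comes out well above zero.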


Why it matters when comparing trading models

When many models compete with each other, the goal is not simply to find the one with the highest return. What matters more is whether that performance is structurally stable.

Some models perform well only under very specific market conditions. Others behave more consistently across different regimes. Even when profits look similar, their long‑term reliability can be very different.

Metrics like Jensen–Shannon Divergence help reveal these differences by quantifying how much a model's behaviour changes over time.

In practical terms, it helps answer questions like:

  • Does the model behave consistently?
  • Are its return distributions stable?
  • Or does its behaviour drift as market conditions change?

Models whose distributions shift strongly over time are often less robust than models whose behaviour remains structurally stable.
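One way to sketch such a drift check, assuming SciPy is available, is to compare a fixed reference window of returns against each later window of the same length. SciPy's `jensenshannon` returns the Jensen–Shannon *distance* (the square root of the divergence), so it is squared here; the window length, binning, and simulated returns are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon  # JS distance = sqrt of the divergence

def drift_scores(returns, window=250, bins=31):
    """Illustrative drift check: JSD between the first window of returns
    and each subsequent non-overlapping window of the same length."""
    returns = np.asarray(returns, dtype=float)
    edges = np.histogram_bin_edges(returns, bins=bins)
    ref, _ = np.histogram(returns[:window], bins=edges)
    scores = []
    for start in range(window, len(returns) - window + 1, window):
        cur, _ = np.histogram(returns[start:start + window], bins=edges)
        # Square the JS distance to get the divergence, bounded in [0, 1] for base 2.
        scores.append(jensenshannon(ref, cur, base=2) ** 2)
    return scores

# Simulated model: stable behaviour, then a regime change halfway through.
rng = np.random.default_rng(7)
stable = rng.normal(0.001, 0.01, 500)
drifted = rng.normal(0.02, 0.05, 500)
scores = drift_scores(np.concatenate([stable, drifted]))
print(scores)  # low score while behaviour is stable, high scores after the shift
```

A rising sequence of scores is exactly the "behavioural drift" signal described above.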


How darwintIQ uses Jensen–Shannon Divergence

In darwintIQ, trading models are continuously evaluated on a rolling window of recent market data.

Each model receives a fitness score that combines multiple aspects of behaviour:

  • profitability
  • drawdown characteristics
  • stability of returns
  • distribution behaviour

Jensen–Shannon Divergence contributes to this evaluation by measuring how stable certain statistical properties remain over time.
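To make the idea of a combined score concrete, here is a purely hypothetical fitness function over the four listed aspects. The weights, the individual formulas, and the `fitness` helper itself are assumptions for illustration; darwintIQ's actual scoring is not described in this article.

```python
import numpy as np

def fitness(returns, jsd_drift,
            w_profit=0.4, w_drawdown=0.2, w_stability=0.2, w_distribution=0.2):
    """Hypothetical fitness combining the four aspects listed above.
    Weights and formulas are illustrative, not darwintIQ's actual scoring."""
    returns = np.asarray(returns, dtype=float)
    profit = returns.sum()                                         # profitability
    equity = np.cumsum(returns)
    max_drawdown = np.max(np.maximum.accumulate(equity) - equity)  # drawdown characteristics
    stability = 1.0 / (1.0 + returns.std())                        # stability of returns
    distribution = 1.0 - jsd_drift                                 # distribution behaviour (low JSD is good)
    return (w_profit * profit - w_drawdown * max_drawdown
            + w_stability * stability + w_distribution * distribution)

# Identical returns, but one model shows strong behavioural drift (high JSD):
steady = fitness([0.01] * 20, jsd_drift=0.05)
drifting = fitness([0.01] * 20, jsd_drift=0.80)
print(steady > drifting)  # prints True
```

The point of the sketch is only that JSD enters as one term among several, so a model cannot score well on profit alone while its distribution drifts.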

Large divergence values can indicate that a model's behaviour is changing significantly, while smaller values suggest more consistent behaviour.

Importantly, darwintIQ never evaluates this metric in isolation. It is always combined with other metrics within the population of models.

The goal is not simply to reward the highest profit, but to identify models whose behaviour remains coherent under current market conditions.

This helps separate adaptive models from those that only appear profitable due to temporary structures in the market.


For the real quant nerds

Mathematically, Jensen–Shannon Divergence is a symmetrised and smoothed version of the Kullback–Leibler divergence.

Given two probability distributions \(P\) and \(Q\), the Jensen–Shannon Divergence is defined as:

\[
\mathrm{JSD}(P \parallel Q) = \frac{1}{2}\,\mathrm{KL}(P \parallel M) + \frac{1}{2}\,\mathrm{KL}(Q \parallel M)
\]

where

\[
M = \frac{1}{2}(P + Q)
\]

and \(\mathrm{KL}\) denotes the Kullback–Leibler divergence.

Unlike KL divergence, JSD has several properties that make it convenient for practical applications:

  • it is symmetric
  • it is always finite
  • it is bounded between 0 and 1 (when using log base 2)

In the context of trading models, this allows us to compare distributions of behaviour directly: if two distributions are identical, the divergence is 0, and the more they differ, the larger it becomes.
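The definition and the three listed properties can be checked directly with a small pure-Python implementation of the formula above (log base 2, so the result lies in \([0, 1]\)); the distributions used are arbitrary examples.

```python
import math

def kl(p, q):
    """Kullback–Leibler divergence KL(p || q) in bits; 0 * log(0) is taken as 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """JSD(P || Q) = 1/2 KL(P || M) + 1/2 KL(Q || M), with M = (P + Q) / 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.7, 0.2, 0.1]
q = [0.1, 0.3, 0.6]

print(jsd(p, p))                     # identical distributions -> 0.0
print(jsd(p, q) == jsd(q, p))        # symmetric -> True
print(jsd([1.0, 0.0], [0.0, 1.0]))  # non-overlapping distributions -> 1.0
```

The last line shows the upper bound being reached: for distributions with no overlap at all, the divergence in log base 2 is exactly 1.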

In evolutionary trading systems like darwintIQ, this can be useful for detecting behavioural drift inside a population of models. A model whose statistical profile changes too much may be adapting, or it may be becoming unstable.

The Jensen–Shannon Divergence therefore acts as a small but useful signal when evaluating model robustness within an evolving population.