



The Knowable Future:
Forecastability and the
Limits of Prediction
Applied research on horizon-specific forecastability and the limits of inference
in time series.

Our core research question:
How far into the future does a time series contain usable information about its own evolution?



Forecastability is the extent to which the past contains exploitable information about the future.
The Knowable Future is a research programme focused on measuring forecastability, mapping how predictive information changes across forecast horizons, and identifying the limits of prediction in time series. Rather than starting with model improvement, this work begins with a prior question: what structural limits to prediction are imposed by the decay of past-future dependence?
The programme develops pre-modelling diagnostics that estimate forecastability horizons using training data alone, and studies the implications of these limits for forecasting practice, model selection, and economic value.
Why the world is not fully deterministic
In 1814, Pierre-Simon Laplace imagined an intellect that knew the position and momentum of every particle in the universe. For such an entity, nothing would be uncertain: past and future would be equally visible. This became known as Laplace’s Demon. Two findings from physics defeat it, independently.
Quantum mechanics rules out local hidden-variable determinism. It does not merely say outcomes are hard to predict; it says no determinate value exists prior to measurement. Bell’s theorem (1964), confirmed by loophole-free experiments in 2015, shows that no account of underlying local facts can reproduce the observed correlations. Non-local deterministic interpretations, such as de Broglie-Bohm pilot-wave theory, remain logically possible, but only by allowing instantaneous action across arbitrary distances. What is gone, then, is the comfortable Laplacian picture of a locally deterministic world awaiting a diligent calculator. The first self-replicating molecule, the event that separated chemistry from biology, may well have been triggered by a quantum fluctuation. If so, everything that has ever lived, every civilisation, traces its existence to a single molecular accident that was not determined by anything prior to it.
Deterministic chaos defeats the Demon independently, and more deeply. Even in a perfectly deterministic universe, prediction remains bounded. In chaotic systems, tiny initial errors grow exponentially at a rate set by the Lyapunov exponent. Any imprecision in the starting state eventually expands across the system’s full dynamic range. Atmospheric predictability is therefore limited to roughly two weeks. That limit is not mainly quantum. It arises because chaotic systems cannot be measured precisely enough to constrain their trajectories indefinitely. Quantum mechanics enters only as a guarantee: Heisenberg’s uncertainty principle ensures a perfect initial specification is impossible even in principle. Chaos then amplifies the remainder.
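The exponential error growth described above is easy to see in a standard textbook system (our choice of illustration, not one analysed in this programme): the logistic map at r = 4, a chaotic map whose Lyapunov exponent is ln 2, so a tiny initial error roughly doubles at every step.

```python
# Toy illustration of sensitive dependence on initial conditions.
# The logistic map at r = 4.0 is chaotic with Lyapunov exponent ln(2).
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-10          # two trajectories, initial gap of 1e-10
errors = []
for _ in range(50):
    x, y = logistic(x), logistic(y)
    errors.append(abs(x - y))

# The gap grows from 1e-10 until it saturates the unit interval,
# the system's full dynamic range.
print(errors[0], errors[10], errors[-1])
```

Within a few dozen steps the microscopic initial discrepancy has expanded to the size of the attractor itself, which is exactly why finite-precision measurement bounds the forecast horizon of chaotic systems.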
The Demon therefore fails on two grounds. The universe may contain genuine randomness. And even a deterministic universe can produce systems that are practically unforecastable. Prediction is not just a computational problem. It is bounded by the system’s information structure.
That is where physics hands the problem to information theory. Shannon’s data processing inequality states that no transformation can recover information that was never present. Once mutual information between past and future has decayed, no model can reconstruct it. In many systems, the ceiling on predictive performance is set before modelling begins. Model capability is not the binding constraint. Information content is.
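The data processing inequality can be checked numerically on a toy Gaussian Markov chain (a generic illustration of the principle; the variables and noise levels are our own, not drawn from the text). For jointly Gaussian pairs, mutual information is a simple function of correlation, I = -0.5 ln(1 - rho^2), so comparing correlations compares informations directly.

```python
import math
import random

# Gaussian Markov chain X -> Y -> Z: each stage adds independent noise,
# so Z is a "processed" version of Y and can only lose information about X.
random.seed(1)
n = 100_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [x + random.gauss(0, 1) for x in xs]
zs = [y + random.gauss(0, 1) for y in ys]

def mi(a, b):
    """Gaussian mutual information from the sample correlation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)
    va = sum((u - ma) ** 2 for u in a) / len(a)
    vb = sum((v - mb) ** 2 for v in b) / len(b)
    rho = cov / math.sqrt(va * vb)
    return -0.5 * math.log(1.0 - rho ** 2)

# No downstream transformation of Y can recover information about X
# that the Y -> Z stage destroyed: I(X;Y) > I(X;Z).
print(mi(xs, ys), mi(xs, zs))
```

The same logic applies to forecasting: once the dependence between past and future has decayed, no model, however elaborate, can manufacture it back.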
Physics ended Laplace’s dream of perfect prediction. Information theory explains why. Forecasting therefore begins not with models, but with a measurement of how much of the future is encoded in the past.
What the past can tell us
Forecasting is often treated as a modelling problem. This research begins one step earlier, asking a more fundamental question: how much of the future is actually knowable from the past? The answer varies across series and horizons. Some systems retain enough structure to support meaningful prediction. Others do not. The Knowable Future is concerned with measuring that difference.
Drawing on information theory, specifically the mutual information between past observations and future values, it examines how predictive signal persists, weakens, or collapses as the forecast horizon extends. The aim is to estimate forecastability before committing substantial modelling effort, a question with practical weight across business, economics, engineering, public policy, and the physical sciences.
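As a minimal sketch of the idea, assuming nothing about the programme's actual estimators, one can watch past-future mutual information decay with horizon in a synthetic Gaussian AR(1) series, where the closed form I(h) = -0.5 ln(1 - phi^(2h)) makes the geometric decay explicit.

```python
import math
import random

# Hedged sketch (not the programme's diagnostic): a Gaussian AR(1)
# process x_t = phi * x_{t-1} + e_t. For jointly Gaussian pairs,
# I(a; b) = -0.5 * ln(1 - rho^2), so the sample correlation at lag h
# yields the mutual information between x_t and x_{t+h} directly.
random.seed(0)
phi, n = 0.9, 50_000
x = [0.0]
for _ in range(n):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

def mi_gaussian(series, h):
    """Estimate I(x_t; x_{t+h}) from the sample correlation at lag h."""
    a, b = series[:-h], series[h:]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)
    va = sum((u - ma) ** 2 for u in a) / len(a)
    vb = sum((v - mb) ** 2 for v in b) / len(b)
    rho = cov / math.sqrt(va * vb)
    return -0.5 * math.log(1.0 - rho ** 2)

# Predictive information falls geometrically as the horizon extends:
# the true value at lag h is -0.5 * ln(1 - 0.9 ** (2 * h)).
for h in (1, 5, 10, 20):
    print(h, round(mi_gaussian(x, h), 4))
```

Real series need nonparametric estimators rather than the Gaussian identity, but the shape of the question is the same: at what horizon does the curve sink into estimation noise?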
The deeper question, then, is not simply which model performs best, but whether the underlying process contains enough recoverable information to make forecasting worthwhile at all. By shifting attention from model choice to the limits of inference itself, this work aims to clarify when sophisticated forecasting is justified, when simpler approaches are sufficient, and when the structure of the problem places hard limits on what can reasonably be predicted.
An Information-Theoretic View of Forecastability
How Much of the Future Is Knowable from the Past?
