The Hurst exponent is used as a measure of long-term memory of time series. It relates to the autocorrelations of the time series, and the rate at which these decrease as the lag between pairs of values increases. Studies involving the Hurst exponent were originally developed in hydrology for the practical matter of determining optimum dam sizing for the Nile river's volatile rain and drought conditions that had been observed over a long period of time. The name "Hurst exponent", or "Hurst coefficient", derives from Harold Edwin Hurst (1880–1978), who was the lead researcher in these studies; the use of the standard notation H for the coefficient relates to his name also.
In fractal geometry, the generalized Hurst exponent has been denoted by H or Hq in honor of both Harold Edwin Hurst and Ludwig Otto Hölder (1859–1937) by Benoît Mandelbrot (1924–2010). H is directly related to fractal dimension, D, and is a measure of a data series' "mild" or "wild" randomness.
The Hurst exponent is referred to as the "index of dependence" or "index of long-range dependence". It quantifies the relative tendency of a time series either to regress strongly to the mean or to cluster in a direction. A value H in the range 0.5–1 indicates a time series with long-term positive autocorrelation, meaning both that a high value in the series will probably be followed by another high value and that the values a long time into the future will also tend to be high. A value in the range 0–0.5 indicates a time series with long-term switching between high and low values in adjacent pairs, meaning that a single high value will probably be followed by a low value and that the value after that will tend to be high, with this tendency to switch between high and low values lasting a long time into the future. A value of H = 0.5 can indicate a completely uncorrelated series, but in fact it is the value applicable to series for which the autocorrelations at small time lags can be positive or negative but where the absolute values of the autocorrelations decay exponentially quickly to zero. This is in contrast to the typically power-law decay for the 0.5 < H < 1 and 0 < H < 0.5 cases.
To estimate the Hurst exponent, one must first estimate the dependence of the rescaled range on the time span n of observation: E[R(n)/S(n)] = C·n^H as n → ∞, where R(n) is the range of the cumulative deviations from the mean, S(n) is the standard deviation, and C is a constant. A time series of full length N is divided into a number of shorter time series of length n = N, N/2, N/4, ... The average rescaled range is then calculated for each value of n.
For each partial time series X_1, X_2, ..., X_n of length n:
1. Calculate the mean: m = (1/n) Σ_{i=1..n} X_i;
2. Create a mean-adjusted series: Y_t = X_t − m, for t = 1, 2, ..., n;
3. Calculate the cumulative deviate series: Z_t = Σ_{i=1..t} Y_i, for t = 1, 2, ..., n;
4. Compute the range: R(n) = max(Z_1, ..., Z_n) − min(Z_1, ..., Z_n);
5. Compute the standard deviation: S(n) = sqrt((1/n) Σ_{i=1..n} (X_i − m)^2);
6. Calculate the rescaled range R(n)/S(n) and average over all the partial time series of length n.
The Hurst exponent is estimated by fitting the power law E[R(n)/S(n)] = C·n^H to the data. This can be done by plotting the logarithm of E[R(n)/S(n)] as a function of log n, and fitting a straight line; the slope of the line gives H. Such a graph is called a pox plot. However, this approach is known to produce biased estimates of the power-law exponent. A more principled approach fits the power law in a maximum-likelihood fashion.
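The rescaled-range procedure and the log-log fit can be sketched as follows (a minimal NumPy sketch; function names are illustrative, and the simple least-squares fit shown here is subject to the bias noted above):

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic for one window, following steps 1-6 above."""
    m = x.mean()                      # 1. mean
    y = x - m                         # 2. mean-adjusted series
    z = np.cumsum(y)                  # 3. cumulative deviate series
    r = z.max() - z.min()             # 4. range
    s = x.std()                       # 5. standard deviation
    return r / s if s > 0 else np.nan

def hurst_rs(x, min_len=8):
    """Estimate H as the slope of log E[R(n)/S(n)] versus log n."""
    x = np.asarray(x, dtype=float)
    ns, rs = [], []
    n = len(x)
    while n >= min_len:               # n = N, N/2, N/4, ...
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        ns.append(n)
        rs.append(np.nanmean([rescaled_range(c) for c in chunks]))
        n //= 2
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope
```

Applied to uncorrelated white noise, the estimate should come out near 0.5 (somewhat above it for short series, reflecting the small-sample bias of the R/S statistic).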
The basic Hurst exponent can be related to the expected size of changes, as a function of the lag τ between observations, as measured by E[|X_{t+τ} − X_t|²] ∝ τ^(2H). For the generalized form of the coefficient, the exponent 2 here is replaced by a more general term, denoted by q.
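This lag-dependence gives an alternative estimator: the slope of log E[|X_{t+τ} − X_t|²] versus log τ is 2H. A minimal sketch under that scaling assumption (the function name and lag range are illustrative):

```python
import numpy as np

def hurst_from_increments(x, max_lag=20):
    """Estimate H from E[|X_{t+tau} - X_t|^2] ~ tau^(2H):
    the log-log slope of the mean-squared increment is 2H."""
    x = np.asarray(x, dtype=float)
    lags = np.arange(1, max_lag + 1)
    msd = [np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope / 2
```

For an ordinary Brownian path (cumulative sum of white noise), the mean-squared increment grows linearly in τ, so the estimate should be close to 0.5.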
There are a variety of techniques for estimating H; however, assessing the accuracy of the estimation can be a complicated issue. Mathematically, in one technique, the Hurst exponent can be estimated such that the generalized exponent H_q = H(q), for a time series g(t), t = 1, 2, ..., may be defined by the scaling properties of its structure functions S_q(τ):
S_q(τ) = ⟨|g(t + τ) − g(t)|^q⟩_T ∝ τ^(qH(q)),
where q > 0, τ is the time lag, and averaging is over the time window T ≫ τ, usually the largest time scale of the system.
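The structure-function definition translates directly into an estimator for H(q): compute S_q(τ) over a range of lags and take the log-log slope divided by q. A minimal sketch (the function name and lag range are illustrative):

```python
import numpy as np

def generalized_hurst(x, q=2, max_lag=20):
    """Estimate H(q) from S_q(tau) = <|g(t+tau) - g(t)|^q> ~ tau^(q H(q))."""
    x = np.asarray(x, dtype=float)
    lags = np.arange(1, max_lag + 1)
    sq = [np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(sq), 1)
    return slope / q
```

For a self-similar process such as ordinary Brownian motion, H(q) is flat in q, near 0.5 for both q = 1 and q = 2; multifractal series instead show H(q) varying with q.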
Practically, in nature, there is no limit to time, and thus H is non-deterministic as it may only be estimated based on the observed data; e.g., the most dramatic daily move upwards ever seen in a stock market index can always be exceeded during some subsequent day.
H is directly related to fractal dimension, D, where 1 < D < 2, such that D = 2 - H. The values of the Hurst exponent vary between 0 and 1, with higher values indicating a smoother trend, less volatility, and less roughness.
In the above mathematical estimation technique, the function H(q) contains information about averaged generalized volatilities at scale τ (only q = 1, 2 are used to define the volatility). In particular, the H_1 exponent indicates persistent (H_1 > ½) or antipersistent (H_1 < ½) behavior of the trend.
For the BRW (brown noise, 1/f²) one gets H_q = 1/2, and for pink noise (1/f) one gets H_q = 0. For the popular Lévy stable processes and truncated Lévy processes with parameter α it has been found that H_q = 1/α for q < α and H_q = 1/q for q ≥ α.
In the above definition two separate requirements are mixed together as if they were one. Here are the two independent requirements: (i) stationarity of the increments, x(t+T) − x(t) = x(T) − x(0) in distribution; this is the condition that yields long-time autocorrelations. (ii) Self-similarity of the stochastic process then yields variance scaling, but is not needed for long-time memory. E.g., both Markov processes (i.e., memory-free processes) and fractional Brownian motion scale at the level of 1-point densities (simple averages), but neither scales at the level of pair correlations or, correspondingly, the 2-point probability density.[clarification needed]
An efficient market requires a martingale condition, and unless the variance is linear in time this produces nonstationary increments, x(t+T) − x(t) ≠ x(T) − x(0). Martingales are Markovian at the level of pair correlations, meaning that pair correlations cannot be used to beat a martingale market. Stationary increments with nonlinear variance, on the other hand, induce the long-time pair memory of fractional Brownian motion that would make the market beatable at the level of pair correlations. Such a market would necessarily be far from "efficient".