Why fractal analysis

The participant performed the task 10 times in each feedback condition. When dealing with real-world data, if fractal scaling is present it may be limited to a certain range of time scales. If this is not taken into account, the estimate of H may be inaccurate.

Before estimating H, then, it was important to visually inspect the plots of log₂ F(w) as a function of log₂ w to identify regions where linear scaling might be present. If fractal scaling appears limited, it may be necessary to restrict the range of the linear fit so as to exclude regions where linear scaling does not occur. Including regions where fractal scaling is actually absent can lead to inaccurate and less reliable H estimates (Cannon et al.).
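
As a minimal illustration of restricting the fit range, the following Python sketch (our own, not part of the original analysis) assumes the AFA fluctuation function F(w) has already been computed for a set of window sizes w, and estimates H as the slope of log₂ F(w) versus log₂ w over a chosen sub-range only; the function name and window bounds in the usage comment are hypothetical.

    import numpy as np

    def estimate_H(w, F, w_min, w_max):
        """Estimate H by fitting a line to log2 F(w) vs log2 w, restricted to
        window sizes between w_min and w_max (the chosen scaling region)."""
        w = np.asarray(w, dtype=float)
        F = np.asarray(F, dtype=float)
        mask = (w >= w_min) & (w <= w_max)      # keep only the linear scaling region
        slope, _ = np.polyfit(np.log2(w[mask]), np.log2(F[mask]), 1)
        return slope                            # slope of the fit is the H estimate

    # Hypothetical usage: fit the fast and slow scaling regions separately.
    # H_fast = estimate_H(w, F, w_min=3, w_max=15)
    # H_slow = estimate_H(w, F, w_min=15, w_max=257)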

In practice, it is desirable to make this process as objective and automated as possible to avoid bias; approaches to automating this step are discussed elsewhere (Kuznetsov et al.). For the sake of this tutorial, however, we chose the linear regions visually, after inspecting the AFA plots for each trial without the linear fits imposed, to examine the possibility of linear scaling. As often occurs with empirical data, as opposed to pure mathematical fractals, some of our time series yielded slightly curved log₂ F(w) functions.

Visual inspection of the AFA plots (see Figure 7) suggested two distinct regions of linear scaling, one at low w (i.e., faster time scales) and one at higher w (i.e., slower time scales).

[Figure 7. AFA plots for the time series of time estimates presented in Figure 6. The H_fGn values are indicated for each scaling region.]

In the first experimental session, the fast scaling region for the no-feedback condition spanned the lower range of window sizes (log₂ w).

The H_fGn value associated with this region indicated essentially uncorrelated fluctuations, while the slower scaling region for the no-feedback condition had a higher H_fGn value, indicating correlated fluctuations.

A similar pattern of results was found for performance in the feedback condition (see Figure 6, right panel). The fast scaling region during the first session again spanned the lower range of window sizes. One major difference compared to the no-feedback condition was the shorter length of the slow scaling region in the first session, which now spanned a narrower range of log₂ w values. The breakdown at larger log₂ w is likely due to an initial transient evident in the time series plot for this session: early in the session the participant consistently underestimated the 1-s interval, but then began to estimate it more accurately.

Because this occurred during only one part of the trial, it affected the slowest scaling region of the AFA plot. Finite, real-world time series are typically more complex than ideal simulated mathematical noises. For example, as was apparent in these time series, experimental data can contain multiple scaling regions. Partly, this may be because experimental data contain both the intrinsic dynamics of the process that generated the signal and the measurement noise inherent in any recording device.

Apart from that, the intrinsic dynamics of real-world signals may include singular events and non-stationarities that, if severe enough, can complicate many analyses, including AFA. Because of this, it is very important to carefully examine the raw data and the corresponding scaling plots before conducting any quantitative analyses. With regard to the dynamics of cognitive performance in this temporal estimation task, these results provide preliminary evidence of practice effects in the continuous time estimation task.

Practice led to a decrease in the H exponent of the slow scaling region, suggesting that responses at this scale became somewhat more uncorrelated with practice. Of course, our preliminary results have to be interpreted with caution because they are based on a single participant, and there are individual differences in the slow-scaling-region H values in this task (Torre et al.).

The differences between feedback conditions at the fast time scales were not expected, because previous literature reported anti-correlated dynamics at this scale (Lemoine et al.). Feedback clearly resulted in an increased tendency for anti-correlated, corrective dynamics at faster time scales, because participants were shown their performance relative to the 1-s benchmark.

They appeared to use that information to correct performance on a trial-by-trial basis. In the no-feedback condition, this information was not readily available, which led to essentially random performance at the fast time scales.

We applied AFA to known fractal signals and to real-world data from an experiment in human cognitive psychology that involved the repeated reproduction of a time interval. AFA recovered the H values of the known mathematical signals with high accuracy.

Linear scaling was well defined over a single region for these signals. Application of AFA to the experimental data revealed some of the complexities of applying fractal analyses to real data, particularly the issue of identifying linear scaling regions. We determined the scaling regions visually and then fit lines to them to obtain estimates of H. Often this is sufficient, but it is not an objective process, and it could be subject to bias in an experiment that involves testing a particular hypothesis or in an initial effort to classify a previously unanalyzed type of signal.

If visual selection of the scaling region is used, it should be done by multiple observers who are blind to the experimental conditions and study hypotheses, so that inter-rater reliability can be computed and bias avoided.

For the experimental time series we analyzed, two linear scaling regions were apparent rather than one (see also Kuznetsov et al.). Consistent with previous results obtained with other analysis methods, including spectral analysis (Lemoine et al.), the faster time scales yielded lower H_fGn values and were essentially random, white-noise processes, especially in the no-feedback condition, with a slight tendency toward anti-correlated fluctuations.

The longer time scales yielded higher H_fGn values, consistent with a correlated process close to idealized pink noise. The presence of feedback had some influence on the structure of the fluctuations of the repeated temporal estimates, as did the practice afforded by performance across consecutive experimental sessions.

One of these effects was that linear scaling for the slower time scale broke down at larger w for the first session in the no-feedback condition, but spanned the entire upper range of w for the last session.

These results show that AFA may be sensitive to experimental manipulations that affect the temporal structure of data series both with regard to the estimated H values and the range of w over which fractal scaling occurs.

Besides the issue of identifying linear scaling regions, AFA requires several other choices, such as the step size used to increment the window size w. In principle this choice should have little impact on H estimates, and it would not seriously affect computation time except perhaps for extremely long time series.

It could, however, have a strong impact on the ability to identify linear scaling regions, especially with regard to resolving scaling at faster time scales. The choice of polynomial order M for the local fits is also important, especially for signals that may have oscillatory or non-linear trends, as higher-order polynomials may be more effective at extracting those trends.

Typical choices of M = 1 or 2 seemed to provide about the same accuracy in estimates of H for the known signals we analyzed. Other factors that affect the ability to identify linear scaling include the sampling rate and the trial length, which, respectively, affect the ability to resolve faster and slower time scales. These are important choices. A very high sampling rate might reveal apparent scaling at very fast time scales, but if those time scales are not physically realistic, one should be cautious about interpreting them.

Increasing trial length may help reveal or resolve scaling over very long time scales, which may be very important when dealing with apparently non-stationary time series. Ideally, AFA should be used in conjunction with other methods, and converging results should be sought. Like all fractal analysis methods, AFA requires careful consideration of signal properties, parameter settings, and interpretation of results, and should not be applied blindly to unfamiliar signals.

It is particularly important to plot and carefully inspect the time series and the AFA plots to ensure that the apparent signal properties match the obtained results. In addition, as we noted previously, the appearance of linear scaling regions in an AFA plot is not a definitive test for fractal scaling. When used carefully, AFA may provide another useful tool for analyzing signals that may exhibit fractal dynamics.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

We create a map of the Koch curve covering the whole interval. The program starts with a given box side (in pixels), which is decreased at each step.

We obtain the data shown in Table 3, and we then calculate the parameters of the regression line and the residuals. Applying eq. A11, we calculate the probability P_N of obtaining our sequence of runs.

[Table 3. Data obtained by performing the box-counting algorithm on the Koch curve.]

The requirements are met. We have reliably measured D, obtaining a value close to the theoretical dimension of the Koch curve (log 4 / log 3 ≈ 1.26). Since this holds over three orders of magnitude, we may interpret it as a physically meaningful self-similarity. We can then fairly safely assume that the Koch curve is a fractal, which is, of course, well known.

If we were to perform such a fractal analysis on real objects, the main difficulty would probably be the lack of data. We therefore believe that fractal analysis is only worth carrying out when the phenomenon one wants to investigate is amenable to producing enough data to fulfil the criteria discussed above.

We would like to point out that there are works in the literature that meet the above criteria. In Turcotte's work there are both bad and good examples of fractal analyses, the latter being the ones where more data were available. In particular, phenomena linked to fragmentation exhibit clear linearity in the charts, at least visually; the author, in fact, does not provide the results of a regression analysis. Fragmentation-related phenomena therefore appear to be the most likely to produce good results.

We have critically reanalysed the way in which fractal dimension is commonly measured, finding that the intrinsically boring process of counting induces the use of too few points. We have developed a suite of computer programs to evaluate fractal dimension through box counting, which potentially solves this problem. The programs deal directly with images, implement a virtual screen limited only by computer memory, and include a zooming capability, which is necessary to study subsets of the whole picture.
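
The following Python sketch shows the core of a box-counting computation on a binary image, in the spirit of the approach described here; the halving scheme, the function name, the placeholder koch_image, and the use of NumPy are our assumptions rather than a description of the authors' actual programs.

    import numpy as np

    def box_count(image, box_sizes):
        """For each box side s (in pixels), count the s-by-s boxes that contain
        at least one foreground pixel of the binary image."""
        counts = []
        ny, nx = image.shape
        for s in box_sizes:
            n = 0
            for y in range(0, ny, s):
                for x in range(0, nx, s):
                    if image[y:y + s, x:x + s].any():   # box touches the object
                        n += 1
            counts.append(n)
        return np.array(counts)

    # The slope of log N(s) against log(1/s) estimates the box-counting dimension D.
    # sizes = [256, 128, 64, 32, 16, 8, 4, 2]           # box side decreased at each step
    # N = box_count(koch_image, sizes)
    # D, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(N), 1)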

Obviously, as with any algorithm, our codes are ineffective if the data are intrinsically deficient. Our suite of programs implementing the BC and tools for regression analysis is available by anonymous FTP from ibogeo.

References
Aviles, C., Scholz, C., and Boatwright, J.
Cowie, P., Vanneste, C., and Sornette, D.
Draper, N., and Smith, H.
Grassberger, P., and Procaccia, I.
Hirata, T. Fractal dimension of fault systems in Japan: fractal structure in rock fracture geometry at various scales, in Fractals in Geophysics.
Mandelbrot, B. Statistical self-similarity and fractional dimension, Science.
Mandelbrot, B. The Fractal Geometry of Nature, W. H. Freeman.
Norton, D., and Sorenson, S. Variations in geometric measures of topographic surfaces underlain by fractured granitic plutons, in Fractals in Geophysics.
Richardson, L.
Schertzer, D., and Lovejoy, S.
Scholz, H.
Snow, R. Fractal sinuosity of stream channels, in Fractals in Geophysics.
Turcotte, D.
Wirth, N.

The runs test is employed to compute the probability that a given sequence of runs might have occurred by chance.
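
As an illustration of this kind of test, here is a Python sketch of a standard exact runs-test calculation on the signs of the regression residuals; it is not necessarily identical to the formula referred to as eq. A11, and the function names are ours.

    from math import comb

    def runs_pmf(r, n1, n2):
        """Exact probability of exactly r runs in a random arrangement of n1
        positive and n2 negative residuals (Wald-Wolfowitz distribution)."""
        total = comb(n1 + n2, n1)
        if r % 2 == 0:
            k = r // 2
            return 2 * comb(n1 - 1, k - 1) * comb(n2 - 1, k - 1) / total
        k = (r - 1) // 2
        return (comb(n1 - 1, k - 1) * comb(n2 - 1, k)
                + comb(n1 - 1, k) * comb(n2 - 1, k - 1)) / total

    def runs_test(residuals):
        """Count runs of residual signs and return the probability of observing
        that many runs or fewer by chance (few runs suggest systematic curvature)."""
        signs = [1 if x >= 0 else -1 for x in residuals]
        n1, n2 = signs.count(1), signs.count(-1)
        runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
        if n1 == 0 or n2 == 0:                  # all residuals on one side: a single run
            return runs, 1.0
        p = sum(runs_pmf(r, n1, n2) for r in range(2, runs + 1))
        return runs, p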

Note that no test is powerful with few data. Because manual execution of the BC is such a laborious and inaccurate task, using a computer is the only sensible choice. Here we describe the algorithm implementing the method.

To see why this is so, consider the plots shown in Figure 9. When the prices are displayed over a narrower range (top of Figure 9), we find that a minimum of 37 boxes are needed to entirely cover the trace.

However, when the price range is expanded (effectively increasing the domain-to-range aspect ratio of the data; bottom of Figure 9), the number of boxes needed to cover the trace falls. While it is entirely reasonable to overlay a spatial figure with boxes of a well-defined area in the case of a box-counting analysis of a spatial fractal, the concept of a square drawn on a plot with incompatible and independently scalable axes is ill-defined. In some cases, this inadequacy is resolved by adopting conventions that eliminate such ambiguity.

For example, a time-series trace may be normalized in its x- and y-axes such that the domain and range of the plot each run from 0 to 1, and the structure may be analyzed via a box-counting analysis that utilizes a square grid that just circumscribes the trace.
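
A simplified Python sketch of this normalization convention follows (our construction; the grid size, the names, and the treatment of each segment as vertical within a single column are simplifying assumptions).

    import numpy as np

    def normalized_box_count(t, x, n_boxes):
        """Box-count a time-series trace after rescaling both axes to [0, 1],
        using an n_boxes-by-n_boxes square grid that circumscribes the trace."""
        t = np.asarray(t, dtype=float)
        x = np.asarray(x, dtype=float)
        t = (t - t.min()) / (t.max() - t.min())
        x = (x - x.min()) / (x.max() - x.min())
        cols = np.minimum((t * n_boxes).astype(int), n_boxes - 1)
        rows = np.minimum((x * n_boxes).astype(int), n_boxes - 1)
        occupied = set()
        for i in range(len(t) - 1):
            lo, hi = sorted((rows[i], rows[i + 1]))
            for r in range(lo, hi + 1):         # cells spanned between consecutive samples
                occupied.add((cols[i], r))
        return len(occupied)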

While such a normalization convention may provide a consistent method for investigating the relative scaling properties among a set of related time-series traces, the absolute values of the dimensions produced by such analyses would remain essentially arbitrary.

Developing a fractal analysis technique that is appropriate for time-series structures generally amounts to taking one of two approaches: (1) treat the time-series structure as a geometric figure without a well-defined aspect ratio, or (2) treat the time-series structure as an ordered record of a process that exhibits a quantifiable degree of randomness. Following the latter approach, Harold Edwin Hurst introduced a formalism for quantifying the nature of self-affine time-series structures in a paper on the long-term storage capacity of water reservoirs [8].
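
Hurst's rescaled-range (R/S) statistic is the classic realization of this second approach; the Python sketch below is our own summary of the standard procedure rather than a reproduction of the original formulation, and the function name is ours.

    import numpy as np

    def rescaled_range_H(x, block_sizes):
        """Estimate the Hurst exponent via rescaled-range (R/S) analysis: within
        blocks of length m, the range of cumulative deviations from the block mean,
        divided by the block standard deviation, grows on average as m**H."""
        x = np.asarray(x, dtype=float)
        rs_means = []
        for m in block_sizes:
            rs = []
            for b in range(len(x) // m):
                seg = x[b * m:(b + 1) * m]
                dev = np.cumsum(seg - seg.mean())
                if seg.std() > 0:
                    rs.append((dev.max() - dev.min()) / seg.std())
            rs_means.append(np.mean(rs))
        H, _ = np.polyfit(np.log(block_sizes), np.log(rs_means), 1)
        return H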

Hurst exponents in different ranges indicate qualitatively different behavior: values above 0.5 indicate persistent, positively correlated increments, values below 0.5 indicate anti-persistent increments, and H = 0.5 corresponds to ordinary Brownian motion with uncorrelated increments. The Hurst exponent may also be described as a measure of long-range correlations within a data set, such that measuring these correlations as a function of interval width may provide another measurement of the Hurst exponent.
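
One common realization of this idea is the aggregated-variance method, sketched below in Python; we present it only as an illustration, and it is an assumption on our part that this is the "variance method" criticized in the text below.

    import numpy as np

    def aggregated_variance_H(x, block_sizes):
        """Estimate H for a noise-like (increment) series: the variance of block
        means is expected to scale as m**(2H - 2) with block size m."""
        x = np.asarray(x, dtype=float)
        variances = []
        for m in block_sizes:
            n_blocks = len(x) // m
            means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
            variances.append(means.var())
        slope, _ = np.polyfit(np.log(block_sizes), np.log(variances), 1)
        return 1.0 + slope / 2.0                # slope = 2H - 2, so H = 1 + slope/2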

In practice, however, the variance method is found to produce a poor estimate of the Hurst exponent. As another means of quantifying the fractal properties of time-series traces, we now turn our attention to a method proposed by Benoit Dubuc in a paper [9] on the fractal dimension of profiles. The trace under consideration is a fractional Brownian motion (fBm), whose properties are discussed below. Within each window, the linear best-fit line to the data within that window is calculated, resulting in a series of disconnected straight lines.

That is, the series of disconnected best-fit lines overlap in pairs, such that each index in the domain of the original data set is matched with respective points on each of two overlapping fit lines (with the exception of the n data points at either end of the trace). Then, within each window j, construct the stitched curve y_w(l). Conceptually, each value y_w(l) may be thought of as a weighted average of the values of the two best-fit lines defined at that index, weighted so as to be inversely proportional to the distance between that index and the midpoint of the respective window.
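
A compact Python sketch of this construction is given below. It reflects our reading of the description above (windows of width w = 2n + 1 overlapping by n + 1 points, polynomial fits stitched with linear weights); the exact indexing conventions of the published algorithm may differ, and the function name is ours.

    import numpy as np

    def afa_fluctuation(x, n, order=1):
        """Adaptive detrending for half-window size n (window width w = 2n + 1):
        fit a polynomial of the given order in overlapping windows, stitch the
        fits with linear weights, and return the rms deviation F(w) of x from
        the stitched trend."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        starts = range(0, N - (2 * n + 1) + 1, n)    # windows overlap by n + 1 points
        fits = []
        for s in starts:
            idx = np.arange(s, s + 2 * n + 1)
            u = idx - idx[0]                         # local coordinate keeps the fit well conditioned
            coeffs = np.polyfit(u, x[idx], order)
            fits.append((idx, np.polyval(coeffs, u)))
        trend = np.full(N, np.nan)
        idx0, y0 = fits[0]
        trend[idx0] = y0
        for idx1, y1 in fits[1:]:
            l = np.arange(n + 1)
            w2 = l / n                               # weight grows toward the new window's midpoint
            w1 = 1.0 - w2
            trend[idx1[:n + 1]] = w1 * trend[idx1[:n + 1]] + w2 * y1[:n + 1]
            trend[idx1[n + 1:]] = y1[n + 1:]
        valid = ~np.isnan(trend)
        return np.sqrt(np.mean((x[valid] - trend[valid]) ** 2))

    # Sweep window sizes and estimate H from the slope of log2 F(w) vs log2 w.
    # x should be a random-walk-like signal (integrate a noise series first if needed).
    # ns = [2, 4, 8, 16, 32, 64]
    # F = [afa_fluctuation(x, n) for n in ns]
    # H, _ = np.polyfit(np.log2([2 * n + 1 for n in ns]), np.log2(F), 1)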

[Figure: Examples of applying the AFA procedure at several values of N, corresponding to the window width w discussed in the text. Traces are vertically offset for clarity.]

Note that smaller values of N yield approximations that are increasingly similar to the trace under consideration. As w is decreased, y_w(t) becomes a better approximation to x(t); the scaling behavior of this fidelity as w is varied is used to determine the Hurst exponent. Each of the fractal analysis techniques discussed above is best understood as providing an estimate of the fractal dimension or Hurst exponent that characterizes a given time-series data set. The sections that follow present a method for evaluating the fidelity of these estimates that was developed and applied by the authors to the fractal analysis techniques under consideration.

A noise trace, as an example of a time-series structure, may be described as a single-valued function of a single independent variable. A variety of methods exist for quantifying the statistical properties of noise traces. Power-law noise represents a significant and broad class of noise traces. Brownian motion generally may refer to a process extending in any number of dimensions; however, we restrict our attention to brown noises that may be understood as a time-dependent plot of the position of a particle undergoing Brownian motion along one dimension.

Given that a Brownian motion may be described as the cumulative sum of a series of random, independent steps, it is straightforward to generate a Brownian motion trace as a cumulative integral of a white noise trace. For our purposes, we define a white noise trace as a series of values with zero mean taken from a normal distribution (i.e., Gaussian white noise). The cumulative sum of Gaussian white noise results in Brownian motion.
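
In Python, for example (a minimal sketch; the seed and trace length are arbitrary choices of ours):

    import numpy as np

    rng = np.random.default_rng(seed=0)
    white = rng.normal(loc=0.0, scale=1.0, size=10_000)   # Gaussian white noise, zero mean
    brown = np.cumsum(white)                              # cumulative sum -> Brownian motion trace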

This property represents an example of statistical self-affinity, in which the observed statistical properties within the intervals are preserved when the x and y axes are scaled by distinct factors (specifically, h and h^H, respectively).

Quantifying self-affinity using the formalism of the Hurst exponent motivates drawing a parallel between the Hurst exponent and the fractal dimension, as follows.

[Figure: Deriving a relationship between the Hurst exponent and the fractal dimension. The self-affinity of an fBm trace leads to an estimate of the number of square boxes needed to cover the trace at a given length scale, motivating a relationship between H and D_F. See text for details.]
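
The argument can be completed along standard lines (our sketch of the usual derivation, not necessarily the one in the cited reference): over a horizontal interval of width ε, self-affinity implies a vertical excursion of order ε^H, so covering that column requires about ε^H / ε boxes of side ε, and there are 1/ε such columns. Hence

    N(\epsilon) \;\sim\; \frac{1}{\epsilon} \cdot \frac{\epsilon^{H}}{\epsilon} \;=\; \epsilon^{H-2},
    \qquad
    D_F \;=\; -\lim_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log \epsilon} \;=\; 2 - H,

so a one-dimensional fBm profile with 0 < H < 1 has a box-counting dimension between 1 and 2.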

This disparity serves to highlight a general distinction between the Hurst exponent and the fractal dimension as descriptors of a time-series trace. In practice, it is impractical to use a spectral analysis to evaluate the fractal properties of a time-series structure, owing to the imprecision, relative to the aforementioned fractal analysis techniques, of applying a power-law best-fit curve to characterize the spectral decomposition of a trace.

This relationship may be derived by considering the two-point autocorrelation function of the trace. However, systematic study [15] demonstrates that such a relationship is generally not very robust. Indeed, it is straightforward to test this robustness, in analogy to the investigation performed in that reference.

[Figure 15. Each error bar represents one standard deviation from the mean value of D_F recorded for each set of 20 traces. Lines connecting the data points are provided as a guide to the eye.]

The framework of the investigation summarized in Figure 15 may be applied to a more thorough investigation of the fidelity of each fractal analysis technique discussed above.

That is, if we generate an fBm trace with a well-defined Hurst exponent and subject it to the analysis techniques under consideration, we may evaluate the robustness of each technique. Further, by generating fBm traces with well-defined Hurst exponents and modifying the traces to better resemble real-world data sets, we may gain insight into how best to interpret the results of analyzing experimentally derived data.

A variety of methods exist for generating a fractional Brownian motion trace that exhibits a well-defined, predetermined Hurst exponent. Examples of such methods include random midpoint displacement, Fourier filtering of white noise traces, and the summation of independent jumps [14].
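
A minimal Python sketch of the Fourier-filtering approach follows, assuming the standard relation β = 2H + 1 between the spectral exponent of fBm and its Hurst exponent; the function name and normalization choices are ours.

    import numpy as np

    def fbm_fourier(n, H, seed=None):
        """Generate an approximate fBm trace of length n with Hurst exponent H by
        shaping the spectrum of white noise so that power falls off as 1/f**(2H+1)."""
        rng = np.random.default_rng(seed)
        spectrum = np.fft.rfft(rng.normal(size=n))
        freqs = np.fft.rfftfreq(n)
        scale = np.zeros_like(freqs)
        scale[1:] = freqs[1:] ** (-(2 * H + 1) / 2)   # amplitude ~ f**(-beta/2), beta = 2H + 1
        trace = np.fft.irfft(spectrum * scale, n)
        return trace - trace[0]                       # start the trace at zero for convenience

    # Hypothetical check: generate a trace with known H and re-estimate it with one
    # of the estimators sketched earlier (e.g., rescaled_range_H on np.diff(trace)).
    # x = fbm_fourier(4096, H=0.7, seed=1)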
