As technologies advance and software speed begins to allow derived measures to be used for feedback, the field is being offered many new neurofeedback tools, including ICA-based feedback, LORETA-based feedback, and Z-score feedback.
All of these new tools will require clinical validation before they can be considered standard techniques within our field’s armamentarium of efficacious techniques and clinical applications. All of them offer great hope at this time, with preliminary results, but careful clinical outcome studies remain to be performed.
In this brief note I will discuss Z-score feedback. This promising technique sets normative boundaries around the mean of many features of the EEG and allows feedback to be controlled by those parameters. This obviously offers great hope for clinical outliers, as their Z-score divergence should be related to their pathology. One difficulty is that database Z-scores also show divergence when an adaptive or counter-balancing feature is being used to cope with an abnormal finding. A crutch is not a normal finding, but you can’t walk without it if you have a broken leg.
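As a purely statistical illustration of the core idea (the numbers and the ±2 SD cutoff below are hypothetical and for illustration only, not any vendor’s actual normative values or thresholds), a Z-score feedback contingency reduces to a standardized distance from the database mean:

```python
def z_score(value, norm_mean, norm_sd):
    """Standardized deviation of a client metric from a normative mean."""
    return (value - norm_mean) / norm_sd

# Hypothetical client theta amplitude vs. hypothetical database norms.
client_theta = 32.0           # microvolts (illustrative)
norm_mean, norm_sd = 20.0, 5.0

z = z_score(client_theta, norm_mean, norm_sd)   # 2.4 SD above the mean
within_bounds = abs(z) <= 2.0                   # an illustrative feedback boundary
```

The clinical question raised above is precisely whether a given out-of-bounds Z, like this one, reflects pathology or an adaptive coping mechanism that should be left alone.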
This suggests that the selection of which Z-score features to include as feedback contingencies, and which to “ignore,” will become an important part of clinical decision making with these new tools. Training away an adaptive coping mechanism is not a proper NF Z-score targeting choice.
One area that is not well discussed in the field of qEEG is how poorly the databases characterize shifts in the frequency “tuning” of the EEG. The NeuroGuide database reports a peak frequency, but the calculation is not for the peak; it is for a “centroid,” which is more closely related to the mean than to the peak frequency. Nx-link does not report the peak at all, but uses a mean frequency calculation. The BRC database uses a peak-frequency-of-alpha statistic, but it is constrained to looking within the predefined alpha band.
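The difference between a true peak and a centroid is easy to see on a toy spectrum (this sketch is mine, not any database’s actual algorithm; the spectrum shape is made up for illustration). A power-weighted mean gets pulled toward the background, away from the actual spectral peak:

```python
import numpy as np

def peak_frequency(freqs, power):
    """Frequency at the spectral maximum (the true peak)."""
    return freqs[np.argmax(power)]

def centroid_frequency(freqs, power):
    """Power-weighted mean frequency (a centroid), closer to a mean than a peak."""
    return float(np.sum(freqs * power) / np.sum(power))

# Toy spectrum: a narrow 9.5 Hz alpha peak riding on a 1/f-like background.
freqs = np.linspace(1.0, 30.0, 291)            # 0.1 Hz resolution
power = 10.0 / freqs + 40.0 * np.exp(-((freqs - 9.5) ** 2) / 0.5)

peak = float(peak_frequency(freqs, power))     # 9.5 Hz
centroid = centroid_frequency(freqs, power)    # pulled below 9.5 by the background
```

A centroid statistic like this will under-report a sharp alpha peak whenever there is appreciable slow background activity, which is one way a “peak frequency” report can mislead.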
Databases report “too much” and “not enough” amplitude/magnitude/power, but they do not tell you whether that value would be normal at a different frequency tuning. An example illustrates this important concept. Take a normal amount of 9.5 Hz sinusoidal alpha, dominant posteriorly, with normal coherence relationships… let’s arbitrarily say there are 50 microvolts of amplitude in the alpha spindles. Now shift this alpha tuning 2 Hz slower, so that 7.5 Hz is the sinusoidal frequency. What does the database tell you?
Databases will say there is too much 7.5 Hz power, and that it is hypercoherent, since the database does not expect alpha at 7.5 Hz. In reality, the alpha frequency is slow, but 50 microvolts is not too much power for the background, and it really is not hypercoherent; it is just too slow.
Frequency tuning issues are so poorly captured by the databases that database-driven training will not do a good job of normalizing the client’s function… suppressing the background’s normal power and coherence relationships is not appropriate, yet those are exactly the values the database would use as contingencies for NF.
This would suggest that frequency shifted clients may comprise another group that will require special adaptations for Z-score based feedback to be properly applied.
One other area that deserves some discussion is the use of NF in non-medical applications for “peak performance.” By definition, these peak states are not a common occurrence: they are seen in uniquely gifted athletes, scholars, and business leaders, who are themselves uncommon, and even in these individuals the states are not always present. The states being trained for are not statistically “mean-oriented”; rather, they exist as a unique pattern of outliers that cannot be captured in univariate statistics such as Z-scores. It requires a flexible nervous system to achieve these outlier states, and a resilient nervous system to “return” or “recover” and continue to function “normally.”
These observations suggest that peak performance may not be the best application for Z-score feedback, though this is hypothetical, and requires the validation only achieved with experience over the years.
The database selected will also become an issue. NeuroGuide, for example, is severely restricted in frequency range, with the amplifier response stopping at 28 Hz, as seen in the FRC curve (Fig 1). It is not possible to do gamma-based feedback with Z-scores if the database used does not extend to gamma.
What is clear at this time is that Z-score feedback remains experimental until the validation studies are performed. Though it is a promising new application, there are areas that deserve special attention even at this early stage of evaluating this emerging technique: coping mechanisms, frequency shifts, peak performance applications, and database limitations.
The vendors who promote Z-score feedback are all adamant that it does not preclude the need for client evaluation, but rather that it increases the complexity of the evaluation, as various features are selected or de-selected as feedback contingencies to account for client coping mechanisms, frequency shifting issues, and other database inadequacies.
Welcome to the New World of high tech clinical application tools, please check your expectation that this will be “quick and easy” at the door. More on LORETA and ICA based neurofeedback later.
Very good point regarding individuals with peak frequency scores that are atypically high or low. In these cases, would you suggest we interpret any coherence deviations with even more caution, or disregard them entirely? Many times when I get an abnormal read from the neurologist, such as temporal lobe slowing or sharp epileptiform activity, and proceed to normalize these with NFB and then re-Q, the second Q shows coherence readings that are very different from what I got when the abnormality was present. Would you say this is an analogous situation?
An abnormal EEG finding can often change coherence. Coherence is a phase-stable spatial pattern of a spectral feature, and a focal spectral event will not have the same spatial phase pattern as the normal background rhythmicity does, so it will be an outlier in Z-score space that can be normalized by removing the focal event… not by altering the background coherence.
Shifted frequencies cause a background coherence pattern to be misread by the database as an aberrant pattern, when it may be perfectly normal for the rhythm if it were tuned (frequency adjusted) properly… altering the coherence is not the NF fix in this situation; rather, the fix is re-tuning the aberrantly shifted frequency.
The coherence needs to be understood… and the decision whether to feed it back or not should be based on a real coherence issue, not on a distortion due to aberrant tuning or a focal event.
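The sense in which coherence is a phase-stable spectral relationship can be shown with a toy simulation (this sketch is mine, with no windowing or overlap, so it is a simplified version of the usual Welch-style estimator; sampling rate, segment length, and signal amplitudes are arbitrary). Two channels sharing a rhythm show high coherence at that rhythm’s frequency and near-chance coherence where they share nothing:

```python
import numpy as np

def msc(x, y, fs, nperseg):
    """Magnitude-squared coherence from segment-averaged cross-spectra
    (simplified Welch scheme: rectangular window, no overlap)."""
    nseg = len(x) // nperseg
    fft = lambda s: np.fft.rfft(s[: nseg * nperseg].reshape(nseg, nperseg), axis=1)
    X, Y = fft(x), fft(y)
    sxy = (X * np.conj(Y)).mean(axis=0)      # cross-spectrum (keeps phase)
    sxx = (np.abs(X) ** 2).mean(axis=0)
    syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.fft.rfftfreq(nperseg, 1.0 / fs), np.abs(sxy) ** 2 / (sxx * syy)

rng = np.random.default_rng(0)
fs = 256                                     # sampling rate, Hz
t = np.arange(0, 20, 1.0 / fs)               # 20 seconds of "EEG"

# Two channels sharing a phase-stable 9.5 Hz rhythm plus independent noise.
shared = np.sin(2 * np.pi * 9.5 * t)
ch1 = shared + 0.5 * rng.standard_normal(t.size)
ch2 = 0.8 * shared + 0.5 * rng.standard_normal(t.size)

f, cxy = msc(ch1, ch2, fs=fs, nperseg=512)
coh_95 = float(cxy[np.argmin(np.abs(f - 9.5))])   # high: shared, phase-stable rhythm
coh_25 = float(cxy[np.argmin(np.abs(f - 25.0))])  # low: independent noise only
```

Because the estimate depends on a stable phase relationship across segments, a focal event or a mis-tuned rhythm can shift the number without the underlying connectivity being abnormal, which is the interpretive caution made above.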