qEEG Artifacting

The qEEG represents the statistical manipulation of the raw EEG, so an understanding of these manipulations should precede any discussion of the qEEG's clinical indications for protocols. Without such knowledge, any given finding may be misinterpreted.

Following the careful recording of the EEG, quantitative analysis begins with the selection of the data to be used in the Fourier transform. The Fourier analysis assumes there are no transients (epileptic discharges, episodic voltage changes, etc.) or state changes (light sleep, drug effect, mental task, etc.), so these must be avoided when selecting data for a qEEG comparison against an eyes-closed resting database. Some eyes-open and task databases have become available more recently (Hudspeth, Sterman, Duffy, etc.).

A transient is an event with a rapid onset and ending, and an increase in amplitude of greater than 50% over the ongoing activity. Epileptiform activity is one common example. The only time transient discharges may validly be included is when dipole localization or "mapping" of the sources of this activity is the intent; in that circumstance only the significant discharges should be sampled, with the ongoing background treated as the state change and eliminated.
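As a rough illustration of the amplitude criterion above, the sketch below flags samples that exceed a running estimate of the ongoing activity by more than 50%. It is a minimal Python example with assumed values (sampling rate, window length, simulated signal) and does not replace visual review of the record.

```python
import numpy as np

def flag_transients(epoch, fs, baseline_window_s=2.0, threshold=1.5):
    """Flag samples whose amplitude exceeds the ongoing activity by more
    than 50% (threshold 1.5x a running RMS estimate). A rough sketch only."""
    win = int(baseline_window_s * fs)
    rms = np.sqrt(np.convolve(epoch ** 2, np.ones(win) / win, mode="same"))
    return np.abs(epoch) > threshold * rms

# Example with assumed values: 10 Hz background plus a brief large discharge
fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = 20 * np.sin(2 * np.pi * 10 * t)   # ongoing alpha-like activity (uV)
eeg[2 * fs:2 * fs + 25] += 80           # simulated transient
print("samples flagged:", int(flag_transients(eeg, fs).sum()))
```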

Less rigorous selection standards exist for data not intended for database comparison, such as reading or other task-related data. Usually these task data are only used for gross comparison to the more carefully collected steady-state data, looking for gross changes in brain function. When the intent is comparison to an eyes-closed normative database, transients will increase the variability of the dataset, but will be averaged out in the mapping unless persistent or very prominent. At the same time, these transients will alter the dataset's standard deviation from the norm.

State changes typically include sleep stages appearing in data collected for comparison to an eyes-closed awake database. Stage 1 sleep is a subtle drowsy state; people recorded in it will usually deny being drowsy when alerted. The alpha begins to wax and wane, with subtle increases in theta, occasionally slow rolling eye movements, and decreased EMG tonus. This is just prior to stage 2, where an object being held will slip from grasp, alerting the client; most people only realize they were drowsing once stage 2 is reached. Many people doing a repetitive or "automatic pilot" type task will be in stage 1 without being aware of time passing. This is a dangerous situation if a response to change is needed, as there are delays in reaction time associated with this state.

The problem with stage 1 (drowsing) data being added to a dataset is that it mixes states, violating the FFT assumptions and making database comparison validity more than merely suspect: the inclusion invalidates the comparison.

The task of artifacting is to sample enough data to provide reliable maps while maintaining the validity of the sampling, taking no state changes or transients. The amount of data acquired should provide highly repeatable, reliable mapping. The time required to achieve this is different for each frequency.

Beta becomes reliable in the first 30 to 45 seconds, with alpha following at 60 to 90 seconds. Theta's intermittent nature makes its reliability the slowest to establish, requiring 120 to 180 seconds. Delta is reliable at about 120 seconds. Our lab tries to get 120 seconds of data when possible.

Reliability may also be established with split-half replication. This looks at the lability within the sampled data by assessing the similarity of the repeatable parts of the two half-datasets, with the more variable parts viewed with less confidence.
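A minimal sketch of the split-half idea, assuming band power has already been computed per epoch and per channel; the Spearman-Brown step used to estimate full-length reliability is a common convention, not something tied to any particular qEEG package.

```python
import numpy as np

def split_half_reliability(epoch_powers):
    """epoch_powers: array of shape (n_epochs, n_channels) holding one band's
    power per artifact-free epoch. Correlates the averages of odd- and
    even-numbered epochs, then applies the Spearman-Brown correction."""
    odd = epoch_powers[0::2].mean(axis=0)
    even = epoch_powers[1::2].mean(axis=0)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Example with simulated data: 60 one-second epochs, 19 channels
rng = np.random.default_rng(0)
powers = rng.normal(10, 2, size=(60, 19)) + np.linspace(0, 5, 19)
print(round(split_half_reliability(powers), 2))
```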

The time span, or length, of the epochs selected determines the sensitivity to the slower frequencies. The Fourier transform must have a complete waveform within the sampled epoch for a frequency to be detected and quantified; sensitivity down to 0.5 Hz is thus achieved only when epochs of 2 seconds or longer are sampled. There is a downside to too long an epoch: it increases the likelihood of including state changes and transients. When clean, state-stable data exist in long stretches, though, they should be sampled.
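The relationship between epoch length and low-frequency sensitivity can be checked directly: the lowest non-zero FFT bin is 1 divided by the epoch length in seconds. The sampling rate below is an assumed example.

```python
import numpy as np

fs = 256                               # assumed sampling rate, samples/s
for epoch_seconds in (1, 2, 4):
    n = int(fs * epoch_seconds)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    # the first non-zero bin is the lowest frequency the epoch can resolve
    print(f"{epoch_seconds} s epoch -> {freqs[1]:.2f} Hz resolution")
# 1 s -> 1.00 Hz, 2 s -> 0.50 Hz, 4 s -> 0.25 Hz
```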

Some equipment has preset epoch lengths and interactions between the sampling rate and the epoch length, which makes flexible sampling problematic. In some equipment the epochs cannot be adjusted in time to "slide" past artifacts, making it difficult to sample the clean data in a record containing intermittent events such as eye movements or other transients. Careful selection of equipment should precede entry into this field.

Artifacting is mostly concerned with eliminating the more common artifacts of eye movement and EMG, as well as movement or electrode artifacts. This cleaning of the data is part of the art of doing good qEEGs, though the science of adequate sampling and the assumptions of the Fourier transform must be kept in mind.

New artifacts introduced by digital processing

The digital recording and processing of the raw analog EEG waveform should be understood in technical detail in order to properly interpret the resultant maps and numerical tables of findings.

The EEG is converted from analog to digital data by an analog-to-digital (A-D) converter, yielding a digital dataset. The sampling rate and the bit length of the computer data determine the resolution of the resultant image and tabular datasets. The faster the sampling rate, the higher the frequency that can be resolved, with the minimum sampling rate defined by the Nyquist principle as 2 times the frequency being resolved.

Proper reproduction of the EEG for visual inspection requires a more conservative sampling rate than the 2:1 Nyquist ratio. How far above 2:1 to go is a matter of individual preference: few would choose less than 128 samples per second, and most would prefer 500 to 1000 samples per second. (Manufacturers with set buffer sizes for the epochs, like the Lexicor, will however need to consider the impact on loss of lower frequency sensitivity; for this situation 128 to 256 is the highest reasonable choice.)
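To illustrate the trade-off noted parenthetically above, the sketch below assumes a hypothetical fixed epoch buffer of 256 samples and shows how raising the sampling rate shortens the epoch and raises the lowest resolvable frequency; the buffer size is illustrative, not a specific device specification.

```python
# Assumed fixed epoch buffer of 256 samples (illustrative, not a device spec)
buffer_samples = 256
for fs in (128, 256, 512):
    epoch_seconds = buffer_samples / fs
    lowest_hz = 1 / epoch_seconds        # one full cycle must fit the epoch
    print(f"{fs} samples/s -> {epoch_seconds:.1f} s epoch, "
          f"lowest resolvable frequency {lowest_hz:.2f} Hz")
```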

The channels should be sampled simultaneously, to avoid remontaging error or slew. If sampling is not simultaneous, a faster sampling rate will reduce this error (as will burst-mode sampling), reducing the phase or time-base error to a minimum.
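A rough sense of the size of this error can be had by assuming a single converter that steps evenly through the channels within each sample period (one possible, hypothetical hardware model):

```python
def max_interchannel_skew_ms(n_channels, fs):
    """Worst-case offset (ms) between the first and last channel when one
    converter steps evenly through the channels within each sample period,
    instead of sampling all channels at the same instant."""
    return 1000.0 * (n_channels - 1) / (n_channels * fs)

# 19 channels: the skew shrinks as the sampling rate rises
for fs in (128, 256, 512):
    print(f"{fs} samples/s -> {max_interchannel_skew_ms(19, fs):.2f} ms skew")
```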

The bit length of the computer "word" processed by the CPU affects the amplitude resolution in qEEG, irrespective of the sampling rate. The longer the bit length, the better the resolution (a bit length of 12 is acceptable, but 16 is preferred; older units with 8-bit processing are not fully adequate without further scaling adjustments).
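The effect of bit length on amplitude resolution is easy to quantify. Assuming, for illustration, a total input range of 1000 µV (±500 µV), the smallest representable voltage step is the range divided by 2 raised to the bit length:

```python
def amplitude_step_uv(input_range_uv, bits):
    """Smallest voltage step the converter can represent for a given input
    range and bit length; the 1000 uV (+/-500 uV) range is an assumed example."""
    return input_range_uv / (2 ** bits)

for bits in (8, 12, 16):
    print(f"{bits}-bit: {amplitude_step_uv(1000, bits):.4f} uV per step")
# roughly 3.9, 0.24, and 0.015 uV per step respectively
```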

The epochs selected during artifacting all have an abrupt start and stop, without a zero-voltage point at each end. The FFT (fast Fourier transform) treats this sudden voltage as a square-wave edge at the start and stop of each epoch, and its output indicates that all frequencies were present at that point in order to "reconstruct" the square wave. This is termed "leakage artifact" or "Gibbs artifact."

The result of the Gibbs artifact is that if a pure 10 Hz waveform were put into the FFT, the spectral plot of the output would show a rise of the baseline where all frequencies were used to reconstruct the abrupt starts and stops of the epochs. There would be a spectral peak at 10 Hz, with a tapered response and a broadened base to the frequency plot.

To correct for the leakage, or Gibbs, artifact, a "windowing" filter is used. The result of this windowing is the return of the generally elevated spectral plot mentioned previously to baseline. A residual broadening of the idealized 10 Hz spectral peak remains; this residual artifact is termed "smearing."

The windowing used in qEEG is usually a Hanning window. This filter slowly ramps up at the start and down at the end of the epoch, to avoid the apparent square wave the FFT would otherwise see. Other windowing techniques include triangular, Blackman, Hamming, and Meyer's, with the absence of windowing occasionally referred to as a "rectangular window." A full discussion contrasting the different styles of windowing is outside the scope of this chapter.
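The leakage and smearing described above can be reproduced numerically. The sketch below uses a 10.3 Hz sine (deliberately not aligned with the epoch, so it starts and stops mid-cycle) and compares the unwindowed ("rectangular") spectrum with a Hanning-windowed one; all values are assumed for illustration.

```python
import numpy as np

fs, seconds = 256, 2
t = np.arange(0, seconds, 1 / fs)
x = np.sin(2 * np.pi * 10.3 * t)   # 10.3 Hz: the epoch starts/stops mid-cycle

freqs = np.fft.rfftfreq(len(x), d=1 / fs)
rect = np.abs(np.fft.rfft(x))                        # no window ("rectangular")
hann = np.abs(np.fft.rfft(x * np.hanning(len(x))))   # Hanning-windowed

far = freqs > 20                   # bins well away from the true peak
print("energy far from the peak, rectangular:", round(float(rect[far].sum()), 1))
print("energy far from the peak, Hanning:    ", round(float(hann[far].sum()), 1))
# The Hanning window lowers the raised baseline (leakage) at the cost of a
# slightly broader ~10 Hz peak (smearing).
```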

"Aliasing" is an artifact caused by a source frequency near the sampling rate (and above the Nyquist limit), so that a beat frequency is created as an alias of the interaction between the source frequency and the sampling rate. Anti-aliasing filters are used to control for this artifact in all modern devices.
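A small numerical example of aliasing, with assumed frequencies: a 120 Hz source sampled at 128 samples per second (Nyquist limit 64 Hz) produces samples indistinguishable from an 8 Hz wave.

```python
import numpy as np

fs = 128                            # assumed sampling rate, samples/s
t = np.arange(0, 1, 1 / fs)

f_true = 120                        # a source above the 64 Hz Nyquist limit
sampled = np.cos(2 * np.pi * f_true * t)

f_alias = abs(f_true - fs)          # 120 Hz sampled at 128/s aliases to 8 Hz
alias = np.cos(2 * np.pi * f_alias * t)
print("max difference from an 8 Hz wave:", float(np.abs(sampled - alias).max()))
```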

Database issues

The datasets derived from the artifacted EEG are the starting point for the comparison of these data points to the databases used in qEEG analysis.

The databases provide the means and standard deviations used to establish the significance probabilities for the observed measurements. These are represented as Z-scores, which may roughly be seen as the number of standard deviations each data point lies from the normative database mean. They are calculated as the patient value minus the database mean, divided by the standard deviation of the database population.
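A minimal sketch of the calculation, with hypothetical numbers:

```python
def qeeg_z_score(patient_value, norm_mean, norm_sd):
    """(patient value - database mean) / database standard deviation."""
    return (patient_value - norm_mean) / norm_sd

# Hypothetical numbers: 14 uV^2 of theta power where the age-matched norm is
# 9 uV^2 with an SD of 2 uV^2 -> Z = 2.5, i.e. 2.5 SD above the database mean
print(qeeg_z_score(14, 9, 2))
```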

Z-scores are reported in tabular form and may be mapped. Significance probability mapping (a term coined by Frank Duffy, M.D., of Harvard's Children's Hospital) allows the interpreting individual to view the spatial distribution and extent of deviation in an easily discernible display. A Z-score is itself expressed in standard deviation units: a deviation of 1.96 corresponds to the two-tailed p = .05 significance level, and 3.08 to roughly p = .002.

Normative databases are constructed from highly screened normal individuals, with an age range establishing the limits of the database. The database is constructed controlling for socio-economic and other demographic influences. Importantly, separate norms must be established for males and females, to account for the significantly different neurophysiologic structures and rates of development of the male and female brains.

The database used should be selected to match the end use and the population to be seen. The age range may be critical for those seeing children or the elderly. For others, the presence of multivariate stepwise discriminants used to determine the likelihood of membership in one of two or more clinical groupings will be critical in the selection.

For still others, the presence of an eyes-open database for clinical eyes-open work, or of task-related data, will be critical.

The database from E. Roy John, Ph.D., of NYU's Brain Research Laboratory contains more than univariate measures, offering multivariate parametric statistical evaluations of normal and many clinical subgroups, with stepwise discriminant analysis associated with it. Discriminants for head trauma are also available from Thatcher. The Duffy database is used, though it is not in commercial distribution now that the Nicolet BEAM instrumentation does not carry it. Sterman has an age-limited, performance-based database available, and new databases such as Hudspeth's have also recently become available.

In using the discriminants, if available in a database, care should be exercised to ensure the applicability of the discriminant to the client being evaluated. The client must fit the conditions that were set for the construction of the discriminant.

If a discriminant is set up to decide the likelihood of membership in group A or group B, a member of another group, C, will nonetheless be classified as A or B rather than properly identified as type C. This weakness of discriminants must be controlled for in the selection and use of the discriminant, not after it has been performed.
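The forced-choice behavior can be illustrated with a toy one-feature, two-group rule; the feature, the group means, and the group C value below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# One hypothetical qEEG feature per subject for the two modeled groups
group_a = rng.normal(1.0, 0.3, 50)      # e.g. a normative group
group_b = rng.normal(2.0, 0.3, 50)      # e.g. a clinical group

# A simple two-group rule reduces to a cut-point between the group means
cut = (group_a.mean() + group_b.mean()) / 2

# A subject from an unmodeled group C is still forced into A or B
group_c_value = 2.6
print("classified as:", "B" if group_c_value > cut else "A")
```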

Displays

The displays in the qEEG report are presented as a progressive analysis of the data. They may be sorted into those closer to and farther from the raw EEG, moving farther from the EEG as more statistical manipulations are performed. The farther from the raw data one goes, the easier it is to mistakenly interpret artifacts as real findings or to misinterpret relational data.

To avoid these easy and eventually certain mistakes, full visual interpretation of the EEG must precede any review of the analyzed data. Only then should this be followed by a review of the raw amplitude mapping, in as detailed a frequency display as possible, which may be followed by broad-band analysis. The amplitude mapping should be followed by the power and then the relative power analysis. Only after this step-wise evaluation should absolute or relative power Z-score or other database comparisons be done.

Following the spectral evaluation, the statistically extracted measures of symmetry, coherence, and phase are evaluated, with less likelihood of error at that point.

The presence of artifact should be expected in clinical work. In clinical settings, the luxury of prolonged recordings and of rejecting a record because of artifact, as in academic studies, is not present. The progressive, step-by-step evaluation will control for these situations as well as it can.

To err may be human, but it should be controlled and accounted for with methodologic routine to the extent humanly possible.
