The receiver gain is one of the parameters that should be optimised before recording an NMR spectrum. Here I'll explain what the receiver gain is and why it needs to be optimised.
When recording an NMR spectrum the coils in the probe detect a continuously varying voltage. This analog signal needs to be digitised (converted to a list of numbers) for processing by the computer. This is done by an analog-to-digital converter (ADC). The ADC has a limited range, so if the analog signal is too large part of that signal will be lost, and Fourier transforming such a truncated signal will produce artifacts. On the other hand, if the analog signal is too small then the digitised data will be small as well, and smaller peaks may not be detectable. To get the best possible spectrum the analog signal fed to the ADC should be as large as possible without exceeding its capacity. The receiver gain is a scaling factor applied to the analog signal before digitisation.
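As a rough illustration, here is a minimal Python sketch of the effect (not the spectrometer's actual processing). The synthetic FID, the unit-range ADC and the gain values are all made up; the point is just that clipping the scaled signal at the ADC limits distorts the resulting spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096)
# Synthetic FID: one decaying cosine plus a little noise.
fid = 0.1 * np.cos(2 * np.pi * 50 * t) * np.exp(-3 * t) + 0.001 * rng.normal(size=t.size)

adc_limit = 1.0  # full-scale input range of the hypothetical ADC

for gain in (1, 8, 40):
    scaled = gain * fid                                 # receiver gain amplifies the signal
    digitised = np.clip(scaled, -adc_limit, adc_limit)  # the ADC truncates anything beyond its range
    spectrum = np.abs(np.fft.rfft(digitised))
    print(f"gain={gain:2d}  truncated={bool(np.any(np.abs(scaled) > adc_limit))}  "
          f"peak={spectrum.max():.1f}  baseline={np.median(spectrum):.3f}")
```

At the highest gain the FID is truncated and the baseline level of the spectrum jumps, mirroring the noisy baseline seen in the figure below.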
In the figure below, a 1D 1H spectrum was acquired with different values for the receiver gain. All the remaining parameters were identical and the spectra are plotted with the same scaling. As the receiver gain is increased the intensity of the peaks increases. At the highest value, however, the baseline shows a lot of noise due to the signal being truncated during digitisation.
The standard parameters used at the SSPPS NMR facility automatically set the receiver gain before starting each experiment. This is done by running test scans at different gain values until one is found at which the signal is not truncated. Sometimes, however, the spectrometer still produces a warning that the receiver range was exceeded, like the one shown below. In most cases the spectrum will still be fine and the warning can be ignored. If the spectrum does show artifacts, it is possible to use linear prediction to replace the truncated points, using the information in the logfile that the error message reports; but if the spectrum is a quick 1D 1H it is quicker and easier to run it again with the automatic receiver gain setting disabled and a reduced receiver gain (start by halving it).
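The logic of those test scans can be sketched in a few lines of Python. Everything here is hypothetical: acquire_scan() stands in for running a real scan on the spectrometer, and the halving strategy mirrors the manual advice above rather than the exact algorithm the spectrometer software uses:

```python
import numpy as np

ADC_LIMIT = 1.0  # full-scale range of the hypothetical ADC

def acquire_scan(gain: float) -> np.ndarray:
    """Hypothetical stand-in for running one test scan at the given gain."""
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 1024)
    fid = 0.002 * np.cos(2 * np.pi * 30 * t) + 0.0001 * rng.normal(size=t.size)
    return np.clip(gain * fid, -ADC_LIMIT, ADC_LIMIT)  # the ADC truncates out-of-range values

def optimise_gain(start: float = 2048.0) -> float:
    """Halve the gain until a test scan no longer fills the ADC's range."""
    gain = start
    while gain >= 1.0:
        scan = acquire_scan(gain)
        if np.abs(scan).max() < 0.95 * ADC_LIMIT:  # leave a little headroom
            return gain
        gain /= 2  # signal was truncated: try again with half the gain
    return gain

print(optimise_gain())  # settles on the largest tested gain that avoids truncation
```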
For the spectra shown in the first figure, warnings were generated for the three largest receiver gain values, but only the spectrum with the highest value shows artifacts. For this sample the receiver gain optimisation selected a value of 228. One might think a slightly higher value would be better, since no artifacts can be seen, but this is not necessarily the case. The graph below plots signal to noise against receiver gain for the spectra in the first figure. As the receiver gain is increased, signal to noise rises rapidly and then plateaus. At very high receiver gains (2050 here) artifacts are produced and signal to noise is greatly reduced. The selected, optimal value of 228 sits where the curve starts to flatten out and further increases are no longer worthwhile, because scaling the signal further amplifies the noise along with it.
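For reference, one simple way to measure signal to noise from a spectrum is to divide the tallest peak by the noise level in a region known to contain no peaks. The sketch below uses synthetic data and one common convention; spectrometer software may define the ratio slightly differently (some definitions divide by twice the noise amplitude):

```python
import numpy as np

def signal_to_noise(spectrum: np.ndarray, noise_region: slice) -> float:
    """Tallest peak height divided by the noise level in a signal-free region."""
    return spectrum.max() / np.std(spectrum[noise_region])

# Example with synthetic data: one peak sitting on a noisy baseline.
rng = np.random.default_rng(2)
spec = 0.01 * rng.normal(size=8192)
spec[4096] = 1.0
print(f"S/N ~ {signal_to_noise(spec, slice(0, 2000)):.0f}")
```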
In most cases the automated receiver gain optimisation selects the best value. If warnings are obtained they can usually be ignored; only if the spectra show artifacts should the receiver gain be reduced manually.