Re: Using SDR# to drive the SDRplay RSP
Posted: Thu Feb 19, 2015 6:25 pm
Correction, so as to do wrong to none: by "Shannon" below, I meant "Nyquist-Shannon".
In Zero IF mode, you tend to be more vulnerable to any residual DC offsets and possibly flicker noise. You will always have some level of DC offset in Zero IF mode, and if there is a slight frequency error, this might become audible as an SSB tone. Some programs such as SDR# have algorithms to remove the DC offset; in my understanding this is done simply by 'averaging' the signal in I and Q over a fairly long period of time to estimate the level of DC and then subtracting this estimated level. This process is equivalent to a 'high pass filter' and as such, it puts a small 'hole' in the spectrum. The size of the hole depends on how long the averaging is done for: if it is, say, 1 second, then you will have a roughly 1 Hz wide 'hole' in the spectrum. The longer the averaging time, the smaller the hole, but the more memory the correction algorithm requires.

> Q: When should I use an IF mode greater than zero? Can you tell me something about it?
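The averaging approach described above can be sketched in a few lines. This is an illustrative model, not the actual SDR# implementation; the function name and window length are my own choices. It estimates DC by averaging the complex baseband signal over a window and subtracting the result, which behaves like a high-pass filter with a notch roughly 1/T Hz wide at 0 Hz.

```python
import numpy as np

def remove_dc(iq, avg_seconds, sample_rate):
    """Estimate the DC offset by averaging the complex baseband signal
    over a window of avg_seconds, then subtract it.  Equivalent to a
    high-pass filter with a notch roughly 1/avg_seconds Hz wide."""
    n = int(avg_seconds * sample_rate)
    dc_estimate = np.mean(iq[:n])        # average of I + jQ over the window
    return iq - dc_estimate, dc_estimate

# Example: a 1 kHz complex tone riding on a DC offset of 0.3 + 0.1j.
# Over a whole number of cycles the tone averages to zero, so the
# estimate converges on the true offset.
fs = 48_000
t = np.arange(fs) / fs
signal = np.exp(2j * np.pi * 1000 * t) + (0.3 + 0.1j)
cleaned, dc = remove_dc(signal, avg_seconds=1.0, sample_rate=fs)
print(abs(dc - (0.3 + 0.1j)))   # small residual error
```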
They are completely separate. The AGC checkbox in the SDR# plugin for the RSP is designed to prevent a strong signal from overloading the ADCs. When enabled, the 'average' signal level at the ADC input is held at a roughly constant level. This might matter if you have a large number of signals present at the ADC input and some of them are varying in level. The gain will automatically be adjusted to keep the total signal power constant. This MIGHT result in the signal that you are monitoring actually varying with the gain variation, so it is best to also check the AGC box in SDR# as well, as this will apply a separate software AGC to the signal that you are monitoring. Whether you use the AGC available with the plugin or simply adjust the gain manually is purely a matter of preference. Some people like AGC, others don't. The most important thing is to ensure that there is no signal 'clipping' at the ADC inputs. The other thing to note is that whether you are using AGC or adjusting the gain manually, make sure that the LNA GR threshold is set appropriately. For best sensitivity, set it to 59 dB. This means that the IF gain can be reduced by 59 dB (the maximum possible) before the LNA gain is reduced. This preserves the receiver NF for as long as possible. If you have a fairly good signal but a lot of interferers, then you may want to reduce the LNA GR threshold number, as the compression point and intermodulation performance of the receiver improve considerably when the LNA gain is turned down.

> Q: What is the relationship between this option and the AGC option in SDR#? Which is the best setting for these two "AGC" checkboxes?
Practically speaking, you are correct, although I cannot seem to set an LNA GR threshold of > 100 and I am not sure why. I suspect that this does not matter in reality, as you cannot have a GR value of greater than 83 dB without the LNA gain being turned down. I believe that in HF mode the LNA gain steps are in fact 6 dB, and so as long as the LNA GR Threshold is > 96, the LNA will always be at maximum gain.

> Q: I understand from the SDRplay AGC documentation (let's assume HF, 3-30 MHz):
> LNA GR 0 or 19 dB - MIX GR 0 or 24 dB - IF GR 0 to 59 dB
> overall GR range 0 - 102 dB
> If I set "LNA GR Threshold" to 0, the LNA will always be OFF.
> If I set "LNA GR Threshold" to 102, the LNA will always be ON.
> Right?
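The interaction between the threshold and the stages can be written out as a toy model. To be clear, this is my own simplified reading of the behaviour described above, not the real SDRplay API logic, and the step sizes are taken from the numbers quoted in this thread: gain reduction is absorbed by the IF stage first, up to the threshold, after which a fixed LNA step is switched in and the remainder goes back to the IF stage.

```python
def distribute_gr(total_gr, lna_threshold, lna_step=24, if_gr_max=59):
    """Hypothetical model of gain-reduction distribution.
    Below the threshold (and the 59 dB IF limit), all reduction is
    taken in the IF stage; above it, the LNA's fixed step is engaged
    and the IF stage takes the rest."""
    if total_gr <= min(lna_threshold, if_gr_max):
        return {"lna_gr": 0, "if_gr": total_gr}
    return {"lna_gr": lna_step,
            "if_gr": min(total_gr - lna_step, if_gr_max)}

print(distribute_gr(40, lna_threshold=59))   # IF stage alone handles it
print(distribute_gr(65, lna_threshold=59))   # LNA step engaged, IF takes 41
```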
If you are in Zero IF mode, then 1.5 MHz bandwidth is 750 kHz in I and Q, and so an SR of 1.5 MHz does NOT breach the Nyquist criterion. If you are using an IF of, say, 2.048 MHz, then it will, as the upper corner frequency of the filter will be at (2.048 + 0.75) MHz, and so you would need a minimum sample rate of 5.596 MHz to avoid serious aliasing.

> Q: Suppose I set an IF bandwidth of 1.5 MHz and an SR of 1.5 MHz. According to "Shannon", the SR should be 2x the BW. I suppose this means the SR is 1.5 on the I channel and 1.5 on the Q channel, so Shannon is "OK". Right? Then suppose I set an IF bandwidth of 1.5 MHz and an SR of 3.0 MHz. What difference should I notice on my FFT? And what if I double again, to SR = 6.0 MHz, and again to SR = 12.0?
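The arithmetic behind that answer can be captured in a small helper (the function name is mine, purely for illustration): at zero IF the filter spans ±BW/2 around 0 Hz, so a complex sample rate equal to the bandwidth suffices, while at a low IF the upper filter edge sits at IF + BW/2 and sets the Nyquist requirement.

```python
def min_sample_rate_mhz(if_bw_mhz, if_freq_mhz=0.0):
    """Minimum complex sample rate to avoid aliasing.
    Zero IF: filter spans -bw/2 .. +bw/2, so SR = bw is enough.
    Low IF:  upper filter edge is at if_freq + bw/2, so we need
             SR >= 2 * (if_freq + bw/2)."""
    if if_freq_mhz == 0.0:
        return if_bw_mhz
    return 2 * (if_freq_mhz + if_bw_mhz / 2)

print(min_sample_rate_mhz(1.5))          # zero IF: 1.5 MHz is enough
print(min_sample_rate_mhz(1.5, 2.048))   # low IF: 2*(2.048 + 0.75) = 5.596 MHz
```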
I am not too sure about the SDR# display, but power relates to RMS voltage and not peak. The peak signal will be greater than this. For example, with an OFDM signal such as DAB, the ratio between peak and RMS is around 10 dB. If you have many different signals, the peak-to-average ratio of the composite signal could be quite high. If you set the setpoint to -15 dBFS, then this simply means that the total RMS voltage will be 15 dB below the ADC full scale. Peak signals could be close to full scale, but that might not occur very often, as the true peaks only occur when all of the signals are momentarily at their peak voltage. It's a bit like 'all of the planets lining up'.

> Q: Is the level I see on the SDR# FFT a "power"? In other words, is the FFT a plot of power as a function of frequency? And say the "Setpoint" is -15: does that mean that when a signal somewhere in the FFT display reaches -15 dBFS, this triggers SDRplay's AGC?
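The peak-versus-RMS point is easy to demonstrate numerically. The sketch below (my own construction, not anything DAB-specific) sums equal-amplitude carriers at random phases, a rough stand-in for an OFDM-like composite, and measures its peak-to-average power ratio: the peaks sit several dB above the RMS level, which is why an RMS setpoint of -15 dBFS still leaves headroom for most, but not all, peaks.

```python
import numpy as np

def papr_db(samples):
    """Peak-to-average power ratio in dB."""
    power = np.abs(samples) ** 2
    return 10 * np.log10(power.max() / power.mean())

# 16 equal-amplitude carriers with random phases: the 'planets lining up'
# case (all phases aligned) would give 10*log10(16) ~= 12 dB of PAPR.
rng = np.random.default_rng(0)
t = np.arange(4096)
carriers = [np.exp(2j * np.pi * (k / 64) * t + 1j * rng.uniform(0, 2 * np.pi))
            for k in range(1, 17)]
composite = np.sum(carriers, axis=0)
print(papr_db(composite))   # several dB above 0: peaks well above RMS
```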
This sounds like a bug to me. I believe the maximum IF gain reduction that is possible is 59 dB. It seems as though either the plugin or the API will not allow the mixer gain to be reduced until the LNA gain has already been reduced. If you set the LNA GR Threshold to greater than 59 dB, once the total gain reduction setting goes above 59 dB, the software tries to set the IF gain reduction to an impossible level (59 dB is the maximum, as I read somewhere).

> Q: Now set LNA GR Threshold = 75 (it is allowed).
> Raise GR to 63 ===>>> LNA on - MIX on - IF GR 63
> GR = 64 ===>>> LNA on - MIX on - IF GR 64, but the audio jumps to a high level (and so does the display)
> until you reach
> GR = 75 ===>>> LNA off (24 dB) - MIX on - IF GR 51 (now the audio jumps down to the expected level)
> and then
> GR = 83 ===>>> LNA off (24 dB) - MIX off (19 dB) - IF GR 40, all right.
> I suppose it is better to limit the LNA GR Threshold to 59 to avoid this strange behaviour.
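The workaround suggested above (keep the threshold at or below the 59 dB IF limit) amounts to a one-line clamp. This is just a sketch of the user-side mitigation, not a fix in the plugin or API:

```python
IF_GR_MAX = 59  # maximum IF gain reduction the hardware supports, per the thread

def safe_lna_gr_threshold(requested):
    """Clamp the LNA GR threshold to the maximum IF gain reduction.
    Thresholds above 59 dB ask the IF stage for an impossible reduction,
    which produces the level jumps reported above."""
    return min(requested, IF_GR_MAX)

print(safe_lna_gr_threshold(75))   # clamped to 59
print(safe_lna_gr_threshold(40))   # left as-is
```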