Hi, sorry I've not responded sooner; I've been away for a bit and also tied up with work. I've quoted our last exchange from July below.
SteveAndrew wrote: alantlk wrote:
V = V/num_FFT bins - is this part of the normalisation noted in the accepted answer here? - https://uk.mathworks.com/matlabcentral/
... -operation or is that done in the first two terms? I'm not sure what this term is doing other than it must be some kind of averaging?
Sorry - I was a bit pushed for time and didn't answer your query as clearly as I should have done, and I left out an important step in the calculations.
First, we carry out a Fourier transform on the IQ samples provided by the API; the transform returns its results as real and imag arrays.
FFT algorithms scale the FFT output in various ways. Commonly the output is scaled by N or N^2, where N is the number of FFT bins used. The algorithm I'm using scales by N, so each FFT result has been multiplied by N. We have to divide each result by N to get the actual ADC numerical value for each bin.
real = real / num_FFT_bins
imag = imag / num_FFT_bins
We now have the FFT output scaled back to the original ADC numerical levels. We need to normalise the FFT/ADC result so that instead of a bunch of numbers that are the numerical representation of the ADC's binary output, we get a number in the range 0 to 1.0 that represents the ADC's normalised output from 0 to the numerical FSD of the ADC.
real = real * 1/32768
imag = imag * 1/32768
Equally, we could have done the above step like this, so that the result is directly scaled in Volts (1.5 is the ADC FSD in Volts):
real = real * (1/32768) * 1.5
imag = imag * (1/32768) * 1.5
Then get the magnitude of the vector
Vpk = sqrt [ (real * real) + (imag * imag) ]
We now multiply by the full scale deflection voltage of the ADC. The ADC used in the SDRplay modules has an FSD of 1.5 Volts. We can skip this step if the ADC output has already been scaled to Volts as per the above.
Vpk = Vpk * 1.5
This is the step I forgot to include - where we convert Vpk to Vrms
Vrms = Vpk * 1 / Sqrt(2) (or Vrms = Vpk * 0.707 if you prefer)
Get the power in Watts into 50 Ohms
Pw = (Vrms * Vrms)/50
Convert to dB, add 30 to scale to dBm, and subtract the gain reported by the API to get actual dBm at the input.
dB = 10 * Log10(Pw)
dBm = dB + 30 (adding 30 dB is the same as multiplying the power by 1000, i.e. converting W to mW)
dBm = dBm - API_gain_value
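Putting the steps above together, here's a minimal Python sketch of the whole chain. The function name and the use of NumPy's FFT are my own choices for illustration; the 32768 full-scale count, the 1.5 V FSD, the divide-by-N FFT scaling and the 50 Ohm assumption are exactly as described in the steps above.

```python
import numpy as np

def iq_to_dbm(iq, num_fft_bins, api_gain_db, adc_fsd_volts=1.5, adc_full_scale=32768):
    """Convert one block of IQ samples to a dBm spectrum, per the steps above."""
    spectrum = np.fft.fft(iq, n=num_fft_bins)   # unnormalised FFT: output is scaled by N
    spectrum = spectrum / num_fft_bins          # divide by N to get back to ADC levels
    spectrum = spectrum / adc_full_scale * adc_fsd_volts  # normalise, then scale to Volts
    vpk = np.abs(spectrum)                      # magnitude: sqrt(real^2 + imag^2)
    vrms = vpk / np.sqrt(2)                     # peak to RMS
    pw = vrms ** 2 / 50.0                       # power in Watts into 50 Ohms
    return 10 * np.log10(pw) + 30 - api_gain_db # dB -> dBm, remove reported gain
```

As a sanity check, a full-scale complex tone (amplitude 32768 counts, i.e. 1.5 Vpk) works out to about +13.5 dBm in its bin with zero API gain.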
I hope the above is a bit clearer than my initial offering. This gives results that are usually within 1.0 - 1.5 dB of those reported by SDRuno.
Edit: dB = Log10(Pw)
changed to dB = 10 * Log10(Pw)
Noted on the extra math term, good to know I recognised something was missing, even if not quite the right thing!
I think the root of what I found is as follows:
1. The SDRPlay series are designed for 50 ohm systems. The RSP2 has an additional 1000 ohm input.
2. The SA code assumes a 50 ohm input, using that value to calculate the power that's then displayed on screen.
3. In the case of the RSP2 balanced input at 1000 ohms, that's unfortunately already wrong by a factor of 20. Furthermore, in a typical measurement scenario, as opposed to receiving from an antenna or a properly designed signal generator, the system impedance involved is quite likely to be anything but 50 ohms!
The displayed power will therefore be incorrect, because 50 ohms is assumed in the SA maths; that's what I found with various very loose couplings.
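The size of that error is easy to quantify: since P = Vrms^2 / R, assuming too low an impedance overstates the power by the ratio of the two resistances in dB. A small sketch (the function name is just illustrative):

```python
import math

def power_error_db(assumed_ohms, actual_ohms):
    """dB by which displayed power is overstated when the wrong
    impedance is assumed (P = Vrms^2 / R for the same measured voltage)."""
    return 10 * math.log10(actual_ohms / assumed_ohms)
```

For the 1000 ohm balanced input displayed as if it were 50 ohms, that's 10 * log10(20), roughly 13 dB.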
Just a thought - I'd suggest not using power units as the default display; perhaps use dB-microvolts instead, as voltage is what's actually being measured by the RSP. You could of course retain power units as a user option for those working in a known 50 ohm (or 1000 ohm) impedance system.
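For what it's worth, converting the dBm figure already computed above to dB-microvolts is a fixed offset once an impedance is chosen (about +107 dB in a 50 ohm system). A sketch, with the impedance as an explicit parameter since that's the whole point:

```python
import math

def dbm_to_dbuv(dbm, ohms=50.0):
    """Convert power in dBm to voltage in dB-microvolts for a given impedance.
    Vrms = sqrt(P * R); dBuV = 20 * log10(Vrms / 1e-6)."""
    p_watts = 10 ** ((dbm - 30) / 10)
    vrms = math.sqrt(p_watts * ohms)
    return 20 * math.log10(vrms / 1e-6)
```

E.g. 0 dBm into 50 ohms comes out at about 107 dBuV.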
Hope that helps?