I think the minimum the mir_sdr API will allow is 2,048,000. Is this samples per second, bits per second, or bytes per second?
It looks like each time I get a packet of samples, I get 336 samples. Each sample is two shorts: one short for I and the other for Q.
Do I need to buffer this? And how much do I buffer? 2048000 samples? 2048000 bytes? 2048000 bits?
I am getting data from the SDRplay in C#. I can take a WAV file of I/Q data previously saved with HDSDR (with the SDRplay as the device) tuned to the 40 meter band at 7.252 MHz, demodulate it as LSB, and play it on the sound card. The sample rate appears to be 192,000. The DSP code I have works on an array of doubles where I and Q have been interleaved, with each value greater than -1.0 and less than 1.0.
So, how do I get the data from SDRplay into a usable format that the rest of my software can process? Do I decimate to 192,000 somehow? Maybe instead of a sample rate of 2,048,000, I use a sample rate that divides evenly, such as 2,112,000?
I think you need to understand various DSP concepts, especially filters, decimation, convolution, and interpolation.
I already have code that works on a sample rate of 192,000 and can demodulate, filter, and decimate by a factor of 4 to give an output sample rate of 48,000 that can be fed to an audio sink. This audio sink can save the output to a WAV file, send the audio stream to the sound card so you can hear it through your computer speakers, or stream it over TCP/IP so you can listen to it remotely. For C#, I used the NAudio library to handle retrieving/saving WAV files and sending an audio stream to the sound card. Some people prefer the PortAudio library instead, since it works on Windows, Linux, and Mac and is fast and very flexible.
To get SDRplay to give me what I wanted at 192,000, I had to buffer the samples and then decimate the signal. However, I believe the minimum rate is 2,048,000. Yet in some places online I see 2,000,000 given as the minimum sample rate, such as in HDSDR. Is HDSDR rounding the display to a nicer number? I had to understand that the sample rate you give SDRplay when you Init the device using their API means samples per second. Each sample is two signed 16-bit integers; in languages like C#, this is a short. You get two of these shorts because one is the I (in-phase) part of the sample and the second is the Q (quadrature) part, which is 90 degrees out of phase with the I part. So, I need to buffer the samples based on the sample rate.
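To make the units concrete, here is the arithmetic as a small sketch (Python for brevity; my actual code is C#):

```python
# Each SDRplay "sample" is one I/Q pair: two signed 16-bit integers (4 bytes).
sample_rate = 2_048_000           # samples (I/Q pairs) per second, as passed to Init
shorts_per_second = sample_rate * 2        # one short for I, one for Q
bytes_per_second = sample_rate * 2 * 2     # each short is 2 bytes

print(shorts_per_second)   # 4096000 int16 values per second
print(bytes_per_second)    # 8192000 bytes per second
```

So 2,048,000 really is samples per second, not bytes or bits; the byte rate is four times higher.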
If I need a 192,000 sample rate, then I probably need to decimate. Well, 2,000,000 and 2,048,000 do not divide evenly down to 192,000. However, I read on this forum that any sample rate between the minimum and maximum can be used. So I kept using a calculator, multiplying and dividing 192,000 to see what I could get. 2,112,000 is the sample rate I need, because dividing it by 11 gives exactly 192,000. So 11 will be my decimation factor. To decimate, you need to filter and downsample. Luckily, I already had code for this thanks to following Chris Thompson's excellent tutorial "SDR and DSP For Radio Amateur" at https://www.g0kla.com/sdr/index.php
His tutorial used Java and Python; I did mine in C#. Because the DSP concepts are the same, you could easily do your own DSP in your favorite language, such as Go.
When you get a packet of data from the SDRplay via their API, you get 336 samples, so you have to buffer them somehow to reach a sample count that matches your sample rate. Since I interleave my I/Q samples, I need a buffer that can hold twice the sample rate. I have read that a ring buffer is better for this.

As I put samples into the buffer, I also convert each one to a double (a floating-point type that is 8 bytes in C#, though other languages may differ; in languages like C/C++, knowing the byte size of these data types is important). To do the conversion, I divide the I and Q shorts by a constant. I have seen numbers like 2048, 8192, and 32768 in my research and am not sure why each was chosen, but I imagine it is to scale the short (signed 16-bit integer) into a double x with -1.0 < x < 1.0, where x is either the I or Q short converted to a double. (Dividing by 32768, i.e. 2^15, uses the full range of a 16-bit signed integer.)

However, 2,112,000 is not evenly divisible by 336, so you will have a little left over whenever the data you receive plus what is already in the buffer exceeds the buffer size, and you will have to deal with that. It is not difficult, but I just wish someone online had explained how to buffer what you get from SDRplay. This is my attempt to do so.
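Here is a rough sketch of the buffering and conversion (Python for brevity; my code is C#). The IQBuffer class name and the parallel i_data/q_data packet arrays are hypothetical illustrations of what the API delivers, and a plain list stands in for a real ring buffer:

```python
class IQBuffer:
    """Accumulate interleaved, normalized I/Q doubles until a full block is ready."""

    def __init__(self, sample_rate):
        self.target = sample_rate * 2   # interleaved: 2 doubles per I/Q pair
        self.buf = []                   # a ring buffer would avoid list copies

    def push(self, i_data, q_data):
        """Append one packet of int16 I and Q arrays; return a full block or None."""
        for i, q in zip(i_data, q_data):
            # int16 spans -32768..32767, so dividing by 32768.0 lands in [-1.0, 1.0)
            self.buf.append(i / 32768.0)
            self.buf.append(q / 32768.0)
        if len(self.buf) >= self.target:
            block, self.buf = self.buf[:self.target], self.buf[self.target:]
            return block                # leftover samples stay buffered for next time
        return None
```

The leftover handling at the end is the part that deals with 2,112,000 not dividing evenly by the 336-sample packet size.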
Again, look at the "SDR and DSP for Hams" tutorial's section on decimation. You will see how to filter and downsample, not just downsample.
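The decimate-by-11 step can be sketched like this (Python for brevity; my code is C#). The windowed-sinc design, tap count, and cutoff here are my own placeholder choices, not the tutorial's exact filter:

```python
import math

def lowpass_taps(num_taps, cutoff_hz, sample_rate):
    """Windowed-sinc FIR low-pass with a Hamming window, normalized to unity DC gain."""
    fc = cutoff_hz / sample_rate
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        k = n - m / 2
        h = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)   # Hamming window
        taps.append(h * w)
    s = sum(taps)
    return [t / s for t in taps]

def decimate(samples, taps, factor):
    """Filter, then keep only every `factor`-th output: filtering + downsampling."""
    out = []
    for i in range(0, len(samples) - len(taps) + 1, factor):
        out.append(sum(samples[i + j] * t for j, t in enumerate(taps)))
    return out
```

In practice you run this separately over the I stream and the Q stream (or over complex samples), with the cutoff below 96 kHz so nothing aliases into the 192,000 output rate.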
If anyone spots any inaccuracies in my description, please reply. Thanks.
Bits/samples/bytes etc. work with powers of 2.
So it's always a multiple of 2^x, and the same holds when you decimate.
It decimates by the same powers of 2, like 0, 2, 4, 8, 16, 32 and 64.
All other decimation factors result in an illegal call and turn decimation off at the device.
I have adjusted middleware for websdr.org; if you check out the sample-rate section of the code, it may become clear how it works.
It just depends on how small a bandwidth you select; if the sample rate must then be lower, that is solved by decimating.
There are several BWs to select from, and to get 192,000 samples you can decimate from high or low depending on the selected BW.
To get your 192,000 I would select a BW of 200 kHz, and then you do not have to decimate. Why use a bigger BW if you just want 0.192 MHz of width?
If you start to decimate, the resampling work goes up.
Also, the number of bits for I/Q can be selected as either 8 or 16; the samples are not always 16-bit.
It's my observation that decimating simply gives more samples to process but not more bandwidth in real time.
To do it all at once, the (optimal) max is 1,536,000 samples at full bandwidth, which equals 1.536 MHz of width.
However, I sample the entire band, not just a section; that makes a lot of difference.
You might be talking about the SDRplay API function mir_sdr_DecimateControl(), which I was not using originally. The docs say to use a power of 2.
Using a factor that is a power of 2 is efficient but not required. If the factor is more than about 10, it should be broken into smaller decimation stages. A decimation involves both filtering and downsampling, and when you decimate, your sample rate goes down. You might be thinking of interpolation, which involves zero padding and upsampling.
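One way to split a large factor into stages, as a sketch (Python for brevity; a crude boxcar average stands in here for a properly designed low-pass filter at each stage):

```python
def boxcar_decimate(samples, factor):
    """Average each window of `factor` samples, keeping one output per window.
    A boxcar average is a crude anti-alias filter; a real design would use a
    proper FIR low-pass before downsampling."""
    out = []
    for i in range(0, len(samples) - factor + 1, factor):
        out.append(sum(samples[i:i + factor]) / factor)
    return out

def decimate_chain(samples, factors):
    """Apply decimation stages in sequence, e.g. a factor of 16 as (4, 4)."""
    for f in factors:
        samples = boxcar_decimate(samples, f)
    return samples
```

Splitting 16 into 4 x 4 gives the same overall rate reduction, but each stage's filter can be much shorter because it only has to protect against aliasing for a factor of 4.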
I was using the old API but have since switched to the stream API. I decided to try the built-in decimation via mir_sdr_DecimateControl(), and it worked. I used a factor of 16 to get my sample rate of 192,000, which meant setting a sample rate of 3,072,000 when calling the stream init function. The stream API has a callback instead of a read function to call, so I had to buffer to twice my sample rate of 192,000, because I interleave my I and Q samples in the buffer. I also convert each short (16-bit signed integer) to a double and divide by 32,768.
I also create multiple threads:
1. main user interface thread (already created)
2. buffering of data retrieved from SDR device (SDRplay)
3. DSP code like demodulation, filtering, and decimation to a 48,000 sample rate to be fed to the audio sink / audio thread
4. audio sink, for which I use NAudio
5. FFT / Spectrum wave form display calculation
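The hand-off between those threads can be sketched with queues (Python for brevity; my code is C#, and the process function here is a hypothetical stand-in for the real demodulate/filter/decimate chain):

```python
import queue
import threading

packets = queue.Queue(maxsize=64)   # raw blocks from the device/buffering thread
audio = queue.Queue(maxsize=64)     # processed audio blocks for the audio sink

def process(pkt):
    """Stand-in for the DSP chain: demodulate, filter, decimate to 48,000."""
    return pkt

def dsp_thread():
    """Pull packets, run DSP, push results; a None sentinel shuts the stage down."""
    while True:
        pkt = packets.get()
        if pkt is None:
            audio.put(None)
            return
        audio.put(process(pkt))

t = threading.Thread(target=dsp_thread, daemon=True)
t.start()
packets.put([0.1, 0.2])   # one demo block
packets.put(None)         # tell the stage to finish
out = audio.get()         # the processed block
done = audio.get()        # the sentinel, passed downstream
```

Bounded queues between stages also give you back-pressure, so a slow audio sink cannot make the buffering stage grow without limit.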
Then there is the painting of lines in a Windows Forms app in C#; I needed to paint lines for the spectrum display.
I have discovered that painting directly to the screen in a Windows Forms C# application is too slow, so I had to paint to a bitmap in a PictureBox instead, which was faster. I have considered creating my own user control or switching to WPF to see if that would be faster. It also helps to have a fast FFT implementation.
Also, your factor of 16 is again a power of 2, and the same goes for the division by 32,768. For example, if you double the sample rate, an equivalent filter will require four times as many operations to implement. This is because both the amount of data (per second) and the length of the filter double, so the convolution cost goes up by four. Thus, if you can halve the sample rate, you can decrease the workload by a factor of four. I guess you could say that if you reduce the sample rate by a factor of M, the workload for a filter scales by (1/M)^2.
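To put numbers on that workload claim, a quick sketch (the base tap count is made up, but the proportionality is what matters):

```python
def fir_mults_per_second(sample_rate, base_rate=192_000, base_taps=64):
    """Multiplications/second for an FIR filter with a fixed transition width in Hz.
    The tap count is assumed proportional to the sample rate, and each of the
    sample_rate outputs per second costs one multiply per tap."""
    taps = base_taps * sample_rate // base_rate
    return sample_rate * taps

full = fir_mults_per_second(384_000)   # double the rate: double data AND double taps
half = fir_mults_per_second(192_000)
print(full // half)                     # prints 4
```

Double the sample rate, four times the work; this is why it pays to decimate as early in the chain as possible.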
Still all a power of 2.
However, I noticed that it multiplies the output rate.
Say you have 1.024 Msamples/s; decimate by 2 and it gives an output rate of 2.048 Msamples/s, which many programs understand and can use.
I also noticed that the CPU load of the system goes up a lot.
However, keep in mind that I sample at 2.048 Msamples/s all the time, 2 MHz wide; you probably don't.