I'm using the GNU Scientific Library to define & use complex numbers. A complex number is defined as
typedef struct {
    double dat[2];
} gsl_complex;
This just represents the real and imaginary parts as double-precision floating-point values (each part taking up 8 bytes). I need to pass an array of these values to a D/A converter that works with the SC16Q11 (signed complex, 16-bit Q11) format.
From what I understand, a 16-bit processor using the Q11 format has 16 - (11 + 1) = 4 bits for the integer portion, leaving one sign bit and 11 bits for the fractional portion. Is this correct? How can I convert between these two data types?
The documentation states that each IQ sample is an interleaved IQ pair, where each value of the pair is an int16_t.
Yes, the Q11 format is Q4.11 (or Q5.11 if you count the sign bit among the integer bits).
More information can be found at http://en.wikipedia.org/wiki/Q_%28number_format%29
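For example, with 11 fractional bits the resolution is 2^-11 = 1/2048 ≈ 0.000488, and a signed Q4.11 value spans -16.0 up to 16 - 2^-11 ≈ +15.9995.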
You can do the conversion with:
int16_t number[2];
number[0] = (int16_t)lround(z.dat[0] * 2048);  /* real part */
number[1] = (int16_t)lround(z.dat[1] * 2048);  /* imaginary part */
Here z is a gsl_complex, and lround comes from <math.h> (C99).
The factor 2048 comes from 2^11; it's also explained in the linked article.
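Since your documentation says the samples are sent as interleaved IQ pairs, you can wrap the same scaling in a loop over the whole buffer. The sketch below is only one way to do it: the function name gsl_to_sc16q11 and the saturation to the full int16_t range are my own choices, not anything from GSL or your converter's API, so adjust the clamping if your hardware expects a narrower range. GSL_REAL and GSL_IMAG are the accessor macros GSL provides for the dat array.
#include <stddef.h>
#include <stdint.h>
#include <math.h>
#include <gsl/gsl_complex.h>

/* Hypothetical helper: convert n gsl_complex samples into n interleaved
 * SC16Q11 IQ pairs (2*n int16_t values). Each part is scaled by 2^11 and
 * saturated to the int16_t range before rounding. */
static void gsl_to_sc16q11(const gsl_complex *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        double re = GSL_REAL(in[i]) * 2048.0;  /* 2^11 */
        double im = GSL_IMAG(in[i]) * 2048.0;

        if (re > INT16_MAX) re = INT16_MAX;
        if (re < INT16_MIN) re = INT16_MIN;
        if (im > INT16_MAX) im = INT16_MAX;
        if (im < INT16_MIN) im = INT16_MIN;

        out[2 * i]     = (int16_t)lround(re);  /* I (real) */
        out[2 * i + 1] = (int16_t)lround(im);  /* Q (imaginary) */
    }
}
The out buffer then holds the int16_t I/Q values in the interleaved order your documentation describes.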