I'm writing embedded code for spectroscopy.
In order to build my spectrum, I need to linearly map samples from one interval (whose dynamic range is given by the physics/specs of the problem) to another. Basically, after the data is processed I have a series of samples (peaks), and each of them contributes to the spectrum, i.e. increments the counter of a specific bin in a histogram.
Here's a sketch:
So in C, I need to map each peak value into the range [0:4095], and I'm doing this in real time on an MCU (LPC4370), so it needs to be fast.
The problem is that my naive implementation squeezes everything to 0.
Here's what I did:
#define MCA_SIZE 4096
#define PEAK_MAX 1244672762
#define PEAK_MIN 6000000
int32_t mca[MCA_SIZE];
int32_t peak_val;
int32_t bin_val;
[...]
if (peak_val > PEAK_MIN)
{
    bin_val = (int)(MCA_SIZE * (peak_val - PEAK_MIN) / (PEAK_MAX - PEAK_MIN));
    /* Increment the corresponding multi-channel bin */
    mca[bin_val] += 1;
}
Every lower-case quantity is an int32_t; the upper-case names are #defines. I believe the problem is that the expression
(peak_val - PEAK_MIN) / (PEAK_MAX - PEAK_MIN)
very often goes near zero, so I end up with only the first one or two bins filled.
Here's a screenshot of the first values of mca after a few thousand iterations:
Here's the disassembly view of the code under study, along with the register status at a breakpoint.
What is the best/fastest way to handle this kind of problem?
The intermediate result MCA_SIZE * (peak_val - PEAK_MIN) is too large for a 32-bit integer type: with your constants it can reach roughly 4096 × 1.24 × 10^9 ≈ 5 × 10^12, far beyond INT32_MAX (about 2.1 × 10^9), so the product wraps around before the division ever happens. I would use uint64_t for these calculations, and I would define all of your constants as const uint64_t rather than using a #define, adding a suffix of ULL to their literal values so the arithmetic is done in 64 bits.
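A minimal sketch of that fix, reusing the constants and mca array from the question (the bin_peak helper name is mine, and the clamp for peak_val == PEAK_MAX is an assumption about the desired edge behavior):

```c
#include <stdint.h>

#define MCA_SIZE 4096u

/* 64-bit constants force the intermediate product into 64-bit arithmetic. */
static const uint64_t PEAK_MAX = 1244672762ULL;
static const uint64_t PEAK_MIN = 6000000ULL;

static int32_t mca[MCA_SIZE];

/* Map one peak linearly onto [0, MCA_SIZE-1] and increment its bin. */
static void bin_peak(int32_t peak_val)
{
    /* Compare in signed space so a negative peak is rejected, not
       wrapped into a huge unsigned value. */
    if (peak_val > (int32_t)PEAK_MIN)
    {
        /* Multiply first, in 64 bits: no overflow, no truncation to 0. */
        uint64_t bin_val = (MCA_SIZE * ((uint64_t)peak_val - PEAK_MIN))
                         / (PEAK_MAX - PEAK_MIN);

        /* peak_val == PEAK_MAX would map to MCA_SIZE; clamp to last bin. */
        if (bin_val >= MCA_SIZE)
            bin_val = MCA_SIZE - 1u;

        mca[bin_val] += 1;
    }
}
```

Note that on the LPC4370's Cortex-M4 a 64-bit division is a library call rather than a single instruction, so if the divisor is fixed at compile time the compiler can usually turn it into a multiply-and-shift; keeping PEAK_MAX and PEAK_MIN as compile-time constants helps it do that.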