In the iOS 5.0 documentation it is stated that the canonical audio data type is 16-bit signed integer (link):
The canonical audio data sample type for input and output.
typedef SInt16 AudioSampleType;
Discussion
The canonical audio sample type for input and output in iPhone OS is linear PCM with 16-bit integer samples.
However, if I right-click → "Jump to Definition" on AudioSampleType, I see the following definition in CoreAudioTypes.h:
#if !CA_PREFER_FIXED_POINT
typedef Float32 AudioSampleType;
typedef Float32 AudioUnitSampleType;
#else
typedef SInt16 AudioSampleType;
typedef SInt32 AudioUnitSampleType;
#define kAudioUnitSampleFractionBits 24
#endif
and again, jumping to the definition of CA_PREFER_FIXED_POINT, I see:
#if !defined(CA_PREFER_FIXED_POINT)
#if TARGET_OS_IPHONE
#if (TARGET_CPU_X86 || TARGET_CPU_X86_64 || TARGET_CPU_PPC || TARGET_CPU_PPC64) && !TARGET_IPHONE_SIMULATOR
#define CA_PREFER_FIXED_POINT 0
#else
#define CA_PREFER_FIXED_POINT 1
#endif
#else
#define CA_PREFER_FIXED_POINT 0
#endif
#endif
Checking in my code at run-time, I see that CA_PREFER_FIXED_POINT
is defined to be 1, both on the simulator and on my iPod.
So, my questions:
1. Is the canonical audio sample type in fact SInt16 on the device?
2. Does it ever make sense to set CA_PREFER_FIXED_POINT to 0 (when programming for iPhone)?

Answer:
Read the contents of the link, and this line in your headers again:
#define kAudioUnitSampleFractionBits 24
The canonical type for audio input and output is equivalent to SInt16.
The canonical type for other audio processing, such as the new iOS 5 filter Audio Units, is 8.24 signed fixed-point.
If you write your own DSP code for near-real-time iOS audio processing, benchmark it with the different types: on some of the newest ARM cores, sequences of 32-bit floats are often faster than either of the canonical types above, and hand-coded NEON assembly can be faster still.