Tags: ios, objective-c, wav, avaudiorecorder, audacity

Audio file format issue in Objective-C


I have recorded my voice to a WAV file using AVAudioRecorder. The file is saved successfully and plays back fine. I now want to send it to a back-end server (web service), but the server only accepts WAV files that contain just the fmt and data chunks. My file is rejected because it also contains an FLLR chunk; inspecting it in the RIFFPad tool shows FLLR, data, and fmt chunks. How can I remove the FLLR chunk programmatically so that only fmt and data remain?

[Screenshot: RIFFPad chunk listing showing the FLLR, data, and fmt chunks]

My recording code:

    NSError *error;

    // Recording settings: 16-bit little-endian linear PCM, 22.05 kHz, mono
    NSMutableDictionary *settings = [NSMutableDictionary dictionary];
    [settings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey];
    [settings setValue:[NSNumber numberWithFloat:22050] forKey:AVSampleRateKey];
    [settings setValue:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey]; // mono
    [settings setValue:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
    [settings setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];

    // File URL
    NSURL *url = [NSURL fileURLWithPath:FILEPATH];

    // Create recorder
    recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];
    if (!recorder)
    {
        NSLog(@"Error establishing recorder: %@", error.localizedFailureReason);
        return;
    }
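
The snippet above only configures the recorder; for completeness, a minimal sketch of actually starting and stopping the capture (these are the standard AVAudioRecorder calls, not part of my original snippet) would be:

    // Minimal sketch: start capturing, then stop when finished.
    [recorder prepareToRecord];   // creates the file at FILEPATH
    [recorder record];            // begins writing samples to the WAV file

    // ... later, e.g. when the user taps a stop button:
    [recorder stop];              // finalizes and closes the file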

Solution

  • Thank God and thank you for your support, friends. I finally solved my issue. I am not sure this is the correct way, but it works for me: I record the voice with the code above, save the audio, and then run the saved file through the following code, which strips the existing header and writes a fresh one. I got the code from https://developer.ibm.com/answers/questions/180732/seems-watson-text-to-speech-service-returns-a-wav.html

    // Drops the first 44 bytes of the input (assumed to be the existing header)
    // and writes a fresh minimal RIFF/fmt/data header in front of the rest.
    - (NSData *)stripAndAddWavHeader:(NSData *)wav {
        unsigned long wavDataSize = [wav length] - 44;

        NSData *waveFile = [wav subdataWithRange:NSMakeRange(44, wavDataSize)];

        return [self addWavHeader:waveFile];
    }
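
    A minimal sketch of how the helper might be called on the recorded file before uploading; FILEPATH and the upload call are placeholders, not part of the linked answer:

        // Hypothetical call site: read the recorded file, rebuild its header,
        // then hand the cleaned bytes to the web-service upload code.
        NSData *recorded = [NSData dataWithContentsOfFile:FILEPATH];
        NSData *cleaned  = [self stripAndAddWavHeader:recorded];
        // [self uploadWavData:cleaned];   // placeholder for the actual upload call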
    
    
    // Builds a minimal 44-byte RIFF header (fmt + data chunks only) in front
    // of the raw PCM bytes. Values must match the recording settings above.
    - (NSMutableData *)addWavHeader:(NSData *)wavNoheader {
        int headerSize = 44;
        long totalAudioLen = [wavNoheader length];
        long totalDataLen = totalAudioLen + headerSize - 8;    // RIFF chunk size field
        long longSampleRate = 22050;                           // must match AVSampleRateKey
        int channels = 1;                                      // mono
        long byteRate = longSampleRate * channels * 16 / 8;    // sampleRate * channels * bytesPerSample

        Byte *header = (Byte *)malloc(44);
        header[0] = 'R';  // RIFF/WAVE header
        header[1] = 'I';
        header[2] = 'F';
        header[3] = 'F';
        header[4] = (Byte) (totalDataLen & 0xff);
        header[5] = (Byte) ((totalDataLen >> 8) & 0xff);
        header[6] = (Byte) ((totalDataLen >> 16) & 0xff);
        header[7] = (Byte) ((totalDataLen >> 24) & 0xff);
        header[8] = 'W';
        header[9] = 'A';
        header[10] = 'V';
        header[11] = 'E';
        header[12] = 'f';  // 'fmt ' chunk
        header[13] = 'm';
        header[14] = 't';
        header[15] = ' ';
        header[16] = 16;   // size of 'fmt ' chunk (4 bytes, little-endian)
        header[17] = 0;
        header[18] = 0;
        header[19] = 0;
        header[20] = 1;    // audio format = 1 (PCM)
        header[21] = 0;
        header[22] = (Byte) channels;
        header[23] = 0;
        header[24] = (Byte) (longSampleRate & 0xff);
        header[25] = (Byte) ((longSampleRate >> 8) & 0xff);
        header[26] = (Byte) ((longSampleRate >> 16) & 0xff);
        header[27] = (Byte) ((longSampleRate >> 24) & 0xff);
        header[28] = (Byte) (byteRate & 0xff);
        header[29] = (Byte) ((byteRate >> 8) & 0xff);
        header[30] = (Byte) ((byteRate >> 16) & 0xff);
        header[31] = (Byte) ((byteRate >> 24) & 0xff);
        header[32] = (Byte) (channels * 16 / 8);  // block align = channels * bytesPerSample
        header[33] = 0;
        header[34] = 16;   // bits per sample
        header[35] = 0;
        header[36] = 'd';  // 'data' chunk
        header[37] = 'a';
        header[38] = 't';
        header[39] = 'a';
        header[40] = (Byte) (totalAudioLen & 0xff);
        header[41] = (Byte) ((totalAudioLen >> 8) & 0xff);
        header[42] = (Byte) ((totalAudioLen >> 16) & 0xff);
        header[43] = (Byte) ((totalAudioLen >> 24) & 0xff);

        NSMutableData *newWavData = [NSMutableData dataWithBytes:header length:44];
        [newWavData appendBytes:[wavNoheader bytes] length:[wavNoheader length]];
        free(header);      // dataWithBytes: copies the buffer, so it can be released
        return newWavData;
    }
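
    One caveat: stripAndAddWavHeader: assumes the existing header is exactly 44 bytes, but a file written by AVAudioRecorder can carry extra chunks (such as the FLLR padding chunk from the question) before the audio samples. A more defensive sketch, assuming a standard little-endian RIFF layout, locates the data chunk by walking the chunk list instead of hard-coding an offset; the method name findDataChunkRange: is hypothetical, not from the linked IBM answer:

        // Hypothetical helper: walk the RIFF chunks and return the range of the
        // raw samples inside the 'data' chunk, skipping 'fmt ', 'FLLR' and anything else.
        - (NSRange)findDataChunkRange:(NSData *)wav {
            const Byte *bytes = (const Byte *)[wav bytes];
            NSUInteger length = [wav length];
            NSUInteger offset = 12;                        // skip "RIFF" + size + "WAVE"
            while (offset + 8 <= length) {
                // Each chunk: 4-byte ID, 4-byte little-endian size, then the payload.
                uint32_t chunkSize = (uint32_t)bytes[offset + 4]
                                   | ((uint32_t)bytes[offset + 5] << 8)
                                   | ((uint32_t)bytes[offset + 6] << 16)
                                   | ((uint32_t)bytes[offset + 7] << 24);
                if (memcmp(bytes + offset, "data", 4) == 0) {
                    return NSMakeRange(offset + 8, MIN(chunkSize, length - offset - 8));
                }
                offset += 8 + chunkSize + (chunkSize & 1); // chunks are word-aligned
            }
            return NSMakeRange(NSNotFound, 0);             // no 'data' chunk found
        }

    The subdata for that range could then be passed to addWavHeader: in place of the fixed 44-byte strip.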