Access raw audio data with AudioQueueBuffer

I am trying to get the raw audio data from the system microphone using the AudioToolbox and CoreFoundation frameworks. The logic that writes packets to a file works so far, but when I try to capture the raw sample data into a separate file I get white noise.

Callback function looks like this:

static void MyAQInputCallback(void *inUserData, AudioQueueRef inQueue,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    MyRecorder *recorder = (MyRecorder *)inUserData;

    if (inNumPackets > 0)
    {
        // Write the incoming packets to the audio file (this part works).
        CheckError(AudioFileWritePackets(recorder->recordFile, FALSE, inBuffer->mAudioDataByteSize,
                                         inPacketDesc, recorder->recordPacket, &inNumPackets,
                                         inBuffer->mAudioData), "AudioFileWritePackets failed");

        recorder->recordPacket += inNumPackets;

        // Dump the buffer as individual samples to a text file
        // (this is the part that produces white noise / garbage values).
        int sampleCount = inBuffer->mAudioDataByteSize / sizeof(AUDIO_DATA_TYPE_FORMAT);
        AUDIO_DATA_TYPE_FORMAT *samples = (AUDIO_DATA_TYPE_FORMAT *)inBuffer->mAudioData;
        FILE *fp = fopen(filename, "a");
        for (int i = 0; i < sampleCount; i++) {
            fprintf(fp, "%i;\n", samples[i]);
        }
        fclose(fp);
    }

    // Hand the buffer back to the queue so it can keep filling it.
    if (recorder->running)
        CheckError(AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
}

Some parameters:

NumberRecordBuffers = 3
buffer duration = 0.1 s
format->mFramesPerPacket = 4096
samplerate = 44100
inNumPackets = 1
recordFormat.mFormatID = kAudioFormatAppleLossless
recordFormat.mChannelsPerFrame = 1
recordFormat.mBitsPerChannel = 16
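
For context, the queue setup looks roughly like this (CheckError, MyRecorder and the exact buffer-size computation come from the rest of my code, so treat this as a sketch rather than the literal source):

// Inside my setup code, after recordFormat has been filled in:
MyRecorder recorder = {0};
AudioQueueRef queue = NULL;
CheckError(AudioQueueNewInput(&recordFormat, MyAQInputCallback, &recorder,
                              NULL, NULL, 0, &queue),
           "AudioQueueNewInput failed");

UInt32 bufferByteSize = 32768;  // placeholder; the real value is derived from the format and the 0.1 s buffer duration
for (int i = 0; i < 3; i++) {   // NumberRecordBuffers = 3
    AudioQueueBufferRef buffer;
    CheckError(AudioQueueAllocateBuffer(queue, bufferByteSize, &buffer),
               "AudioQueueAllocateBuffer failed");
    CheckError(AudioQueueEnqueueBuffer(queue, buffer, 0, NULL),
               "AudioQueueEnqueueBuffer failed");
}

recorder.running = TRUE;
CheckError(AudioQueueStart(queue, NULL), "AudioQueueStart failed");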

Is this the correct way to do this? I could not find much information in the documentation. Any help is appreciated. Thank you in advance.

Replies

Does it make a difference if you open the file in binary mode rather than text mode? FILE *fp = fopen(filename, "ab+");

  • Changing the file mode made no difference. The values are still gibberish even when printed to the terminal.


Instead of writing the data out value by value, note that you already have the whole buffer in mAudioData. Why not just write the entire data buffer to the file stream whenever the callback fires?
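
Something along these lines, for example (an untested sketch; it assumes the `filename` variable from your snippet is reachable from the callback):

// Append the whole buffer as raw bytes instead of formatting each sample as text.
FILE *fp = fopen(filename, "ab");
if (fp) {
    fwrite(inBuffer->mAudioData, 1, inBuffer->mAudioDataByteSize, fp);
    fclose(fp);
}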

  • Isn't the data buffer still in binary format? I am casting to SInt16 to get the amplitude of the sound. I suspect the cast is wrong, or that the buffer has some encoding I am missing.


You might have better luck using the AVFoundation API for reading and writing audio to the file system. https://developer.apple.com/documentation/avfaudio/audio_engine/performing_offline_audio_processing

I solved it by setting recordFormat.mFormatID = kAudioFormatLinearPCM and recordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked. The problem was definitely the Apple Lossless format: the buffers held compressed packets, so interpreting them as raw SInt16 samples produced noise.
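
For anyone who runs into the same thing, the format description I ended up with looks roughly like this (the derived mBytesPerFrame / mBytesPerPacket values and mFramesPerPacket = 1 are what I use now for PCM, not the values from the original post):

AudioStreamBasicDescription recordFormat = {0};
recordFormat.mFormatID         = kAudioFormatLinearPCM;
recordFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
recordFormat.mSampleRate       = 44100.0;
recordFormat.mChannelsPerFrame = 1;
recordFormat.mBitsPerChannel   = 16;
recordFormat.mBytesPerFrame    = recordFormat.mChannelsPerFrame * (recordFormat.mBitsPerChannel / 8);
recordFormat.mFramesPerPacket  = 1;  // uncompressed PCM uses one frame per packet
recordFormat.mBytesPerPacket   = recordFormat.mBytesPerFrame * recordFormat.mFramesPerPacket;

With linear PCM in the buffers, casting inBuffer->mAudioData to SInt16 gives the actual sample values.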
