Sound Analysis


Analyze streamed and file-based audio to classify it as a particular type using Sound Analysis.

Sound Analysis Documentation

Posts under Sound Analysis tag

11 Posts
Post not yet marked as solved
1 Reply
64 Views
Hello, I can see many apps that analyze sound from the microphone in real time. Is there another library like AudioKit, or are all of them made with AudioKit? Thanks
Posted by Xavi1984. Last updated.
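For what it's worth, Apple's first-party SoundAnalysis framework can classify live microphone audio without AudioKit. A minimal script-style sketch under that assumption (error handling omitted; `PrintingObserver` is an illustrative name, not an API type):

```swift
import AVFoundation
import SoundAnalysis

// Illustrative observer; prints each classification the analyzer produces.
final class PrintingObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
}

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Stream analyzer fed directly from a microphone tap.
let analyzer = SNAudioStreamAnalyzer(format: format)
let observer = PrintingObserver()
let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
try analyzer.add(request, withObserver: observer)

input.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
    analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
}
try engine.start()
```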
Post not yet marked as solved
0 Replies
179 Views
I'm working with MLSoundClassifier to try to detect 2 different sounds in a live audio stream. The team has been debating whether it is better to train 2 separate models, one for each sound, or 1 model on both sounds. Has anyone had any experience with this? Some of us believe we have received better results with the separate models, and some with 1 single model trained on both sounds. Thank you!
Posted by mhbucklin. Last updated.
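For context, both options can be expressed with Create ML's MLSoundClassifier on macOS; which trains better is an empirical question. A hedged sketch, assuming a labeled-directory layout (all directory and file names here are hypothetical):

```swift
import CreateML
import Foundation

// Hypothetical layout: each subfolder name is a class label, e.g.
//   Sounds/siren/*.wav, Sounds/glassBreak/*.wav, Sounds/background/*.wav
let trainingDir = URL(fileURLWithPath: "Sounds")

// Option A: one model trained on both target sounds (plus a background class).
let combined = try MLSoundClassifier(
    trainingData: .labeledDirectories(at: trainingDir))
try combined.write(to: URL(fileURLWithPath: "Combined.mlmodel"))

// Option B: one binary model per sound, each trained from its own directory
// (e.g. Sirens/siren + Sirens/background), built the same way. At run time
// you would attach one SNClassifySoundRequest per model to the same
// SNAudioStreamAnalyzer and compare detection quality against Option A.
```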
Post not yet marked as solved
2 Replies
147 Views
I've only been using this late-2021 MBP 16 for nearly 2 years, and now the speaker is producing a crackling sound. Upon inquiring about repairs, customer service informed me that it would cost $728 to replace the speaker, a third of the price of the laptop itself. It's absurd that a $2200 laptop's speaker would fail within such a short period without any external damage, and the repair cost is outrageous. I intend to start a petition in the US, hoping to connect with others experiencing the same problem. This is indicative of a subpar product, and customers shouldn't bear the burden of Apple's shortcomings. I plan to share my grievances on various social media platforms, and if the issue persists, I will escalate it to the media for further exposure.
Posted. Last updated.
Post not yet marked as solved
0 Replies
295 Views
I am working on a design that requires connecting an iOS device to two audio output devices, specifically headphones and a speaker. I want the audio driver to switch the output device without user action. Is this manageable via the iOS SDK?
Posted by AnLiu. Last updated.
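As far as the public SDK goes, AVAudioSession does not appear to offer independent control of two simultaneous outputs (the `.multiRoute` category may be worth investigating for multi-output hardware), but it can redirect the route in code and report route changes without user action. A sketch under those assumptions:

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord)
try session.setActive(true)

// Programmatically send output to the built-in speaker instead of the
// default route (the receiver, for .playAndRecord).
try session.overrideOutputAudioPort(.speaker)

// React to route changes (e.g. headphones plugged/unplugged) in code.
// Retain the returned token for as long as you want the observation.
let token = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil, queue: .main) { _ in
    for output in session.currentRoute.outputs {
        print("now routed to \(output.portType.rawValue)")
    }
    // Decide here whether to re-apply the override.
}
```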
Post not yet marked as solved
1 Reply
334 Views
Hi, I have been trying to connect my microphone in Reason Studio for days now without any outcome. I was asked to download an ASIO driver on my Mac, but I realised that I have an IAC driver. I need help downloading the ASIO driver, and I wish to know if there will be a problem running it alongside the IAC driver. I also just upgraded, without realising, from Ventura to Sonoma. I am using an audio interface (Focusrite Scarlett Solo) to connect to the Reason application, and my mic is a RODE NT1-A; I can get a sound but cannot record. I would be overwhelmed if I could get help here. Thanks
Posted. Last updated.
Post not yet marked as solved
1 Reply
397 Views
Is it possible to programmatically pick the user's current device tone/sound as the push notification sound for my app, so that they do not miss any notification just because of an unsuitable sound? I have gone through the UNNotificationSound class; it provides the option to use either the default or a custom sound. But my concern is to automatically pick the sound the user has set on their device. Thanks in advance!
Posted by YR23. Last updated.
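As a point of reference, UNNotificationSound only exposes the system default sound and app-bundled custom sounds; the user's chosen ringtone/alert tone is not readable by third-party apps. A sketch of the closest available option, the default sound:

```swift
import UserNotifications

let content = UNMutableNotificationContent()
content.title = "Hello"
content.body = "A notification using the system default sound."
// .default plays the standard system alert sound; the specific tone the
// user has chosen in Settings is not exposed to third-party apps.
content.sound = .default

let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 5, repeats: false)
let request = UNNotificationRequest(identifier: UUID().uuidString,
                                    content: content, trigger: trigger)
UNUserNotificationCenter.current().add(request) { error in
    if let error = error { print("scheduling failed: \(error)") }
}
```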
Post not yet marked as solved
1 Reply
830 Views
I was playing a PCM file (24 kHz, 16-bit) with AudioQueue, mixed with another sound (192 kHz, 24-bit) named sound2. The AVAudioSession is set up with category AVAudioSessionCategoryPlayback and options AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionDuckOthers. When the PCM plays, sound2 should be ducked to a lower volume, per the settings. BUT there is an absolute mute lasting 0.5 seconds when sound2 starts ducking, and only after that 0.5 s mute does the ducked sound come out. It only happens in the sound2 case (192 kHz, 24-bit); if sound2's quality is lower, everything is OK.
Posted by yuanjilee. Last updated.
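For reference, the session configuration described above in the modern Swift spelling; a sketch only, with the AudioQueue playback itself omitted:

```swift
import AVFoundation

// Playback category that mixes with, and ducks, other apps' audio.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, options: [.mixWithOthers, .duckOthers])
try session.setActive(true)

// ... start the AudioQueue here; other audio ducks while the session
// is active ...

// Deactivating with .notifyOthersOnDeactivation lets other audio
// return to full volume.
try session.setActive(false, options: [.notifyOthersOnDeactivation])
```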
Post not yet marked as solved
0 Replies
367 Views
Please teach me: I want to get a list of SystemSoundIDs and the files they map to (officially, if possible). For example: 1001 - mail-sent.caf, 1002 - Voicemail.caf, and so on. I know where the .caf files are on iPhone (/System/Library/Audio/UISounds/), but I don't know the relation between a .caf file and its sound ID.
Posted. Last updated.
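To my knowledge Apple does not publish an official SystemSoundID-to-file mapping; the ID lists that circulate are community-compiled. Two hedged ways to work with system sounds anyway (the specific ID 1007 and the file name below are taken from common community lists and the post above, not from official documentation):

```swift
import AudioToolbox
import Foundation

// SystemSoundIDs in the 1000+ range map to files under
// /System/Library/Audio/UISounds/, but the mapping is undocumented.
let soundID: SystemSoundID = 1007  // commonly reported as an alert tone
AudioServicesPlaySystemSound(soundID)

// Alternatively, register a .caf file yourself and get an ID back,
// which sidesteps the undocumented numbering entirely.
let url = URL(fileURLWithPath: "/System/Library/Audio/UISounds/Voicemail.caf")
var customID: SystemSoundID = 0
AudioServicesCreateSystemSoundID(url as CFURL, &customID)
AudioServicesPlaySystemSound(customID)
```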
Post not yet marked as solved
0 Replies
752 Views
I'm seeing unexpected results when examining the output of a sound classification test. While I appear to get an accurate startTime for observations, the duration is always the same as the value put into windowDuration. I'm guessing I'm misunderstanding the purpose of duration in the classification results. The linked documentation says: "The time range's CMTime values are the number of audio frames at the analyzer's sample rate. Use these time indices to determine where, in time, the result corresponds to the original audio." My understanding of this statement is that it should give me the startTime AND the duration of that detection event. For example, if I attempt to detect a crowd sound and that sound lasts for 1.8 seconds, then I should see 1.8 seconds in the duration. Below is some code showing what I'm seeing, initialising request.windowDuration to 1 second. If I change this to any other value, that value is reported back as the duration of the event, even if the event is half a second in duration. Any help with either a code issue or understanding the results better would be appreciated. Thanks.

```swift
let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
request.overlapFactor = 0.8
request.windowDuration = CMTimeMakeWithSeconds(1, preferredTimescale: 600)
```

My code to get the values out of the SNResult:

```swift
func request(_ request: SNRequest, didProduce result: SNResult) {
    guard let analysisResult = result as? SNClassificationResult,
          let predominantSound = analysisResult.classifications.first?.identifier,
          soundsToDetect.contains(predominantSound)
    else { return }

    let startTime = analysisResult.timeRange.start.seconds
    let duration = analysisResult.timeRange.duration.seconds
    let confidence = analysisResult.classifications.first?.confidence ?? 0.0

    let detectedSound = ClassificationObject(id: UUID(), name: predominantSound,
                                             startTime: startTime,
                                             duration: duration,
                                             confidence: confidence)
    self.detectedSounds.append(detectedSound)
}
```
Posted by 3saul. Last updated.
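If each result's timeRange.duration simply echoes windowDuration, one hedged workaround is to merge consecutive overlapping results that report the same label into a single event, recovering a true event duration. A pure-Swift sketch; the types and function names are illustrative, not part of SoundAnalysis:

```swift
import Foundation

// Illustrative per-window result: label plus start/duration in seconds.
struct WindowResult {
    let label: String
    let start: Double
    let duration: Double
}

// Merged event spanning one or more overlapping windows of the same label.
struct SoundEvent: Equatable {
    let label: String
    let start: Double
    let end: Double
}

// Merge consecutive windows whose label matches and whose spans overlap
// (or touch), producing one event with the combined duration.
func mergeWindows(_ windows: [WindowResult]) -> [SoundEvent] {
    var events: [SoundEvent] = []
    for w in windows {
        let wEnd = w.start + w.duration
        if let last = events.last, last.label == w.label, w.start <= last.end {
            events[events.count - 1] = SoundEvent(label: last.label,
                                                  start: last.start,
                                                  end: max(last.end, wEnd))
        } else {
            events.append(SoundEvent(label: w.label, start: w.start, end: wEnd))
        }
    }
    return events
}

// Example: 1 s windows stepping by 0.2 s (overlapFactor 0.8); a "crowd"
// sound covered by windows starting at 0.0...0.8 merges into one event
// spanning 0.0 to 1.8 seconds.
let windows = (0..<5).map { WindowResult(label: "crowd",
                                         start: Double($0) * 0.2,
                                         duration: 1.0) }
let events = mergeWindows(windows)
print(events)
```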