Integrate photo, audio, and video content into your apps.

Media Documentation

Posts under the Media tag

62 Posts
3 Replies
1.3k Views
Hi community, I'm developing an application for macOS and I need to capture the mic audio stream. Using Core Audio in Swift, I'm able to capture the audio stream with IO Procs, and I have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation. The problem is that when I use AUVoiceProcessing, the gain of both devices gets reduced, which affects the volume of both the microphone and the speaker. I have tried to disable AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same. Is there any option to disable the gain reduction, or is there a better approach to get echo cancellation working?
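For reference, a minimal sketch of how the AGC flag is usually toggled on a voice-processing I/O unit with AudioUnitSetProperty; `voiceUnit` is a placeholder for an already-created kAudioUnitSubType_VoiceProcessingIO unit, and some level reduction may simply be inherent to the voice-processing unit's ducking behavior rather than to AGC.

    import AudioToolbox

    // Disable automatic gain control on an existing voice-processing I/O unit.
    // `voiceUnit` is assumed to be a kAudioUnitSubType_VoiceProcessingIO AudioUnit.
    func disableAGC(on voiceUnit: AudioUnit) -> OSStatus {
        var enableAGC: UInt32 = 0   // 0 = off, 1 = on
        return AudioUnitSetProperty(voiceUnit,
                                    kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                    kAudioUnitScope_Global,
                                    0,
                                    &enableAGC,
                                    UInt32(MemoryLayout<UInt32>.size))
    }

Checking the returned OSStatus shows whether the property was actually applied; if it was and the levels still drop, the reduction is likely coming from the voice-processing stage itself rather than from AGC.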
0 Replies
368 Views
I am trying to locate information or documentation on how to pull in photos from the iCloud Shared Albums, but have not been able to find anything yet. Dakboard is currently doing it so I know it is possible, but I cannot find an API or any documentation covering how to access the photos in a Shared Album for incorporation into web applications. Can anyone help?
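I'm not aware of a documented web API for Shared Albums, but for a native app PhotoKit can at least enumerate them on-device; a hedged sketch, assuming photo-library read authorization has already been granted:

    import Photos

    // Fetch the iCloud Shared Albums visible in the user's library.
    let sharedAlbums = PHAssetCollection.fetchAssetCollections(with: .album,
                                                               subtype: .albumCloudShared,
                                                               options: nil)
    sharedAlbums.enumerateObjects { collection, _, _ in
        let assets = PHAsset.fetchAssets(in: collection, options: nil)
        print("\(collection.localizedTitle ?? "Untitled"): \(assets.count) items")
    }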
0 Replies
251 Views
Is there any royalty fee required if I develop a camera that saves video in the MOV format?
Posted by WinG.CEBM.
0 Replies
475 Views
Hi there, I'm wondering how to get the GroupSessionJournal API to work. I have gone through the "Share files with SharePlay" session from WWDC23 and have been unsuccessful at getting the DrawTogether example app to sync images using GroupSessionJournal as described and shown in the session. When I run the DrawTogether example app with the GroupSessionJournal code in it, the two devices see one another and the strokes update across both devices in real time (via GroupSessionMessenger), but the image code doesn't cause images loaded on either side to sync to the other device. Is GroupSessionJournal still in beta, and/or am I missing something? Cheers! j*
Posted by Jefc.
3 Replies
787 Views
https://developer.apple.com/documentation/shazamkit/shazamkit_dance_finder_with_managed_session
The song detection is successful; however, with the new APIs I can't get this demo working with SHLibrary, which is expected to display the RecentDanceRowView. I wonder if I missed any steps or if SHLibrary is not ready yet.
Posted by Ruizhe.
3 Replies
558 Views
The documentation for this API mentions: "The system uses the current representation and avoids transcoding, if possible." What are the scenarios in which transcoding takes place? The reason for asking is that we've had a user reaching out saying they selected a video file from their Photos app, which resulted in a decrease in size from ~110MB to 35MB. We find it unlikely it's transcoding-related, but we want to gain more insights into the possible scenarios.
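Assuming the API in question is PHPickerConfiguration.preferredAssetRepresentationMode, a minimal sketch of requesting the current representation; per the documentation quoted above, the system can still transcode when the stored representation can't be handed over directly:

    import PhotosUI

    var configuration = PHPickerConfiguration(photoLibrary: .shared())
    configuration.filter = .videos
    // Ask for the asset as currently stored; the system falls back to transcoding
    // only when that representation can't be delivered as-is.
    configuration.preferredAssetRepresentationMode = .current

    let picker = PHPickerViewController(configuration: configuration)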
Posted by AvdLee.
2 Replies
648 Views
I know that I can uniquely identify a PHAsset on a given device using localIdentifier, but if that asset is synced (through iCloud, say) to another device, how do I uniquely identify that asset across multiple devices? My app allows users to store their images in the standard photo gallery, but I have no way of referring to them when they sync their app profile to another iOS device with my app installed.
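One option on iOS 15 and later is PHCloudIdentifier, which is intended to stay stable for an asset across devices syncing the same library; a hedged sketch:

    import Photos

    // Map local identifiers to cloud identifiers that remain stable across synced devices.
    func cloudIdentifierStrings(for localIdentifiers: [String]) -> [String: String] {
        var result: [String: String] = [:]
        let mappings = PHPhotoLibrary.shared().cloudIdentifierMappings(forLocalIdentifiers: localIdentifiers)
        for (localID, mapping) in mappings {
            if case .success(let cloudID) = mapping {
                result[localID] = cloudID.stringValue   // store this in the synced app profile
            }
        }
        return result
    }

On the other device, PHPhotoLibrary.shared().localIdentifierMappings(for:) converts the stored cloud identifiers back into that device's local identifiers.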
1 Reply
985 Views
Is it still possible to author a QuickTime movie with hyperlinks? I'm building a website, and I know at one point you could author a QuickTime movie that supported links inside the video, either to other timestamps in the video or to other web pages. I don't want to use a custom player; I'd prefer to use the system-level player. I've seen a really amazing example of this on the mobile version of the Memory Alpha (Star Trek nerds!) website. There is a movie that plays at the top of pages that is fully interactive. Is that still supported? Is it possible to author that way? I'm not making anything insanely complicated, I just thought it would be a nice way to build a website with tools I'm more comfortable working in.
Posted by videoalex.
0 Replies
553 Views
I'm trying to use the resourceLoader of an AVAsset to progressively supply media data. I'm unable to because the delegate asks for the full content (requestsAllDataToEndOfResource = true).

    class ResourceLoader: NSObject, AVAssetResourceLoaderDelegate {
        func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                            shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
            if let ci = loadingRequest.contentInformationRequest {
                ci.contentType = // public.mpeg-4
                ci.contentLength = // GBs
                ci.isEntireLengthAvailableOnDemand = false
                ci.isByteRangeAccessSupported = true
            }
            if let dr = loadingRequest.dataRequest {
                if dr.requestedLength > 200_000_000 {
                    // memory pressure
                    // dr.requestsAllDataToEndOfResource is true
                }
            }
            return true
        }
    }

I also tried using a fragmented MP4 created with AVAssetWriter, but that didn't work either. Please let me know if it's possible for the AVAssetResourceLoader to not ask for the full content.
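For what it's worth, even when requestsAllDataToEndOfResource is true the delegate doesn't have to buffer everything up front: respond(with:) can be called repeatedly as bytes become available, and finishLoading() only once the range has been fully delivered. A rough sketch, where the chunk source and the isLast flag are placeholders:

    import AVFoundation

    // Hypothetical helper: feed a loading request in bounded chunks instead of one large buffer.
    func deliver(_ chunk: Data, to loadingRequest: AVAssetResourceLoadingRequest, isLast: Bool) {
        loadingRequest.dataRequest?.respond(with: chunk)   // can be called many times per request
        if isLast {
            loadingRequest.finishLoading()                  // only after the full requested range is supplied
        }
    }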
Posted by on-d-go.
3 Replies
1.8k Views
I am trying to save an image to the user's photo library using PHPhotoLibrary and set the image file name at the time of saving using the code below. This works the first time, but if I then try to save the same image again with a different file name, it saves with the same file name as before. Is there something I need to add to let the system know to save a new version of the image with a new file name? Thank you

    PHPhotoLibrary.shared().performChanges({
        let assetType: PHAssetResourceType = .photo
        let request: PHAssetCreationRequest = .forAsset()
        let createOptions: PHAssetResourceCreationOptions = PHAssetResourceCreationOptions()
        createOptions.originalFilename = "\(fileName)"
        request.addResource(with: assetType, data: image.jpegData(compressionQuality: 1)!, options: createOptions)
    }, completionHandler: { success, error in
        if success == true && error == nil {
            print("Success saving image")
        } else {
            print("Error saving image: \(error!.localizedDescription)")
        }
    })
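One way to check what Photos actually recorded is to keep the placeholder's local identifier from the creation request and inspect the new asset's resources afterwards; a hedged diagnostic sketch (image and fileName are the same placeholders as in the snippet above):

    import Photos

    var createdLocalID: String?

    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetCreationRequest.forAsset()
        let options = PHAssetResourceCreationOptions()
        options.originalFilename = fileName
        request.addResource(with: .photo, data: image.jpegData(compressionQuality: 1)!, options: options)
        createdLocalID = request.placeholderForCreatedAsset?.localIdentifier
    }, completionHandler: { success, _ in
        guard success, let id = createdLocalID,
              let asset = PHAsset.fetchAssets(withLocalIdentifiers: [id], options: nil).firstObject else { return }
        // Print the filename(s) Photos stored for the newly created asset.
        for resource in PHAssetResource.assetResources(for: asset) {
            print(resource.originalFilename)
        }
    })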
Posted by tomcoomer.
1 Reply
509 Views
I am aware that HLS is required for most video streaming use cases (watching a movie, TV show, or YouTube video). This is a requirement for all apps. However, I am confused as to whether this would also apply to video chat/video conferencing apps. It would be inefficient to upload compressed video using rtmp/rtp, decompress it, and create HLS segments. Low latency requirements only make this worse. So, is it permissible to use other protocols for video conferencing use cases? Thanks
Posted by xosteddd.
0 Replies
574 Views
Hi, replay doesn't work for HLS videos (.m3u8) in Safari on iOS 16 when you get to the end of a video. It works with .mp4s, .movs, etc. I have filed a GitHub issue on the video.js repo here: https://github.com/videojs/video.js/issues/8345 But I'm starting to think it's the new native iOS 16 player that is causing issues, and not the library itself.
Posted by leo8.
4 Replies
1.9k Views
Hi, I'm trying to add a video to my first iOS app. From the tutorials I've read online, this seemed to be a simple process of creating an AVPlayer, providing a URL to the video file, and using onAppear to start the video playing when the view is shown. Below is a simplified version of the code I'm using in my app:

    struct ContentView: View {
        let avPlayer = AVPlayer(url: Bundle.main.url(forResource: "Intro", withExtension: "mp4")!)

        var body: some View {
            VStack {
                VideoPlayer(player: avPlayer)
                    .onAppear {
                        avPlayer.play()
                    }
            }
        }
    }

When I run this code, the video plays, but when it finishes playing I receive the following errors in the Xcode output window:

    2023-01-27 11:56:39.530526+1100 TestVideo[29859:2475750] [api] -[CIImage initWithCVPixelBuffer:options:] failed because the buffer is nil.
    2023-01-27 11:56:39.676462+1100 TestVideo[29859:2475835] [TextIdentificationService] Text LID dominant failure: lidInconclusive
    2023-01-27 11:56:39.676822+1100 TestVideo[29859:2475835] [VisualTranslationService] Visual isTranslatable: NO; not offering translation: lidInconclusive
    2023-01-27 11:56:40.569337+1100 TestVideo[29859:2476091] Metal API Validation Enabled

I have googled each of these error messages but have not been able to find any information explaining exactly what they mean or how to eliminate them. I am using Xcode 14.2 and testing on iOS 16.2. If anyone could please point me in the right direction of how to understand and eliminate these errors I'd really appreciate it. Thanks!
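Those messages appear to come from system frameworks (image analysis / Live Text, and Metal validation in debug builds) rather than from the app's own code, so they are most likely informational. If the goal is simply to react cleanly when the video finishes, one hedged option is to observe the end-of-playback notification:

    import SwiftUI
    import AVKit
    import Combine

    struct ContentView: View {
        let avPlayer = AVPlayer(url: Bundle.main.url(forResource: "Intro", withExtension: "mp4")!)

        var body: some View {
            VideoPlayer(player: avPlayer)
                .onAppear { avPlayer.play() }
                // Rewind (or dismiss, loop, etc.) when the item reaches its end.
                .onReceive(NotificationCenter.default.publisher(for: .AVPlayerItemDidPlayToEndTime,
                                                                object: avPlayer.currentItem)) { _ in
                    avPlayer.seek(to: .zero)
                }
        }
    }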
Posted by Bandit03.
0 Replies
741 Views
If I disable playback controls for an AVPlayer (showsPlaybackControls), some features of MPNowPlayingInfoCenter no longer work (play/pause, skip forward and backward). I need custom video and audio controls on my AVPlayer in my app, which is why I disabled the iOS playback controls. But I also need the features of MPNowPlayingInfoCenter. Is there another solution to achieve this?
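The usual pattern when hiding the system controls is to publish now-playing metadata yourself and register handlers with MPRemoteCommandCenter, so the lock screen / Control Center transport keeps working; a minimal sketch (assumes an active playback audio session; the title string is a placeholder):

    import AVFoundation
    import MediaPlayer

    func configureRemoteCommands(for player: AVPlayer) {
        let center = MPRemoteCommandCenter.shared()

        center.playCommand.addTarget { _ in
            player.play()
            return .success
        }
        center.pauseCommand.addTarget { _ in
            player.pause()
            return .success
        }
        center.skipForwardCommand.preferredIntervals = [15]
        center.skipForwardCommand.addTarget { event in
            guard let skip = event as? MPSkipIntervalCommandEvent else { return .commandFailed }
            player.seek(to: player.currentTime() + CMTime(seconds: skip.interval, preferredTimescale: 600))
            return .success
        }

        // Minimal now-playing metadata so the system UI has something to show.
        MPNowPlayingInfoCenter.default().nowPlayingInfo = [
            MPMediaItemPropertyTitle: "My video",            // placeholder title
            MPNowPlayingInfoPropertyPlaybackRate: 1.0
        ]
    }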
0 Replies
449 Views
Are there plans to expose the cinematic frames (e.g. disparity) to an AVAsynchronousCIImageFilteringRequest? I want to use my own lens blur shader on the cinematic frames. Right now it looks like the cinematic frames are only available in an AVAsynchronousVideoCompositionRequest, like this:

    guard let sourceFrame = SourceFrame(request: request, cinematicCompositionInfo: cinematicCompositionInfo) else { return }
    let disparity = sourceFrame.disparityBuffer
1 Reply
746 Views
I am trying to create an album for an app with PhotoKit and store images in it. Is there any way to do this under the NSPhotoLibraryAddUsageDescription permission? At first glance, NSPhotoLibraryAddUsageDescription seems to be the best choice for this app, since I will not be loading any images. However, there are two album operations that seem to require NSPhotoLibraryUsageDescription:

1. Creating an album. Even though creating an album does not involve loading media, it apparently requires NSPhotoLibraryUsageDescription, which allows the user's media to be read. This is a bit unconvincing.

2. Saving images into the created album. Before saving, you must check whether the app has already created the album, fetching it by name. This is where NSPhotoLibraryUsageDescription is needed. I understand that NSPhotoLibraryUsageDescription is needed for fetching, but if iOS forbade creating albums with the same name and ignored attempts to create an album with an already existing name, this fetching would not be necessary.

In summary, I just want to create an album for my app and store media in it, but in order to do so I seem to need permission from the user to read their photos, which goes against the idea that the permissions I request should be minimal and only what I need. If there is a way to do this under the NSPhotoLibraryAddUsageDescription permission I would like to know. I am new to Swift, so sorry if I am wrong.
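For what it's worth, a hedged sketch of the add-only path: request .addOnly authorization, then create the asset and the album in a single change block, linked through the asset's placeholder. Whether album creation actually succeeds under add-only access is exactly the open question here, and this creates a new album on every call, because checking for an existing album with the same title requires read access.

    import Photos

    func saveImage(_ data: Data, toNewAlbumNamed title: String) {
        PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
            guard status == .authorized || status == .limited else { return }
            PHPhotoLibrary.shared().performChanges({
                // Create the asset and the album together, linking them via the placeholder.
                let assetRequest = PHAssetCreationRequest.forAsset()
                assetRequest.addResource(with: .photo, data: data, options: nil)

                let albumRequest = PHAssetCollectionChangeRequest.creationRequestForAssetCollection(withTitle: title)
                if let placeholder = assetRequest.placeholderForCreatedAsset {
                    albumRequest.addAssets([placeholder] as NSArray)
                }
            }, completionHandler: { success, error in
                print(success ? "Saved" : "Failed: \(error?.localizedDescription ?? "unknown error")")
            })
        }
    }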
Posted by natsuk4ze.
1 Reply
974 Views
Background
I am building a web app where users can talk into a microphone and watch dynamic video that changes depending on what they say. Here is a sample flow:
- User accesses the page.
- Mic permission popup appears; the user grants it.
- User presses the start button; the standby video starts playing and the mic turns on.
- User speaks; the speech is turned into text and analyzed, and based on that, the video src changes.
- The new video plays, the mic turns on, and the loop continues.

Problem
The problem on iOS is that, if mic access is granted, the volume goes down dramatically to a level where it is hard to hear even at max volume. The iPhone's volume up/down buttons don't help much either, which results in a terrible UX. I think the OS is forcefully keeping the volume down, but I could not find any documentation about it. On the contrary, if mic permission is not granted, the volume does not change and the video plays at a normal volume.

Question
Why does this happen, and what can I do to prevent the volume from going down automatically? Any help would be appreciated. This does not happen on desktop (macOS, Windows) or Android. Has anyone had a similar experience before?

Context
For context: I have two video tags (position: absolute, width and height 100%) that are switched (by toggling z-indexes) to appear one on top of the other. This hides loading, buffering, and black screens from the user for better UX. If the next video is loaded and can play, the two are swapped. Both tags have playsinline to enable inline playback as required by WebKit. Both tags start out muted, and muted is removed after play starts. video.play() is initiated after the user grants mic permission.

Tech stack
Next.js with TypeScript, latest versions. Testing on the latest Chrome and Safari on iOS 16, fully updated.
Posted by beki.