Create view-level services for media playback, complete with user controls, chapter navigation, and support for subtitles and closed captioning using AVKit.

AVKit Documentation

Posts under AVKit tag

90 Posts
Post not yet marked as solved
2 Replies
149 Views
After numerous trials and errors, we finally succeeded in implementing VR180. However, there is a problem: videos played via a URL (internet) connection experience significant lag. Initially, I thought it was a bitrate issue, but after various tests I began to suspect that the problem might be with the internet connection processing itself. I tested the same video through both file opening (set up as a network drive) and a URL (AWS) connection. Since AWS provides stable speeds, I concluded there is no issue there. The video files are 8K, and the bitrate is between 80-90 Mbps, so the conditions for decoding and implementing 8K are the same. Also, when I mirrored the video over the URL connection, there was significant lag, while the AFP connection had no lag at all, even though both AFP and URL use the same wireless conditions. Could it be that visionOS's URL (internet connection) playback causes a high system load? I noticed that an app called AmazeVR downloads videos before playing them. Could this be because of the URL issue? If anyone knows, please respond.
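For what it's worth, the download-before-playing approach mentioned above (as in AmazeVR) is straightforward to prototype to rule the URL path out. A minimal sketch, assuming a directly fetchable video URL; the function and file names are placeholders:

```swift
import AVKit

// A sketch: download the file first, then hand AVPlayer a local URL,
// so decoding is decoupled from network throughput.
func downloadThenPlay(remoteURL: URL, completion: @escaping (AVPlayer) -> Void) {
    let task = URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else { return }
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(remoteURL.lastPathComponent)
        try? FileManager.default.removeItem(at: localURL)
        try? FileManager.default.moveItem(at: tempURL, to: localURL)
        DispatchQueue.main.async {
            completion(AVPlayer(url: localURL))
        }
    }
    task.resume()
}
```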
Posted by iron5bba. Last updated.
Post not yet marked as solved
1 Reply
839 Views
I am learning SwiftUI and I want to observe an AVPlayer's status so I know when a video is paused or not. My current approach is more or less like this: I have a VideosView that holds a list of videos (in ZStack cards). VideosView has a VideosViewModel, and in VideosView's onAppear I call VideosViewModel.getItems...

struct ItemModel: Identifiable, Codable, Hashable, Equatable {
    var id: String
    var author: String    // video owner
    var url: URL?         // url to the video
    var player: AVPlayer? // AVPlayer created based on self.url

    mutating func setPlayer(_ avPlayer: AVPlayer) {
        self.player = avPlayer
    }
}

// vm
class FeedViewModel: ObservableObject {
    @Published private(set) var items: [ItemModel] = []

    func getItems() async {
        do {
            // fetch data from the API
            let data = try await dataService.fetchFeeds()
            // download and attach videos
            downloadFeedVideos(data)
        } catch {
            // ....
        }
    }

    private func downloadFeedVideos(_ feeds: [ItemModel]) {
        for index in feeds.indices {
            var item = feeds[index]
            if let videoURL = item.url {
                self.downloadQueue.queueDownloadIfFileNotExists(
                    videoURL,
                    DownloadOperation(
                        session: URLSession.shared,
                        downloadTaskURL: videoURL,
                        completionHandler: { [weak self] (localURL, response, error) in
                            guard let tempUrl = localURL else { return }
                            let saveResult = self?.fileManagerService.saveInTemp(tempUrl, fileName: videoURL.lastPathComponent)
                            switch saveResult {
                            case .success(let savedURL):
                                DispatchQueue.main.async {
                                    // maybe this is a wrong place to have it?
                                    item.setPlayer(AVPlayer(url: savedURL))
                                    self?.items.append(item)
                                    if (self?.items.count ?? 0) > 1 {
                                        // once the first video is downloaded, use all device cores to fetch the next videos
                                        // all the newest iOS devices have 6 cores
                                        self?.downloadQueue.setMaxConcurrentOperationCount(.max)
                                    }
                                }
                            case .none:
                                break
                            case .failure(_):
                                EventTracking().track("Video download fail", [
                                    "id": item.id,
                                    "url": videoURL.absoluteString.decodeURL()
                                ])
                            }
                        }),
                    { fileCacheURL in
                        // file already downloaded
                        DispatchQueue.main.async {
                            item.setPlayer(AVPlayer(url: fileCacheURL))
                            self.items.append(item)
                        }
                    })
            }
        }
    }
}

I found this article with some pseudocode on how to track video playback state, but I'm not sure how to implement it in my code: https://developer.apple.com/documentation/avfoundation/media_playback/observing_playback_state
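A minimal sketch of one way to do this, assuming Combine is available: AVPlayer.timeControlStatus is KVO-observable, so a KVO publisher can drive a published isPaused flag. The class and property names here are illustrative, not from the post:

```swift
import AVFoundation
import Combine

// A sketch: publishes whether a given AVPlayer is currently paused.
final class PlayerStateObserver: ObservableObject {
    @Published private(set) var isPaused = true
    private var cancellable: AnyCancellable?

    func observe(_ player: AVPlayer) {
        // timeControlStatus is KVO-compliant, so Combine's publisher(for:) works here.
        cancellable = player.publisher(for: \.timeControlStatus)
            .receive(on: DispatchQueue.main)
            .sink { [weak self] status in
                self?.isPaused = (status == .paused)
            }
    }
}
```

In the posted model, the same publisher could be subscribed per item wherever each AVPlayer is created, rather than in a separate class.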
Posted by breq. Last updated.
Post not yet marked as solved
0 Replies
81 Views
I'm using something similar to this example:

import SwiftUI

struct ContentView: View {
    @State private var toggle = false

    var body: some View {
        CustomParentView {
            Button {
                toggle.toggle()
            } label: {
                Text(toggle.description)
            }
        }
    }
}

struct CustomParentView<Content: View>: UIViewRepresentable {
    let content: Content

    @inlinable init(@ViewBuilder content: () -> Content) {
        self.content = content()
    }

    func makeUIView(context: Context) -> UIView {
        let view = UIView()
        let hostingController = context.coordinator.hostingController
        hostingController.view.frame = view.bounds
        hostingController.view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(hostingController.view)
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        context.coordinator.hostingController.rootView = self.content
    }

    class Coordinator: NSObject {
        var hostingController: UIHostingController<Content>

        init(hostingController: UIHostingController<Content>) {
            self.hostingController = hostingController
        }
    }

    func makeCoordinator() -> Coordinator {
        return Coordinator(hostingController: UIHostingController(rootView: content))
    }
}

The only difference is that I'm using a UIScrollView. When I have a @State width and call .frame(width:) on the content, the content stays at its initial width even when the width changes. I tried:

hostingController.sizingOptions = .intrinsicContentSize

This time the size changes to the correct size if I pinch-zoom the content, but the initial size that triggers updateUIView is .zero, which prevents me from centering the content. Is there a way to dynamically set the size and get correct rendering, just like any child view of a normal SwiftUI view?
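One possible direction, sketched under the assumption of an iOS 16+ deployment target: implement UIViewRepresentable's sizeThatFits(_:uiView:context:) so SwiftUI asks the hosting controller for its preferred size on every layout pass instead of relying on the initial frame:

```swift
// Inside CustomParentView (iOS 16+). A sketch: reports the hosting
// controller's fitting size back to SwiftUI on each layout pass.
func sizeThatFits(_ proposal: ProposedViewSize, uiView: UIView, context: Context) -> CGSize? {
    let target = CGSize(
        width: proposal.width ?? UIView.layoutFittingCompressedSize.width,
        height: proposal.height ?? UIView.layoutFittingCompressedSize.height)
    return context.coordinator.hostingController.sizeThatFits(in: target)
}
```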
Posted. Last updated.
Post not yet marked as solved
2 Replies
151 Views
I added a VideoPlayer view to my project, but I noticed that during loading, or with a different aspect ratio, the default color of this view is black. I would like to change it to match my app's background. Unfortunately, modifiers such as .background or .foregroundColor don't seem to change it. Is there a way to customize this color?

struct PlayerLooperView: View {
    private let queuePlayer: AVQueuePlayer!
    private let playerLooper: AVPlayerLooper!

    init(url: URL) {
        let playerItem = AVPlayerItem(url: url)
        self.queuePlayer = AVQueuePlayer(items: [playerItem])
        self.queuePlayer.isMuted = true
        self.playerLooper = AVPlayerLooper(player: queuePlayer, templateItem: playerItem)
        self.queuePlayer.play()
    }

    var body: some View {
        VideoPlayer(player: queuePlayer)
            .disabled(true)
            .scaledToFit()
    }
}
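VideoPlayer doesn't expose that fill color, so one workaround (a sketch, not the only approach) is to wrap AVPlayerLayer in a UIViewRepresentable, where the layer's backgroundColor is yours to set. The fillColor default here is an assumption standing in for the app's background:

```swift
import SwiftUI
import AVFoundation

// A sketch: a minimal player view whose letterbox color can be customized.
struct CustomPlayerView: UIViewRepresentable {
    let player: AVQueuePlayer
    var fillColor: UIColor = .systemBackground // assumption: match the app background

    func makeUIView(context: Context) -> PlayerUIView {
        let view = PlayerUIView()
        view.playerLayer.player = player
        view.playerLayer.videoGravity = .resizeAspect
        view.playerLayer.backgroundColor = fillColor.cgColor // replaces the default black
        return view
    }

    func updateUIView(_ uiView: PlayerUIView, context: Context) {}
}

// Backing view whose main layer is an AVPlayerLayer.
final class PlayerUIView: UIView {
    override class var layerClass: AnyClass { AVPlayerLayer.self }
    var playerLayer: AVPlayerLayer { layer as! AVPlayerLayer }
}
```

This drops VideoPlayer's built-in controls, which may be acceptable here since the posted view is .disabled(true) anyway.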
Posted by DrAma78. Last updated.
Post not yet marked as solved
0 Replies
178 Views
I am using Xcode Version 15.3 (15E204a) and different versions of Simulator runtimes (17.x, 16.x, 15.0). The app makes outgoing calls and can respond to incoming calls. After starting the call, ~2s pass before a hangup occurs. In the Console logs I see that CXEndCallAction was invoked by CallKit, and the last suspicious log before CXEndCallAction is invoked is:

callservicesd Disconnecting call because there wont be a UI to host the call: <CSDProviderCall 0x107054300 type=PhoneNumber, value=sdsddsdds, stat=Sending tStat=0, model=<TUCallModel 0x103f661e0 hold=1 grp=1 ungrp=1> ...

This used to work before, but since upgrading to Xcode 15 and iOS 17.x it happens constantly on simulator versions 17.x, and sometimes on 16.x, whereas I wasn't able to reproduce it on the 15.0 version. Can someone help me understand why this happens and how to fix it? I have provided some logs below, and I don't see similar logs in the cases when the call is okay and CallKit doesn't hang it up. Also, this does not happen on real devices.

From the time CXStartCallAction is invoked until CallKit invokes CXEndCallAction, these are some of the error or warn logs that appear:

callservicesd -AVSystemController- +[AVSystemController sharedInstance]: Failed to allocate AVSystemController, numberOfAttempts=3
callservicesd [WARN] +[AVSystemController sharedAVSystemController] returned nil value
callservicesd [WARN] Not allowing requested start call action because a call with same UUID already exists callWithUUID: (omitted)
callservicesd Error while determining process action for callSource: (omitted)
callservicesd Determined that callSource: <CXXPCCallSource 0x103d060a0, ...>, should process action: <CXStartCallAction 0x107232760 UUID=8D34853F-55DD-4DEC-97A7-551BFD27C924, error: Error Domain=com.apple.CallKit.error.requesttransaction Code=5 "(null)"
callservicesd [0x103e417a0] invalidated after the last release of the connection object
callservicesd [WARN] No paired device, so unable to send message UpdateCallContext
callservicesd FaceTime caller ID (null) is not a valid outgoing relay caller ID
callservicesd Attempting to find a valid outgoing caller ID in set of available outgoing caller IDs {( )}
callservicesd Could not automatically select an outgoing caller ID; multiple telephone numbers are listed in the set of available outgoing caller IDs {( )}
callservicesd Adding call <CSDProviderCall 0x107054300 ...> to dirty calls pool
callservicesd Entitlement check: ... entitlementCapabilities={( "access-call-providers", "modify-calls", "access-call-capabilities", "access-calls" )}> lacks capability 'access-screen-calls'
callservicesd [WARN] ... but no dynamic identifier could be found (1) or no handoff user info exists (1). Not broadcasting frontmost call error
com.apple.CallKit.CallDirectory Unable to initialize CXCallDirectoryStore for reading: Error Domain=NSCocoaErrorDomain Code=513 "You don’t have permission to save the file “CallDirectory” in the folder “Library”." ... {Error Domain=NSPOSIXErrorDomain Code=13 "Permission denied"}}

The logs provided are in the order in which they are logged, but some of them are recurring. After these logs there is still a message that CXStartCallAction is fulfilled:

callservicesd Start call action fulfilled: <CXStartCallAction 0x107231fe0 UUID=8D34853F-55DD-4DEC-97A7-551BFD27C924 ...>

After which the last suspicious log is logged before CXEndCallAction is invoked by CallKit:

Disconnecting call because there wont be a UI to host the call: <CSDProviderCall 0x107054300 ...>
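For comparison, here is a minimal sketch of the usual CXStartCallAction handling. The poster's implementation isn't shown, so this is not implied to differ from it; the class and configuration choices are illustrative:

```swift
import CallKit

// A sketch: minimal provider setup that reports outgoing call progress
// before fulfilling the start-call action.
final class CallManager: NSObject, CXProviderDelegate {
    private let provider: CXProvider

    override init() {
        let config = CXProviderConfiguration() // iOS 14+ initializer
        config.supportsVideo = false
        config.supportedHandleTypes = [.phoneNumber]
        provider = CXProvider(configuration: config)
        super.init()
        provider.setDelegate(self, queue: nil)
    }

    func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
        // Report connection progress so CallKit knows the call is alive.
        provider.reportOutgoingCall(with: action.callUUID, startedConnectingAt: nil)
        provider.reportOutgoingCall(with: action.callUUID, connectedAt: nil)
        action.fulfill()
    }

    func providerDidReset(_ provider: CXProvider) {
        // Tear down any in-flight calls here.
    }
}
```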
Posted by kemica. Last updated.
Post not yet marked as solved
0 Replies
190 Views
I'm attempting to integrate DRM into the app. I've developed a prototype, but the delegate method shouldWaitForLoadingOfRequestedResource isn't being triggered on certain devices, although it functions correctly on others. Notably, it's invoked on Apple TV 4K (3rd generation) Wi-Fi (A2737) but not on Apple TV HD (A1625). Are there any specific configurations needed to ensure this method is invoked?

let url = URL(string: RESOURCE_URL)!

// Create the asset instance and the resource loader because we will be asked
// for the license to playback DRM protected asset.
let asset = AVURLAsset(url: url)
let queue = DispatchQueue(label: CUSTOM_SERIAL_QUEUE_LABEL)
asset.resourceLoader.setDelegate(self, queue: queue)

// Create the player item and the player to play it back in.
let playerItem = AVPlayerItem(asset: asset)
let player = AVPlayer(playerItem: playerItem)

// Create a new AVPlayerViewController and pass it a reference to the player.
let controller = AVPlayerViewController()
controller.player = player

// Modally present the player and call the player's play() method when complete.
present(controller, animated: true) {
    player.play()
}

// Please note: if your delegate method is not being called, then you need to run on a REAL DEVICE.
func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
    // Getting data for the KSM server. Get the URL from the manifest; we will need it later as it
    // contains the assetId required for the license request.
    guard let url = loadingRequest.request.url else {
        print(#function, "Unable to read URL from loadingRequest")
        loadingRequest.finishLoading(with: NSError(domain: "", code: -1, userInfo: nil))
        return false
    }

    // Link to your certificate on BuyDRM's side.
    // Use the commented section if you want to refer to the certificate from your bundle, i.e. store locally.
    /*
    guard let certificateURL = Bundle.main.url(forResource: "certificate", withExtension: "der"),
          let certificateData = try? Data(contentsOf: certificateURL) else {
        print("failed...", #function, "Unable to read the certificate data.")
        loadingRequest.finishLoading(with: NSError(domain: "com.domain.error", code: -2, userInfo: nil))
        return false
    }
    */
    guard let certificateData = try? Data(contentsOf: URL(string: CERTIFICATE_URL)!) else {
        print(#function, "Unable to read the certificate data.")
        loadingRequest.finishLoading(with: NSError(domain: "", code: -2, userInfo: nil))
        return false
    }

    // The assetId from the main/variant manifest - skd://***, the *** part. Get the SPC based on the
    // already collected data, i.e. the certificate and the assetId.
    guard let contentId = url.host, let contentIdData = contentId.data(using: String.Encoding.utf8) else {
        loadingRequest.finishLoading(with: NSError(domain: "", code: -3, userInfo: nil))
        print(#function, "Unable to read the SPC data.")
        return false
    }
    guard let spcData = try? loadingRequest.streamingContentKeyRequestData(forApp: certificateData, contentIdentifier: contentIdData, options: nil) else {
        loadingRequest.finishLoading(with: NSError(domain: "", code: -3, userInfo: nil))
        print(#function, "Unable to read the SPC data.")
        return false
    }

    // Prepare to get the license, i.e. the CKC.
    let requestUrl = CKC_URL
    let stringBody = "spc=\(spcData.base64EncodedString())&assetId=\(contentId)"
    let postData = NSData(data: stringBody.data(using: String.Encoding.utf8)!)

    // Make the POST request with customdata set to the authentication XML.
    var request = URLRequest(url: URL(string: requestUrl)!)
    request.httpMethod = "POST"
    request.httpBody = postData as Data
    request.allHTTPHeaderFields = ["customdata": ACCESS_TOKEN]
    let configuration = URLSessionConfiguration.default
    let session = URLSession(configuration: configuration)
    let task = session.dataTask(with: request) { data, response, error in
        if let data = data {
            // The response from the KeyOS MultiKey License server may be an error inside JSON.
            do {
                let parsedData = try JSONSerialization.jsonObject(with: data) as! [String: Any]
                let errorId = parsedData["errorid"] as! String
                let errorMsg = parsedData["errormsg"] as! String
                print(#function, "License request failed with an error: \(errorMsg) [\(errorId)]")
            } catch let error as NSError {
                print(#function, "The response may be a license. Moving on.", error)
            }
            // The response from the KeyOS MultiKey License server is Base64 encoded.
            let dataRequest = loadingRequest.dataRequest!
            // This command sends the CKC to the player.
            dataRequest.respond(with: Data(base64Encoded: data)!)
            loadingRequest.finishLoading()
        } else {
            print(#function, error?.localizedDescription ?? "Error during CKC request.")
        }
    }
    task.resume()

    // Tell the AVPlayer instance to wait. We are working on getting what it wants.
    return true
}
Posted. Last updated.
Post not yet marked as solved
0 Replies
165 Views
I'm trying to use AVPlayer to capture frames from a livestream that is playing remotely, and eventually I want to convert these frames to UIImages to be displayed. The code I have right now is not working because pixelBuffer never receives an actual value. When I print itemTime, its value is continuously 0, which I think might be a potential cause of this issue. I would appreciate any help with getting this to work.

import SwiftUI
import RealityKit
import RealityKitContent
import AVFoundation
import AVKit

class ViewController: UIViewController {
    let player = AVPlayer(url: URL(string: {webrtc stream link})!)
    let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [String(kCVPixelBufferPixelFormatTypeKey): NSNumber(value: kCVPixelFormatType_32BGRA)])

    override func viewDidLoad() {
        print("doing viewDidLoad")
        super.viewDidLoad()
        player.currentItem!.add(videoOutput)
        player.play()
        let displayLink = CADisplayLink(target: self, selector: #selector(displayLinkDidRefresh(link:)))
        displayLink.add(to: RunLoop.main, forMode: RunLoop.Mode.common)
    }

    @objc func displayLinkDidRefresh(link: CADisplayLink) {
        let itemTime = videoOutput.itemTime(forHostTime: CACurrentMediaTime())
        if videoOutput.hasNewPixelBuffer(forItemTime: itemTime) {
            if let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil) {
                print("pixelBuffer \(pixelBuffer)") // yay, pixel buffer
                let image = CIImage(cvImageBuffer: pixelBuffer) // or maybe CIImage?
                print("CIImage \(image)")
            }
        }
    }
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
            let viewcontroller = ViewController()
            viewcontroller.viewDidLoad()
        }
    }
}
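One thing worth checking, sketched under the assumption that the rest of the setup above stays as-is: an itemTime that stays at 0 can mean the output was attached before the item was ready, so attaching the output only once the item reports .readyToPlay is a common fix. The function and variable names here are illustrative:

```swift
import AVFoundation

// A sketch: attach the video output only after the player item is ready,
// so itemTime(forHostTime:) maps to real media time.
var statusObservation: NSKeyValueObservation?

func attachOutputWhenReady(player: AVPlayer, output: AVPlayerItemVideoOutput) {
    guard let item = player.currentItem else { return }
    statusObservation = item.observe(\.status, options: [.initial, .new]) { item, _ in
        if item.status == .readyToPlay {
            item.add(output)
            player.play()
        }
    }
}
```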
Posted by nsthakur. Last updated.
Post not yet marked as solved
2 Replies
337 Views
Dear Apple Developer Forum Community, I hope this message finds you well. I am writing to seek assistance regarding an error I encountered while attempting to create Vedic content in the app from a YouTube link. I have been unsuccessful in resolving it. I am reaching out to the community in the hope that someone might have encountered a similar issue or has expertise in troubleshooting Xcode errors. Any guidance, suggestions, or solutions would be greatly appreciated. Thank you very much for your time and assistance. Sincerely, Zipzy [games]
Posted. Last updated.
Post marked as solved
8 Replies
596 Views
I'm trying to programmatically alter a video frame before applying it to geometry using VideoMaterial. What I'm finding is that the output appears as though no videoCompositor was applied to the playerItem. Is this expected behavior? Is there a workaround besides using an AVAssetExportSession to bounce the movie to disk?
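For reference, a sketch of the kind of per-frame processing presumably being attempted here, using AVVideoComposition's Core Image handler; whether VideoMaterial honors the item's videoComposition is exactly the open question in this post, and the filter choice is illustrative:

```swift
import AVFoundation
import CoreImage

// A sketch: build a player item whose frames are filtered via Core Image.
func makeFilteredItem(asset: AVAsset) -> AVPlayerItem {
    let composition = AVVideoComposition(asset: asset) { request in
        // Apply a simple filter to each source frame.
        let filtered = request.sourceImage.applyingFilter("CIPhotoEffectNoir")
        request.finish(with: filtered, context: nil)
    }
    let item = AVPlayerItem(asset: asset)
    item.videoComposition = composition
    return item
}
```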
Posted. Last updated.
Post not yet marked as solved
0 Replies
273 Views
I want to develop an AI assistant iOS application using the Whisper and OpenAI ChatGPT APIs. I am implementing the following steps:
1. Use the audio engine to record the user's voice.
2. Send the audio chunk to Whisper for speech-to-text.
3. Send that text to the OpenAI ChatGPT API to get a response.
4. Send that response to the speech synthesizer to speak it through the built-in speaker.
In this process, I don't want to disable the microphone, because the user should be able to interrupt the speech synthesizer at any time. It should be real-time and feel like a continuous call between the user and the AI assistant. Problem: when the user speaks, the microphone takes the input and appends it to the audio engine's recording file, then sends that chunk to Whisper for transcription; the transcribed text is sent to the ChatGPT API for a response, and the response is sent to the speech synthesizer, which generates output on the speaker. The issue is that the microphone then picks the synthesizer's voice back up from the speaker, creating a loop. What can I do to stop my microphone from taking input from the iPhone speaker? Talking Tom, Call Annie, and many other iOS applications use the microphone continuously and generate output from the speaker without overlapping or looping. Please suggest possible approaches; a sketch of one is shown below. I tried every possible audio engine category and setting with record, playback, playAndRecord, etc. Nothing keeps the speaker's voice out of my microphone. As I think of it, the microphone should never pick up device-generated audio. What could be the possible solution? If my approach is wrong, I am also open to suggestions and guidance.
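A sketch of the approach that usually addresses this, assuming an AVAudioEngine capture path: the system's voice-processing unit performs echo cancellation, which is designed to remove device-generated playback from the microphone signal. Enabling it via the .voiceChat session mode plus setVoiceProcessingEnabled (iOS 13+) looks like this:

```swift
import AVFoundation

// A sketch: configure the session and engine for echo-cancelled capture.
func configureEchoCancelledCapture(engine: AVAudioEngine) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat, // voice-chat mode enables system voice processing
                            options: [.defaultToSpeaker])
    try session.setActive(true)

    // iOS 13+: turn on the voice-processing (echo cancellation) unit.
    // Enabling it on the input node applies to the paired output node as well.
    try engine.inputNode.setVoiceProcessingEnabled(true)
}
```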
Posted by iOSgeekk. Last updated.
Post not yet marked as solved
0 Replies
302 Views
How do I implement a player in SwiftUI with support for parental controls? (Sample code: Working with Overlays and Parental Controls in tvOS.) I use AVPlayerViewController in SwiftUI. On cancel, rejecting the request, the screen is black. On changing the channel (on a move command in the up or down direction), I replace the current player item with a new one. The status is ready to play and the status of the request is successful upon replacement, and I set the player to play, but the screen is still black.
Posted by ina4. Last updated.
Post not yet marked as solved
1 Reply
257 Views
In an iOS UNNotificationContentExtension with a media player, I have an AVPlayer which can play either a WAV or an MP4 remotely, depending on the push payload's userInfo dictionary. I have implemented mediaPlayPauseButtonFrame, mediaPlayPauseButtonTintColor, and mediaPlayPauseButtonType, have overridden canBecomeFirstResponder to force true, and set the view to becomeFirstResponder when the AVPlayer is added. I have implemented the UNNotificationContentExtension protocol's mediaPlay and mediaPause methods. I have also subscribed to the .AVPlayerItemDidPlayToEndTime (NS)Notification, and I call a method on the VC when it fires, which calls mediaPause. When the AVPlayer reaches the end, the .AVPlayerItemDidPlayToEndTime notification is properly emitted, my method is called, and mediaPause is called. However, the media play/pause button provided by UNNotificationContentExtension remains visibly in the "playing" state instead of changing to the "paused" state. The button correctly changes its display state when the user presses the play/pause button manually, so it works. And so, collective Obi-Wan Kenobi, what am I doing wrong? I have tried resigning first responder, have no access to the button itself (as far as I know), and am wondering where to go next. (This is the only thing not working, by the way.) Sanitized example:

import UIKit
import AVFoundation
import UserNotifications
import UserNotificationsUI

class NotificationViewController: UIViewController, UNNotificationContentExtension {

    // Constants
    private let viewModel = ...

    private var mediaPlayer: AVPlayer?
    private var mediaPlayerLayer: AVPlayerLayer?
    private var mediaPlayerItem: AVPlayerItem? { mediaPlayer?.currentItem }

    override var canBecomeFirstResponder: Bool { true }

    // MARK: - UNNotificationContentExtension overrides

    var mediaPlayPauseButtonType: UNNotificationContentExtensionMediaPlayPauseButtonType {
        return .default
    }

    var mediaPlayPauseButtonFrame: CGRect {
        return CGRect(x: 0.0, y: 0.0, width: 50.0, height: 50.0)
    }

    var mediaPlayPauseButtonTintColor: UIColor {
        return .blue
    }

    ...

    func didReceive(_ notification: UNNotification) {
        ... // Process userInfo for url
    }

    ...

    @MainActor
    func playAudio(from url: URL) async {
        let mediaPlayer = AVPlayer(url: url)
        let mediaPlayerLayer = AVPlayerLayer(player: mediaPlayer)
        ... // view setup
        mediaPlayerLayer.frame = ...
        self.mediaPlayer = mediaPlayer
        self.mediaPlayerLayer = mediaPlayerLayer
        self.view.layer.addSublayer(mediaPlayerLayer)
        becomeFirstResponder()
    }

    // MARK: - UNNotificationContentExtension

    func mediaPlay() {
        mediaPlayer?.play()
    }

    func mediaPause() {
        mediaPlayer?.pause()
    }

    // MARK: - Utilities

    private func subscribe(to item: AVPlayerItem) {
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(playedToEnd),
                                               name: .AVPlayerItemDidPlayToEndTime,
                                               object: item)
    }

    @objc func playedToEnd(notification: NSNotification) {
        mediaPause()
    }
}
Posted. Last updated.
Post not yet marked as solved
0 Replies
284 Views
I want to implement an immersive environment similar to Apple TV's Cinema environment for the video that plays in my app. Currently, I want to use an AVPlayerViewController so that I don't have to build a control view or deal with aspect ratios (which I would have to do if I used VideoMaterial). To do this, it looks like I'll need to use the imagery from the video stream itself as an image for an ImageBasedLightComponent, but the API for that class seems to restrict its input to an EnvironmentResource, which looks like it's meant to use an equirectangular still image that has to be part of the app bundle. Does anyone know how to achieve this effect, where the "light" from the video being played in an AVPlayerViewController's player can be cast on 3D objects in the RealityKit scene? Is Apple TV doing something wild like combining an AVPlayerViewController and VideoMaterial, where the VideoMaterial is layered onto the objects in the scene to simulate a light source? Thanks in advance!
Posted by DotDotDot. Last updated.
Post not yet marked as solved
1 Reply
932 Views
I know that if you want background audio from AVPlayer you need to detach your AVPlayer from either your AVPlayerViewController or your AVPlayerLayer, in addition to having your AVAudioSession configured correctly. I have all of that squared away, and background audio is fine until we introduce AVPictureInPictureController or use the PiP behavior baked into AVPlayerViewController. If you want PiP to behave as expected when you put your app into the background by switching to another app or going to the home screen, you can't perform the detachment operation, otherwise the PiP display fails. On an iPad, if PiP is active and you lock your device, you continue to get background audio playback. However, on an iPhone, if PiP is active and you lock the device, the audio pauses. If PiP is inactive and you lock the device, the audio pauses and you have to manually tap play on the lock screen controls; this is the same on iPad and iPhone. My questions are:
1. Is there a way to keep background audio playing when PiP is inactive and the device is locked? (iPhone and iPad)
2. Is there a way to keep background audio playing when PiP is active and the device is locked? (iPhone)
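For context, a sketch of the standard background-audio plus PiP configuration this workflow builds on; it does not by itself answer the lock-screen questions above, and the function shape is illustrative:

```swift
import AVKit

// A sketch: standard background-audio + PiP setup.
func configurePlayback(playerLayer: AVPlayerLayer) throws -> AVPictureInPictureController? {
    // Background audio requires the .playback category (plus the
    // "Audio, AirPlay, and Picture in Picture" background mode capability).
    try AVAudioSession.sharedInstance().setCategory(.playback)
    try AVAudioSession.sharedInstance().setActive(true)

    guard AVPictureInPictureController.isPictureInPictureSupported() else { return nil }
    let pipController = AVPictureInPictureController(playerLayer: playerLayer)
    // iOS 14.2+: let PiP begin automatically when the app is backgrounded.
    pipController?.canStartPictureInPictureAutomaticallyFromInline = true
    return pipController
}
```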
Posted by jblaker. Last updated.
Post not yet marked as solved
0 Replies
283 Views
I am using the default AVPlayerViewController with the default player controls and skip buttons for video streaming in a tvOS app. Custom controls/buttons are not being used. Can we override the VoiceOver accessibility text/behaviour for AVPlayerViewController's default player controls? I am not able to find any Apple documentation on this and am not sure whether it's even possible.
Posted. Last updated.
Post not yet marked as solved
1 Reply
585 Views
Hi, in the Destinations sample code project and the related WWDC talk on spatial video, it seems implied that the video player will show 3D stereoscopic videos. However, in the Photos app there's a vignetting in the simulator (and in marketing material) when viewing spatial video, a portal kind of effect. Without access to a device, I'm wondering if my spatial videos are actually being played as 3D spatial videos in AVPlayerViewController, since I'm not seeing the vignetting. I'm thinking that the vignetting is a Photos-specific visual effect, but I wanted to double-check to make sure I'm not misunderstanding something about AVPlayerViewController. Does anyone know if spatial videos played through AVPlayerViewController will appear as stereoscopic, even if the vignetting isn't there? Has anyone tried the Destinations sample code to play spatial videos on a device to confirm? Thanks!
Posted by Gregory_w. Last updated.
Post not yet marked as solved
0 Replies
358 Views
I am embedding a SwiftUI VideoPlayer in a VStack and see that the screen goes black (i.e. the content disappears, even though the video player gets autorotated) when the device is rotated. The issue happens even when I use AVPlayerViewController (as a UIViewControllerRepresentable). Is this a bug, or am I doing something wrong?

var videoURL: URL
let player = AVPlayer()

var body: some View {
    VStack {
        VideoPlayer(player: player)
            .frame(maxWidth: .infinity)
            .frame(height: 300)
            .padding()
            .ignoresSafeArea()
            .background {
                Color.black
            }
            .onTapGesture {
                player.rate = player.rate == 0.0 ? 1.0 : 0.0
            }
        Spacer()
    }
    .ignoresSafeArea()
    .background(content: {
        Color.black
    })
    .onAppear {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSession.Category.playback,
                                         mode: AVAudioSession.Mode.default,
                                         options: AVAudioSession.CategoryOptions.duckOthers)
        } catch {
            NSLog("Unable to set session category to playback")
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player.replaceCurrentItem(with: playerItem)
    }
}
Posted. Last updated.
Post not yet marked as solved
0 Replies
345 Views
I am working on implementing tap gestures in a dynamic VideoPlayer made with AVKit. The intent is that when a video is viewed in a feed (this is for a social media app), the video plays without sound; tapping the video once enables sound, and tapping it twice makes it full screen. Currently, the single tap works, but the double tap isn't detected unless I tap in the top-right corner of the video.

import SwiftUI
import AVKit

struct VideoPlayerView: View {
    @StateObject private var viewModel: VideoPlayerViewModel

    init(url: URL, isFeedView: Bool = true) {
        _viewModel = StateObject(wrappedValue: .init(url: url, isFeedView: isFeedView))
    }

    var body: some View {
        ZStack {
            if let player: AVPlayer = viewModel.player {
                VideoPlayer(player: player)
                    .onAppear {
                        // Start playing or resume from the last known position if in feed view
                        if viewModel.isFeedView {
                            if let lastKnownTime = viewModel.lastKnownTime {
                                player.seek(to: CMTime(seconds: lastKnownTime, preferredTimescale: 600))
                            }
                            player.play()
                            player.volume = 0 // Set volume to 0 for feed view
                        }
                    }
                    .onDisappear {
                        // Pause the video and store the last known time
                        viewModel.lastKnownTime = player.currentTime().seconds
                        player.pause()
                    }
                    .contentShape(Rectangle())
                    .gesture(TapGesture(count: 2).onEnded {
                        print("Double tap detected")
                        viewModel.isFullScreen.toggle()
                    })
                    .simultaneousGesture(TapGesture().onEnded {
                        print("Single tap detected")
                        player.volume = 1 // Set volume to 1
                    })
            }
        }
        .maxSize()
        .fullScreenCover(isPresented: $viewModel.isFullScreen) {
            AVPlayerViewControllerRepresented(viewModel: viewModel)
        }
    }
}

class VideoPlayerViewModel: ObservableObject {
    @Published var player: AVPlayer?
    @Published var lastKnownTime: Double?
    @Published var isFullScreen: Bool = false
    @Published var isFeedView: Bool

    init(url: URL, isFeedView: Bool = true) {
        player = AVPlayer(url: url)
        lastKnownTime = nil
        self.isFeedView = isFeedView
        if isFeedView {
            registerForPlaybackEndNotification()
        }
    }

    private func registerForPlaybackEndNotification() {
        NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime,
                                               object: player?.currentItem,
                                               queue: nil) { [weak self] _ in
            self?.videoDidFinish()
        }
    }

    private func videoDidFinish() {
        // Replay logic for feed view
        if isFeedView, let player = player {
            player.seek(to: .zero)
            player.play()
        }
    }
}

I tried using .contentShape(Rectangle()), as I read that it expands the tappable area, but to no avail. How can I make it so that a double tap anywhere in the video is detected and the video goes full screen?
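One alternative worth trying (a sketch, with a minimal wrapper view standing in for the posted one): fold both taps into a single ExclusiveGesture attached with highPriorityGesture, so SwiftUI resolves the single/double-tap conflict itself, ahead of the player's own hit testing, over the whole video surface:

```swift
import SwiftUI
import AVKit

// A sketch: resolve single vs. double tap with one combined gesture,
// so the double tap wins anywhere on the video surface.
struct TappableVideo: View {
    let player: AVPlayer
    @Binding var isFullScreen: Bool

    var body: some View {
        VideoPlayer(player: player)
            .contentShape(Rectangle())
            .highPriorityGesture(
                TapGesture(count: 2)
                    .onEnded { isFullScreen = true }    // double tap: full screen
                    .exclusively(before: TapGesture()
                        .onEnded { player.volume = 1 }) // single tap: unmute
            )
    }
}
```

With .exclusively(before:), the double tap is given priority, so the single-tap action fires only after the double-tap window lapses.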
Posted. Last updated.