Create view-level services for media playback, complete with user controls, chapter navigation, and support for subtitles and closed captioning using AVKit.

AVKit Documentation

Posts under AVKit tag

89 Posts
Post not yet marked as solved
2 Replies
244 Views
After numerous trials and errors, we finally succeeded in implementing VR180 playback. However, there is a problem: videos played over a URL (internet) connection lag significantly. Initially I thought it was a bitrate issue, but after various tests I began to suspect the URL streaming path itself. I tested the same video both by opening the file directly (set up as a network drive over AFP) and over a URL (AWS). Since AWS provides stable speeds, I concluded the server is not the issue. The video files are 8K with a bitrate of 80-90 Mbps, so the decoding workload is the same in both cases, and both AFP and URL playback use the same wireless conditions. Yet when mirroring the device, URL playback lagged significantly, while the AFP connection had no lag at all. Could it be that visionOS's URL (internet) playback causes high system load? I noticed that an app called AmazeVR downloads videos before playing them. Could that be to work around this URL issue? If anyone knows, please respond.
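One hedged workaround, matching the download-first behavior described for AmazeVR above: fetch the file to local storage with URLSession, then hand AVPlayer the local URL, so decoding is decoupled from network jitter. This is a minimal sketch under that assumption; the function name and completion shape are illustrative, not a confirmed fix.

import AVKit
import Foundation

// Minimal sketch of a download-first player.
func downloadThenPlay(from remoteURL: URL, completion: @escaping (AVPlayer?) -> Void) {
    let task = URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else {
            DispatchQueue.main.async { completion(nil) }
            return
        }
        // Move the download out of the temp location before it is cleaned up.
        let destination = FileManager.default.temporaryDirectory
            .appendingPathComponent(remoteURL.lastPathComponent)
        try? FileManager.default.removeItem(at: destination)
        do {
            try FileManager.default.moveItem(at: tempURL, to: destination)
            DispatchQueue.main.async { completion(AVPlayer(url: destination)) }
        } catch {
            DispatchQueue.main.async { completion(nil) }
        }
    }
    task.resume()
}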
Post not yet marked as solved
0 Replies
149 Views
I'm using something similar to this example.

import SwiftUI

struct ContentView: View {
    @State private var toggle = false

    var body: some View {
        CustomParentView {
            Button {
                toggle.toggle()
            } label: {
                Text(toggle.description)
            }
        }
    }
}

struct CustomParentView<Content: View>: UIViewRepresentable {
    let content: Content

    @inlinable init(@ViewBuilder content: () -> Content) {
        self.content = content()
    }

    func makeUIView(context: Context) -> UIView {
        let view = UIView()
        let hostingController = context.coordinator.hostingController
        hostingController.view.frame = view.bounds
        hostingController.view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(hostingController.view)
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        context.coordinator.hostingController.rootView = self.content
    }

    class Coordinator: NSObject {
        var hostingController: UIHostingController<Content>

        init(hostingController: UIHostingController<Content>) {
            self.hostingController = hostingController
        }
    }

    func makeCoordinator() -> Coordinator {
        return Coordinator(hostingController: UIHostingController(rootView: content))
    }
}

The only difference is that I'm using a UIScrollView. When I have a @State width and call .frame(width) on the content, the content stays at its initial width even after width changes. I tried:

hostingController.sizingOptions = .intrinsicContentSize

With this, the size changes to the correct value if I pinch-zoom the content, but the initial size that triggers updateUIView is .zero, which prevents me from centering the content. Is there a way to set the size dynamically and get correct rendering, just like any child view of a normal SwiftUI view?
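One hedged suggestion for the sizing question above: after swapping in the new rootView, invalidate the hosting view's intrinsic content size so the container re-measures it. This is an untested sketch of one common approach, not a confirmed fix.

func updateUIView(_ uiView: UIView, context: Context) {
    context.coordinator.hostingController.rootView = self.content
    // Assumption: prompting a re-measure here propagates the new @State width.
    context.coordinator.hostingController.view.invalidateIntrinsicContentSize()
    uiView.setNeedsLayout()
}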
Post not yet marked as solved
0 Replies
250 Views
I am using Xcode Version 15.3 (15E204a) and different Simulator runtimes (17.x, 16.x, 15.0). The app makes outgoing calls and can respond to incoming calls. After a call starts, about 2 seconds pass before a hangup occurs. In the Console logs I see that CXEndCallAction was invoked by CallKit, and the last suspicious log before it is:

callservicesd Disconnecting call because there wont be a UI to host the call: <CSDProviderCall 0x107054300 type=PhoneNumber, value=sdsddsdds, stat=Sending tStat=0, model=<TUCallModel 0x103f661e0 hold=1 grp=1 ungrp=1> ...

This used to work, but since upgrading to Xcode 15 and iOS 17.x it happens constantly on 17.x simulators and sometimes on 16.x, whereas I was not able to reproduce it on 15.0. It also does not happen on real devices. Can someone help me understand why this happens and how to fix it? I have provided some logs below; I don't see similar logs in cases where the call is fine and CallKit doesn't hang it up.

From the time CXStartCallAction is invoked until CallKit invokes CXEndCallAction, these are some of the error and warning logs that appear (in the order they are logged; some of them recur):

callservicesd -AVSystemController- +[AVSystemController sharedInstance]: Failed to allocate AVSystemController, numberOfAttempts=3
callservicesd [WARN] +[AVSystemController sharedAVSystemController] returned nil value
callservicesd [WARN] Not allowing requested start call action because a call with same UUID already exists callWithUUID: (omitted)
callservicesd Error while determining process action for callSource: (omitted)
callservicesd Determined that callSource: <CXXPCCallSource 0x103d060a0, ...>, should process action: <CXStartCallAction 0x107232760 UUID=8D34853F-55DD-4DEC-97A7-551BFD27C924, error: Error Domain=com.apple.CallKit.error.requesttransaction Code=5 "(null)"
callservicesd [0x103e417a0] invalidated after the last release of the connection object
callservicesd [WARN] No paired device, so unable to send message UpdateCallContext
callservicesd FaceTime caller ID (null) is not a valid outgoing relay caller ID
callservicesd Attempting to find a valid outgoing caller ID in set of available outgoing caller IDs {( )}
callservicesd Could not automatically select an outgoing caller ID; multiple telephone numbers are listed in the set of available outgoing caller IDs {( )}
callservicesd Adding call <CSDProviderCall 0x107054300 ...> to dirty calls pool
callservicesd Entitlement check: ... entitlementCapabilities={( "access-call-providers", "modify-calls", "access-call-capabilities", "access-calls" )}> lacks capability 'access-screen-calls'
callservicesd [WARN] ... but no dynamic identifier could be found (1) or no handoff user info exists (1). Not broadcasting frontmost call error
com.apple.CallKit.CallDirectory Unable to initialize CXCallDirectoryStore for reading: Error Domain=NSCocoaErrorDomain Code=513 "You don’t have permission to save the file “CallDirectory” in the folder “Library”." ...
{Error Domain=NSPOSIXErrorDomain Code=13 "Permission denied"}}

After these logs there is still a message that CXStartCallAction is fulfilled:

callservicesd Start call action fulfilled: <CXStartCallAction 0x107231fe0 UUID=8D34853F-55DD-4DEC-97A7-551BFD27C924 ...>

After which the last suspicious log is logged before CXEndCallAction is invoked by CallKit:

Disconnecting call because there wont be a UI to host the call: <CSDProviderCall 0x107054300 ...>
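For reference, a minimal sketch of the provider flow under discussion. Reporting the outgoing call's connection progress before fulfilling the action is an assumption about the poster's setup, not a confirmed fix for the simulator-only hangup:

import CallKit

final class CallManager: NSObject, CXProviderDelegate {
    private let provider: CXProvider
    private let callController = CXCallController()

    override init() {
        let configuration = CXProviderConfiguration()
        configuration.supportsVideo = false
        provider = CXProvider(configuration: configuration)
        super.init()
        provider.setDelegate(self, queue: nil)
    }

    func startCall(to number: String) {
        let handle = CXHandle(type: .phoneNumber, value: number)
        let action = CXStartCallAction(call: UUID(), handle: handle)
        callController.request(CXTransaction(action: action)) { error in
            if let error = error { print("CXStartCallAction request failed: \(error)") }
        }
    }

    func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
        // Report connecting/connected so the system keeps hosting the call.
        provider.reportOutgoingCall(with: action.callUUID, startedConnectingAt: nil)
        provider.reportOutgoingCall(with: action.callUUID, connectedAt: nil)
        action.fulfill()
    }

    func providerDidReset(_ provider: CXProvider) { }
}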
Post not yet marked as solved
2 Replies
236 Views
I added a VideoPlayer view to my project, but I noticed that during loading, or with a different aspect ratio, the default color of this view is black. I would like to change it to match my app's background. Unfortunately, modifiers such as .background or .foregroundColor don't seem to change it. Is there a way to customize this color?

struct PlayerLooperView: View {
    private let queuePlayer: AVQueuePlayer!
    private let playerLooper: AVPlayerLooper!

    init(url: URL) {
        let playerItem = AVPlayerItem(url: url)
        self.queuePlayer = AVQueuePlayer(items: [playerItem])
        self.queuePlayer.isMuted = true
        self.playerLooper = AVPlayerLooper(player: queuePlayer, templateItem: playerItem)
        self.queuePlayer.play()
    }

    var body: some View {
        VideoPlayer(player: queuePlayer)
            .disabled(true)
            .scaledToFit()
    }
}
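A hedged workaround for the color question above: VideoPlayer doesn't expose its backing color, but wrapping AVPlayerLayer in a UIViewRepresentable does. A minimal sketch; the type and property names are illustrative:

import SwiftUI
import AVKit

struct TintablePlayerView: UIViewRepresentable {
    let player: AVPlayer
    let backgroundColor: UIColor

    func makeUIView(context: Context) -> PlayerUIView {
        let view = PlayerUIView()
        view.playerLayer.player = player
        view.backgroundColor = backgroundColor
        view.playerLayer.backgroundColor = backgroundColor.cgColor
        return view
    }

    func updateUIView(_ uiView: PlayerUIView, context: Context) {
        uiView.playerLayer.backgroundColor = backgroundColor.cgColor
    }
}

// Backing UIView whose layer is the AVPlayerLayer itself.
final class PlayerUIView: UIView {
    override static var layerClass: AnyClass { AVPlayerLayer.self }
    var playerLayer: AVPlayerLayer { layer as! AVPlayerLayer }
}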
Post not yet marked as solved
0 Replies
269 Views
I'm attempting to integrate DRM into the app. I've developed a prototype, but the delegate method shouldWaitForLoadingOfRequestedResource isn't being triggered on certain devices, although it functions correctly on others. Notably, it's invoked on Apple TV 4K (3rd generation) Wi-Fi (A2737) but not on Apple TV HD (A1625). Are there any specific configurations needed to ensure this method is invoked?

let url = URL(string: RESOURCE_URL)!

// Create the asset instance and the resource loader because we will be asked
// for the license to play back the DRM-protected asset.
let asset = AVURLAsset(url: url)
let queue = DispatchQueue(label: CUSTOM_SERIAL_QUEUE_LABEL)
asset.resourceLoader.setDelegate(self, queue: queue)

// Create the player item and the player to play it back in.
let playerItem = AVPlayerItem(asset: asset)
let player = AVPlayer(playerItem: playerItem)

// Create a new AVPlayerViewController and pass it a reference to the player.
let controller = AVPlayerViewController()
controller.player = player

// Modally present the player and call the player's play() method when complete.
present(controller, animated: true) {
    player.play()
}

// Please note: if your delegate method is not being called, you need to run on a REAL DEVICE.
func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
    // Getting data for the KSM server. Get the URL from the manifest; we will need it later as it
    // contains the assetId required for the license request.
    guard let url = loadingRequest.request.url else {
        print(#function, "Unable to read URL from loadingRequest")
        loadingRequest.finishLoading(with: NSError(domain: "", code: -1, userInfo: nil))
        return false
    }

    // Link to your certificate on BuyDRM's side.
    // Use the commented section if you want to load the certificate from your bundle, i.e. store it locally.
    /*
    guard let certificateURL = Bundle.main.url(forResource: "certificate", withExtension: "der"),
          let certificateData = try? Data(contentsOf: certificateURL) else {
        print("failed...", #function, "Unable to read the certificate data.")
        loadingRequest.finishLoading(with: NSError(domain: "com.domain.error", code: -2, userInfo: nil))
        return false
    }
    */
    guard let certificateData = try? Data(contentsOf: URL(string: CERTIFICATE_URL)!) else {
        print(#function, "Unable to read the certificate data.")
        loadingRequest.finishLoading(with: NSError(domain: "", code: -2, userInfo: nil))
        return false
    }

    // The assetId from the main/variant manifest - skd://***, the *** part. Get the SPC based on the
    // already collected data, i.e. the certificate and the assetId.
    guard let contentId = url.host, let contentIdData = contentId.data(using: String.Encoding.utf8) else {
        loadingRequest.finishLoading(with: NSError(domain: "", code: -3, userInfo: nil))
        print(#function, "Unable to read the SPC data.")
        return false
    }
    guard let spcData = try? loadingRequest.streamingContentKeyRequestData(forApp: certificateData, contentIdentifier: contentIdData, options: nil) else {
        loadingRequest.finishLoading(with: NSError(domain: "", code: -3, userInfo: nil))
        print(#function, "Unable to read the SPC data.")
        return false
    }

    // Prepare to get the license, i.e. the CKC.
    let requestUrl = CKC_URL
    let stringBody = "spc=\(spcData.base64EncodedString())&assetId=\(contentId)"
    let postData = NSData(data: stringBody.data(using: String.Encoding.utf8)!)

    // Make the POST request with customdata set to the authentication XML.
    var request = URLRequest(url: URL(string: requestUrl)!)
    request.httpMethod = "POST"
    request.httpBody = postData as Data
    request.allHTTPHeaderFields = ["customdata": ACCESS_TOKEN]
    let configuration = URLSessionConfiguration.default
    let session = URLSession(configuration: configuration)
    let task = session.dataTask(with: request) { data, response, error in
        if let data = data {
            // The response from the KeyOS MultiKey License server may be an error inside JSON.
            do {
                let parsedData = try JSONSerialization.jsonObject(with: data) as! [String: Any]
                let errorId = parsedData["errorid"] as! String
                let errorMsg = parsedData["errormsg"] as! String
                print(#function, "License request failed with an error: \(errorMsg) [\(errorId)]")
            } catch let error as NSError {
                print(#function, "The response may be a license. Moving on.", error)
            }
            // The response from the KeyOS MultiKey License server is Base64 encoded.
            let dataRequest = loadingRequest.dataRequest!
            // This command sends the CKC to the player.
            dataRequest.respond(with: Data(base64Encoded: data)!)
            loadingRequest.finishLoading()
        } else {
            print(#function, error?.localizedDescription ?? "Error during CKC request.")
        }
    }
    task.resume()

    // Tell the AVPlayer instance to wait. We are working on getting what it wants.
    return true
}
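As a side note, not a confirmed fix for the device difference above: newer FairPlay integrations typically use AVContentKeySession instead of the resource-loader delegate, which gives more predictable key-request delivery. A hedged sketch of the wiring only, where the delegate object and queue label are illustrative and the delegate body would carry the same SPC/CKC exchange as the code above:

import AVFoundation

// Sketch: route FairPlay key requests through AVContentKeySession.
let keySession = AVContentKeySession(keySystem: .fairPlayStreaming)
let keyQueue = DispatchQueue(label: "com.example.keys") // illustrative label
keySession.setDelegate(keySessionDelegate, queue: keyQueue) // your AVContentKeySessionDelegate
keySession.addContentKeyRecipient(asset) // the AVURLAsset from above

// The delegate then receives contentKeySession(_:didProvide:) with an
// AVContentKeyRequest, where the SPC/CKC exchange happens.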
Post not yet marked as solved
0 Replies
212 Views
I'm trying to use AVPlayer to capture frames from a livestream that is playing remotely. Eventually I want to convert these frames to UIImages to be displayed. The code I have right now is not working because pixelBuffer never gets an actual value. When I print itemTime, its value is continuously 0, which I think might be a potential cause of this issue. Would appreciate any help with getting this to work.

import SwiftUI
import RealityKit
import RealityKitContent
import AVFoundation
import AVKit

class ViewController: UIViewController {
    let player = AVPlayer(url: URL(string: {webrtc stream link})!)
    let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA)])

    override func viewDidLoad() {
        print("doing viewDidLoad")
        super.viewDidLoad()
        player.currentItem!.add(videoOutput)
        player.play()
        let displayLink = CADisplayLink(target: self, selector: #selector(displayLinkDidRefresh(link:)))
        displayLink.add(to: RunLoop.main, forMode: RunLoop.Mode.common)
    }

    @objc func displayLinkDidRefresh(link: CADisplayLink) {
        let itemTime = videoOutput.itemTime(forHostTime: CACurrentMediaTime())
        if videoOutput.hasNewPixelBuffer(forItemTime: itemTime) {
            if let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil) {
                print("pixelBuffer \(pixelBuffer)") // yay, pixel buffer
                let image = CIImage(cvImageBuffer: pixelBuffer) // or maybe CIImage?
                print("CIImage \(image)")
            }
        }
    }
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
            let viewcontroller = ViewController()
            viewcontroller.viewDidLoad()
        }
    }
}
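For the UIImage conversion goal mentioned above, a minimal sketch of the pixel-buffer-to-image step, assuming a valid CVPixelBuffer arrives. Reusing a single CIContext is deliberate, since creating one per frame is expensive:

import CoreImage
import UIKit

let ciContext = CIContext() // create once and reuse across frames

func makeUIImage(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}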
Post not yet marked as solved
0 Replies
333 Views
I want to develop an AI assistant iOS application using the Whisper and ChatGPT OpenAI APIs. I am implementing the following steps:
1. An audio engine records the user's voice.
2. The audio chunk is sent to Whisper for speech-to-text.
3. That text is sent to the ChatGPT OpenAI API to get a response.
4. The response is sent to a speech synthesizer, which speaks it through the built-in speaker.
Throughout this process I don't want to disable the microphone, because the user should be able to interrupt the speech synthesizer at any time. It should be real-time and feel like a continuous call between the user and the AI assistant. Problem: when the user speaks, the microphone takes the input and appends it to the audio engine's recording file. That chunk is sent to Whisper for transcription, the transcribed text goes to the ChatGPT API for a response, and the response goes to the speech synthesizer, which produces output on the speaker. The issue is that the microphone then picks up the synthesizer's voice from the speaker, creating a feedback loop. What can I do to stop my microphone from picking up the iPhone speaker's output? Talking Tom, CallAnnie, and many other iOS applications continuously use the microphone and generate speaker output without overlap or looping. Please suggest possible approaches. I have tried every audio engine category and setting I could, with record, playback, playAndRecord, etc. Nothing stops the speaker's voice from reaching my microphone. As I see it, the microphone should never pick up device-generated audio. What could the solution be? If my approach is wrong, I am also open to suggestions and guidance.
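One approach worth noting here, as a hedged sketch rather than a guaranteed fix: Apple's voice-processing I/O applies built-in acoustic echo cancellation, so speaker output driven through the same engine is subtracted from the microphone signal. This assumes an AVAudioEngine-based pipeline; the function name is illustrative:

import AVFoundation

func configureVoiceProcessing(engine: AVAudioEngine) throws {
    // Two-way voice configuration routed to the speaker.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
    // iOS 13+: enables the voice-processing (echo-cancelling) I/O unit.
    // Playback must run through this same engine's output for AEC to apply.
    try engine.inputNode.setVoiceProcessingEnabled(true)
}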
Post not yet marked as solved
0 Replies
359 Views
How do I implement a player in SwiftUI with support for parental controls? Sample code: Working with Overlays and Parental Controls in tvOS. I use AVPlayerViewController in SwiftUI. On cancel, when the request is rejected, the screen is black. On changing the channel (on an up or down move command), I replace the current player item with a new one. The item's status is ready to play and the request succeeds upon replacement, and I set the player to play, but the screen is still black.
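For reference, a hedged sketch of the channel-change flow described above, deferring play() until the replacement item reports ready. Names are illustrative, and this does not address the parental-controls rejection path:

import AVFoundation

var statusObservation: NSKeyValueObservation?

func switchChannel(to url: URL, on player: AVPlayer) {
    let item = AVPlayerItem(url: url)
    // Observe status before replacing, so the ready/failed transition is caught.
    statusObservation = item.observe(\.status, options: [.new]) { item, _ in
        if item.status == .readyToPlay {
            player.play()
        } else if item.status == .failed {
            print("Replacement item failed:", item.error ?? "unknown error")
        }
    }
    player.replaceCurrentItem(with: item)
}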
Post not yet marked as solved
1 Reply
303 Views
In an iOS UNNotificationContentExtension with a media player, I have an AVPlayer which can play either a WAV or an MP4 remotely, depending on the push payload's userInfo dictionary. I have implemented mediaPlayPauseButtonFrame, mediaPlayPauseButtonTintColor, and mediaPlayPauseButtonType, have overridden canBecomeFirstResponder to force true, and set the view to becomeFirstResponder when the AVPlayer is added. I have implemented the UNNotificationContentExtension protocol's mediaPlay and mediaPause methods. I have also subscribed to the .AVPlayerItemDidPlayToEndTime (NS)Notification, and when it fires I call a method on the VC, which calls mediaPause. When the AVPlayer reaches the end, the .AVPlayerItemDidPlayToEndTime notification is properly emitted, my method is called, and mediaPause is called. However, the media play/pause button provided by UNNotificationContentExtension remains visibly in the "playing" state instead of changing to the "paused" state. The button correctly changes its display state when the user presses it manually, so it works. And so, collective Obi-Wan Kenobis, what am I doing wrong? I have tried resigning first responder, have no access to the button itself (as far as I know), and am wondering where to go next. (This is the only thing not working, by the way.) Sanitized example:

import UIKit
import UserNotifications
import UserNotificationsUI
import AVFoundation // needed for AVPlayer and AVPlayerLayer

class NotificationViewController: UIViewController, UNNotificationContentExtension {

    // Constants
    private let viewModel = ...

    private var mediaPlayer: AVPlayer?
    private var mediaPlayerLayer: AVPlayerLayer?
    private var mediaPlayerItem: AVPlayerItem? { mediaPlayer?.currentItem }

    override var canBecomeFirstResponder: Bool { true }

    // MARK: - UNNotificationContentExtension vars

    var mediaPlayPauseButtonType: UNNotificationContentExtensionMediaPlayPauseButtonType { return .default }
    var mediaPlayPauseButtonFrame: CGRect { return CGRect(x: 0.0, y: 0.0, width: 50.0, height: 50.0) }
    var mediaPlayPauseButtonTintColor: UIColor { return .blue }

    ...

    func didReceive(_ notification: UNNotification) {
        ... // Process userInfo for url
    }

    ...

    @MainActor
    func playAudio(from url: URL) async {
        let mediaPlayer = AVPlayer(url: url)
        let mediaPlayerLayer = AVPlayerLayer(player: mediaPlayer)
        ... // view setup
        mediaPlayerLayer.frame = ...
        self.mediaPlayer = mediaPlayer
        self.mediaPlayerLayer = mediaPlayerLayer
        self.view.layer.addSublayer(mediaPlayerLayer)
        becomeFirstResponder()
    }

    // MARK: - UNNotificationContentExtension

    func mediaPlay() {
        mediaPlayer?.play()
    }

    func mediaPause() {
        mediaPlayer?.pause()
    }

    // MARK: - Utilities

    private func subscribe(to item: AVPlayerItem) {
        NotificationCenter.default.addObserver(self, selector: #selector(playedToEnd), name: .AVPlayerItemDidPlayToEndTime, object: item)
    }

    @objc func playedToEnd(notification: NSNotification) {
        mediaPause()
    }
}
Post not yet marked as solved
0 Replies
330 Views
I want to implement an immersive environment similar to Apple TV's Cinema environment for the video that plays in my app. Currently, I want to use an AVPlayerViewController so that I don't have to build a control view or deal with aspect ratios (which I would have to do if I used VideoMaterial). To do this, it looks like I'll need to use the imagery from the video stream itself as an image for an ImageBasedLightComponent, but the API for that class seems to restrict its input to an EnvironmentResource, which looks like it's meant to use an equirectangular still image that has to be part of the app bundle. Does anyone know how to achieve this effect, where the "light" from the video being played in an AVPlayerViewController's player can be cast on 3D objects in the RealityKit scene? Is Apple TV doing something wild like combining an AVPlayerViewController and VideoMaterial, where the VideoMaterial is layered onto the objects in the scene to simulate a light source? Thanks in advance!
Post not yet marked as solved
2 Replies
393 Views
Dear Apple Developer Forum Community, I hope this message finds you well. I am writing to seek assistance regarding an error I encountered while attempting to create Vedic content in the app from a YouTube link. I have been unsuccessful in resolving it. I am reaching out to the community in the hope that someone might have encountered a similar issue or have expertise in troubleshooting Xcode errors. Any guidance, suggestions, or solutions would be greatly appreciated. Thank you very much for your time and assistance. Sincerely, Zipzy games
Post marked as solved
8 Replies
661 Views
I'm trying to programmatically alter a video frame before applying it to geometry using VideoMaterial. What I'm finding is that the output appears as though no videoCompositor was applied to the playerItem. Is this expected behavior? Is there a workaround besides using an ExportSession to bounce the movie to disk?
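For context, a minimal sketch of the per-frame alteration being described, using the Core Image compositor path on the player item. Whether VideoMaterial honors videoComposition is exactly the open question here, and movieURL is illustrative:

import AVFoundation
import CoreImage

// Sketch: attach a CI-filter compositor to the player item.
let asset = AVURLAsset(url: movieURL)
let item = AVPlayerItem(asset: asset)
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    // Example alteration: filter each source frame before display.
    let filtered = request.sourceImage.applyingFilter("CIPhotoEffectNoir")
    request.finish(with: filtered, context: nil)
})
item.videoComposition = composition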
Post not yet marked as solved
0 Replies
320 Views
I am using the default AVPlayerViewController with the default player controls and skip buttons for video streaming in a tvOS app. Custom controls/buttons are not being used. Can we override AVPlayerViewController's VoiceOver accessibility text/behaviour for the default player controls? I am not able to find any Apple documentation on this, and I'm not sure it's even possible.
Post not yet marked as solved
2 Replies
659 Views
Hi, in the Destinations sample code project and the related WWDC talk on spatial video, it seems to be implied that the video player will show 3D stereoscopic videos. However, in the Photos app there's a vignette in the simulator (and in marketing material) when viewing spatial video, a portal kind of effect. Without access to a device, I'm wondering if my spatial videos are actually being played as 3D stereoscopic videos in the AVPlayerViewController, since I'm not seeing the vignette. I'm thinking the vignette is a Photos-specific visual effect, but I wanted to double-check to make sure I'm not misunderstanding something about AVPlayerViewController. Does anyone know if spatial videos played through AVPlayerViewController will appear as stereoscopic, even without the vignette? Has anyone tried the Destinations sample code to play spatial videos on a device to confirm? Thanks!
Post not yet marked as solved
0 Replies
402 Views
I am embedding a SwiftUI VideoPlayer in a VStack and see that the screen goes black (i.e. the content disappears, even though the video player gets autorotated) when the device is rotated. The issue happens even when I use AVPlayerViewController (as a UIViewControllerRepresentable). Is this a bug, or am I doing something wrong?

var videoURL: URL
let player = AVPlayer()

var body: some View {
    VStack {
        VideoPlayer(player: player)
            .frame(maxWidth: .infinity)
            .frame(height: 300)
            .padding()
            .ignoresSafeArea()
            .background {
                Color.black
            }
            .onTapGesture {
                player.rate = player.rate == 0.0 ? 1.0 : 0.0
            }
        Spacer()
    }
    .ignoresSafeArea()
    .background(content: {
        Color.black
    })
    .onAppear {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSession.Category.playback, mode: AVAudioSession.Mode.default, options: AVAudioSession.CategoryOptions.duckOthers)
        } catch {
            NSLog("Unable to set session category to playback")
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player.replaceCurrentItem(with: playerItem)
    }
}
Post not yet marked as solved
0 Replies
384 Views
I am working on implementing tap gestures in a dynamic VideoPlayer made with AVKit. The intent is that when a video is viewed in a feed (this is for a social media app), it plays without sound; tapping the video once enables sound, and tapping it twice makes it full screen. Currently the single tap works. However, the double tap isn't detected unless I tap on the top-right corner of the video.

import SwiftUI
import AVKit

struct VideoPlayerView: View {
    @StateObject private var viewModel: VideoPlayerViewModel

    init(url: URL, isFeedView: Bool = true) {
        _viewModel = StateObject(wrappedValue: .init(url: url, isFeedView: isFeedView))
    }

    var body: some View {
        ZStack {
            if let player: AVPlayer = viewModel.player {
                VideoPlayer(player: player)
                    .onAppear {
                        // Start playing or resume from the last known position if in feed view
                        if viewModel.isFeedView {
                            if let lastKnownTime = viewModel.lastKnownTime {
                                player.seek(to: CMTime(seconds: lastKnownTime, preferredTimescale: 600))
                            }
                            player.play()
                            player.volume = 0 // Set volume to 0 for feed view
                        }
                    }
                    .onDisappear {
                        // Pause the video and store the last known time
                        viewModel.lastKnownTime = player.currentTime().seconds
                        player.pause()
                    }
                    .contentShape(Rectangle())
                    .gesture(TapGesture(count: 2).onEnded {
                        print("Double tap detected")
                        viewModel.isFullScreen.toggle()
                    })
                    .simultaneousGesture(TapGesture().onEnded {
                        print("Single tap detected")
                        player.volume = 1 // Set volume to 1
                    })
            }
        }
        .maxSize()
        .fullScreenCover(isPresented: $viewModel.isFullScreen) {
            AVPlayerViewControllerRepresented(viewModel: viewModel)
        }
    }
}

class VideoPlayerViewModel: ObservableObject {
    @Published var player: AVPlayer?
    @Published var lastKnownTime: Double?
    @Published var isFullScreen: Bool = false
    @Published var isFeedView: Bool

    init(url: URL, isFeedView: Bool = true) {
        player = AVPlayer(url: url)
        lastKnownTime = nil
        self.isFeedView = isFeedView
        if isFeedView {
            registerForPlaybackEndNotification()
        }
    }

    private func registerForPlaybackEndNotification() {
        NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: player?.currentItem, queue: nil) { [weak self] _ in
            self?.videoDidFinish()
        }
    }

    private func videoDidFinish() {
        // Replay logic for feed view
        if isFeedView, let player = player {
            player.seek(to: .zero)
            player.play()
        }
    }
}

I tried using .contentShape(Rectangle()) as I read that it expands the detectable area for taps, but to no avail. How can I make it so that a double tap anywhere in the video is detected and the video goes full screen?
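One hedged approach to the hit-area problem: place a transparent overlay over the VideoPlayer and attach both taps to it, so the player's own controls don't swallow the touches. A sketch under that assumption; declaring the double tap first makes the single tap wait for it to fail:

// Sketch: a clear overlay that owns both tap gestures across the full surface.
VideoPlayer(player: player)
    .overlay {
        Color.clear
            .contentShape(Rectangle())
            .onTapGesture(count: 2) {
                viewModel.isFullScreen.toggle()
            }
            .onTapGesture {
                player.volume = 1
            }
    }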
Post not yet marked as solved
2 Replies
447 Views
I download an MP4 file into the data folder. I am able to run an AVPlayer and view the video in the app. After some time the video won't play. If I enable UIFileSharingEnabled and check whether the file exists, I can see it's there, but it won't play even when accessed through Files. If I copy the file from iOS to a Mac, I can play the video on the Mac, but then no longer on iOS. If I delete the app, install it again, and download the video while UIFileSharingEnabled is on, then if it plays in the app it can also be played in the Files app, but again after some time it stops being playable... I can see only the first frame. I can even see the length of the video. It happens with multiple videos. Any clues?
Post not yet marked as solved
0 Replies
511 Views
I'm working on a custom spatial video player that uses AVSampleBufferDisplayLayer as its render layer. When I feed it CMSampleBuffers output from a VTCompressionSession using the new encoding API, it displays normally, but I don't know if it will work on Vision Pro. Does anyone have an idea?
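For reference, a minimal sketch of the enqueue path in question, assuming the sample buffers carry a valid CMVideoFormatDescription and presentation timestamps from the pipeline described above:

import AVFoundation

// Sketch: feed sample buffers to the display layer as capacity allows.
let displayLayer = AVSampleBufferDisplayLayer()
displayLayer.videoGravity = .resizeAspect

func enqueue(_ sampleBuffer: CMSampleBuffer) {
    if displayLayer.isReadyForMoreMediaData {
        displayLayer.enqueue(sampleBuffer)
    }
}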