VideoToolbox


Work directly with hardware-accelerated video encoding and decoding capabilities using VideoToolbox.

VideoToolbox Documentation

Posts under VideoToolbox tag

30 Posts
Post not yet marked as solved
0 Replies
76 Views
I have an image-viewing app with support for AVIF (and AVIS) images. I'm trying to figure out whether the recent bug in CoreMedia (dav1d) affects my app. The Apple security update: https://support.apple.com/en-gb/HT214097. The vulnerable code path in dav1d is only reached when c->n_fc > 1 (https://code.videolan.org/videolan/dav1d/-/blob/2b475307dc11be9a1c3cc4358102c76a7f386a51/src/decode.c#L2845), where c is the dav1d context. From what I can see with some reverse engineering, CMPhoto calls into VideoToolbox (which internally calls into AV1SW.videodecoder, a wrapper around dav1d) with the max frame delay hardcoded to 1 in the dav1d settings, which in turn means that c->n_fc in dav1d is always 1. From my understanding, this should mean that my app isn't affected. The Apple security update, however, clearly states that "Processing an image may lead to arbitrary code execution". Surely I'm missing something?
Posted Last updated
.
Post not yet marked as solved
5 Replies
2.2k Views
Recently I've been trying to play some AV1-encoded streams on my iPhone 15 Pro Max. First, I check for hardware support:

VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1); // YES

Then I need to create a CMFormatDescription in order to pass it into a VTDecompressionSession. I've tried the following:

{
  mediaType:'vide'
  mediaSubType:'av01'
  mediaSpecific: {
    codecType: 'av01'
    dimensions: 394 x 852
  }
  extensions: {{
    CVFieldCount = 1;
    CVImageBufferChromaLocationBottomField = Left;
    CVImageBufferChromaLocationTopField = Left;
    CVPixelAspectRatio = {
      HorizontalSpacing = 1;
      VerticalSpacing = 1;
    };
    FullRangeVideo = 0;
  }}
}

but VTDecompressionSessionCreate gives me error -8971 (codecExtensionNotFoundErr, I assume). So it has something to do with the extensions dictionary? I can't find anywhere which set of extensions is necessary for it to work 😿. VideoToolbox has convenient functions for creating descriptions of AVC and HEVC streams (CMVideoFormatDescriptionCreateFromH264ParameterSets and CMVideoFormatDescriptionCreateFromHEVCParameterSets), but not for AV1. As of today I am using Xcode 15.0 with the iOS 17.0.0 SDK.
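A minimal Swift sketch of one possible direction, not verified against Apple documentation: the "av1C" key name and the idea of passing the container's AV1CodecConfigurationRecord through the sample-description extension atoms are assumptions modeled on how avcC/hvcC atoms are carried for AVC/HEVC.

import CoreMedia

// Assumes `av1CBox` holds the raw AV1CodecConfigurationRecord ("av1C" box payload)
// taken from the container, or built from the sequence header OBU.
func makeAV1FormatDescription(av1CBox: Data, width: Int32, height: Int32) -> CMFormatDescription? {
    let extensions = [
        kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms: ["av1C": av1CBox]   // key name is an assumption
    ] as CFDictionary
    var formatDescription: CMFormatDescription?
    let status = CMVideoFormatDescriptionCreate(
        allocator: kCFAllocatorDefault,
        codecType: kCMVideoCodecType_AV1,
        width: width,
        height: height,
        extensions: extensions,
        formatDescriptionOut: &formatDescription)
    return status == noErr ? formatDescription : nil
}

The -8971 (codecExtensionNotFoundErr) result is at least consistent with the decoder looking for a codec configuration extension that the dump above does not contain.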
Posted
by mrlvsva.
Last updated
.
Post not yet marked as solved
0 Replies
161 Views
I am using a VideoToolbox VTCompressionSession to encode frames in H.264 format, which I then send over a WebSocket to a browser. The received frames are decoded and the output is rendered on the website. With some encoders, the video is always rendered with a four-frame latency.

How frames are sent to the server:
start> ------ f1 ------ f2 ------ f3 ------ f4 ------ f5 ...
How rendering happens:
start> ---------------------------------------------- f1 ------ f2 ------ f3 ------ f4 ...

Sometimes this becomes a two-frame latency and sometimes a sixteen-frame latency, so usability is affected. I'm using this configuration for VideoToolbox's VTCompressionSession:
kVTCompressionPropertyKey_AverageBitRate = 3MB
kVTCompressionPropertyKey_ExpectedFrameRate = 24
kVTCompressionPropertyKey_RealTime = true
kVTCompressionPropertyKey_ProfileLevel = kVTProfileLevel_H264_High_AutoLevel
kVTCompressionPropertyKey_AllowFrameReordering = false
kVTCompressionPropertyKey_MaxKeyFrameInterval = 1000

With the same configuration I am able to achieve 1-in/1-out with com.apple.videotoolbox.videoencoder.h264.gva. The issue reproduces with the encoder com.apple.videotoolbox.videoencoder.ave.avc, so I'm not sure whether it is encoder-specific. I have also seen differences in the VUI parameters between the encoded output of the two encoders. I want to know whether there is something I can do in the encoder configuration, or another API provided by VideoToolbox, to ensure that frames are decoded and rendered one-for-one by the decoder. Thanks in advance.
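One hedged sketch of something worth trying: creating the session with the low-latency rate-control encoder specification, which requests an encoder path designed to emit one encoded frame per input frame. The dimensions and bit-rate values below are placeholders, and whether this removes the delay for a given hardware encoder is not guaranteed.

import VideoToolbox

let encoderSpecification = [
    kVTVideoEncoderSpecification_EnableLowLatencyRateControl: true
] as CFDictionary

var session: VTCompressionSession?
let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1920, height: 1080,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: encoderSpecification,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,                       // encode with the per-frame output-handler variant instead
    refcon: nil,
    compressionSessionOut: &session)
if status == noErr, let session = session {
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AllowFrameReordering, value: kCFBooleanFalse)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ExpectedFrameRate, value: NSNumber(value: 24))
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AverageBitRate, value: NSNumber(value: 3_000_000))
}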
Posted
by d4redevil.
Last updated
.
Post not yet marked as solved
9 Replies
430 Views
So I've been trying for weeks now to implement a compression mechanism in my app that compresses MV-HEVC video files in-app without stripping them of their 3D properties, but every implementation I have tried has either stripped the encoded MV-HEVC video file of its 3D properties (making the video monoscopic) or crashed with a fatal error. I've read the "Reading multiview 3D video files" and "Converting side-by-side 3D video to multiview HEVC" documentation, but was unable to come up with anything useful myself. My question therefore is: how do you go about compressing/encoding an MV-HEVC video file in-app while preserving its stereoscopic/3D properties? Below is the best implementation I was able to come up with (which simply compresses uploaded MV-HEVC videos with an arbitrary bit rate). With this implementation (my compressVideo function), the MV-HEVC files that go through it are compressed fine, but the final result is the loss of the video file's stereoscopic/3D properties. If anyone could point me in the right direction it would be greatly, greatly appreciated. My current implementation (which strips MV-HEVC videos of their stereoscopic/3D properties):

static func compressVideo(sourceUrl: URL, bitrate: Int, completion: @escaping (Result<URL, Error>) -> Void) {
    let asset = AVAsset(url: sourceUrl)
    asset.loadTracks(withMediaType: .video) { videoTracks, videoError in
        guard let videoTrack = videoTracks?.first, videoError == nil else {
            completion(.failure(videoError ?? NSError(domain: "VideoUploader", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to load video track"])))
            return
        }
        asset.loadTracks(withMediaType: .audio) { audioTracks, audioError in
            guard let audioTrack = audioTracks?.first, audioError == nil else {
                completion(.failure(audioError ?? NSError(domain: "VideoUploader", code: -2, userInfo: [NSLocalizedDescriptionKey: "Failed to load audio track"])))
                return
            }
            let outputUrl = sourceUrl.deletingLastPathComponent().appendingPathComponent(UUID().uuidString).appendingPathExtension("mov")
            guard let assetReader = try? AVAssetReader(asset: asset),
                  let assetWriter = try? AVAssetWriter(outputURL: outputUrl, fileType: .mov) else {
                completion(.failure(NSError(domain: "VideoUploader", code: -3, userInfo: [NSLocalizedDescriptionKey: "AssetReader/Writer initialization failed"])))
                return
            }
            let videoReaderSettings: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB]
            let videoSettings: [String: Any] = [
                AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: bitrate],
                AVVideoCodecKey: AVVideoCodecType.hevc,
                AVVideoHeightKey: videoTrack.naturalSize.height,
                AVVideoWidthKey: videoTrack.naturalSize.width
            ]
            let assetReaderVideoOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoReaderSettings)
            let assetReaderAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)
            if assetReader.canAdd(assetReaderVideoOutput) {
                assetReader.add(assetReaderVideoOutput)
            } else {
                completion(.failure(NSError(domain: "VideoUploader", code: -4, userInfo: [NSLocalizedDescriptionKey: "Couldn't add video output reader"])))
                return
            }
            if assetReader.canAdd(assetReaderAudioOutput) {
                assetReader.add(assetReaderAudioOutput)
            } else {
                completion(.failure(NSError(domain: "VideoUploader", code: -5, userInfo: [NSLocalizedDescriptionKey: "Couldn't add audio output reader"])))
                return
            }
            let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
            let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
            videoInput.transform = videoTrack.preferredTransform
            assetWriter.shouldOptimizeForNetworkUse = true
            assetWriter.add(videoInput)
            assetWriter.add(audioInput)
            assetReader.startReading()
            assetWriter.startWriting()
            assetWriter.startSession(atSourceTime: CMTime.zero)
            let videoInputQueue = DispatchQueue(label: "videoQueue")
            let audioInputQueue = DispatchQueue(label: "audioQueue")
            videoInput.requestMediaDataWhenReady(on: videoInputQueue) {
                while videoInput.isReadyForMoreMediaData {
                    if let sample = assetReaderVideoOutput.copyNextSampleBuffer() {
                        videoInput.append(sample)
                    } else {
                        videoInput.markAsFinished()
                        if assetReader.status == .completed {
                            assetWriter.finishWriting {
                                completion(.success(outputUrl))
                            }
                        }
                        break
                    }
                }
            }
            audioInput.requestMediaDataWhenReady(on: audioInputQueue) {
                while audioInput.isReadyForMoreMediaData {
                    if let sample = assetReaderAudioOutput.copyNextSampleBuffer() {
                        audioInput.append(sample)
                    } else {
                        audioInput.markAsFinished()
                        break
                    }
                }
            }
        }
    }
}
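A hedged sketch (not a verified recipe) of the direction the multiview sample code points in: the writer input above only asks for single-layer HEVC, so one approach is to give it MV-HEVC compression properties and append frames as tagged left/right buffer pairs through AVAssetWriterInputTaggedPixelBufferGroupAdaptor. The layer/view ID values, dimensions, and bit rate below are placeholder assumptions.

import AVFoundation
import VideoToolbox

// Placeholder convention: layer/view 0 = left eye, 1 = right eye; match the source asset.
let mvHEVCCompressionProperties: [String: Any] = [
    AVVideoAverageBitRateKey: 15_000_000,
    kVTCompressionPropertyKey_MVHEVCVideoLayerIDs as String: [0, 1],
    kVTCompressionPropertyKey_MVHEVCViewIDs as String: [0, 1],
    kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs as String: [0, 1],
    kVTCompressionPropertyKey_HasLeftStereoEyeView as String: true,
    kVTCompressionPropertyKey_HasRightStereoEyeView as String: true
]
let mvHEVCVideoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
    AVVideoCompressionPropertiesKey: mvHEVCCompressionProperties
]
let stereoVideoInput = AVAssetWriterInput(mediaType: .video, outputSettings: mvHEVCVideoSettings)
let taggedAdaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(
    assetWriterInput: stereoVideoInput,
    sourcePixelBufferAttributes: nil)
// Frames are then appended as left/right CMTaggedBuffer pairs via
// taggedAdaptor.appendTaggedBuffers(_:withPresentationTime:) rather than as plain sample buffers.

On the reading side, my understanding of the multiview sample code is that the AVAssetReaderTrackOutput must also be asked to decode both layers (via AVVideoDecompressionPropertiesKey with kVTDecompressionPropertyKey_RequestedMVHEVCVideoLayerIDs); otherwise only one eye ever reaches the writer, which matches the monoscopic result described above.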
Posted Last updated
.
Post not yet marked as solved
2 Replies
299 Views
Hi everyone, I need to add a spatial video maker to my app, which was written in Objective-C. I found some reference code in Swift; can you help me convert it to Objective-C?

let left = CMTaggedBuffer(
    tags: [.stereoView(.leftEye), .videoLayerID(leftEyeLayerIndex)],
    pixelBuffer: leftEyeBuffer)
let right = CMTaggedBuffer(
    tags: [.stereoView(.rightEye), .videoLayerID(rightEyeLayerIndex)],
    pixelBuffer: rightEyeBuffer)
let result = adaptor.appendTaggedBuffers(
    [left, right],
    withPresentationTime: leftPresentationTs)
Posted
by pinkywon.
Last updated
.
Post not yet marked as solved
0 Replies
231 Views
Is AVQT capable of measuring the encoding quality of PQ- or HLG-based content beyond SDR? If so, how can I leverage it? If not, is there a roadmap or expected timing for enabling this capability?
Posted
by dgbaer.
Last updated
.
Post not yet marked as solved
1 Reply
435 Views
First of all, I tried MobileVLCKit but there was too much delay. Then I wrote a UDPManager class; my code is below. I would be very happy if anyone has information and wants to point me in the right direction.

Broadcast code:
ffmpeg -f avfoundation -video_size 1280x720 -framerate 30 -i "0" -c:v libx264 -preset medium -tune zerolatency -f mpegts "udp://127.0.0.1:6000?pkt_size=1316"

Live view code (almost 0 delay):
ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 1 -strict experimental -framedrop -f mpegts -vf setpts=0 udp://127.0.0.1:6000
or
mpv udp://127.0.0.1:6000 --no-cache --untimed --no-demuxer-thread --vd-lavc-threads=1

UDPManager:

import Foundation
import AVFoundation
import CoreMedia
import VideoDecoder
import SwiftUI
import Network
import Combine
import CocoaAsyncSocket
import VideoToolbox

class UDPManager: NSObject, ObservableObject, GCDAsyncUdpSocketDelegate {
    private let host: String
    private let port: UInt16
    private var socket: GCDAsyncUdpSocket?
    @Published var videoOutput: CMSampleBuffer?

    init(host: String, port: UInt16) {
        self.host = host
        self.port = port
    }

    func connectUDP() {
        do {
            socket = GCDAsyncUdpSocket(delegate: self, delegateQueue: .global())
            //try socket?.connect(toHost: host, onPort: port)
            try socket?.bind(toPort: port)
            try socket?.enableBroadcast(true)
            try socket?.enableReusePort(true)
            try socket?.beginReceiving()
        } catch {
            print("UDP soketi oluşturma hatası: \(error)")
        }
    }

    func closeUDP() {
        socket?.close()
    }

    func udpSocket(_ sock: GCDAsyncUdpSocket, didConnectToAddress address: Data) {
        print("UDP Bağlandı.")
    }

    func udpSocket(_ sock: GCDAsyncUdpSocket, didNotConnect error: Error?) {
        print("UDP soketi bağlantı hatası: \(error?.localizedDescription ?? "Bilinmeyen hata")")
    }

    func udpSocket(_ sock: GCDAsyncUdpSocket, didReceive data: Data, fromAddress address: Data, withFilterContext filterContext: Any?) {
        if !data.isEmpty {
            DispatchQueue.main.async {
                self.videoOutput = self.createSampleBuffer(from: data)
            }
        }
    }

    func createSampleBuffer(from data: Data) -> CMSampleBuffer? {
        var blockBuffer: CMBlockBuffer?
        var status = CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault,
            memoryBlock: UnsafeMutableRawPointer(mutating: (data as NSData).bytes),
            blockLength: data.count,
            blockAllocator: kCFAllocatorNull,
            customBlockSource: nil,
            offsetToData: 0,
            dataLength: data.count,
            flags: 0,
            blockBufferOut: &blockBuffer)
        if status != noErr {
            return nil
        }
        var sampleBuffer: CMSampleBuffer?
        let sampleSizeArray = [data.count]
        status = CMSampleBufferCreateReady(
            allocator: kCFAllocatorDefault,
            dataBuffer: blockBuffer,
            formatDescription: nil,
            sampleCount: 1,
            sampleTimingEntryCount: 0,
            sampleTimingArray: nil,
            sampleSizeEntryCount: 1,
            sampleSizeArray: sampleSizeArray,
            sampleBufferOut: &sampleBuffer)
        if status != noErr {
            return nil
        }
        return sampleBuffer
    }
}

I didn't know how to convert the Data object to video, so I searched, found the createSampleBuffer code above, and wanted to try it. Then I tried to display the CMSampleBuffer in a player, but it just shows a white screen and doesn't work:

struct SampleBufferPlayerView: UIViewRepresentable {
    typealias UIViewType = UIView
    var sampleBuffer: CMSampleBuffer

    func makeUIView(context: Context) -> UIView {
        let view = UIView(frame: .zero)
        let displayLayer = AVSampleBufferDisplayLayer()
        displayLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(displayLayer)
        context.coordinator.displayLayer = displayLayer
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        context.coordinator.sampleBuffer = sampleBuffer
        context.coordinator.updateSampleBuffer()
    }

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    class Coordinator {
        var displayLayer: AVSampleBufferDisplayLayer?
        var sampleBuffer: CMSampleBuffer?

        func updateSampleBuffer() {
            guard let displayLayer = displayLayer, let sampleBuffer = sampleBuffer else { return }
            if displayLayer.isReadyForMoreMediaData {
                displayLayer.enqueue(sampleBuffer)
            } else {
                displayLayer.requestMediaDataWhenReady(on: .main) {
                    if displayLayer.isReadyForMoreMediaData {
                        displayLayer.enqueue(sampleBuffer)
                        print("isReadyForMoreMediaData")
                    }
                }
            }
        }
    }
}

And I tried to use it but couldn't figure it out; can anyone help me?

struct ContentView: View {
    // udp://@127.0.0.1:6000
    @ObservedObject var udpManager = UDPManager(host: "127.0.0.1", port: 6000)

    var body: some View {
        VStack {
            if let buffer = udpManager.videoOutput {
                SampleBufferDisplayLayerView(sampleBuffer: buffer)
                    .frame(width: 300, height: 200)
            }
        }
        .onAppear(perform: {
            udpManager.connectUDP()
        })
    }
}
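For reference, a hedged sketch of the piece this pipeline appears to be missing: the UDP payloads are MPEG-TS packets, so they first have to be demuxed into H.264 NAL units, and a CMSampleBuffer needs a format description built from the stream's SPS/PPS before AVSampleBufferDisplayLayer can render it (the createSampleBuffer above passes formatDescription: nil). The function below only shows the format-description step and assumes the sps/pps Data values (payloads without start codes) have already been parsed out of the demuxed stream.

import CoreMedia

func makeH264FormatDescription(sps: Data, pps: Data) -> CMVideoFormatDescription? {
    var formatDescription: CMVideoFormatDescription?
    var status: OSStatus = noErr
    let spsBytes = [UInt8](sps)
    let ppsBytes = [UInt8](pps)
    spsBytes.withUnsafeBufferPointer { spsPointer in
        ppsBytes.withUnsafeBufferPointer { ppsPointer in
            let parameterSets: [UnsafePointer<UInt8>] = [spsPointer.baseAddress!, ppsPointer.baseAddress!]
            let parameterSetSizes: [Int] = [spsBytes.count, ppsBytes.count]
            status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: parameterSets,
                parameterSetSizes: parameterSetSizes,
                nalUnitHeaderLength: 4,   // 4-byte AVCC length prefixes on the sample data
                formatDescriptionOut: &formatDescription)
        }
    }
    return status == noErr ? formatDescription : nil
}

Each access unit would then go into its own block buffer/sample buffer with this format description, with its Annex B start codes rewritten as 4-byte length prefixes; the display layer will also likely need real timing info or the kCMSampleAttachmentKey_DisplayImmediately attachment to show anything.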
Posted
by OVRIDOO.
Last updated
.
Post not yet marked as solved
1 Reply
265 Views
Does Video Toolbox's compression session yield data I can decompress on a different device that doesn't have Apple's decompression, i.e. so I can send the data over the network to devices that aren't necessarily Apple? Or is the format proprietary rather than just regular H.264 (for example)? If I can decompress without Video Toolbox, could you point me to some examples of how to do this using cross-platform APIs? Maybe FFmpeg has something?
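To illustrate (a sketch, not a drop-in solution): the compression session produces standard H.264, but in AVCC form, with length-prefixed NAL units and the SPS/PPS stored in the sample buffer's format description rather than inline. Non-Apple decoders such as FFmpeg's typically want Annex B, so one step is extracting the parameter sets and emitting them with start codes, as below.

import CoreMedia

// Pulls the SPS/PPS out of an encoded sample's format description and returns them
// as Annex B data (00 00 00 01 start codes) for a non-Apple decoder.
func annexBParameterSets(from formatDescription: CMFormatDescription) -> Data {
    var annexB = Data()
    let startCode: [UInt8] = [0x00, 0x00, 0x00, 0x01]
    var parameterSetCount = 0
    _ = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
        formatDescription, parameterSetIndex: 0, parameterSetPointerOut: nil,
        parameterSetSizeOut: nil, parameterSetCountOut: &parameterSetCount,
        nalUnitHeaderLengthOut: nil)
    for index in 0..<parameterSetCount {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        _ = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
            formatDescription, parameterSetIndex: index, parameterSetPointerOut: &pointer,
            parameterSetSizeOut: &size, parameterSetCountOut: nil,
            nalUnitHeaderLengthOut: nil)
        if let pointer = pointer {
            annexB.append(contentsOf: startCode)
            annexB.append(UnsafeBufferPointer(start: pointer, count: size))
        }
    }
    return annexB
}

The frame payloads themselves are ordinary H.264 NAL units with 4-byte length prefixes; swapping those prefixes for start codes gives a stream that FFmpeg's h264 decoder will generally accept.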
Posted Last updated
.
Post not yet marked as solved
2 Replies
393 Views
Hey Developers, I'm on the hunt for a new Apple laptop geared towards coding, and I'd love to tap into your collective wisdom. If you have recommendations or personal experiences with a specific model that excels in the coding realm, please share your insights. Looking for optimal performance and a seamless coding experience. Your input is gold – thanks a bunch!
Posted Last updated
.
Post not yet marked as solved
4 Replies
888 Views
I am trying to set up HLS with MV-HEVC. I have an MV-HEVC MP4, converted with AVAssetWriter, that plays as a "spatial video" in Photos in the simulator. I've used ffmpeg to fragment the video for HLS (sample m3u8 file below). The HLS stream of the MP4 plays on a VideoMaterial with an AVPlayer in the simulator, but it is hard to determine whether the streamed video is stereo. Is there any guidance on confirming that the streamed MP4 video is properly being read as stereo? Additionally, I see that REQ-VIDEO-LAYOUT is required for multivariant HLS. However, if there is ONLY stereo video in the playlist, is it needed? Are there any other configurations needed to make the device read it as stereo?

Sample m3u8 playlist:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:13
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:12.512500,
sample_video0.ts
#EXTINF:8.341667,
sample_video1.ts
#EXTINF:12.512500,
sample_video2.ts
#EXTINF:8.341667,
sample_video3.ts
#EXTINF:8.341667,
sample_video4.ts
#EXTINF:12.433222,
sample_video5.ts
#EXT-X-ENDLIST
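On the REQ-VIDEO-LAYOUT question, a hedged sketch of what a multivariant (master) playlist entry for a stereo-only stream might look like. The bandwidth, resolution, and child playlist name are placeholders, and my understanding (not verified here) is that REQ-VIDEO-LAYOUT="CH-STEREO" was introduced alongside HLS protocol version 12, so the version declaration matters as well.

#EXTM3U
#EXT-X-VERSION:12
#EXT-X-STREAM-INF:BANDWIDTH=20000000,RESOLUTION=1920x1080,FRAME-RATE=30.000,REQ-VIDEO-LAYOUT="CH-STEREO"
sample_video.m3u8

The media playlist shown above would then be referenced from this multivariant playlist rather than being served directly.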
Posted
by altonelli.
Last updated
.
Post not yet marked as solved
0 Replies
293 Views
After encoding with the VideoToolbox library, I send the stream over the network via NDI. On a Mac I can successfully receive the NDI source and see the picture, but on Windows only the NDI source name appears and no picture is displayed. I would like to know whether streams encoded with VideoToolbox simply cannot be decoded correctly on Windows, and how this problem can be solved.
Posted Last updated
.
Post not yet marked as solved
2 Replies
403 Views
On iOS 17, the encoder is set to an average bit rate of 5 Mbps.
First 25 minutes: the encoding bit rate is normal.
25-30 minutes: the encoding bit rate drops to 4 Mbps but can recover.
30-70 minutes: the encoding bit rate is normal.
After 70 minutes: the bit rate suddenly drops to 1 Mbps and cannot recover.
In the figure, the yellow line is the frame rate and the green line is the bit rate. The code is as follows:

- (void)_setBitrate:(NSUInteger)bitrate forSession:(VTCompressionSessionRef)session {
    NSParameterAssert(session && bitrate);
    OSStatus status = VTSessionSetProperty(session, kVTCompressionPropertyKey_AverageBitRate, (__bridge CFTypeRef)@(bitrate));
    if (status != noErr) NSLog(@"set AverageBitRate error");
    NSArray *limit = @[@(bitrate * 1.5/8), @(1)];
    status = VTSessionSetProperty(session, kVTCompressionPropertyKey_DataRateLimits, (__bridge CFArrayRef)limit);
    if (status != noErr) NSLog(@"set DataRateLimits error");
}

The problem only occurs on iOS 17. Does anyone know what the reason is?
Posted Last updated
.
Post not yet marked as solved
0 Replies
322 Views
We are currently working on a real-time, low-latency solution for video-conferencing scenarios and have encountered some issues with the current implementation of the encoder. We need a feature enhancement in the VideoToolbox encoder. In our use case, we need to control the encoding quality, which requires setting a maximum encoding QP. However, kVTCompressionPropertyKey_MaxAllowedFrameQP only takes effect in the kVTVideoEncoderSpecification_EnableLowLatencyRateControl mode. In this mode, when the maximum QP is limited and the bit rate is insufficient, the encoder drops frames. Our desired behavior is for the encoder not to actively drop frames when the maximum QP is limited; instead, when the bit rate is insufficient, the encoder should encode the frame at the maximum QP and allow the frame size to be larger. This would provide a more seamless experience for users in video-conferencing situations, where maintaining consistent video quality is crucial. It is worth noting that Android already implemented this capability in Android 12, which demonstrates the value and feasibility of the enhancement. We kindly request that you consider adding support for external control of frame dropping in the VideoToolbox encoder to accommodate our needs. This enhancement would greatly benefit our project and others that require real-time, low-latency video encoding solutions.
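For context, a minimal Swift sketch of the coupling described above as it stands today: the QP cap is only honored when the session is created with the low-latency rate-control encoder specification, and the dimension and numeric values below are placeholders.

import VideoToolbox

let lowLatencySpecification = [
    kVTVideoEncoderSpecification_EnableLowLatencyRateControl: true
] as CFDictionary

var session: VTCompressionSession?
let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1280, height: 720,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: lowLatencySpecification,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session)
if status == noErr, let session = session {
    // Caps per-frame QP; in low-latency mode the encoder may drop frames rather than
    // exceed this QP when the bit-rate budget is too small, which is exactly the
    // behavior the request above asks to make optional.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MaxAllowedFrameQP, value: NSNumber(value: 40))
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AverageBitRate, value: NSNumber(value: 1_000_000))
}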
Posted
by Runze.
Last updated
.
Post not yet marked as solved
0 Replies
400 Views
I wish to parse the bitstream of HEVC video with alpha (the format introduced at WWDC 2019: https://developer.apple.com/videos/play/wwdc2019/506). Taking the 'puppets_with_alpha_hevc.mov' file from 'Using HEVC Video with Alpha' as an example, I first extract the HEVC bitstream and then parse its fields. When I get to the VPS and reach the vps_extension, I find that the bitstream in 'puppets_with_alpha_hevc.mov' does not conform to the HEVC standard document, preventing further parsing. Besides the 'HEVC Video with Alpha Interoperability Profile.pdf', are there any more detailed documents describing the HEVC-with-alpha format? Also, has anyone been able to encode or decode HEVC-with-alpha videos on systems other than macOS?
Posted
by PaulDirac.
Last updated
.
Post not yet marked as solved
0 Replies
403 Views
I'm working on an MV-HEVC transcoder, based on the VTEncoderForTranscoding sample code. In Swift, the following code snippet generates a linker error on macOS 14.0 and 14.1.

let err = VTCompressionSessionEncodeMultiImageFrame(compressionSession,
                                                    taggedBuffers: taggedBuffers,
                                                    presentationTimeStamp: pts,
                                                    duration: .invalid,
                                                    frameProperties: nil,
                                                    infoFlagsOut: nil) { (status: OSStatus, infoFlags: VTEncodeInfoFlags, sbuf: CMSampleBuffer?) -> Void in
    outputHandler(status, infoFlags, sbuf, thisFrameNumber)
}

Error:

ld: Undefined symbols:
VideoToolbox.VTCompressionSessionEncodeMultiImageFrame(_: __C.VTCompressionSessionRef, taggedBuffers: [CoreMedia.CMTaggedBuffer], presentationTimeStamp: __C.CMTime, duration: __C.CMTime, frameProperties: __C.CFDictionaryRef?, infoFlagsOut: Swift.UnsafeMutablePointer<__C.VTEncodeInfoFlags>?, outputHandler: (Swift.Int32, __C.VTEncodeInfoFlags, __C.CMSampleBufferRef?) -> ()) -> Swift.Int32, referenced from:
(3) suspend resume partial function for VTEncoderForTranscoding_Swift.(compressFrames in _FE7277D5F28D8DABDFC10EA0164D825D)(from: VTEncoderForTranscoding_Swift.VideoSource, options: VTEncoderForTranscoding_Swift.Options, expectedFrameRate: Swift.Float, outputHandler: @Sendable (Swift.Int32, __C.VTEncodeInfoFlags, __C.CMSampleBufferRef?, Swift.Int) -> ()) async throws -> () in VTEncoderForTranscoding.o

Using VTCompressionSessionEncodeMultiImageFrameWithOutputHandler in Objective-C doesn't trigger a linker error. Does anybody know how to get it to work in Swift?
Posted
by map.
Last updated
.
Post not yet marked as solved
1 Reply
582 Views
With white balance locked and a custom exposure set, on a black background, when I introduce a new object into the view, both objects become brighter. How can I turn off this behavior or compensate for the change in a performant way? This is how I configure the session; note that I'm setting a video format that supports at least 180 fps, which is required for my needs.

private func configureSession() {
    self.sessionQueue.async { [self] in
        //MARK: Init session
        guard let session = try? validSession() else {
            fatalError("Session is unexpectedly nil")
        }
        session.beginConfiguration()
        guard let device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back) else {
            fatalError("Video Device is unexpectedly nil")
        }
        guard let videoDeviceInput: AVCaptureDeviceInput = try? AVCaptureDeviceInput(device: device) else {
            fatalError("videoDeviceInput is unexpectedly nil")
        }
        guard session.canAddInput(videoDeviceInput) else {
            fatalError("videoDeviceInput could not be added")
        }
        session.addInput(videoDeviceInput)
        self.videoDeviceInput = videoDeviceInput
        self.videoDevice = device

        //MARK: Connect session IO
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.setSampleBufferDelegate(self, queue: sampleBufferQueue)
        session.automaticallyConfiguresCaptureDeviceForWideColor = false
        guard session.canAddOutput(dataOutput) else {
            fatalError("Could not add video data output")
        }
        session.addOutput(dataOutput)
        dataOutput.alwaysDiscardsLateVideoFrames = true
        dataOutput.videoSettings = [
            String(kCVPixelBufferPixelFormatTypeKey): pixelFormat.rawValue
        ]
        if let captureConnection = dataOutput.connection(with: .video) {
            captureConnection.preferredVideoStabilizationMode = .off
            captureConnection.isEnabled = true
        } else {
            fatalError("No Capture Connection for the session")
        }

        //MARK: Configure AVCaptureDevice
        do {
            try device.lockForConfiguration()
        } catch {
            fatalError(error.localizedDescription)
        }
        if let format = format(fps: fps, minWidth: minWidth, format: pixelFormat) {
            // 180FPS, YUV layout
            device.activeFormat = format
            device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(fps))
            device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(fps))
        } else {
            fatalError("Compatible format not found")
        }
        device.activeColorSpace = .sRGB
        device.isGlobalToneMappingEnabled = false
        device.automaticallyAdjustsVideoHDREnabled = false
        device.automaticallyAdjustsFaceDrivenAutoExposureEnabled = false
        device.isFaceDrivenAutoExposureEnabled = false
        device.setFocusModeLocked(lensPosition: 0.4)
        device.isSubjectAreaChangeMonitoringEnabled = false
        device.exposureMode = AVCaptureDevice.ExposureMode.custom
        let exp = CMTime(value: Int64(40), timescale: 100_000)
        let isoValue = min(max(40, device.activeFormat.minISO), device.activeFormat.maxISO)
        device.setExposureModeCustom(duration: exp, iso: isoValue) { t in }
        device.setWhiteBalanceModeLocked(with: AVCaptureDevice.WhiteBalanceGains(redGain: 1.0, greenGain: 1.0, blueGain: 1.0)) { (timestamp: CMTime) -> Void in }
        device.unlockForConfiguration()
        session.commitConfiguration()
        onAVSessionReady()
    }
}

This post (https://stackoverflow.com/questions/34511431/ios-avfoundation-different-photo-brightness-with-the-same-manual-exposure-set) suggests that the effect can be mitigated by setting the camera exposure to .locked right after calling device.setExposureModeCustom(). This only works properly when used with the async API, and it still does not remove the effect.

Async approach:

private func onAVSessionReady() {
    guard let device = device() else {
        fatalError("Device is unexpectedly nil")
    }
    guard let sesh = try? validSession() else {
        fatalError("Device is unexpectedly nil")
    }
    MCamSession.shared.activeFormat = device.activeFormat
    MCamSession.shared.currentDevice = device
    self.observer = SPSDeviceKVO(device: device, session: sesh)
    self.start()
    Task {
        await lockCamera(device)
    }
}

private func lockCamera(_ device: AVCaptureDevice) async {
    do {
        try device.lockForConfiguration()
    } catch {
        fatalError(error.localizedDescription)
    }
    _ = await device.setFocusModeLocked(lensPosition: 0.4)
    let exp = CMTime(value: Int64(40), timescale: 100_000)
    let isoValue = min(max(40, device.activeFormat.minISO), device.activeFormat.maxISO)
    _ = await device.setExposureModeCustom(duration: exp, iso: isoValue)
    _ = await device.setWhiteBalanceModeLocked(with: AVCaptureDevice.WhiteBalanceGains(redGain: 1.0, greenGain: 1.0, blueGain: 1.0))
    device.exposureMode = AVCaptureDevice.ExposureMode.locked
    device.unlockForConfiguration()
}

private func configureSession() {
    // same session init as before
    ...
    onAVSessionReady()
}
Posted
by linkov.
Last updated
.
Post not yet marked as solved
3 Replies
2.8k Views
As some have clocked, kCMVideoCodecType_AV1 was added in a recent SDK release. Does anyone know if and when AV1 decode support, even if software-only, is going to be available on Apple platforms? At the moment, one must decode using dav1d (which is pretty performant, to be fair), but are we expecting at least software AV1 support on existing hardware any time soon, does anybody know?
Posted
by oviano.
Last updated
.
Post not yet marked as solved
1 Reply
451 Views
Can you provide any information on why we might be getting OSStatus error code 268435465 while using Apple's Video Toolbox framework, and how we can avoid it?
Posted
by csemre.
Last updated
.