VideoToolbox


Work directly with hardware-accelerated video encoding and decoding capabilities using VideoToolbox.

VideoToolbox Documentation

Posts under VideoToolbox tag

31 Posts
Post not yet marked as solved
3 Replies
2.9k Views
As some have clocked, this was added in a recent SDK release: kCMVideoCodecType_AV1. Does anyone know if and when AV1 decode support, even if software-only, is going to be available on Apple platforms? At the moment, one must decode using dav1d (which is pretty performant, to be fair), but are we expecting at least software AV1 support on existing hardware any time soon?
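In the meantime, availability can at least be probed at runtime. A minimal sketch, using only the documented VTIsHardwareDecodeSupported check:

import VideoToolbox

// Probe for hardware AV1 decode support on the current device.
let hardwareAV1 = VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1)
print("Hardware AV1 decode supported: \(hardwareAV1)")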
Posted by
Post not yet marked as solved
4 Replies
1.1k Views
When I do hardware VideoToolbox encoding, it always fails on some Mac machines but works on others, and I cannot find any difference between the machines. On a problem machine, H.265/H.264 hardware encoding with the ffmpeg executable fails as well, while H.265/H.264 software encoding has no issue. The error looks like this:

./ffmpeg -f rawvideo -s 352x288 -pix_fmt yuv420p -i akiyo_cif.yuv -c:v hevc_videotoolbox test.mp4
[rawvideo @ 0x7f80b9704780] Estimating duration from bitrate, this may be inaccurate
Input #0, rawvideo, from 'akiyo_cif.yuv':
  Duration: 00:00:12.00, start: 0.000000, bitrate: 30412 kb/s
  Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 352x288, 30412 kb/s, 25 tbr, 25 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> hevc (hevc_videotoolbox))
Press [q] to stop, [?] for help
[hevc_videotoolbox @ 0x7f80ba905640] Error encoding frame: -12905
[hevc_videotoolbox @ 0x7f80ba905640] popping: -542398533
[vost#0:0/hevc_videotoolbox @ 0x7f80ba9052c0] Error initializing output stream: Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!

I did more tests on the problem machine and checked the running processes: the first-launched VTEncoderXPCService process always has something wrong, while a later-launched VTEncoderXPCService works as expected. Hi Apple experts, do you know what the reason is? Thanks.
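For diagnosing machines like this, a small sketch that lists the encoders VideoToolbox has registered (via the documented VTCopyVideoEncoderList) can show whether the hardware HEVC encoder is present at all on the failing machine:

import VideoToolbox

// List every video encoder VideoToolbox knows about on this machine.
var encoderList: CFArray?
let status = VTCopyVideoEncoderList(nil, &encoderList)
if status == noErr, let encoders = encoderList as? [[String: Any]] {
    for encoder in encoders {
        let name = encoder[kVTVideoEncoderList_EncoderName as String] ?? "?"
        let id = encoder[kVTVideoEncoderList_EncoderID as String] ?? "?"
        print("\(id): \(name)")
    }
}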
Posted by
Post marked as solved
2 Replies
1k Views
Hello, I am attempting to simultaneously stream video to a remote client and run inference on a neural network (on the same video frame) locally. I have done this on other platforms, using GStreamer on Linux and libstreaming on Android for compression and packetization. I've attempted this now on iPhone, using ffmpeg to stream and a capture session to feed the neural network, but I run into the problem of multiple camera access. Most of the posts I see are concerned with receiving RTP streams in iOS, but I need to do the opposite. As I am new to iOS and Swift, I was hoping someone could provide a method for RTP packetization. Any library recommendations or example code for something similar? Best,
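A minimal sketch of single-NAL-unit RTP packetization per RFC 6184. The struct and its names are hypothetical, payload type 96 is an arbitrary dynamic choice, and FU-A fragmentation for NAL units larger than the MTU is omitted:

import Foundation

struct RTPPacketizer {
    private var sequenceNumber: UInt16 = 0
    private let ssrc: UInt32 = 0x1234ABCD  // arbitrary stream identifier

    // Wrap one NAL unit (without Annex B start code) in a minimal RTP packet.
    mutating func packetize(nalUnit: Data, timestamp: UInt32, lastOfFrame: Bool) -> Data {
        var packet = Data(capacity: 12 + nalUnit.count)
        packet.append(0x80)                               // V=2, P=0, X=0, CC=0
        packet.append((lastOfFrame ? 0x80 : 0x00) | 96)   // marker bit + dynamic payload type 96
        packet.append(contentsOf: withUnsafeBytes(of: sequenceNumber.bigEndian) { Array($0) })
        packet.append(contentsOf: withUnsafeBytes(of: timestamp.bigEndian) { Array($0) })
        packet.append(contentsOf: withUnsafeBytes(of: ssrc.bigEndian) { Array($0) })
        packet.append(nalUnit)
        sequenceNumber &+= 1
        return packet
    }
}

Each packet can then be sent over UDP, for example with the Network framework's NWConnection; the 90 kHz RTP timestamp clock and the SDP the receiver needs are also specified by RFC 6184.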
Posted by
Post not yet marked as solved
1 Reply
734 Views
Hi, I'm using VideoToolbox for H.264 or HEVC video encoding. Using a suitable algorithm, I change the video bitrate dynamically (by adjusting AverageBitRate and DataRateLimits). When running the application on iOS 16.4 and later, the bitrate is no longer respected by the video encoder: the obtained bitrate is generally 70% lower than the expected value. Is there a new setting to restore the previous video encoder behaviour? Best regards
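For reference, a minimal Swift sketch of the dynamic bitrate update described above, assuming an existing VTCompressionSession; the 1.5x per-second data-rate cap is an illustrative choice:

import VideoToolbox

func updateBitrate(_ session: VTCompressionSession, bitsPerSecond: Int) {
    // Average target, in bits per second.
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_AverageBitRate,
                         value: NSNumber(value: bitsPerSecond))
    // DataRateLimits pairs a byte count with a window in seconds;
    // this caps each 1-second window at 1.5x the average (illustrative).
    let limits = [NSNumber(value: bitsPerSecond * 3 / 16), NSNumber(value: 1)]
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_DataRateLimits,
                         value: limits as CFArray)
}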
Posted by
Post not yet marked as solved
2 Replies
854 Views
Hello, I'm trying to play an H.265 video stream containing no I-frames but using GDR (gradual decoder refresh). No video is shown using VTDecompressionSessionDecodeFrame; other H.265 videos work fine. In the device log I can see the following errors:

AppleAVD: VADecodeFrame(): isRandomAccessSkipPicture fail

followed by

An unknown error occurred (-17694)

Is it possible to play such a video stream? Do I need to configure the CMVideoFormatDescription in some way to enable it? Thanks for your help.
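For reference, a sketch of building the CMVideoFormatDescription from the stream's raw parameter sets, which is the usual prerequisite before decoding; whether this alone resolves GDR-only playback is unconfirmed. The vps/sps/pps byte arrays are assumed to come from your demuxer:

import CoreMedia

func makeHEVCFormatDescription(vps: [UInt8], sps: [UInt8], pps: [UInt8]) -> CMVideoFormatDescription? {
    var desc: CMVideoFormatDescription?
    vps.withUnsafeBufferPointer { vpsBuf in
        sps.withUnsafeBufferPointer { spsBuf in
            pps.withUnsafeBufferPointer { ppsBuf in
                let pointers = [vpsBuf.baseAddress!, spsBuf.baseAddress!, ppsBuf.baseAddress!]
                let sizes = [vps.count, sps.count, pps.count]
                // 4-byte NAL length headers, as in typical MP4/elementary-stream setups.
                let status = CMVideoFormatDescriptionCreateFromHEVCParameterSets(
                    allocator: kCFAllocatorDefault,
                    parameterSetCount: 3,
                    parameterSetPointers: pointers,
                    parameterSetSizes: sizes,
                    nalUnitHeaderLength: 4,
                    extensions: nil,
                    formatDescriptionOut: &desc)
                assert(status == noErr, "format description creation failed: \(status)")
            }
        }
    }
    return desc
}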
Posted by
Post not yet marked as solved
1 Reply
729 Views
To my surprise, there is no information about VTPixelRotationSession on the internet; I tried Baidu and Bing and got nothing. Here is my code:

var rotationSession: VTPixelRotationSession!

override func viewDidLoad() {
    let state = VTPixelRotationSessionCreate(kCFAllocatorDefault, &rotationSession)
    if state != 0 { return }
    VTSessionSetProperty(rotationSession, key: kVTPixelRotationPropertyKey_Rotation, value: kVTRotation_CW90)
}

func rotate(sourceBuffer: CVImageBuffer) {
    let pixelFormat = CVPixelBufferGetPixelFormatType(sourceBuffer)
    var rotatedImage: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, 640, 480, pixelFormat, nil, &rotatedImage)
    let state = VTPixelRotationSessionRotateImage(rotationSession, sourceBuffer, rotatedImage)
    if state == kCVReturnSuccess {
        ......
    }
}

The rotated buffer does not contain any pixels from the source buffer after calling VTPixelRotationSessionRotateImage. Can anyone explain this issue? Apple does not provide any examples for it.
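One possible explanation, as an assumption rather than a confirmed diagnosis: the destination buffer's dimensions must be swapped for a 90° rotation, and VideoToolbox sessions generally expect IOSurface-backed pixel buffers. A sketch of creating the destination accordingly:

import VideoToolbox

func makeRotationTarget(for source: CVImageBuffer) -> CVPixelBuffer? {
    let pixelFormat = CVPixelBufferGetPixelFormatType(source)
    let width = CVPixelBufferGetWidth(source)
    let height = CVPixelBufferGetHeight(source)
    // IOSurface backing is assumed to be required by the rotation session.
    let attrs = [kCVPixelBufferIOSurfacePropertiesKey as String: [:] as [String: Any]] as CFDictionary
    var rotated: CVPixelBuffer?
    // Width and height swap places for a CW90 rotation (e.g. 640x480 -> 480x640).
    CVPixelBufferCreate(kCFAllocatorDefault, height, width, pixelFormat, attrs, &rotated)
    return rotated
}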
Posted by
Post not yet marked as solved
0 Replies
510 Views
On iOS 17 beta 5, when using VTCompressionSessionEncodeFrame for video encoding: if the app switches back to the foreground from the background, a kVTInvalidSessionErr error can appear, and calling VTCompressionSessionInvalidate in response to the error gets stuck. Has anyone else encountered this situation, and how should it be solved?
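For reference, a hedged sketch of the usual recovery pattern for kVTInvalidSessionErr: drop the session and rebuild it rather than continuing to feed it. The class, queue, and factory method here are hypothetical; the dispatch onto a separate queue is an assumption intended to avoid invalidating from inside the encoder's own callback:

import Foundation
import VideoToolbox

final class EncoderController {
    private let encoderQueue = DispatchQueue(label: "encoder.queue")
    private var compressionSession: VTCompressionSession?

    // Hypothetical factory that recreates the session with your own settings.
    private func makeCompressionSession() -> VTCompressionSession? { nil }

    // Call with the OSStatus returned by VTCompressionSessionEncodeFrame.
    func handleEncodeStatus(_ status: OSStatus) {
        guard status == kVTInvalidSessionErr else { return }
        encoderQueue.async { [self] in
            // Invalidate outside the encoder's callback context, then rebuild.
            if let session = compressionSession {
                VTCompressionSessionInvalidate(session)
                compressionSession = nil
            }
            compressionSession = makeCompressionSession()
        }
    }
}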
Posted by
Post not yet marked as solved
1 Reply
623 Views
With locked white balance and custom exposure, on a black background, when I introduce a new object into the view, both objects become brighter. How can I turn this behaviour off, or compensate for the change in a performant way? This is how I configure the session; note that I'm setting a video format which supports at least 180 fps, which is required for my needs.

private func configureSession() {
    self.sessionQueue.async { [self] in
        //MARK: Init session
        guard let session = try? validSession() else { fatalError("Session is unexpectedly nil") }
        session.beginConfiguration()
        guard let device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back) else {
            fatalError("Video Device is unexpectedly nil")
        }
        guard let videoDeviceInput: AVCaptureDeviceInput = try? AVCaptureDeviceInput(device: device) else {
            fatalError("videoDeviceInput is unexpectedly nil")
        }
        guard session.canAddInput(videoDeviceInput) else { fatalError("videoDeviceInput could not be added") }
        session.addInput(videoDeviceInput)
        self.videoDeviceInput = videoDeviceInput
        self.videoDevice = device

        //MARK: Connect session IO
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.setSampleBufferDelegate(self, queue: sampleBufferQueue)
        session.automaticallyConfiguresCaptureDeviceForWideColor = false
        guard session.canAddOutput(dataOutput) else { fatalError("Could not add video data output") }
        session.addOutput(dataOutput)
        dataOutput.alwaysDiscardsLateVideoFrames = true
        dataOutput.videoSettings = [String(kCVPixelBufferPixelFormatTypeKey): pixelFormat.rawValue]
        if let captureConnection = dataOutput.connection(with: .video) {
            captureConnection.preferredVideoStabilizationMode = .off
            captureConnection.isEnabled = true
        } else {
            fatalError("No Capture Connection for the session")
        }

        //MARK: Configure AVCaptureDevice
        do {
            try device.lockForConfiguration()
        } catch {
            fatalError(error.localizedDescription)
        }
        if let format = format(fps: fps, minWidth: minWidth, format: pixelFormat) { // 180FPS, YUV layout
            device.activeFormat = format
            device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(fps))
            device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(fps))
        } else {
            fatalError("Compatible format not found")
        }
        device.activeColorSpace = .sRGB
        device.isGlobalToneMappingEnabled = false
        device.automaticallyAdjustsVideoHDREnabled = false
        device.automaticallyAdjustsFaceDrivenAutoExposureEnabled = false
        device.isFaceDrivenAutoExposureEnabled = false
        device.setFocusModeLocked(lensPosition: 0.4)
        device.isSubjectAreaChangeMonitoringEnabled = false
        device.exposureMode = AVCaptureDevice.ExposureMode.custom
        let exp = CMTime(value: Int64(40), timescale: 100_000)
        let isoValue = min(max(40, device.activeFormat.minISO), device.activeFormat.maxISO)
        device.setExposureModeCustom(duration: exp, iso: isoValue) { t in }
        device.setWhiteBalanceModeLocked(with: AVCaptureDevice.WhiteBalanceGains(redGain: 1.0, greenGain: 1.0, blueGain: 1.0)) { (timestamp: CMTime) -> Void in }
        device.unlockForConfiguration()
        session.commitConfiguration()
        onAVSessionReady()
    }
}

This post (https://stackoverflow.com/questions/34511431/ios-avfoundation-different-photo-brightness-with-the-same-manual-exposure-set) suggests that the effect can be mitigated by setting the camera exposure to .locked right after calling device.setExposureModeCustom(). That only works properly when used with the async API, and it still does not remove the effect.
Async approach:

private func onAVSessionReady() {
    guard let device = device() else { fatalError("Device is unexpectedly nil") }
    guard let sesh = try? validSession() else { fatalError("Device is unexpectedly nil") }
    MCamSession.shared.activeFormat = device.activeFormat
    MCamSession.shared.currentDevice = device
    self.observer = SPSDeviceKVO(device: device, session: sesh)
    self.start()
    Task {
        await lockCamera(device)
    }
}

private func lockCamera(_ device: AVCaptureDevice) async {
    do {
        try device.lockForConfiguration()
    } catch {
        fatalError(error.localizedDescription)
    }
    _ = await device.setFocusModeLocked(lensPosition: 0.4)
    let exp = CMTime(value: Int64(40), timescale: 100_000)
    let isoValue = min(max(40, device.activeFormat.minISO), device.activeFormat.maxISO)
    _ = await device.setExposureModeCustom(duration: exp, iso: isoValue)
    _ = await device.setWhiteBalanceModeLocked(with: AVCaptureDevice.WhiteBalanceGains(redGain: 1.0, greenGain: 1.0, blueGain: 1.0))
    device.exposureMode = AVCaptureDevice.ExposureMode.locked
    device.unlockForConfiguration()
}

private func configureSession() {
    // same session init as before
    ...
    onAVSessionReady()
}
Posted by
Post marked as solved
3 Replies
831 Views
I use VTCompressionSession and set the average bitrate via kVTCompressionPropertyKey_AverageBitRate and kVTCompressionPropertyKey_DataRateLimits. The code looks like this:

VTCompressionSessionRef vtSession = session;
if (vtSession == NULL) {
    vtSession = _encoderSession;
}
if (vtSession == nil) {
    return;
}
int tmp = bitrate;
int bytesTmp = tmp * 0.15;
int durationTmp = 1;
CFNumberRef bitrateRef = CFNumberCreate(NULL, kCFNumberSInt32Type, &tmp);
CFNumberRef bytes = CFNumberCreate(NULL, kCFNumberSInt32Type, &bytesTmp);
CFNumberRef duration = CFNumberCreate(NULL, kCFNumberSInt32Type, &durationTmp);
if ([self isSupportPropertyWithSession:vtSession key:kVTCompressionPropertyKey_AverageBitRate]) {
    [self setSessionPropertyWithSession:vtSession key:kVTCompressionPropertyKey_AverageBitRate value:bitrateRef];
} else {
    NSLog(@"Video Encoder: set average bitRate error");
}
NSLog(@"Video Encoder: set bitrate bytes = %d, _bitrate = %d", bytesTmp, bitrate);
CFMutableArrayRef limit = CFArrayCreateMutable(NULL, 2, &kCFTypeArrayCallBacks);
CFArrayAppendValue(limit, bytes);
CFArrayAppendValue(limit, duration);
if ([self isSupportPropertyWithSession:vtSession key:kVTCompressionPropertyKey_DataRateLimits]) {
    OSStatus ret = VTSessionSetProperty(vtSession, kVTCompressionPropertyKey_DataRateLimits, limit);
    if (ret != noErr) {
        NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:ret userInfo:nil];
        NSLog(@"Video Encoder: set DataRateLimits failed with %s", error.description.UTF8String);
    }
} else {
    NSLog(@"Video Encoder: set data rate limits error");
}
CFRelease(bitrateRef);
CFRelease(limit);
CFRelease(bytes);
CFRelease(duration);

This works fine on iOS 16 and below, but on iOS 17 the bitrate of the generated video file is much lower than the value I set. For example, I set a bitrate of 600k, but on iOS 17 the encoded video's bitrate is lower than 150k. What went wrong?
Posted by
Post not yet marked as solved
6 Replies
1.2k Views
0 VideoToolbox      ___vtDecompressionSessionRemote_DecodeFrameCommon_block_invoke_2()
1 libdispatch.dylib  __dispatch_client_callout()
2 libdispatch.dylib  __dispatch_client_callout()
3 libdispatch.dylib  __dispatch_lane_barrier_sync_invoke_and_complete()
4 VideoToolbox      _vtDecompressionSessionRemote_DecodeFrameCommon()

After the release of iOS 17 and iOS 16.7, thousands of crashes with this VideoToolbox signature have been added each day. Has anyone encountered similar problems?
Posted by
Post not yet marked as solved
5 Replies
2.6k Views
Recently I've been trying to play some AV1-encoded streams on my iPhone 15 Pro Max. First, I check for hardware support:

VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1); // YES

Then I need to create a CMFormatDescription in order to pass it into a VTDecompressionSession. I've tried the following:

{
    mediaType:'vide'
    mediaSubType:'av01'
    mediaSpecific: {
        codecType: 'av01'
        dimensions: 394 x 852
    }
    extensions: {{
        CVFieldCount = 1;
        CVImageBufferChromaLocationBottomField = Left;
        CVImageBufferChromaLocationTopField = Left;
        CVPixelAspectRatio = {
            HorizontalSpacing = 1;
            VerticalSpacing = 1;
        };
        FullRangeVideo = 0;
    }}
}

but VTDecompressionSessionCreate gives me error -8971 (codecExtensionNotFoundErr, I assume). So it has something to do with the extensions dictionary? I can't find anywhere which set of extensions is necessary for it to work 😿. VideoToolbox has convenient functions for creating descriptions of AVC and HEVC streams (CMVideoFormatDescriptionCreateFromH264ParameterSets and CMVideoFormatDescriptionCreateFromHEVCParameterSets), but not for AV1. As of today I am using Xcode 15.0 with the iOS 17.0.0 SDK.
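One hedged guess at the missing piece: AV1 decoders may require the AV1CodecConfigurationBox ('av1C') in the format description's sample-description extension atoms, analogous to avcC/hvcC for AVC/HEVC. A sketch under that assumption; av1cPayload (the raw av1C box contents from your container or sequence header) is assumed to be available:

import CoreMedia

func makeAV1FormatDescription(av1cPayload: Data, width: Int32, height: Int32) -> CMVideoFormatDescription? {
    // 'av1C' carried in SampleDescriptionExtensionAtoms, analogous to avcC/hvcC.
    let atoms: [String: Any] = ["av1C": av1cPayload]
    let extensions: [String: Any] = [
        kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms as String: atoms
    ]
    var desc: CMVideoFormatDescription?
    let status = CMVideoFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                codecType: kCMVideoCodecType_AV1,
                                                width: width,
                                                height: height,
                                                extensions: extensions as CFDictionary,
                                                formatDescriptionOut: &desc)
    return status == noErr ? desc : nil
}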
Posted by
Post not yet marked as solved
1 Reply
494 Views
Can you provide any information on why we might be getting OSStatus error code 268435465 while using Apple's Video Toolbox framework, and how we can avoid it?
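268435465 is easier to look up in the VideoToolbox/CoreMedia headers as hex (0x10000009). A small helper sketch for rendering any OSStatus as hex and, where applicable, a four-character code:

import Foundation

// Render an OSStatus as decimal, hex, and (when printable) a four-character code.
func describe(_ status: OSStatus) -> String {
    let value = UInt32(bitPattern: status)
    let hex = String(format: "0x%08X", value)
    let bytes = withUnsafeBytes(of: value.bigEndian) { Array($0) }
    if bytes.allSatisfy({ (32...126).contains($0) }),
       let fourCC = String(bytes: bytes, encoding: .ascii) {
        return "\(status) (\(hex), '\(fourCC)')"
    }
    return "\(status) (\(hex))"
}

print(describe(268435465))  // 268435465 (0x10000009)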
Posted by
Post not yet marked as solved
0 Replies
436 Views
I'm working on an MV-HEVC transcoder based on the VTEncoderForTranscoding sample code. In Swift, the following code snippet generates a linker error on macOS 14.0 and 14.1.

let err = VTCompressionSessionEncodeMultiImageFrame(compressionSession,
                                                    taggedBuffers: taggedBuffers,
                                                    presentationTimeStamp: pts,
                                                    duration: .invalid,
                                                    frameProperties: nil,
                                                    infoFlagsOut: nil) { (status: OSStatus, infoFlags: VTEncodeInfoFlags, sbuf: CMSampleBuffer?) -> Void in
    outputHandler(status, infoFlags, sbuf, thisFrameNumber)
}

Error:

ld: Undefined symbols:
VideoToolbox.VTCompressionSessionEncodeMultiImageFrame(_: __C.VTCompressionSessionRef, taggedBuffers: [CoreMedia.CMTaggedBuffer], presentationTimeStamp: __C.CMTime, duration: __C.CMTime, frameProperties: __C.CFDictionaryRef?, infoFlagsOut: Swift.UnsafeMutablePointer<__C.VTEncodeInfoFlags>?, outputHandler: (Swift.Int32, __C.VTEncodeInfoFlags, __C.CMSampleBufferRef?) -> ()) -> Swift.Int32, referenced from:
(3) suspend resume partial function for VTEncoderForTranscoding_Swift.(compressFrames in _FE7277D5F28D8DABDFC10EA0164D825D)(from: VTEncoderForTranscoding_Swift.VideoSource, options: VTEncoderForTranscoding_Swift.Options, expectedFrameRate: Swift.Float, outputHandler: @Sendable (Swift.Int32, __C.VTEncodeInfoFlags, __C.CMSampleBufferRef?, Swift.Int) -> ()) async throws -> () in VTEncoderForTranscoding.o

Using VTCompressionSessionEncodeMultiImageFrameWithOutputHandler in ObjC doesn't trigger a linker error. Does anybody know how to get it to work in Swift?
Posted by map
Post not yet marked as solved
0 Replies
429 Views
I wish to parse the bitstream of HEVC video with alpha (for the specific video format, see WWDC 2019: https://developer.apple.com/videos/play/wwdc2019/506). Taking the 'puppets_with_alpha_hevc.mov' file from 'Using HEVC Video with Alpha' as an example, I first extract the HEVC bitstream and then parse its fields. When I reach the vps_extension in the VPS field, I find that the bitstream in 'puppets_with_alpha_hevc.mov' does not conform to the HEVC standard document, preventing further parsing. Besides 'HEVC Video with Alpha Interoperability Profile.pdf', are there any more detailed documents describing the HEVC-with-alpha format? Also, has anyone managed to encode or decode HEVC-with-alpha videos on systems other than macOS?
Posted by
Post not yet marked as solved
0 Replies
356 Views
We are currently working on a real-time, low-latency solution for video-conferencing scenarios and have encountered some issues with the current implementation of the encoder, so we need a feature enhancement in the VideoToolbox encoder. In our use case, we need to control the encoding quality, which requires setting the maximum encoding QP. However, kVTCompressionPropertyKey_MaxAllowedFrameQP only takes effect in the kVTVideoEncoderSpecification_EnableLowLatencyRateControl mode, and in this mode, when the maximum QP is limited and the bitrate is insufficient, the encoder will drop frames. Our desired behaviour is for the encoder not to actively drop frames when the maximum QP is limited: instead, when the bitrate is insufficient, the encoder should encode the frame at the maximum QP, allowing the frame size to be larger. This would provide a more seamless experience for users in video-conferencing situations, where maintaining consistent video quality is crucial. It is worth noting that Android has already implemented this feature in Android 12, which demonstrates the value and feasibility of this enhancement. We kindly request that you consider adding support for external control of frame dropping in the VideoToolbox encoder to accommodate our needs. This enhancement would greatly benefit our project and others that require real-time, low-latency video encoding solutions.
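For context, a minimal sketch of the current behaviour being described: the QP cap only takes effect when the session is created with the low-latency encoder specification. The dimensions and the QP value of 38 are illustrative choices:

import VideoToolbox

var session: VTCompressionSession?
let encoderSpec = [
    kVTVideoEncoderSpecification_EnableLowLatencyRateControl as String: true
] as CFDictionary
VTCompressionSessionCreate(allocator: kCFAllocatorDefault,
                           width: 1280, height: 720,   // illustrative dimensions
                           codecType: kCMVideoCodecType_H264,
                           encoderSpecification: encoderSpec,
                           imageBufferAttributes: nil,
                           compressedDataAllocator: nil,
                           outputCallback: nil,        // using the output-handler encode API
                           refcon: nil,
                           compressionSessionOut: &session)
if let session {
    // Only honoured in low-latency mode today, which is the limitation above.
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_MaxAllowedFrameQP,
                         value: NSNumber(value: 38))
}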
Posted by
Post not yet marked as solved
2 Replies
449 Views
On iOS 17, the encoder is set to an average bitrate of 5 Mbps. In the first 25 minutes, the encoding rate is normal. From 25 to 30 minutes, the encoding bitrate drops to 4 Mbps but can recover. From 30 to 70 minutes, the encoding rate is normal. From 70 minutes onward, the bitrate suddenly drops to 1 Mbps and cannot recover. In the referenced chart, the yellow line is the frame rate and the green line is the bitrate. The code is as follows:

- (void)_setBitrate:(NSUInteger)bitrate forSession:(VTCompressionSessionRef)session {
    NSParameterAssert(session && bitrate);
    OSStatus status = VTSessionSetProperty(session, kVTCompressionPropertyKey_AverageBitRate, (__bridge CFTypeRef)@(bitrate));
    if (status != noErr) NSLog(@"set AverageBitRate error");
    NSArray *limit = @[@(bitrate * 1.5 / 8), @(1)];
    status = VTSessionSetProperty(session, kVTCompressionPropertyKey_DataRateLimits, (__bridge CFArrayRef)limit);
    if (status != noErr) NSLog(@"set DataRateLimits error");
}

The problem only occurs on iOS 17. Does anyone know what the reason is?
Posted by
Post not yet marked as solved
0 Replies
329 Views
After encoding with the VideoToolbox library, I send the stream over the network via NDI. A Mac receiving the NDI source can display the picture correctly, but on Windows only the NDI source name appears, with no picture. I would like to know whether output encoded with VideoToolbox cannot be decoded correctly on Windows, and how this problem should be solved.
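One common interop pitfall worth ruling out, as an assumption about the Windows-side failure rather than a confirmed diagnosis: VideoToolbox emits length-prefixed (AVCC) NAL units, while many non-Apple consumers expect Annex B start codes. A conversion sketch for one encoded sample:

import Foundation
import CoreMedia

// Convert one VideoToolbox output sample from AVCC (4-byte length prefixes)
// to Annex B (0x00000001 start codes).
func annexBData(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }
    var length = 0
    var dataPointer: UnsafeMutablePointer<CChar>?
    guard CMBlockBufferGetDataPointer(blockBuffer, atOffset: 0, lengthAtOffsetOut: nil,
                                      totalLengthOut: &length, dataPointerOut: &dataPointer) == noErr,
          let base = dataPointer else { return nil }

    var output = Data()
    var offset = 0
    let startCode: [UInt8] = [0, 0, 0, 1]
    while offset + 4 <= length {
        // Each NAL unit is prefixed with a 4-byte big-endian length.
        var nalLength: UInt32 = 0
        memcpy(&nalLength, base + offset, 4)
        nalLength = UInt32(bigEndian: nalLength)
        offset += 4
        guard offset + Int(nalLength) <= length else { return nil }
        output.append(contentsOf: startCode)
        output.append(Data(bytes: base + offset, count: Int(nalLength)))
        offset += Int(nalLength)
    }
    return output
}

If this is the cause, the parameter sets from the CMFormatDescription also need to be sent with start-code framing, e.g. retrieved via CMVideoFormatDescriptionGetH264ParameterSetAtIndex (or the HEVC equivalent).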
Posted by