VideoToolbox


Work directly with hardware-accelerated video encoding and decoding capabilities using VideoToolbox.

VideoToolbox Documentation

Posts under VideoToolbox tag

31 Posts
Post not yet marked as solved
2 Replies
577 Views
Hey Developers, I'm on the hunt for a new Apple laptop geared towards coding, and I'd love to tap into your collective wisdom. If you have recommendations or personal experiences with a specific model that excels in the coding realm, please share your insights. Looking for optimal performance and a seamless coding experience. Your input is gold – thanks a bunch!
Post not yet marked as solved
1 Reply
573 Views
First of all, I tried MobileVLCKit, but there is too much delay. Then I wrote a UDPManager class; my code is below. I would be very happy if anyone has information and wants to point me in the right direction.

Broadcast command:

ffmpeg -f avfoundation -video_size 1280x720 -framerate 30 -i "0" -c:v libx264 -preset medium -tune zerolatency -f mpegts "udp://127.0.0.1:6000?pkt_size=1316"

Live view command (almost zero delay):

ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 1 -strict experimental -framedrop -f mpegts -vf setpts=0 udp://127.0.0.1:6000

or

mpv udp://127.0.0.1:6000 --no-cache --untimed --no-demuxer-thread --vd-lavc-threads=1

UDPManager:

import Foundation
import AVFoundation
import CoreMedia
import VideoDecoder
import SwiftUI
import Network
import Combine
import CocoaAsyncSocket
import VideoToolbox

class UDPManager: NSObject, ObservableObject, GCDAsyncUdpSocketDelegate {
    private let host: String
    private let port: UInt16
    private var socket: GCDAsyncUdpSocket?
    @Published var videoOutput: CMSampleBuffer?

    init(host: String, port: UInt16) {
        self.host = host
        self.port = port
    }

    func connectUDP() {
        do {
            socket = GCDAsyncUdpSocket(delegate: self, delegateQueue: .global())
            //try socket?.connect(toHost: host, onPort: port)
            try socket?.bind(toPort: port)
            try socket?.enableBroadcast(true)
            try socket?.enableReusePort(true)
            try socket?.beginReceiving()
        } catch {
            print("UDP socket creation error: \(error)")
        }
    }

    func closeUDP() {
        socket?.close()
    }

    func udpSocket(_ sock: GCDAsyncUdpSocket, didConnectToAddress address: Data) {
        print("UDP connected.")
    }

    func udpSocket(_ sock: GCDAsyncUdpSocket, didNotConnect error: Error?) {
        print("UDP socket connection error: \(error?.localizedDescription ?? "Unknown error")")
    }

    func udpSocket(_ sock: GCDAsyncUdpSocket, didReceive data: Data, fromAddress address: Data, withFilterContext filterContext: Any?) {
        if !data.isEmpty {
            DispatchQueue.main.async {
                self.videoOutput = self.createSampleBuffer(from: data)
            }
        }
    }

    func createSampleBuffer(from data: Data) -> CMSampleBuffer? {
        var blockBuffer: CMBlockBuffer?
        var status = CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault,
            memoryBlock: UnsafeMutableRawPointer(mutating: (data as NSData).bytes),
            blockLength: data.count,
            blockAllocator: kCFAllocatorNull,
            customBlockSource: nil,
            offsetToData: 0,
            dataLength: data.count,
            flags: 0,
            blockBufferOut: &blockBuffer)
        if status != noErr {
            return nil
        }

        var sampleBuffer: CMSampleBuffer?
        let sampleSizeArray = [data.count]
        status = CMSampleBufferCreateReady(
            allocator: kCFAllocatorDefault,
            dataBuffer: blockBuffer,
            formatDescription: nil,
            sampleCount: 1,
            sampleTimingEntryCount: 0,
            sampleTimingArray: nil,
            sampleSizeEntryCount: 1,
            sampleSizeArray: sampleSizeArray,
            sampleBufferOut: &sampleBuffer)
        if status != noErr {
            return nil
        }
        return sampleBuffer
    }
}

I didn't know how to convert the Data object to video, so I searched, found the createSampleBuffer code shown above, and wanted to try it.

I then tried to render the CMSampleBuffer in a player, but it just shows a white screen and doesn't work:

struct SampleBufferPlayerView: UIViewRepresentable {
    typealias UIViewType = UIView
    var sampleBuffer: CMSampleBuffer

    func makeUIView(context: Context) -> UIView {
        let view = UIView(frame: .zero)
        let displayLayer = AVSampleBufferDisplayLayer()
        displayLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(displayLayer)
        context.coordinator.displayLayer = displayLayer
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        context.coordinator.sampleBuffer = sampleBuffer
        context.coordinator.updateSampleBuffer()
    }

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    class Coordinator {
        var displayLayer: AVSampleBufferDisplayLayer?
        var sampleBuffer: CMSampleBuffer?

        func updateSampleBuffer() {
            guard let displayLayer = displayLayer, let sampleBuffer = sampleBuffer else { return }
            if displayLayer.isReadyForMoreMediaData {
                displayLayer.enqueue(sampleBuffer)
            } else {
                displayLayer.requestMediaDataWhenReady(on: .main) {
                    if displayLayer.isReadyForMoreMediaData {
                        displayLayer.enqueue(sampleBuffer)
                        print("isReadyForMoreMediaData")
                    }
                }
            }
        }
    }
}

And this is how I tried to use it, but I couldn't figure it out. Can anyone help me?

struct ContentView: View {
    // udp://@127.0.0.1:6000
    @ObservedObject var udpManager = UDPManager(host: "127.0.0.1", port: 6000)

    var body: some View {
        VStack {
            if let buffer = udpManager.videoOutput {
                SampleBufferDisplayLayerView(sampleBuffer: buffer)
                    .frame(width: 300, height: 200)
            }
        }
        .onAppear(perform: {
            udpManager.connectUDP()
        })
    }
}
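A likely reason for the white screen: the sample buffers handed to AVSampleBufferDisplayLayer carry no CMVideoFormatDescription and no timing, and the raw UDP payloads are MPEG-TS packets rather than decodable H.264 access units. Below is a minimal sketch, not code from this thread, assuming the TS stream has already been demuxed into SPS/PPS plus AVCC-framed access units (that demuxing step is not shown); the 30 fps duration is a placeholder.

import CoreMedia

// 1) Build a CMVideoFormatDescription once from the stream's SPS and PPS.
func makeFormatDescription(sps: [UInt8], pps: [UInt8]) -> CMVideoFormatDescription? {
    var formatDescription: CMVideoFormatDescription?
    sps.withUnsafeBufferPointer { spsBuffer in
        pps.withUnsafeBufferPointer { ppsBuffer in
            let pointers = [spsBuffer.baseAddress!, ppsBuffer.baseAddress!]
            let sizes = [sps.count, pps.count]
            _ = CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: pointers,
                parameterSetSizes: sizes,
                nalUnitHeaderLength: 4,      // AVCC framing: 4-byte NAL length prefixes
                formatDescriptionOut: &formatDescription)
        }
    }
    return formatDescription
}

// 2) Wrap one AVCC-framed access unit in a sample buffer that carries the
//    format description and timing, so the display layer can decode and schedule it.
func makeVideoSampleBuffer(accessUnit: Data,
                           formatDescription: CMVideoFormatDescription,
                           presentationTime: CMTime) -> CMSampleBuffer? {
    var blockBuffer: CMBlockBuffer?
    guard CMBlockBufferCreateWithMemoryBlock(
        allocator: kCFAllocatorDefault,
        memoryBlock: nil,                               // let Core Media own the storage
        blockLength: accessUnit.count,
        blockAllocator: kCFAllocatorDefault,
        customBlockSource: nil,
        offsetToData: 0,
        dataLength: accessUnit.count,
        flags: kCMBlockBufferAssureMemoryNowFlag,
        blockBufferOut: &blockBuffer) == noErr, let block = blockBuffer else { return nil }

    // Copy the bytes instead of aliasing Data's internal storage, as the original code does.
    accessUnit.withUnsafeBytes { raw in
        _ = CMBlockBufferReplaceDataBytes(with: raw.baseAddress!, blockBuffer: block,
                                          offsetIntoDestination: 0, dataLength: accessUnit.count)
    }

    var timing = CMSampleTimingInfo(duration: CMTime(value: 1, timescale: 30),
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: .invalid)
    var sampleSize = accessUnit.count
    var sampleBuffer: CMSampleBuffer?
    guard CMSampleBufferCreateReady(
        allocator: kCFAllocatorDefault,
        dataBuffer: block,
        formatDescription: formatDescription,
        sampleCount: 1,
        sampleTimingEntryCount: 1,
        sampleTimingArray: &timing,
        sampleSizeEntryCount: 1,
        sampleSizeArray: &sampleSize,
        sampleBufferOut: &sampleBuffer) == noErr else { return nil }
    return sampleBuffer
}

For a live stream without meaningful timestamps, each sample can alternatively be marked with the kCMSampleAttachmentKey_DisplayImmediately attachment so the layer shows frames as they arrive. Note also that the ContentView above references SampleBufferDisplayLayerView while the struct is declared as SampleBufferPlayerView.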
Post not yet marked as solved
1 Reply
334 Views
Does Video Toolbox's compression session yield data I can decompress on a different device that doesn't have Apple's decompression, i.e. so I can send the data over the network to devices that aren't necessarily Apple? Or is the format proprietary rather than just regular H.264 (for example)? If I can decompress without Video Toolbox, could I get a reference to some examples of how to do this using cross-platform APIs? Maybe FFmpeg has something?
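The bitstream a VTCompressionSession produces is standard H.264 (or HEVC); it is simply packaged in AVCC form, with 4-byte length prefixes on each NAL unit and the SPS/PPS stored in the sample's CMFormatDescription rather than in-band. A hedged sketch, not tied to this thread, of pulling the parameter sets out of an encoded sample so they can be written ahead of the frames in an Annex B stream (converting the frame payload itself then only requires swapping each length prefix for a 0x00000001 start code):

import CoreMedia

// Extract SPS and PPS from a VideoToolbox-encoded H.264 sample buffer so a
// non-Apple decoder (FFmpeg, MediaCodec, ...) can be fed a self-describing stream.
func h264ParameterSets(from sampleBuffer: CMSampleBuffer) -> [Data] {
    guard let format = CMSampleBufferGetFormatDescription(sampleBuffer) else { return [] }
    var count = 0
    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
        format, parameterSetIndex: 0, parameterSetPointerOut: nil,
        parameterSetSizeOut: nil, parameterSetCountOut: &count, nalUnitHeaderLengthOut: nil)
    var sets: [Data] = []
    for index in 0..<count {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        if CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
            format, parameterSetIndex: index, parameterSetPointerOut: &pointer,
            parameterSetSizeOut: &size, parameterSetCountOut: nil,
            nalUnitHeaderLengthOut: nil) == noErr, let pointer {
            sets.append(Data(bytes: pointer, count: size))
        }
    }
    return sets
}

A raw Annex B dump produced this way should be playable with, for example, ffplay -f h264 dump.h264, which is a quick way to confirm the data is ordinary H.264.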
Post not yet marked as solved
0 Replies
298 Views
Is AVQT capable of measuring the encoding quality of PQ- or HLG-based content beyond SDR? If so, how can I leverage it? If not, is there a roadmap or timeline for enabling this type of measurement?
Post not yet marked as solved
2 Replies
377 Views
Hi everyone, I need to add a spatial video maker to my app, which was written in Objective-C. I found some reference code in Swift; can you help me convert it to Objective-C?

let left = CMTaggedBuffer(
    tags: [.stereoView(.leftEye), .videoLayerID(leftEyeLayerIndex)],
    pixelBuffer: leftEyeBuffer)
let right = CMTaggedBuffer(
    tags: [.stereoView(.rightEye), .videoLayerID(rightEyeLayerIndex)],
    pixelBuffer: rightEyeBuffer)
let result = adaptor.appendTaggedBuffers(
    [left, right],
    withPresentationTime: leftPresentationTs)
Post not yet marked as solved
9 Replies
618 Views
So I've been trying for weeks now to implement a compression mechanism into my app project that compresses MV-HEVC video files in-app without stripping videos of their 3D properties, but every single implementation I have tried has either stripped the encoded MV-HEVC video file of its 3D properties (making the video monoscopic), or has crashed with a fatal error. I've read the Reading multiview 3D video files and Converting side-by-side 3D video to multiview HEVC documentation, but wasn't able to come up with anything useful myself. My question therefore is: how do you go about compressing/encoding an MV-HEVC video file in-app while preserving the stereoscopic/3D properties of that file? Below is the best implementation I was able to come up with (which simply compresses uploaded MV-HEVC videos with an arbitrary bit rate). With this implementation (my compressVideo function), the MV-HEVC files that go through it are compressed fine, but the final result is the loss of the file's stereoscopic/3D properties. If anyone could point me in the right direction it would be greatly, greatly appreciated.

My current implementation (which strips MV-HEVC videos of their stereoscopic/3D properties):

static func compressVideo(sourceUrl: URL, bitrate: Int, completion: @escaping (Result<URL, Error>) -> Void) {
    let asset = AVAsset(url: sourceUrl)
    asset.loadTracks(withMediaType: .video) { videoTracks, videoError in
        guard let videoTrack = videoTracks?.first, videoError == nil else {
            completion(.failure(videoError ?? NSError(domain: "VideoUploader", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to load video track"])))
            return
        }
        asset.loadTracks(withMediaType: .audio) { audioTracks, audioError in
            guard let audioTrack = audioTracks?.first, audioError == nil else {
                completion(.failure(audioError ?? NSError(domain: "VideoUploader", code: -2, userInfo: [NSLocalizedDescriptionKey: "Failed to load audio track"])))
                return
            }
            let outputUrl = sourceUrl.deletingLastPathComponent().appendingPathComponent(UUID().uuidString).appendingPathExtension("mov")
            guard let assetReader = try? AVAssetReader(asset: asset),
                  let assetWriter = try? AVAssetWriter(outputURL: outputUrl, fileType: .mov) else {
                completion(.failure(NSError(domain: "VideoUploader", code: -3, userInfo: [NSLocalizedDescriptionKey: "AssetReader/Writer initialization failed"])))
                return
            }
            let videoReaderSettings: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB]
            let videoSettings: [String: Any] = [
                AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: bitrate],
                AVVideoCodecKey: AVVideoCodecType.hevc,
                AVVideoHeightKey: videoTrack.naturalSize.height,
                AVVideoWidthKey: videoTrack.naturalSize.width
            ]
            let assetReaderVideoOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoReaderSettings)
            let assetReaderAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)
            if assetReader.canAdd(assetReaderVideoOutput) {
                assetReader.add(assetReaderVideoOutput)
            } else {
                completion(.failure(NSError(domain: "VideoUploader", code: -4, userInfo: [NSLocalizedDescriptionKey: "Couldn't add video output reader"])))
                return
            }
            if assetReader.canAdd(assetReaderAudioOutput) {
                assetReader.add(assetReaderAudioOutput)
            } else {
                completion(.failure(NSError(domain: "VideoUploader", code: -5, userInfo: [NSLocalizedDescriptionKey: "Couldn't add audio output reader"])))
                return
            }
            let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
            let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
            videoInput.transform = videoTrack.preferredTransform
            assetWriter.shouldOptimizeForNetworkUse = true
            assetWriter.add(videoInput)
            assetWriter.add(audioInput)
            assetReader.startReading()
            assetWriter.startWriting()
            assetWriter.startSession(atSourceTime: CMTime.zero)
            let videoInputQueue = DispatchQueue(label: "videoQueue")
            let audioInputQueue = DispatchQueue(label: "audioQueue")
            videoInput.requestMediaDataWhenReady(on: videoInputQueue) {
                while videoInput.isReadyForMoreMediaData {
                    if let sample = assetReaderVideoOutput.copyNextSampleBuffer() {
                        videoInput.append(sample)
                    } else {
                        videoInput.markAsFinished()
                        if assetReader.status == .completed {
                            assetWriter.finishWriting {
                                completion(.success(outputUrl))
                            }
                        }
                        break
                    }
                }
            }
            audioInput.requestMediaDataWhenReady(on: audioInputQueue) {
                while audioInput.isReadyForMoreMediaData {
                    if let sample = assetReaderAudioOutput.copyNextSampleBuffer() {
                        audioInput.append(sample)
                    } else {
                        audioInput.markAsFinished()
                        break
                    }
                }
            }
        }
    }
}
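One reason the stereo information disappears with this approach: the reader decodes to ordinary single-view pixel buffers and the writer re-encodes plain HEVC, so the second layer never reaches the encoder. Below is a minimal sketch, assuming the iOS 17 / visionOS MV-HEVC APIs, of writer settings that request two video layers. The layer/view IDs and dimensions are placeholders (they should come from the source asset), and frames then have to be appended as tagged buffer groups rather than plain decoded sample buffers; this is not a drop-in replacement for the function above.

import AVFoundation
import VideoToolbox

// Writer output settings that ask the HEVC encoder to produce two MV-HEVC layers.
let multiviewCompressionProperties: [String: Any] = [
    AVVideoAverageBitRateKey: 10_000_000,                                 // placeholder bitrate
    kVTCompressionPropertyKey_MVHEVCVideoLayerIDs as String: [0, 1],      // placeholder IDs
    kVTCompressionPropertyKey_MVHEVCViewIDs as String: [0, 1],
    kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs as String: [0, 1]
]

let multiviewVideoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
    AVVideoCompressionPropertiesKey: multiviewCompressionProperties
]

let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: multiviewVideoSettings)

// Frames must then be appended as tagged buffer groups (left + right eye),
// not as ordinary pixel buffers or passthrough sample buffers.
let adaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(assetWriterInput: videoInput,
                                                              sourcePixelBufferAttributes: nil)

On the reading side, if I recall the multiview reading documentation correctly, kVTDecompressionPropertyKey_RequestedMVHEVCVideoLayerIDs (passed via the reader output's video decompression properties) is what asks the decoder to deliver both layers; without it only the base layer comes back, which matches the monoscopic result described above.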
Post not yet marked as solved
0 Replies
234 Views
I am using a VideoToolbox VTCompressionSession to encode frames in H.264 format, which I send through a WebSocket to a browser. The received frames are decoded and the output is rendered on the website. With some encoders, the video is always rendered with a four-frame latency.

How frames are sent to the server:

start> ------------ f1 ------------ f2 ------------ f3 ------------ f4 ------------ f5 ...

How rendering happens:

start> -------------------------------------------------------------------------- f1 ------------ f2 ------------ f3 ------------ f4 ----------- ...

Sometimes this becomes a two-frame latency and sometimes a sixteen-frame latency, so usability is affected. I am using this configuration in VideoToolbox's VTCompressionSession:

kVTCompressionPropertyKey_AverageBitRate = 3 MB
kVTCompressionPropertyKey_ExpectedFrameRate = 24
kVTCompressionPropertyKey_RealTime = true
kVTCompressionPropertyKey_ProfileLevel = kVTProfileLevel_H264_High_AutoLevel
kVTCompressionPropertyKey_AllowFrameReordering = false
kVTCompressionPropertyKey_MaxKeyFrameInterval = 1000

With the same configuration I am able to achieve 1 in / 1 out with com.apple.videotoolbox.videoencoder.h264.gva. The issue reproduces with the encoder com.apple.videotoolbox.videoencoder.ave.avc, so I am not sure whether it is encoder specific. I have also seen differences in the VUI parameters between the encoded output of the two encoders. I want to know if there is something I can do in the encoder configuration, or another API provided by VideoToolbox, to ensure that frames are decoded and rendered one-for-one by the decoder. Thanks in advance.
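For reference, a minimal sketch of two additional knobs VideoToolbox exposes for this kind of problem: requesting the low-latency rate-control encode path and capping how many frames the encoder may hold internally. The keys are real, but whether they remove the four-frame delay seen with a particular encoder is an assumption to verify; the dimensions here are placeholders.

import VideoToolbox

var session: VTCompressionSession?

// Request the low-latency H.264 encode path (available from iOS 14.5 / macOS 11.3).
let encoderSpecification = [
    kVTVideoEncoderSpecification_EnableLowLatencyRateControl as String: true
] as CFDictionary

let createStatus = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1920,
    height: 1080,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: encoderSpecification,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,          // output delivered via the per-frame output handler instead
    refcon: nil,
    compressionSessionOut: &session)

if createStatus == noErr, let session {
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AllowFrameReordering, value: kCFBooleanFalse)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ExpectedFrameRate, value: NSNumber(value: 24))
    // Ask the encoder not to hold frames internally before emitting output.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MaxFrameDelayCount, value: NSNumber(value: 0))
}

Note that any remaining delay can also come from the decode side: the stream's SPS/VUI signaling (for example the advertised number of reorder frames) influences how many frames a conforming decoder buffers before display, which may explain the VUI differences observed between the two encoders.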
Post not yet marked as solved
0 Replies
239 Views
I have an image viewing app with support for AVIF (and AVIS) images. I'm trying to figure out whether the recent bug in CoreMedia (dav1d) affects my app. The Apple security update: https://support.apple.com/en-gb/HT214097 The vulnerable code path in dav1d is only reached when c->n_fc > 1 (https://code.videolan.org/videolan/dav1d/-/blob/2b475307dc11be9a1c3cc4358102c76a7f386a51/src/decode.c#L2845), where c is the dav1d context. From some reverse engineering, the way I see CMPhoto calling into VideoToolbox (which internally calls into AV1SW.videodecoder, a wrapper around dav1d), the max frame delay is hardcoded to 1 in the dav1d settings, which in turn means that c->n_fc in dav1d is always 1. From my understanding, this should mean that my app isn't affected. The Apple security update, however, clearly states that "Processing an image may lead to arbitrary code execution". Surely I'm missing something?
Post marked as solved
2 Replies
138 Views
I have been seeing some crash reports for my app on some devices (not all of them). The crash occurs while converting a CVPixelBuffer captured from video to a JPEG, using VTCreateCGImageFromCVPixelBuffer from VideoToolbox. I have not been able to reproduce the crash on local devices, even under adverse memory conditions (many apps running in the background). The field crash reports show that VTCreateCGImageFromCVPixelBuffer does the conversion on another thread, and that thread crashed in a call to vConvert_420Yp8_CbCr8ToARGB8888_vec. Any suggestions on how to debug this further would be helpful.
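As a debugging aid (not a known fix for this field crash), a minimal defensive wrapper can check the OSStatus instead of assuming success, log the pixel format of failing buffers, and fall back to Core Image, which narrows down whether specific formats or dimensions are involved:

import VideoToolbox
import CoreImage

// Defensive CVPixelBuffer -> CGImage conversion with a Core Image fallback.
func cgImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    var image: CGImage?
    let status = VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &image)
    if status == noErr, let image {
        return image
    }

    // vConvert_420Yp8_CbCr8ToARGB8888_vec suggests a 4:2:0 YpCbCr source, so an
    // unexpected pixel format or odd dimensions are worth logging for the failing cases.
    let format = CVPixelBufferGetPixelFormatType(pixelBuffer)
    print("VTCreateCGImageFromCVPixelBuffer failed: \(status), pixel format: \(format)")

    // Fallback path via Core Image.
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    return context.createCGImage(ciImage, from: ciImage.extent)
}

Keeping a strong reference to the pixel buffer (and to whatever capture output owns it) until the call returns is also worth double-checking, since the conversion happens on another thread.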
Post not yet marked as solved
3 Replies
124 Views
I am a bit confused about whether certain Video Toolbox (VT) encoders support hardware acceleration. When I query the list of VT encoders (VTCopyVideoEncoderList(nil, &encoderList)) on an iPhone 14 Pro device, for the avc1 (AVC / H.264) and hevc1 (HEVC / H.265) encoders, the kVTVideoEncoderList_IsHardwareAccelerated flag is not there, which, based on the documentation in VTVideoEncoderList.h, means that the encoders do not support hardware acceleration: "optional. CFBoolean. If present and set to kCFBooleanTrue, indicates that the encoder is hardware accelerated." In fact, no encoders from this list return this flag as true, and most of them do not include the flag at all in their dictionaries. On the other hand, when I create a compression session using VTCompressionSessionCreate() and pass kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder as true in the encoder specification, then query kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder, I get a CFBoolean value of true for both the H.264 and H.265 encoders. In fact, I get a true value (for both of the aforementioned encoders) even if I don't specify kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder during the creation of the compression session (note here that this flag was introduced in iOS 17.4 ^1). So the question is: are those encoders actually hardware accelerated on my device, and if so, why isn't that reflected in the VTCopyVideoEncoderList() call?
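For reference, a minimal sketch of the two checks being compared here, listing the registered encoders and then asking a live session what it actually ended up using; the session parameters are placeholders and not part of the original post:

import VideoToolbox

// 1) Enumerate registered encoders and inspect their info dictionaries.
var encoderList: CFArray?
VTCopyVideoEncoderList(nil, &encoderList)
if let encoders = encoderList as? [[String: Any]] {
    for encoder in encoders {
        let name = encoder[kVTVideoEncoderList_DisplayName as String] ?? "?"
        let hwFlag = encoder[kVTVideoEncoderList_IsHardwareAccelerated as String] ?? "flag absent"
        print("\(name): \(hwFlag)")
    }
}

// 2) Ask a live compression session whether it is using a hardware encoder.
var session: VTCompressionSession?
let createStatus = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault, width: 1920, height: 1080,
    codecType: kCMVideoCodecType_HEVC, encoderSpecification: nil,
    imageBufferAttributes: nil, compressedDataAllocator: nil,
    outputCallback: nil, refcon: nil, compressionSessionOut: &session)

if createStatus == noErr, let session {
    var usingHardware: CFTypeRef?
    VTSessionCopyProperty(session,
                          key: kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder,
                          allocator: nil,
                          valueOut: &usingHardware)
    print("UsingHardwareAcceleratedVideoEncoder:", usingHardware as Any)
}

Worth noting: the header text quoted in the post marks IsHardwareAccelerated as optional, so its absence from the VTCopyVideoEncoderList() dictionaries is not by itself proof of software-only encoding; the session-level property reflects what the created session is actually doing.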
Post not yet marked as solved
0 Replies
63 Views
I'm trying to cast the screen from an iOS device to an Android device. I'm leveraging ReplayKit on iOS to capture the screen and VideoToolbox for compressing the captured video data into H.264 format using CMSampleBuffers. Both iOS and Android are configured for H.264 compression and decompression. While screen casting works flawlessly within the same platform (iOS to iOS or Android to Android), I'm encountering an error ("not in avi mode") on the Android receiver when casting from iOS. My research suggests that the underlying container formats for H.264 might differ between iOS and Android. Data transmission over the TCP socket seems to be functioning correctly. My question is: is there a way to ensure a common container format for H.264 compression and decompression across the iOS and Android platforms?

Here's a breakdown of the iOS sender details:

Device: iPhone 13 mini running iOS 17
Development environment: Xcode 15 with a minimum deployment target of iOS 16
Screen capture: ReplayKit for capturing the screen and obtaining CMSampleBuffers
Video compression: VideoToolbox for H.264 compression
Compression properties:
  kVTCompressionPropertyKey_ConstantBitRate: 6144000 (bitrate)
  kVTCompressionPropertyKey_ProfileLevel: kVTProfileLevel_H264_Main_AutoLevel (profile and level)
  kVTCompressionPropertyKey_MaxKeyFrameInterval: 60 (maximum keyframe interval)
  kVTCompressionPropertyKey_RealTime: true (real-time encoding)
  kVTCompressionPropertyKey_Quality: 1 (lowest quality)
NAL unit handling: a custom header is added to NAL units

Android receiver details:

Device: RedMi 7A running Android 10
Video decoding: MediaCodec API for receiving and decoding the H.264 stream
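A common cause of this symptom: VideoToolbox emits H.264 in AVCC framing (4-byte NAL length prefixes, with SPS/PPS kept in the CMFormatDescription), while Android's MediaCodec typically expects an Annex B elementary stream with start codes and in-band SPS/PPS. Below is a hedged sketch of that conversion, not the poster's custom-header code; it assumes 4-byte length prefixes, and determining keyframes (e.g. from the kCMSampleAttachmentKey_NotSync attachment) is left to the caller.

import Foundation
import CoreMedia

// Convert an AVCC-framed, VideoToolbox-encoded sample buffer to Annex B bytes
// (start codes + in-band SPS/PPS on keyframes) that MediaCodec can consume.
func annexBData(from sampleBuffer: CMSampleBuffer, isKeyframe: Bool) -> Data? {
    guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
          let dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }

    let startCode = Data([0x00, 0x00, 0x00, 0x01])
    var output = Data()

    // Prepend SPS/PPS on keyframes so the stream is self-describing.
    if isKeyframe {
        var parameterSetCount = 0
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
            formatDescription, parameterSetIndex: 0, parameterSetPointerOut: nil,
            parameterSetSizeOut: nil, parameterSetCountOut: &parameterSetCount,
            nalUnitHeaderLengthOut: nil)
        for index in 0..<parameterSetCount {
            var pointer: UnsafePointer<UInt8>?
            var size = 0
            CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
                formatDescription, parameterSetIndex: index, parameterSetPointerOut: &pointer,
                parameterSetSizeOut: &size, parameterSetCountOut: nil, nalUnitHeaderLengthOut: nil)
            if let pointer {
                output.append(startCode)
                output.append(pointer, count: size)
            }
        }
    }

    // Walk the block buffer, replacing each 4-byte big-endian length with a start code.
    var totalLength = 0
    var dataPointer: UnsafeMutablePointer<CChar>?
    guard CMBlockBufferGetDataPointer(dataBuffer, atOffset: 0, lengthAtOffsetOut: nil,
                                      totalLengthOut: &totalLength, dataPointerOut: &dataPointer) == noErr,
          let base = dataPointer else { return nil }

    var offset = 0
    while offset + 4 <= totalLength {
        var nalLength: UInt32 = 0
        memcpy(&nalLength, base + offset, 4)
        nalLength = CFSwapInt32BigToHost(nalLength)
        output.append(startCode)
        base.withMemoryRebound(to: UInt8.self, capacity: totalLength) { bytes in
            output.append(bytes + offset + 4, count: Int(nalLength))
        }
        offset += 4 + Int(nalLength)
    }
    return output
}

With the stream in Annex B form, the "container" question largely disappears for a live socket: both ends just exchange a raw H.264 elementary stream, and MediaCodec can be configured for video/avc with the SPS/PPS delivered in-band.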