Render advanced 3D graphics and perform data-parallel computations on graphics processors with Metal.

Metal Documentation

Posts under Metal tag

278 Posts
Post not yet marked as solved
9 Replies
3.4k Views
When calling [MTLDevice newTextureWithDescriptor:iosurface:plane:] on an Apple Silicon Mac, with a descriptor that specifies MTLStorageModeShared, I get a failed assertion: -[MTLDebugDevice newTextureWithDescriptor:iosurface:plane:]:2387: failed assertion `Texture Descriptor Validation IOSurface textures must use MTLStorageModeManaged'. I don't really understand why this limitation exists; MTLStorageModeShared textures are supported on Apple silicon (despite the documentation at https://developer.apple.com/documentation/metal/mtldevice/1433378-newtexturewithdescriptor?language=objc, which claims otherwise).
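Not from the post: a minimal Swift sketch (sizes and pixel format are arbitrary placeholders) of the pattern the validation layer currently accepts on macOS, i.e. an IOSurface-backed texture created with .managed storage rather than .shared:

```swift
import CoreVideo
import IOSurface
import Metal

let device = MTLCreateSystemDefaultDevice()!

// Placeholder IOSurface: 1024x1024 BGRA.
let props: [String: Any] = [
    kIOSurfaceWidth as String: 1024,
    kIOSurfaceHeight as String: 1024,
    kIOSurfaceBytesPerElement as String: 4,
    kIOSurfacePixelFormat as String: kCVPixelFormatType_32BGRA
]
guard let surface = IOSurfaceCreate(props as CFDictionary) else {
    fatalError("IOSurfaceCreate failed")
}

let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                    width: 1024,
                                                    height: 1024,
                                                    mipmapped: false)
desc.storageMode = .managed   // .shared trips the validation assert described above
desc.usage = [.shaderRead, .shaderWrite]

// Wrap the IOSurface in a Metal texture.
let texture = device.makeTexture(descriptor: desc, iosurface: surface, plane: 0)
```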
Posted
by
Post not yet marked as solved
2 Replies
1.7k Views
Hi, I'm aiming to render frames as close as possible to the presentation time. This is for a smartphone-based VR headset (Google Cardboard style), where ideally there is a "late warp" just before presenting a new frame that applies both lens distortion and an orientation correction, reducing the error in the predicted head pose by leveraging the very latest motion-sensor data. Leaving the warp as late as possible gives better pose predictions. The warp itself is a pretty simple pass, just a textured mesh, so it's typically under 2 ms of GPU time. Thanks to the Developer Labs it's been suggested I could use a compute shader for the warp so it can share GPU resources with any ongoing rendering work (since Metal has no public per-queue priority to allow pre-emption of other rendering work, which is how this is generally handled on Android).

What I'm trying to figure out now is how best to schedule the rendering. With CAMetalLayer's maximumDrawableCount set to 2, you're pretty much guaranteed that the frame will be displayed on the next vsync if rendering completes quickly enough. However, the system sometimes seems to hold onto the drawables a bit longer than expected, which blocks nextDrawable. With a maximumDrawableCount of 3, it's easy enough to maintain 60 FPS, but looking in Instruments the CPU-to-display latency varies: there are times where it's around 50 ms (i.e. two frames already queued to be presented first, and waitForNextDrawable blocks), periods where it's 30 ms (generally one other frame queued), and times where it drops down to the ideal 16 ms or less. Is there any way to call present that will just drop any other queued frames in the layer? I've tried presentDrawable:atTime:0 and afterMinimumDuration:0, but to no avail. It seems that with CAMetalLayer I'll just have to add addPresentedHandler blocks to keep track of how many frames are queued for display, so I can ensure the queue is generally empty before presenting the next frame.

A related question is the deadline for completing the rendering. The CAMetalLayer is in the compositing fast path, but it seems rendering still needs to complete (i.e. all the GPU work finished) around 5 ms before the next vsync for the frame to be displayed on that vsync. I suspect there's a deadline in case the frame needs to be composited, but any hints or ideas for handling that would be appreciated. It also seems slightly device-specific; somewhat unexpectedly, the iPod touch (7th generation) latches frames that finish much closer to the vsync time than the iPhone 12 Pro does.

I've also just come across AVSampleBufferDisplayLayer, which I'm taking a look at now. It seems to offer better control of the queue and still enables the compositing fast path, but I can't see any feedback equivalent to addPresentedHandler to judge the deadline for having a frame shown on the next vsync.
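Not from the post, but a minimal sketch (names made up) of the bookkeeping described above: count outstanding presents with addPresentedHandler so the layer's queue can be kept shallow before asking for the next drawable.

```swift
import Foundation
import Metal
import QuartzCore

// Hypothetical helper: tracks how many presented-but-not-yet-displayed frames
// are outstanding, so the caller can skip or delay a frame instead of letting
// the drawable queue add latency.
final class PresentPacer {
    private var queued = 0
    private let lock = NSLock()

    func canPresent(maxQueued: Int = 1) -> Bool {
        lock.lock(); defer { lock.unlock() }
        return queued < maxQueued
    }

    func track(_ drawable: CAMetalDrawable) {
        lock.lock(); queued += 1; lock.unlock()
        // Fires once the frame has actually been shown on the display.
        drawable.addPresentedHandler { [weak self] _ in
            guard let self = self else { return }
            self.lock.lock(); self.queued -= 1; self.lock.unlock()
        }
    }
}
```

Usage would be to call track(drawable) right before commandBuffer.present(drawable), and only start encoding a new frame when canPresent() is true; otherwise re-run the pose prediction and try again on the next vsync.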
Posted
by
Post marked as solved
3 Replies
2.6k Views
So I'm trying to make a simple scene with some geometry and a movable camera. So far I've been able to render basic geometry in 2D and transform that geometry using matrices. Following this I moved on to the Calculating Primitive Visibility Using Depth Testing sample ... also smooth sailing. Then I had my first go at transforming positions between different coordinate spaces. I didn't get very far with my rather blurry memory from OpenGL, although when I compared my view and projection matrices with the ones from OpenGL's glm::lookAt() and glm::perspective() functions there seemed to be no fundamental differences. Figuring Metal does things differently, I browsed the Metal Sample Code library for a sample containing a first-person camera. The only one I could find was Rendering Terrain Dynamically with Argument Buffers. Luckily it contained code for calculating view and projection matrices, which seemed to differ from my code. But I still have problems.

Problem Description: When positioning the camera right in front of the geometry, the view as well as the projection matrix produce seemingly accurate results: Camera Position (0, 0, 1); Camera Direction (0, 0, -1). When moving further away, though, parts of the scene are wrongfully culled, notably the ones farther from the camera: Camera Position (0, 0, 2); Camera Direction (0, 0, -1). Rotating the camera also produces confusing results: Camera Position (0, 0, 1); Camera Direction (cos(250°), 0, sin(250°)), yes I converted to radians.

My Suspicions: The projection isn't converting the vertices from view space to normalised device coordinates correctly. Also, when comparing the first two images, the lower part of the triangle seems to get bigger as the camera moves away, which also doesn't appear right. Obviously the view matrix is also not correct, as I'm pretty sure what's described above isn't supposed to happen.
Code Samples

MainShader.metal:

#include <metal_stdlib>
#include <Shared/Primitives.h>
#include <Shared/MainRendererShared.h>
using namespace metal;

struct transformed_data {
    float4 position [[position]];
    float4 color;
};

vertex transformed_data vertex_shader(uint vertex_id [[vertex_id]],
                                      constant _vertex *vertices [[buffer(0)]],
                                      constant _uniforms& uniforms [[buffer(1)]]) {
    transformed_data output;
    float3 dir = {0, 0, -1};
    float3 inEye = float3{ 0, 0, 1 }; // position
    float3 inTo = inEye + dir;        // position + direction
    float3 inUp = float3{ 0, 1, 0 };

    float3 z = normalize(inTo - inEye);
    float3 x = normalize(cross(inUp, z));
    float3 y = cross(z, x);
    float3 t = (float3){ -dot(x, inEye), -dot(y, inEye), -dot(z, inEye) };
    float4x4 viewm = float4x4(float4{ x.x, y.x, z.x, 0 },
                              float4{ x.y, y.y, z.y, 0 },
                              float4{ x.z, y.z, z.z, 0 },
                              float4{ t.x, t.y, t.z, 1 });

    float _nearPlane = 0.1f;
    float _farPlane = 100.0f;
    float _aspectRatio = uniforms.viewport_size.x / uniforms.viewport_size.y;
    float va_tan = 1.0f / tan(0.6f * 3.14f * 0.5f);
    float ys = va_tan;
    float xs = ys / _aspectRatio;
    float zs = _farPlane / (_farPlane - _nearPlane);
    float4x4 projectionm = float4x4((float4){ xs,  0,  0, 0 },
                                    (float4){  0, ys,  0, 0 },
                                    (float4){  0,  0, zs, 1 },
                                    (float4){  0,  0, -_nearPlane * zs, 0 });

    float4 projected = (projectionm * viewm) * float4(vertices[vertex_id].position, 1);
    vector_float2 viewport_dim = vector_float2(uniforms.viewport_size);
    output.position = vector_float4(0.0, 0.0, 0.0, 1.0);
    output.position.xy = projected.xy / (viewport_dim / 2);
    output.position.z = projected.z;
    output.color = vertices[vertex_id].color;
    return output;
}

fragment float4 fragment_shader(transformed_data in [[stage_in]]) { return in.color; }

These are the vertex definitions:

let triangle_vertices = [_vertex(position: [ 480.0, -270.0, 1.0], color: [1.0, 0.0, 0.0, 1.0]),
                         _vertex(position: [-480.0, -270.0, 1.0], color: [0.0, 1.0, 0.0, 1.0]),
                         _vertex(position: [   0.0,  270.0, 0.0], color: [0.0, 0.0, 1.0, 1.0])]
// TO-DO: make this use 4 vertices and 6 indices
let quad_vertices = [_vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0])]

This is the initialisation code for the depth stencil descriptor and state:

_view.depthStencilPixelFormat = MTLPixelFormat.depth32Float
_view.clearDepth = 1.0
// other render initialisation code
let depth_stencil_descriptor = MTLDepthStencilDescriptor()
depth_stencil_descriptor.depthCompareFunction = MTLCompareFunction.lessEqual
depth_stencil_descriptor.isDepthWriteEnabled = true
depth_stencil_state = _view.device!.makeDepthStencilState(descriptor: depth_stencil_descriptor)!
So if you have any idea why it's not working, have some working code of your own, or know of any public samples containing a working first-person camera, feel free to help me out. Thank you in advance! (Please ignore any spelling or similar mistakes, English is not my primary language.)
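Not part of the post: for reference, a sketch of a right-handed look-at and a Metal-style perspective matrix (depth mapped to [0, 1]) written with simd, following the same conventions as Apple's math utilities. With matrices like these the vertex shader can assign projection * view * position directly to output.position; if I read the shader above right, overwriting w with 1 and dividing x/y by the viewport discards the perspective divide, and vertices given in pixel units such as ±480/±270 end up far outside a real frustum.

```swift
import Foundation
import simd

// Right-handed look-at matrix: the camera looks down -z, columns laid out the
// way Metal expects for a column-major float4x4.
func lookAt(eye: SIMD3<Float>, target: SIMD3<Float>, up: SIMD3<Float>) -> float4x4 {
    let z = normalize(eye - target)
    let x = normalize(cross(up, z))
    let y = cross(z, x)
    let t = SIMD3<Float>(-dot(x, eye), -dot(y, eye), -dot(z, eye))
    return float4x4(columns: (SIMD4<Float>(x.x, y.x, z.x, 0),
                              SIMD4<Float>(x.y, y.y, z.y, 0),
                              SIMD4<Float>(x.z, y.z, z.z, 0),
                              SIMD4<Float>(t.x, t.y, t.z, 1)))
}

// Perspective projection mapping view-space depth to NDC z in [0, 1].
func perspective(fovyRadians fovy: Float, aspect: Float, near: Float, far: Float) -> float4x4 {
    let ys = 1 / tanf(fovy * 0.5)
    let xs = ys / aspect
    let zs = far / (near - far)
    return float4x4(columns: (SIMD4<Float>(xs, 0, 0, 0),
                              SIMD4<Float>(0, ys, 0, 0),
                              SIMD4<Float>(0, 0, zs, -1),
                              SIMD4<Float>(0, 0, near * zs, 0)))
}
```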
Posted
by
Post not yet marked as solved
1 Reply
1.3k Views
Hi, I want to begin by saying thank you, Apple, for making the Spatial framework! Please add a million more features ;-) I'm using the following code to make an object "look at" another point, but at a particular rotation the object "flips" its rotation. See a video here: https://www.dropbox.com/s/5irxt0gxou4c2j6/QuaternionFlip.mov?dl=0 (I shake the mouse cursor when it happens to make it obvious to you.)

import Spatial

let lookAtRotation = Rotation3D(eye: Point3D(position),
                                target: Point3D(x: 0, y: 0, z: 0),
                                up: Vector3D(x: 0, y: 1, z: 0))
myObj.quaternion = lookAtRotation.quaternion

So my question is why this is happening, and how can I fix it? thx
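Not from the post: one common cause of this kind of visible flip is that q and -q encode the same rotation, so consecutive look-at results can jump between the two representatives. A small sketch, assuming myObj.quaternion is a simd_quatd like Rotation3D.quaternion, that keeps successive quaternions in the same hemisphere:

```swift
import simd

// Pick whichever of `new` or `-new` is closer to the previous frame's
// quaternion; both describe the same rotation, but a sign change can look
// like a sudden flip when the value is assigned or interpolated.
func continuous(_ new: simd_quatd, relativeTo previous: simd_quatd) -> simd_quatd {
    simd_dot(previous.vector, new.vector) < 0 ? simd_quatd(vector: -new.vector) : new
}

// Usage with the code from the post:
// let q = continuous(lookAtRotation.quaternion, relativeTo: myObj.quaternion)
// myObj.quaternion = q
```

If the flip only happens when the object passes directly above or below the target, the likelier culprit is the degenerate case where the eye-to-target direction becomes parallel to the up vector; switching to a different up vector near that pose is the usual fix.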
Posted
by
MKX
Post not yet marked as solved
1 Reply
2.1k Views
SPECIFIC ISSUE ENCOUNTERED
I'm playing VR videos through my app using the Metal graphics API with the Cardboard XR Plugin for Unity. After the recent iOS 16 update (and the Xcode 14 update too), videos in stereoscopic mode were flipped upside down and backwards. After trying to change the sides manually in code, I only managed to show the correct sides (it's not all upside down anymore), but when I turn the phone UP, the view moves DOWN towards the ground, and vice versa. The same applies to moving the phone left and right. Also, the Unity-made popup question is shown on the wrong side (the backside, as shown in the video attachment). Here is a video of the inverted (upside-down) view: https://www.dropbox.com/s/wacnknu5wf4cif1/Everything%20upside%20down.mp4?dl=0 Here is a video of the inverted movement: https://www.dropbox.com/s/7re3u1a5q81flfj/Inverted%20moving.mp4?dl=0 IMPORTANT: I did manage to fix it a few times in a local build, but when I build it for TestFlight it is always inverted.
WHAT I SUSPECT
I found that numerous other developers encountered this issue when they were using Metal. Back when OpenGL ES 2 and 3 were still supported by Apple, switching to one of those fixed the issue. But now that only Metal is supported with current Unity, there is no workaround, and I would also like to use Metal.
DEVICE
Multiple iPhones running multiple iOS 16 versions have this issue. The specific OS version is 16.1.
EXPECTED BEHAVIOR
VR videos should show the right side up (not an upside-down image), moving the phone up should show the upper part of the video, and vice versa. The same goes for moving left and right. Currently everything is flipped, but not always with the same kind of flip; in rare cases it's even shown correctly.
VERSIONS USED
What version of Google Cardboard are you using? Cardboard XR Plugin 1.18.1 What version of Unity are you using? 2022.1.13f1
Posted
by
Post not yet marked as solved
6 Replies
1.3k Views
Hi, we're working on an app that uses RealityKit to present products in the customer's home. We have a bug where, every once in a while (somewhere around 1 in 100 runs), all entities are rendered using the green channel only (see image below). It seems to occur for all entities in the ARView, regardless of model or material type. Due to the flaky nature of this bug I've found it really hard to debug, and I can't rule out an internal issue in RealityKit. Did anyone run into similar issues, or have any hints on where to look for the culprit?
Posted
by
Post marked as solved
3 Replies
1.2k Views
"Source is unavailble" I want to test Metal shader debugger. I downloaded this sample code from Apple: https://developer.apple.com/documentation/metal/compute_passes/processing_a_texture_in_a_compute_function?language=objc In the project build settings, "Metal compiler - Build Options" -> "Produce Debugging Information", I set the value to "Yes, include source code". Then run and take a capture. In the Metal debugger, when I hit the debug button on the draw call, i get an error "Source is unavailable". Clicking the "import source" button in the dialog doesn't solve the issue. What am i doing wrong? My workstation: Mac mini M1 2020, Mac OS Ventura 13.1, XCode 14.1 (14B47b)
Posted
by
Post not yet marked as solved
1 Reply
1.7k Views
I use the URP 12.1.7 render pipeline and Unity 2021.3.11f1 to export an Xcode 14.2 project, which I run on an iPhone 11 Pro Max (iOS 16.3). When I click the "M" button to perform a GPU capture of the workload, memory usage rises sharply and triggers an out-of-memory condition. For example, the game itself uses about 1.3 GB; once the GPU capture is performed, usage grows to more than 2.3 GB, making it impossible to profile the game.
Posted
by
Post marked as solved
3 Replies
1.2k Views
I'm looking to get a GPU to use for Object Capture. The requirements are an AMD GPU with 4 GB of VRAM and ray-tracing support. The RX 580 seems to be able to do ray tracing from what I've found online, but it looks like someone had an issue with a 580X here: https://developer.apple.com/forums/thread/689891
Posted
by
Post not yet marked as solved
2 Replies
438 Views
Hello, how do we support behaviors in our custom parameters in FxPlug? A simple example would be recreating the float parameter. Once we have done that, we would like to support multi-dimensional vectors of floats. Thanks, Nikki
Posted
by
Post not yet marked as solved
15 Replies
4.9k Views
Hello everyone! After some time thinking about how to proceed with a graphics API, I figured OpenGL would be my first, since I'm completely new to graphics programming. As you may find in my last post, I was considering MoltenVK and might just use Metal instead, along with the demos I found using Metal. So for now, and I know this has been said MANY TIMES, Apple has deprecated OpenGL, but I wish to use it because I'm new to graphics programming and want to develop an app (a rendering engine, really) for the iPhone 14 Pro Max and macOS Ventura 13.2 (I think this is the latest). So what do you guys think? Can I still use OpenGL ES on the 14 Pro Max, along with OpenGL 4+ on the latest macOS, even though it's deprecated?
Posted
by
Post not yet marked as solved
1 Reply
767 Views
Hi, I am generating a Metal library that I build using the command-line tools on macOS for iphoneos, following the instructions here. I then serialise it to a binary blob that I load at runtime, which seems to work fine, as everything renders as expected. But when I do a frame capture and open up a shader function, it tries to load the symbols and fails. I tried pointing it to the directory (and the file) containing the symbols file, but it never resolves them. In the bottom half of the Import External Sources dialog there is one entry in the Library | Debug Info section: the library name is Library 0x21816b5dc0, and below Debug Info it says Invalid UUID. The validation layer doesn't flag any invalid behaviour, so I am a bit lost and not sure what to try next.
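Not from the post: a sketch of the loading side, under the assumption that the blob is simply the .metallib bytes (the resource name is a placeholder). It shows both the URL-based and the data-based loading paths the runtime offers:

```swift
import Foundation
import Metal

// Hypothetical loader: "Shaders.metallib" is a placeholder resource name.
func loadLibrary(device: MTLDevice) throws -> MTLLibrary {
    let url = Bundle.main.url(forResource: "Shaders", withExtension: "metallib")!

    // Straightforward path: let Metal read the file itself.
    // return try device.makeLibrary(URL: url)

    // Or, as described in the post, from a serialized binary blob:
    let bytes = try Data(contentsOf: url)
    let dispatchData = bytes.withUnsafeBytes { DispatchData(bytes: $0) }
    return try device.makeLibrary(data: dispatchData as __DispatchData)
}
```

As far as I recall, whether the capture tools can resolve symbols mostly depends on how the metallib was compiled (the -frecord-sources family of Metal compiler options and a matching symbol file), so that is worth double-checking against the current toolchain docs rather than the loading code.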
Posted
by
Post not yet marked as solved
1 Reply
1k Views
Hello, in one of my apps I'm trying to modify the pixel buffer from a ProRAW capture and then write the modified DNG. This is what I try to do: after capturing a ProRAW photo, I work in the delegate function

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) { ... }

In here I can access photo.pixelBuffer and get its base address:

guard let buffer = photo.pixelBuffer else { return }
CVPixelBufferLockBaseAddress(buffer, [])
let pixelFormat = CVPixelBufferGetPixelFormatType(buffer)

// I check that the pixel format corresponds with ProRAW. This is successful, the code enters the if block
if pixelFormat == kCVPixelFormatType_64RGBALE {
    guard let pointer = CVPixelBufferGetBaseAddress(buffer) else { return }

    // We have 16 bits per component, 4 components
    let count = CVPixelBufferGetWidth(buffer) * CVPixelBufferGetHeight(buffer) * 4
    let mutable = pointer.bindMemory(to: UInt16.self, capacity: count)

    // As a test, I want to replace all pixels with 65000 to get a white image
    let finalBufferArray: [Float] = Array(repeating: 65000, count: count)
    vDSP_vfixu16(finalBufferArray, 1, mutable, 1, vDSP_Length(finalBufferArray.count))

    // I create a vImage pixel buffer. Note that I'm referencing photo.pixelBuffer to be sure that I modified the underlying pixel buffer of the AVCapturePhoto object
    let imageBuffer = vImage.PixelBuffer<vImage.Interleaved16Ux4>(referencing: photo.pixelBuffer!, planeIndex: 0)

    // Inspect the CGImage
    let cgImageFormat = vImage_CGImageFormat(bitsPerComponent: 16,
                                             bitsPerPixel: 64,
                                             colorSpace: CGColorSpace(name: CGColorSpace.displayP3)!,
                                             bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue | CGBitmapInfo.byteOrder16Little.rawValue))!
    let cgImage = imageBuffer.makeCGImage(cgImageFormat: cgImageFormat)!

    // I send the CGImage to the main view controller. This is successful, I can see a white image when rendering the CGImage into a UIImage. This lets me think that I successfully modified photo.pixelBuffer
    firingFrameDelegate?.didSendCGImage(image: cgImage)
}

// Now I try to write the data. Unfortunately, this does not work. photo.fileDataRepresentation() writes the data corresponding to the original, unmodified pixelBuffer
if let photoData = photo.fileDataRepresentation() {
    // Sending the data to the view controller and rendering it in a UIImage displays the original photo, not the modified pixelBuffer
    firingFrameDelegate?.didSendData(data: photoData)
    thisPhotoData = photoData
}

CVPixelBufferUnlockBaseAddress(buffer, [])

The same happens if I try to write the data to disk: the DNG file displays the original photo, not the data corresponding to the modified photo.pixelBuffer. Do you know why this code does not work? Do you have any ideas on how I can modify the ProRAW pixel buffer so that I can write the modified buffer into a DNG file? My goal is to write a modified file, so I'm not sure I can use Core Image or vImage to output a ProRAW file.
Posted
by
Post not yet marked as solved
9 Replies
2.2k Views
I'm trying to debug my Metal shaders in Xcode 14.2. However, clicking "Capture Metal GPU" while debugging recently started showing the following error: Capturing MTLPipelineLibrary is not supported. Unsupported method: -[MTLDevice newPipelineLibraryWithFilePath:error:] To enable capturing, disable calls to unsupported APIs and relaunch your application. I can't find any info about MTLPipelineLibrary or how to disable it. I've also confirmed that Metal GPU Frame Capture is enabled in my build. What's causing this issue, and how can I work around it so I can debug my shaders again?
Posted
by
Post not yet marked as solved
3 Replies
1.4k Views
I have an older app using SpriteKit and have updated to Xcode 14.3. Compilation and linking are OK. The app never reaches my code but crashes in the AppDelegate with: Metal API Validation Enabled -[MTLDebugDevice newLibraryWithURL:error:]:2250: failed assertion `url must not be nil.' I do not explicitly init() or call Metal anywhere. Using LLDB at the point of the crash, I tried to peek into the app bundle, but po Bundle.main.paths(forResourcesOfType: "URL", inDirectory: "nil") returned 0 elements; likewise, po Bundle.main.paths(forResourcesOfType: "*", inDirectory: "nil") returned 0 elements. I suspect a build script or the preformed Bundle.main got screwed up, but I do not know how to investigate. I should add that this seems to be isolated to one MacBook Pro; it compiles and runs fine on an iMac! Any thoughts?
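Not from the post: a quick diagnostic sketch for the machine that crashes, checking whether a compiled Metal library actually made it into the built bundle. (Note that the quoted LLDB commands pass the string "nil" to inDirectory:, which matches a subdirectory literally named "nil"; the parameter wants an actual nil.)

```swift
import Foundation
import Metal

// List every compiled Metal library in the app bundle; a missing or empty
// default.metallib is one plausible way for a framework to end up calling
// -[MTLDevice newLibraryWithURL:] with a nil URL.
let metallibs = Bundle.main.paths(forResourcesOfType: "metallib", inDirectory: nil)
print("metallibs in bundle:", metallibs)

if let url = Bundle.main.url(forResource: "default", withExtension: "metallib") {
    let library = try? MTLCreateSystemDefaultDevice()?.makeLibrary(URL: url)
    print("default.metallib loads:", library != nil)
} else {
    print("default.metallib not found in the bundle")
}
```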
Posted
by
Post not yet marked as solved
2 Replies
1.3k Views
We recently observed antialiasing issues on iOS devices starting this week. It works fine on desktop and Android; refer to the screenshots below. Is there a specific flag we need to enable or disable to fix this? We are using Babylon.js as the 3D renderer, running on WebGL.
Posted
by
Post marked as solved
1 Reply
1.2k Views
I'm trying to use the randomTensor function from MPSGraph to initialize the weights of a fully connected layer. I can create the graph and run inference using the randomly initialized values, but when I try to train and update these randomly initialized weights, I'm hitting a crash:

Assertion failed: (isa<To>(Val) && "cast<Ty>() argument of incompatible type!"), function cast, file Casting.h, line 578.

I can train the graph if I instead initialize the weights myself on the CPU, but I thought using the randomTensor functions would be faster and would allow initialization to occur on the GPU. Here's my code for building the graph, including both methods of weight initialization:

func buildGraph(variables: inout [MPSGraphTensor]) -> (MPSGraphTensor, MPSGraphTensor, MPSGraphTensor, MPSGraphTensor) {
    let inputPlaceholder = graph.placeholder(shape: [2], dataType: .float32, name: nil)
    let labelPlaceholder = graph.placeholder(shape: [1], name: nil)

    // This works for inference but not training
    let descriptor = MPSGraphRandomOpDescriptor(distribution: .uniform, dataType: .float32)!
    let weightTensor = graph.randomTensor(withShape: [2, 1], descriptor: descriptor, seed: 2, name: nil)

    // This works for inference and training
    // let weights = [Float](repeating: 1, count: 2)
    // let weightTensor = graph.variable(with: Data(bytes: weights, count: 2 * MemoryLayout<Float32>.size), shape: [2, 1], dataType: .float32, name: nil)

    variables += [weightTensor]

    let output = graph.matrixMultiplication(primary: inputPlaceholder, secondary: weightTensor, name: nil)
    let loss = graph.softMaxCrossEntropy(output, labels: labelPlaceholder, axis: -1, reductionType: .sum, name: nil)
    return (inputPlaceholder, labelPlaceholder, output, loss)
}

And to run the graph I have the following in my sample view controller:

override func viewDidLoad() {
    super.viewDidLoad()

    var variables: [MPSGraphTensor] = []
    let (inputPlaceholder, labelPlaceholder, output, loss) = buildGraph(variables: &variables)

    let gradients = graph.gradients(of: loss, with: variables, name: nil)
    let learningRate = graph.constant(0.001, dataType: .float32)

    var updateOps: [MPSGraphOperation] = []
    for (key, value) in gradients {
        let updates = graph.stochasticGradientDescent(learningRate: learningRate, values: key, gradient: value, name: nil)
        let assign = graph.assign(key, tensor: updates, name: nil)
        updateOps += [assign]
    }

    let commandBuffer = MPSCommandBuffer(commandBuffer: Self.commandQueue.makeCommandBuffer()!)

    let executionDesc = MPSGraphExecutionDescriptor()
    executionDesc.completionHandler = { (resultsDictionary, _) in
        for (key, value) in resultsDictionary {
            var output: [Float] = [0]
            value.mpsndarray().readBytes(&output, strideBytes: nil)
            print(output)
        }
    }

    let inputDesc = MPSNDArrayDescriptor(dataType: .float32, shape: [2])
    let input = MPSNDArray(device: Self.device, descriptor: inputDesc)
    var inputArray: [Float] = [1, 2]
    input.writeBytes(&inputArray, strideBytes: nil)
    let source = MPSGraphTensorData(input)

    let labelMPSArray = MPSNDArray(device: Self.device, descriptor: MPSNDArrayDescriptor(dataType: .float32, shape: [1]))
    var labelArray: [Float] = [1]
    labelMPSArray.writeBytes(&labelArray, strideBytes: nil)
    let label = MPSGraphTensorData(labelMPSArray)

    // This runs inference and works
    // graph.encode(to: commandBuffer, feeds: [inputPlaceholder: source], targetTensors: [output], targetOperations: [], executionDescriptor: executionDesc)
    // commandBuffer.commit()
    // commandBuffer.waitUntilCompleted()

    // This trains but does not work
    graph.encode(
        to: commandBuffer,
        feeds: [inputPlaceholder: source, labelPlaceholder: label],
        targetTensors: [],
        targetOperations: updateOps,
        executionDescriptor: executionDesc)

    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
}

And a few other relevant variables are created at class scope:

let graph = MPSGraph()
static let device = MTLCreateSystemDefaultDevice()!
static let commandQueue = device.makeCommandQueue()!

How can I use these randomTensor functions on MPSGraph to randomly initialize weights for training?
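Not from the post: one possible workaround (a sketch, not verified against every MPSGraph version) is to materialize the random values once with a throwaway graph run, then feed the resulting bytes into graph.variable so the weights are real variables that stochasticGradientDescent/assign can update.

```swift
import Foundation
import MetalPerformanceShadersGraph

// Build a trainable variable whose initial values come from randomTensor.
func makeRandomVariable(graph: MPSGraph, shape: [NSNumber], seed: Int) -> MPSGraphTensor {
    // Separate, single-use graph that only produces the random values.
    let initGraph = MPSGraph()
    let descriptor = MPSGraphRandomOpDescriptor(distribution: .uniform, dataType: .float32)!
    let random = initGraph.randomTensor(withShape: shape, descriptor: descriptor, seed: seed, name: nil)
    let result = initGraph.run(feeds: [:], targetTensors: [random], targetOperations: nil)

    // Read the generated values back into host memory.
    let count = shape.reduce(1) { $0 * $1.intValue }
    var values = [Float](repeating: 0, count: count)
    result[random]!.mpsndarray().readBytes(&values, strideBytes: nil)

    // Create the actual trainable variable in the training graph.
    let data = values.withUnsafeBufferPointer { Data(buffer: $0) }
    return graph.variable(with: data, shape: shape, dataType: .float32, name: nil)
}
```

This keeps the training graph identical to the working CPU-initialized version; only the initial data now comes from the GPU random op.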
Posted
by
Post marked as solved
2 Replies
460 Views
In Metal's profiler I get this suggestion: Texture:0x12880c010 "MTKView Depth" has storage mode 'Private' but was a transient render target accessed exclusively by the GPU. Consider changing the storage mode to 'Memoryless'. This texture is created by MTKView automatically if the depthStencilPixelFormat property is set to a meaningful value. It is even possible to control the texture usage by setting the depthStencilAttachmentTextureUsage property, but I can't see how to change the storage mode of this texture. It seems that MTKView should somehow set the right storage mode automatically, as this excerpt from the documentation suggests: "...the view automatically creates those textures for you and configures them as part of any render passes that the view creates." But in my case it certainly fails to take into account that my pipeline doesn't read from this texture. So the question is: how can I change the storage mode of the depth texture of an MTKView to .memoryless?
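Not from the post: newer SDKs (roughly the macOS 13 / iOS 16 era, so verify availability against your deployment target) expose a storage-mode property directly on MTKView, which should cover exactly this case. A minimal sketch:

```swift
import MetalKit

let view = MTKView(frame: .zero, device: MTLCreateSystemDefaultDevice())
view.depthStencilPixelFormat = .depth32Float
if #available(iOS 16.0, macOS 13.0, *) {
    // Let the view create its automatic depth texture as a memoryless,
    // transient render target, matching the profiler's suggestion.
    view.depthStencilStorageMode = .memoryless
}
```

If that property isn't available on your targets, the fallback is to leave the view's depthStencilPixelFormat unset and attach your own memoryless depth texture to the render pass descriptor instead.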
Posted
by