3D Graphics


Discuss integrating three-dimensional graphics into your app.

Posts under 3D Graphics tag

30 Posts
Post not yet marked as solved
4 Replies
687 Views
Hi there - Where would a dev go these days to get an initial understanding of SceneKit? The WWDC videos linked in various places seem to be gone. For example, the SceneKit page at developer.apple.com features a session-videos link that comes up without any results: https://developer.apple.com/scenekit/. Any advice? Cheers, Jan
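For anyone in the same spot, a complete SceneKit "hello world" is only a handful of lines. Below is a minimal illustrative sketch (not official sample code; names and values are arbitrary) showing a scene, a box, a camera, default lighting, and gesture-driven camera control:

import SceneKit
import UIKit

// Minimal SceneKit setup: one box, a camera, and default lighting.
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let sceneView = SCNView(frame: view.bounds)
        sceneView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(sceneView)

        let scene = SCNScene()
        sceneView.scene = scene
        sceneView.allowsCameraControl = true          // orbit/zoom with gestures
        sceneView.autoenablesDefaultLighting = true   // quick ambient + omni light

        // A unit cube at the origin.
        let box = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0.05))
        scene.rootNode.addChildNode(box)

        // A camera pulled back along +z so the box is visible.
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 5)
        scene.rootNode.addChildNode(cameraNode)
    }
}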
Posted by JayMcBee.
Post not yet marked as solved
0 Replies
199 Views
Greetings, I'm a new developer and would like to understand exactly how Xcode, SwiftUI, RealityKit, ARKit, Reality Composer Pro, and Unity work together to create a 3D cosmology app. I have created a working solar system using JavaScript, HTML, and WebGL for the 3D rendering. I would now like to carry that over to the Apple Vision Pro. Can someone tell me which software frameworks and APIs in the Apple ecosystem I can use to code that? Many thanks
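As a rough orientation for questions like this: on visionOS, SwiftUI hosts the UI while RealityKit renders the 3D content. A minimal sketch of the combination, assuming a visionOS target (the "sun" entity here is purely illustrative):

import SwiftUI
import RealityKit

// A SwiftUI view hosting RealityKit content on visionOS.
struct SolarSystemView: View {
    var body: some View {
        RealityView { content in
            // Placeholder "sun": a sphere generated in code, no asset needed.
            let sun = ModelEntity(
                mesh: .generateSphere(radius: 0.2),
                materials: [UnlitMaterial(color: .yellow)]
            )
            content.add(sun)
        }
    }
}

Planets and orbits would typically be additional entities parented under a common root, animated per frame or via components.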
Posted by Jeqhe.
Post not yet marked as solved
0 Replies
266 Views
With the advent of the third dimension, I wanted to know whether it's currently possible to display flat SwiftUI views with some thickness in xrOS. While .frame(depth: CGFloat?) does the job for views in general, I am eager for a more granular level of control at the pixel level. I was hoping that there are lower-level APIs to achieve this, and I've looked into the fairly new layerEffect shader API, yet it seems incapable of setting the depth of individual pixels.
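For reference, a minimal sketch of the .frame(depth:) approach mentioned above, assuming a visionOS target (the view content and the 40-point depth are arbitrary):

import SwiftUI

// Giving a flat SwiftUI view some thickness along z on visionOS.
struct ThickCardView: View {
    var body: some View {
        Text("Hello, depth")
            .padding()
            .glassBackgroundEffect()   // standard visionOS backing material
            .frame(depth: 40)          // 40 points of depth along the z axis
    }
}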
Posted by Treata.
Post not yet marked as solved
4 Replies
410 Views
Hello, I've been tinkering with PortalComponent on visionOS a bit, but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow contents of the portal to peek out – similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?). I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property), but so far I haven't been able to achieve a perceptible visual difference with it. If possible, I would like to avoid hacky tricks such as duplicate meshes to achieve this. Thanks for any hints!
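For context, a minimal sketch of the kind of portal setup being described, assuming visionOS and RealityKit (entity names and sizes are illustrative); the clipping in question happens at the portal plane:

import RealityKit

// A world entity whose content is revealed through a plane carrying
// a PortalComponent; content is clipped to the plane's geometry.
func makePortal() -> Entity {
    let world = Entity()
    world.components.set(WorldComponent())
    // ... add skybox / content entities as children of `world` here ...

    let portalPlane = ModelEntity(
        mesh: .generatePlane(width: 1, height: 1),
        materials: [PortalMaterial()]
    )
    portalPlane.components.set(PortalComponent(target: world))

    let root = Entity()
    root.addChild(world)
    root.addChild(portalPlane)
    return root
}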
Post not yet marked as solved
1 Reply
393 Views
I want to transport my creations from UE/Unity to the Apple Vision Pro environment. Is it possible right now? How can I do this? (Beg your pardon, but I have no programming skills and I've never touched Apple's developer tools. I'm just a 3D artist.)
Post not yet marked as solved
0 Replies
406 Views
We scan the room using the RoomPlan API, and after the scan we obtain objects in a white color, along with shadows and shading. However, upon updating the color of these objects, we lose the shadows and shading. (Screenshots: RoomPlan scan / after the update.)
Post not yet marked as solved
0 Replies
569 Views
I am trying to control the orientation of a box in SceneKit (iOS) using gestures. I am using the translation in x and y to update the x and y rotation of the SCNNode. After a long search I have realised that x and y rotation will always lead to z rotation, thanks to this excellent post: https://gamedev.stackexchange.com/questions/136174/im-rotating-an-object-on-two-axes-so-why-does-it-keep-twisting-around-the-thir?newreg=130c66c673f848a7be2873bf675573a9. So I am trying to work out the z rotation this causes, and then remove it from my object by applying the inverse quaternion. However, when I rotate the object 90 degrees around x and then 90 degrees around y, it behaves very strangely: almost as if it were in gimbal lock, though I did not think that using quaternions in this way would cause gimbal lock. I am sure there is something I am missing, or perhaps the z rotation cannot be removed this way. Thanks! I have added a video of the strange behaviour here: https://github.com/marcusraty/RotationExample/blob/main/Example.MP4, and the code example is here: https://github.com/marcusraty/RotationExample
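One commonly suggested alternative (a sketch, not the poster's code) is to skip deriving and undoing Euler angles entirely, and instead compose small world-space quaternion rotations each frame; since no angles are ever re-derived, no roll accumulates. This assumes the pan translation arrives as a CGPoint delta:

import SceneKit
import simd

// Incremental, world-space rotation driven by pan translation.
func applyPan(delta: CGPoint, to node: SCNNode) {
    let sensitivity: Float = 0.01
    // Small rotations about the fixed world axes.
    let yaw   = simd_quatf(angle: Float(delta.x) * sensitivity, axis: SIMD3<Float>(0, 1, 0))
    let pitch = simd_quatf(angle: Float(delta.y) * sensitivity, axis: SIMD3<Float>(1, 0, 0))
    // Pre-multiplying applies the new rotation in world space.
    node.simdWorldOrientation = simd_normalize(yaw * pitch * node.simdWorldOrientation)
}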
Post not yet marked as solved
0 Replies
638 Views
Hi everyone, I hope this is the right place to ask questions like this. I have an app which uses a WebGL scene implemented with three.js. At some point during loading, the app crashes (the page reloads), which usually indicates that the device ran out of the memory reserved for the tab. This WebGL scene, however, is fairly light compared to other scenes that load without any issues. How do I debug this? Is it possible to allocate more memory before the page is loaded, or is there a simple way to reduce memory consumption? I have very limited control over the 3D scene, and it doesn't use heavy assets (mostly simple geometry with textures on it).
Posted by Sergei27.
Post not yet marked as solved
0 Replies
546 Views
I have created a react-three-fiber web app which uses a WebGL canvas. When the canvas is forced to rerender due to an external change, a new canvas context is created and the previous one is not released. This leads to a Safari refresh and crash as the number of active canvas contexts goes beyond the maximum limit, along with the memory. This issue is specific to Safari (and to Chrome only on iOS). Does Safari have a different garbage-collection mechanism that does not clear WebGL contexts automatically? If it doesn't, is there an API to invoke the same?
Post not yet marked as solved
0 Replies
468 Views
I have a spherical HDR image that is being used for environment lighting in a SceneKit scene, and I want to rotate the environment image. To set the environment lighting, I use the lightingEnvironment SCNMaterialProperty. This works fine, and my scene is lit using the IBL. As with any SCNMaterialProperty, I expect that I can use the contentsTransform property to rotate or transform the HDR. So I set it as follows: lightingEnvironment.contentsTransform = SCNMatrix4MakeRotation((45.0).degreesAsRadians, 0.0, 1.0, 0.0). My expectation is that the lighting environment would rotate 45 degrees in Y, but it doesn't change at all. Even if I throw in a completely random transform on all axes, there is no apparent change. To test whether there is a change, I added a chrome ball and a diffuse ball to my scene, and I'm comparing reflections on the chrome ball and lighting on the diffuse ball. There is no change on either. It doesn't matter where I set the contentsTransform; it doesn't work. I had intended to set it from the renderer(_:updateAtTime:) method of SCNSceneRendererDelegate, so that I can rotate the IBL to match the point of view of the scene, but even if I transform the environment immediately after it is set, there is never a change. Is this a bug? Or am I doing something entirely wrong? Has anyone on here ever managed to get this to work?
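For completeness, a self-contained version of the setup being described; the degreesAsRadians helper is presumably an extension along these lines, and the HDR asset name here is hypothetical:

import SceneKit

// Assumed helper: SCNMatrix4MakeRotation takes radians as Float on iOS.
extension Double {
    var degreesAsRadians: Float { Float(self * .pi / 180) }
}

let scene = SCNScene()
scene.lightingEnvironment.contents = "studio.hdr"   // hypothetical asset name
// The transform that, per the docs for SCNMaterialProperty, should
// rotate the environment 45 degrees about Y:
scene.lightingEnvironment.contentsTransform =
    SCNMatrix4MakeRotation((45.0).degreesAsRadians, 0, 1, 0)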
Posted by Matt Cox.
Post not yet marked as solved
0 Replies
501 Views
While experimenting with AR view for different products, we came across an issue with Apple's AR viewer where glass (PBR opacity) causes a black patch (maybe a shadow) to appear behind the object. https://sketchfab.com/3d-models/welcome-5ba96662ba8d4774951f33fead4bf9db https://sketchfab.com/3d-models/candel-91b2059634e0478eb93777b0b2a726e9 We tried to find a workaround, but after multiple tests and experiments we came to the conclusion that the Apple AR viewer is not able to recognize the glass material and adjust the ground shadow as required.
Post not yet marked as solved
5 Replies
1.4k Views
After the iOS 17 update, objects rendered in SceneKit that have both a normal map and morph targets do not render correctly: the shading and lighting appear dark and without reflections. Using a normal map without morph targets, or morph targets on an object without a normal map, works fine; it is the combination of the two that breaks the rendering. (Screenshots: diffuse + normal map + morpher, vs. diffuse + normal map with no morpher.)
Posted by Ginada.
Post not yet marked as solved
1 Reply
563 Views
We have a content creation application that uses SceneKit for rendering. In our application, we have a 3D view (non-AR) and an AR "mode" the user can go into. Currently we use an SCNView and an ARSCNView to achieve this. Our application targets iOS and macOS (with AR only on iOS). With visionOS on the horizon, we're trying to bring the tech stack up to date, as SceneKit no longer seems to be actively developed and isn't supported at all on visionOS. We'd like to use RealityKit for 3D rendering on all platforms: macOS, iOS, and visionOS, in non-AR and AR mode where appropriate. So far this hasn't been too difficult. The greatest challenge has been adding gesture support to replace the allowsCameraControl option on the SCNView, as there is no such option on ARView. However, now that we've got to controlling shading, we're hitting a bit of a roadblock. When viewing the scene in non-AR mode, we would like to add a ground plane underneath the object that only displays a shadow; in other words, its opacity would be determined by light contribution. I've had a dig through the CustomMaterial API and it seems extremely primitive: there doesn't seem to be any way to get light information for a particular fragment, unless I'm missing something. Additionally, we support a custom shader that we can apply as a material. This custom shader allows the properties of the material to vary depending on the light contribution, light incidence angle, etc. Looking at CustomMaterial, the API seems to be about defining a custom material, whereas we want to customise the BRDF calculation. We achieve this in SceneKit using a series of shader modifiers hooked into the various SCNShaderModifierEntryPoint values. On visionOS, of course, the lack of support for CustomMaterial is a shame, but I would hope something similar can be achieved with Reality Composer? We can live with the lack of custom materials, but the shadow catcher is a killer for adoption for us. I'd even accept a more limited feature set on visionOS, as long as we can match our existing feature set on the existing platforms. What am I missing?
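On the shadow-catcher point, one possible partial substitute, assuming a platform where RealityKit's GroundingShadowComponent is available, is to let the entity render a grounding shadow beneath it without any visible ground material; whether this satisfies the "opacity determined by light contribution" requirement is another matter. A minimal sketch:

import RealityKit

// Gives the model a soft grounding shadow with no visible ground plane.
func enableGroundingShadow(on model: ModelEntity) {
    model.components.set(GroundingShadowComponent(castsShadow: true))
}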
Posted by Matt Cox.
Post not yet marked as solved
2 Replies
1.7k Views
Dear Apple team, and everyone who has experience with MapKit: I am building an app where I need to hide some of the map's 3D models and replace them with my own custom 3D meshes using SceneKit. Up until now I was using Mapbox, which allows you to get raw mesh data so you can reconstruct all of the map's 3D content. Is something like this possible with MapKit? Use cases: Say you navigated to Kennedy Space Center Launch Complex 39 and there is no 3D model of the actual building; I would like to be able to hide the simple massing and replace it with my model. In 3D satellite view, some areas have detailed meshes, say London's The Queen's Walk; I would like to make a specific area flat so I can place my 3D model on top of the satellite 3D view to illustrate a new structure or building. Last one: is it possible to change existing buildings' colours? I know transparency is possible. Thank you, @apple
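As far as I know, MapKit does not expose building meshes at all, so the closest available control is simply hiding the default 3D buildings before overlaying custom geometry; a minimal sketch (this does not flatten satellite terrain or recolor buildings):

import MapKit

// Hide MapKit's extruded 3D buildings on the standard map display.
let mapView = MKMapView()
mapView.showsBuildings = false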
Posted by artpen.
Post not yet marked as solved
0 Replies
469 Views
Hey there fellas, I am a beginner on iOS trying to find a way to capture/extract depth data from a captured image in my photo gallery. I have been using Xcode to achieve this task, but I am fairly new to Swift, so I am having trouble. I need the depth data from the image so that I can work on it and manipulate it.
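A minimal sketch of one way to do this, assuming the image file actually contains depth/disparity data (e.g. a Portrait photo) and that you already have its file URL:

import AVFoundation
import ImageIO

// Reads the auxiliary disparity data embedded in a HEIC/JPEG file.
// Returns nil if the image carries no depth information.
func depthData(at url: URL) -> AVDepthData? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any]
    else { return nil }
    return try? AVDepthData(fromDictionaryRepresentation: info)
}

From there, AVDepthData's depthDataMap gives you a CVPixelBuffer you can manipulate directly.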
Posted by Musadiq.
Post not yet marked as solved
1 Reply
1.2k Views
Hi, I want to begin by saying thank you, Apple, for making the Spatial framework! Please add a million more features ;-) I'm using the following code to make an object "look at" another point, but at a particular rotation the object "flips" its rotation. See a video here: https://www.dropbox.com/s/5irxt0gxou4c2j6/QuaternionFlip.mov?dl=0 (I shake the mouse cursor when it happens to make it obvious to you.)

import Spatial

let lookAtRotation = Rotation3D(
    eye: Point3D(position),
    target: Point3D(x: 0, y: 0, z: 0),
    up: Vector3D(x: 0, y: 1, z: 0)
)
myObj.quaternion = lookAtRotation.quaternion

So my question is: why is this happening, and how can I fix it? Thanks
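One possible explanation: when the eye direction becomes nearly parallel to the up vector, the look-at basis is degenerate and the solved roll can jump. A sketch of one mitigation, switching to an alternate up axis near the pole (the threshold is arbitrary, and the target is assumed to be the origin):

import Spatial
import simd

// Look-at that avoids the degenerate case where the view direction
// aligns with the up vector.
func stableLookAtOrigin(from position: SIMD3<Double>) -> simd_quatd {
    let dir = simd_normalize(-position)   // eye toward the origin
    let nearPole = abs(simd_dot(dir, SIMD3<Double>(0, 1, 0))) > 0.99
    let up = nearPole ? Vector3D(x: 0, y: 0, z: 1) : Vector3D(x: 0, y: 1, z: 0)
    let lookAtRotation = Rotation3D(
        eye: Point3D(x: position.x, y: position.y, z: position.z),
        target: Point3D(x: 0, y: 0, z: 0),
        up: up
    )
    return lookAtRotation.quaternion
}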
Posted by MKX.
Post not yet marked as solved
0 Replies
561 Views
I wrote an iOS plug-in to integrate MetalFX spatial upscaling into a Unity URP project.

C# code in Unity:

namespace UnityEngine.Rendering.Universal
{
    /// <summary>
    /// Renders the post-processing effect stack.
    /// </summary>
    internal class PostProcessPass : ScriptableRenderPass
    {
        RenderTexture _dstRT = null;

        [DllImport("__Internal")]
        private static extern void MetalFX_SpatialScaling(IntPtr srcTexture, IntPtr dstTexture, IntPtr outTexture);
    }
}

void RenderFinalPass(CommandBuffer cmd, ref RenderingData renderingData)
{
    // ......
    case ImageUpscalingFilter.MetalFX:
    {
        var upscaleRtDesc = tempRtDesc;
        upscaleRtDesc.width = cameraData.pixelWidth;
        upscaleRtDesc.height = cameraData.pixelHeight;
        RenderingUtils.ReAllocateIfNeeded(ref m_UpscaledTarget, upscaleRtDesc, FilterMode.Point, TextureWrapMode.Clamp, name: "_UpscaledTexture");
        var metalfxInputSize = new Vector2(cameraData.cameraTargetDescriptor.width, cameraData.cameraTargetDescriptor.height);
        if (_dstRT == null)
        {
            _dstRT = new RenderTexture(upscaleRtDesc.width, upscaleRtDesc.height, 0, RenderTextureFormat.ARGB32);
            _dstRT.Create();
        }
        // Call the native plug-in.
        cmd.SetRenderTarget(m_UpscaledTarget, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);
        MetalFX_SpatialScaling(sourceTex.rt.GetNativeTexturePtr(), m_UpscaledTarget.rt.GetNativeTexturePtr(), _dstRT.GetNativeTexturePtr());
        Graphics.CopyTexture(_dstRT, m_UpscaledTarget.rt);
        sourceTex = m_UpscaledTarget;
        PostProcessUtils.SetSourceSize(cmd, upscaleRtDesc);
        break;
    }
    // .....
}

Objective-C code in iOS. Header file:

#import <Foundation/Foundation.h>
#import <MetalFX/MTLFXSpatialScaler.h>

@protocol MTLTexture;
@protocol MTLDevice;

API_AVAILABLE(ios(16.0))
@interface MetalFXDelegate : NSObject
{
    int mode;
    id<MTLDevice> _device;
    id<MTLCommandQueue> _commandQueue;
    id<MTLTexture> _outTexture;
    id<MTLFXSpatialScaler> _mfxSpatialScaler;
    id<MTLBlitCommandEncoder> _mfxSpatialEncoder;
}

- (void)SpatialScaling:(id<MTLTexture>)srcTexture
            dstTexture:(id<MTLTexture>)dstTexture
            outTexture:(id<MTLTexture>)outTexture;

- (void)saveTexturePNG:(id<MTLTexture>)texture url:(CFURLRef)url;

@end

Implementation file:

#import "MetalFXOC.h"

@implementation MetalFXDelegate

static MetalFXDelegate *delegateObject = nil;

- (id)init
{
    self = [super init];
    return self;
}

- (void)SpatialScaling:(id<MTLTexture>)srcTexture
            dstTexture:(id<MTLTexture>)dstTexture
            outTexture:(id<MTLTexture>)outTexture
{
    int width = (int)srcTexture.width;
    int height = (int)srcTexture.height;
    int dstWidth = (int)dstTexture.width;
    int dstHeight = (int)dstTexture.height;

    if (_mfxSpatialScaler == nil) {
        MTLFXSpatialScalerDescriptor *desc = [[MTLFXSpatialScalerDescriptor alloc] init];
        desc.inputWidth = width;
        desc.inputHeight = height;
        desc.outputWidth = dstWidth;   // _screenWidth
        desc.outputHeight = dstHeight; // _screenHeight
        desc.colorTextureFormat = srcTexture.pixelFormat;
        desc.outputTextureFormat = dstTexture.pixelFormat;
        if (@available(iOS 16.0, *)) {
            desc.colorProcessingMode = MTLFXSpatialScalerColorProcessingModePerceptual;
        } else {
            // Fallback on earlier versions.
        }

        _device = MTLCreateSystemDefaultDevice();
        _mfxSpatialScaler = [desc newSpatialScalerWithDevice:_device];
        if (_mfxSpatialScaler == nil) {
            return;
        }
        _commandQueue = [_device newCommandQueue];

        MTLTextureDescriptor *texdesc = [[MTLTextureDescriptor alloc] init];
        texdesc.width = (int)dstTexture.width;
        texdesc.height = (int)dstTexture.height;
        texdesc.storageMode = MTLStorageModePrivate;
        texdesc.usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
        texdesc.pixelFormat = dstTexture.pixelFormat;
        _outTexture = [_device newTextureWithDescriptor:texdesc];
    }

    id<MTLCommandBuffer> upscaleCommandBuffer = [_commandQueue commandBuffer];
    upscaleCommandBuffer.label = @"Upscale Command Buffer";
    _mfxSpatialScaler.colorTexture = srcTexture;
    _mfxSpatialScaler.outputTexture = _outTexture;
    [_mfxSpatialScaler encodeToCommandBuffer:upscaleCommandBuffer];
    // outTexture = _outTexture;

    id<MTLCommandBuffer> textureCommandBuffer = [_commandQueue commandBuffer];
    id<MTLBlitCommandEncoder> _mfxSpatialEncoder = [textureCommandBuffer blitCommandEncoder];
    [_mfxSpatialEncoder copyFromTexture:_outTexture toTexture:outTexture];
    [_mfxSpatialEncoder endEncoding];
    [upscaleCommandBuffer commit];
    // Note: textureCommandBuffer, which carries the blit into outTexture,
    // is never committed here.
}

@end

extern "C" {
void MetalFX_SpatialScaling(void *srcTexturePtr, void *dstTexturePtr, void *outTexturePtr)
{
    if (delegateObject == nil) {
        if (@available(iOS 16.0, *)) {
            delegateObject = [[MetalFXDelegate alloc] init];
        } else {
            // Fallback on earlier versions.
        }
    }
    if (srcTexturePtr == nil || dstTexturePtr == nil || outTexturePtr == nil) {
        return;
    }
    id<MTLTexture> srcTexture = (__bridge id<MTLTexture>)srcTexturePtr;
    id<MTLTexture> dstTexture = (__bridge id<MTLTexture>)dstTexturePtr;
    id<MTLTexture> outTexture = (__bridge id<MTLTexture>)outTexturePtr;
    if (@available(iOS 16.0, *)) {
        [delegateObject SpatialScaling:srcTexture dstTexture:dstTexture outTexture:outTexture];
    }
    return;
}
}

With this C# and Objective-C code, what appears on screen is black. If I save the MTLTexture to a PNG in the iOS plug-in, the PNG is fine (not black), so I think the write into outTexture, which is returned to Unity, is failing.
Posted by Hsuehnj.
Post marked as solved
3 Replies
981 Views
Hello everyone! I have a small question about one little thing when it comes to programming in Metal. There are some models that I wish to use, along with animations and skins on them; their file extension is glTF. glTF has been used in a number of projects such as Unity, Unreal Engine, Godot, and Blender. I was wondering whether Metal supports this file format or not. Does anyone here know the answer?
Post not yet marked as solved
0 Replies
563 Views
Hi, I'm trying to use metal-cpp, but I get compile errors:

ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162):

template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
    if (m_pObject)
    {
        m_pObject->release();
    }
}

Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149):

template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
    return (__bridge _Dst)pObj;
#else
    return (_Dst)pObj;
#endif // __OBJC__
}

The Xcode project was generated using CMake:

target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME} PRIVATE
    "-Wgnu-anonymous-struct"
    "-Wold-style-cast"
    "-Wdtor-name"
    "-Wpedantic"
    "-Wno-gnu"
)

Maybe I need to set some CMake flags for the C++ compiler?