3D Graphics


Discuss integrating three-dimensional graphics into your app.

Posts under 3D Graphics tag

30 Posts
Post not yet marked as solved
4 Replies
2.6k Views
Hi, I am trying to build and run the HelloPhotogrammetry app that is associated with WWDC21 session 10076 (available for download here). But when I run the app, I get the following error message: "A GPU with supportsRaytracing is required". I have a Mac Pro (2019) with an AMD Radeon Pro 580X 8 GB graphics card and 96 GB RAM. According to the requirements slide in the WWDC session, this should be sufficient. Is this a configuration issue, or do I actually need a different graphics card (and if so, which one)? Thanks in advance.
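For anyone hitting the same message, a quick way to see what Metal itself reports for each installed GPU (a minimal Swift sketch for macOS, not part of the sample) is to query supportsRaytracing on every device:

import Metal

// List every GPU Metal exposes and whether it reports ray-tracing support.
// HelloPhotogrammetry / Object Capture needs at least one device where this
// prints true.
for device in MTLCopyAllDevices() {
    print("\(device.name): supportsRaytracing = \(device.supportsRaytracing)")
}

If the Radeon Pro 580X prints false here, the GPU itself is the limiting factor regardless of the amount of system RAM.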
Post marked as solved
3 Replies
2.5k Views
So I'm trying to make a simple scene with some geometry of sorts and a movable camera. So far I've been able to render basic geometry in 2D alongside transforming that geometry using matrices. Following this I moved on to the Calculating Primitive Visibility Using Depth Testing sample ... also smooth sailing. Then I had my first go at transforming positions between different coordinate spaces. I didn't get very far with my rather blurry memory of OpenGL, although when I compared my view and projection matrices with the ones from the OpenGL glm::lookAt() and glm::perspective() functions, there seemed to be no fundamental differences. Figuring Metal does things differently, I browsed the Metal Sample Code library for a sample containing a first-person camera. The only one I could find was Rendering Terrain Dynamically with Argument Buffers. Luckily it contained code for calculating view and projection matrices, which seemed to differ from my code. But I still have problems.

Problem Description

When positioning the camera right in front of the geometry, the view as well as the projection matrix produce seemingly accurate results: Camera Position (0, 0, 1); Camera Direction (0, 0, -1). When moving further away, though, parts of the scene are wrongfully culled, notably the ones farther away from the camera: Camera Position (0, 0, 2); Camera Direction (0, 0, -1). Rotating the camera also produces confusing results: Camera Position (0, 0, 1); Camera Direction (cos(250°), 0, sin(250°)) - yes, I converted to radians.

My Suspicions

The projection isn't converting the vertices from view space to normalized device coordinates correctly. Also, when comparing the first two images, the lower part of the triangle seems to get bigger as the camera moves away, which also doesn't appear to be right. Obviously the view matrix is also not correct, as I'm pretty sure what's described above isn't supposed to happen.
Code Samples

MainShader.metal:

#include <metal_stdlib>
#include <Shared/Primitives.h>
#include <Shared/MainRendererShared.h>
using namespace metal;

struct transformed_data {
    float4 position [[position]];
    float4 color;
};

vertex transformed_data vertex_shader(uint vertex_id [[vertex_id]],
                                      constant _vertex *vertices [[buffer(0)]],
                                      constant _uniforms& uniforms [[buffer(1)]])
{
    transformed_data output;

    float3 dir = {0, 0, -1};
    float3 inEye = float3{ 0, 0, 1 }; // position
    float3 inTo = inEye + dir;        // position + direction
    float3 inUp = float3{ 0, 1, 0 };

    float3 z = normalize(inTo - inEye);
    float3 x = normalize(cross(inUp, z));
    float3 y = cross(z, x);
    float3 t = (float3) { -dot(x, inEye), -dot(y, inEye), -dot(z, inEye) };
    float4x4 viewm = float4x4(float4 { x.x, y.x, z.x, 0 },
                              float4 { x.y, y.y, z.y, 0 },
                              float4 { x.z, y.z, z.z, 0 },
                              float4 { t.x, t.y, t.z, 1 });

    float _nearPlane = 0.1f;
    float _farPlane = 100.0f;
    float _aspectRatio = uniforms.viewport_size.x / uniforms.viewport_size.y;
    float va_tan = 1.0f / tan(0.6f * 3.14f * 0.5f);
    float ys = va_tan;
    float xs = ys / _aspectRatio;
    float zs = _farPlane / (_farPlane - _nearPlane);
    float4x4 projectionm = float4x4((float4){ xs,  0,  0, 0},
                                    (float4){  0, ys,  0, 0},
                                    (float4){  0,  0, zs, 1},
                                    (float4){  0,  0, -_nearPlane * zs, 0 });

    float4 projected = (projectionm * viewm) * float4(vertices[vertex_id].position, 1);
    vector_float2 viewport_dim = vector_float2(uniforms.viewport_size);
    output.position = vector_float4(0.0, 0.0, 0.0, 1.0);
    output.position.xy = projected.xy / (viewport_dim / 2);
    output.position.z = projected.z;
    output.color = vertices[vertex_id].color;
    return output;
}

fragment float4 fragment_shader(transformed_data in [[stage_in]]) { return in.color; }

These are the vertex definitions:

let triangle_vertices = [_vertex(position: [ 480.0, -270.0, 1.0], color: [1.0, 0.0, 0.0, 1.0]),
                         _vertex(position: [-480.0, -270.0, 1.0], color: [0.0, 1.0, 0.0, 1.0]),
                         _vertex(position: [   0.0,  270.0, 0.0], color: [0.0, 0.0, 1.0, 1.0])]

// TO-DO: make this use 4 vertices and 6 indices
let quad_vertices = [_vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0])]

This is the initialisation code of the depth stencil descriptor and state:

_view.depthStencilPixelFormat = MTLPixelFormat.depth32Float
_view.clearDepth = 1.0
// other render initialisation code
let depth_stencil_descriptor = MTLDepthStencilDescriptor()
depth_stencil_descriptor.depthCompareFunction = MTLCompareFunction.lessEqual
depth_stencil_descriptor.isDepthWriteEnabled = true
depth_stencil_state = try! _view.device!.makeDepthStencilState(descriptor: depth_stencil_descriptor)!
So if you have any idea why it's not working, have some working code of your own, or know of any public samples containing a working first-person camera, feel free to help me out. Thank you in advance! (Please ignore any spelling or similar mistakes; English is not my primary language.)
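In case it helps narrow things down, here is a CPU-side sketch of the same two matrices using simd (the helper names lookAt and perspective are mine, not from any Apple sample), mirroring what the shader above hard-codes. The key difference is on the output side: write projection * view * float4(position, 1) straight to [[position]] and let the rasterizer perform the divide by w. Dividing projected.xy by half the viewport while leaving w at 1 skips the perspective divide, which could explain the distortion and the wrongful culling as the camera moves back. Note the vertices above are in pixel-like units (±480, ±270); with a real perspective projection they would need to be in world-space units or scaled by a model matrix.

import simd
import Foundation

// Minimal look-at and perspective helpers (a sketch, assuming column-major
// simd_float4x4 and Metal's [0, 1] clip-space depth range). Compute them on
// the CPU and pass them to the shader as uniforms.
func lookAt(eye: SIMD3<Float>, target: SIMD3<Float>, up: SIMD3<Float>) -> simd_float4x4 {
    let z = simd_normalize(target - eye)       // forward
    let x = simd_normalize(simd_cross(up, z))  // right
    let y = simd_cross(z, x)                   // recomputed up
    let t = SIMD3<Float>(-simd_dot(x, eye), -simd_dot(y, eye), -simd_dot(z, eye))
    return simd_float4x4(columns: (
        SIMD4<Float>(x.x, y.x, z.x, 0),
        SIMD4<Float>(x.y, y.y, z.y, 0),
        SIMD4<Float>(x.z, y.z, z.z, 0),
        SIMD4<Float>(t.x, t.y, t.z, 1)))
}

func perspective(fovyRadians fovy: Float, aspect: Float, near: Float, far: Float) -> simd_float4x4 {
    let ys = 1 / tanf(fovy * 0.5)
    let xs = ys / aspect
    let zs = far / (far - near)
    return simd_float4x4(columns: (
        SIMD4<Float>(xs, 0, 0, 0),
        SIMD4<Float>(0, ys, 0, 0),
        SIMD4<Float>(0, 0, zs, 1),
        SIMD4<Float>(0, 0, -near * zs, 0)))
}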
Post not yet marked as solved
1 Reply
1.3k Views
Hi, I want to begin by saying thank you Apple for making the Spatial framework! Please add a million more features ;-) I'm using the following code to make an object "look at" another point, but at a particular rotation the object "flips" its rotation. See a video here: https://www.dropbox.com/s/5irxt0gxou4c2j6/QuaternionFlip.mov?dl=0 I shake the mouse cursor when it happens to make it obvious to you.

import Spatial

let lookAtRotation = Rotation3D(eye: Point3D(position),
                                target: Point3D(x: 0, y: 0, z: 0),
                                up: Vector3D(x: 0, y: 1, z: 0))
myObj.quaternion = lookAtRotation.quaternion

So my question is why is this happening, and how can I fix it? thx
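One possible explanation (hedged, based on the general behaviour of look-at constructions rather than anything specific to Spatial): the flip tends to occur when the eye-to-target direction becomes nearly parallel to the up vector, where the look-at basis is degenerate and the solver is free to pick a different roll. A sketch of one common mitigation, with a hypothetical fallback axis:

import Spatial
import simd

// Switch to a fallback up axis near the degenerate configuration so the
// look-at basis stays stable. The 0.999 threshold and the fallback axis
// are placeholders to tune.
func stableLookAt(position: SIMD3<Double>, target: SIMD3<Double> = .zero) -> Rotation3D {
    let forward = simd_normalize(target - position)
    var up = SIMD3<Double>(0, 1, 0)
    if abs(simd_dot(forward, up)) > 0.999 {   // nearly parallel: basis is degenerate
        up = SIMD3<Double>(0, 0, 1)           // hypothetical fallback axis
    }
    return Rotation3D(eye: Point3D(x: position.x, y: position.y, z: position.z),
                      target: Point3D(x: target.x, y: target.y, z: target.z),
                      up: Vector3D(x: up.x, y: up.y, z: up.z))
}

An alternative is to keep the previous frame's rotation and slerp toward the new one, which also hides the discontinuity.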
Posted by MKX
Post not yet marked as solved
2 Replies
1.8k Views
Dear Apple Team and everyone who has experience with MapKit. I am building an app where I need to hide some 3D models and replace them with my own custom 3D meshes using SceneKit. Up until now I have been using Mapbox, which allows you to access the raw mesh data to reconstruct all of the map's 3D content. Is something like this possible with MapKit?

Use cases:
Say you navigate to Kennedy Space Center Launch Complex 39 and there is no 3D model of the actual building. I would like to be able to hide the simple massing and replace it with my own model.
In 3D satellite view some areas have detailed meshes, say London's The Queen's Walk. I would like to make a specific area flat so I can place my 3D model on top of the satellite 3D view to illustrate a new structure or building.
Last one: is it possible to change the colours of existing buildings? I know transparency is possible.
Thank you @apple
Post marked as solved
3 Replies
1.2k Views
I'm looking to get a GPU to use for Object Capture. The requirements are an AMD GPU with 4 GB of VRAM and ray tracing support. The RX 580 seems to be able to do ray tracing from what I've found online, but it looks like someone had an issue with a 580X here: https://developer.apple.com/forums/thread/689891
Post not yet marked as solved
1 Reply
1k Views
Pre-planning a project to use multiple 360 cameras set up in a grid to generate an immersive experience, hoping to use photogrammetry to generate 3D images of objects inside the grid. beeconcern.ca wants to expand their bee gardens, and theconcern.ca wants to use it to make a live immersive apiary experience. Still working out the best method for compiling, editing, and rendering; I have been leaning towards UE5, but am still seeking advice.
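If the photogrammetry part ends up running on macOS, the RealityKit Object Capture API is the Apple-native route. A minimal sketch (the input/output paths are placeholders, and it assumes overlapping still frames extracted from the 360 footage rather than the raw 360 video, which would likely need reprojecting/cropping first):

import Foundation
import RealityKit

// Folder of overlapping stills in, USDZ model out (hypothetical paths).
let input = URL(fileURLWithPath: "/path/to/extracted-frames", isDirectory: true)
let output = URL(fileURLWithPath: "/path/to/hive.usdz")

let session = try PhotogrammetrySession(input: input)

Task {
    for try await result in session.outputs {
        switch result {
        case .processingComplete:
            print("Done: \(output.path)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}

try session.process(requests: [.modelFile(url: output, detail: .medium)])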
Post not yet marked as solved
0 Replies
632 Views
I'm using a Mid-2014 MacBook Pro with Intel Iris graphics. There seems to be a problem when running Metal shader programs that use global constant arrays. I've reproduced the problem by making a small modification to the Learn-Metal-CPP tutorial. I've specifically modified the MSL shader program in "01-primitive.cpp" to get each triangle vertex's position and color from a global array defined in the shader itself. The shader's constant array values are identical to the values being passed in as vertex arrays in the original tutorial, so I'd expect the resulting image to look like the original tutorial's output. However, my version of the program that uses the shader global arrays produces a different, incorrect result.

Here is my shader source that produced the wrong result. You can replace the shader in Learn-Metal-CPP's 01-primitive.cpp with my shader to reproduce my result (on my hardware at least):

#include <metal_stdlib>
using namespace metal;

constant float2 myGlobalPositions[3] = {
    float2(-0.8, 0.8),
    float2(0.0, -0.8),
    float2(0.8, 0.8)
};
constant float3 myGlobalColors[3] = {
    float3(1.0, 0.3, 0.2),
    float3(0.8, 1.0, 0.0),
    float3(0.8, 0.0, 1.0)
};

struct v2f {
    float4 position [[position]];
    float3 color;
};

v2f vertex vertexMain( uint vertexId [[vertex_id]],
                       device const float3* positions [[buffer(0)]],
                       device const float3* colors [[buffer(1)]] )
{
    v2f o;

    // This uses neither of the global const arrays. It produces the correct result.
    // o.position = float4( positions[ vertexId ], 1.0 );
    // o.color = colors[ vertexId ];

    // This does not use myGlobalPositions. It produces the correct result.
    // o.position = float4( positions[ vertexId ], 1.0 );
    // o.color = myGlobalColors[vertexId];

    // This uses myGlobalPositions and myGlobalColors. IT PRODUCES THE WRONG RESULT.
    o.position = float4( myGlobalPositions[vertexId], 0.0, 1.0);
    o.color = myGlobalColors[vertexId];

    return o;
}

float4 fragment fragmentMain( v2f in [[stage_in]] )
{
    return float4( in.color, 1.0 );
}

I believe the issue has something to do with the alignment of the shader global array data. If I mess around with the sizes of the global arrays, I can sometimes make it produce the correct result. For example, making myGlobalColors start at a 32-byte-aligned boundary seems to produce the correct results. I've also attached my full source for 01-primitive.cpp in case that helps. Has anyone run into this issue? What would it take to get a fix for it? (Attachment: 01-primitive.cpp)
Post not yet marked as solved
0 Replies
504 Views
Hello, I am trying to understand the movement of a character with physics. To do this, I imported Max from the fox2 file provided by Apple. I apply a .static physics body to him, and I have a floor with a static physics body plus static blocks to test collisions. Everything works very well except that Max floats above the ground; he doesn't touch my floor. I couldn't understand why until I displayed the physics shapes via the debug options. With that I can see that if Max does not touch the ground, it is because the automatically generated shape is below Max, and it is this shape that touches the ground. So I would like to know why the shape is shifted downwards and how to correct this. I did some tests and the problem seems to come from physicsBody?.mass: if I remove the mass, the shape is correct, but then when I move my character he passes through the walls; when I put the mass back, he is correctly stopped by the static boxes... Does anyone have an idea of how to fix this? This is my simplified code:

import SceneKit
import PlaygroundSupport

// create a scene view with an empty scene
var sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
var scene = SCNScene()
sceneView.scene = scene

// start a live preview of that view
PlaygroundPage.current.liveView = sceneView

// default lighting
sceneView.autoenablesDefaultLighting = true
sceneView.allowsCameraControl = true
sceneView.debugOptions = [.showPhysicsShapes]

// a camera
var cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 3)
scene.rootNode.addChildNode(cameraNode)

// Make floor node
let floorNode = SCNNode()
let floor = SCNFloor()
floor.reflectivity = 0.25
floorNode.geometry = floor
floorNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
scene.rootNode.addChildNode(floorNode)

// add character
guard let fichier = SCNScene(named: "max.scn") else {
    fatalError("failed to load Max.scn")
}
guard let character = fichier.rootNode.childNode(withName: "Max_rootNode", recursively: true) else {
    fatalError("Failed to find Max_rootNode")
}
scene.rootNode.addChildNode(character)
character.position = SCNVector3(0, 0, 0)
character.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
character.physicsBody?.mass = 5

Thank you!
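One thing worth trying with the code above (a sketch, not taken from the Fox 2 sample; the capsule dimensions and offset are placeholders): give the character an explicit physics shape instead of relying on the automatically generated one, so the collider cannot end up offset from the visible mesh.

import SceneKit

// Replace the automatically generated physics shape with an explicit capsule
// so the collider lines up with the visible mesh. Tune the dimensions to
// Max's real size.
let capsule = SCNCapsule(capRadius: 0.3, height: 1.7)
let characterShape = SCNPhysicsShape(geometry: capsule, options: nil)

// .kinematic (or .dynamic) is the usual choice for a character you move in
// code; .static is meant for geometry that never moves.
character.physicsBody = SCNPhysicsBody(type: .kinematic, shape: characterShape)

// If the capsule still sits too low or too high relative to the node's pivot,
// offset it with a compound shape:
// let lifted = SCNPhysicsShape(
//     shapes: [characterShape],
//     transforms: [NSValue(scnMatrix4: SCNMatrix4MakeTranslation(0, 0.85, 0))])
// character.physicsBody = SCNPhysicsBody(type: .kinematic, shape: lifted)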
Post not yet marked as solved
0 Replies
548 Views
When trying to import Spatial in a project targeting macOS, I see the following error message: "Cannot load underlying module for 'Spatial'". However, when I do the exact same import in an iOS project, it works fine. In the macOS project, Xcode offers to autocomplete the word 'Spatial', so it knows about it, but I can't use it. I've also tried adding the Spatial framework to the macOS project (note I never added anything to the iOS project where this works). When adding, there is no Spatial.framework listed. There is libswiftSpatial.tbd, but there is no corresponding libswiftSpatial.dylib in /usr/lib/swift as the .tbd file says there should be. I'm kind of a n00b at this, and I don't understand what incredibly obvious thing I'm missing. The docs say Spatial is OK for macOS 13.0+, and I have 13.4. I'm using Xcode 14.3.1. The doc page refers to Spatial as both a framework and as a module, and gives no help on incorporating it into an Xcode project. Any help is appreciated, thanks. -C
Post not yet marked as solved
0 Replies
775 Views
Hello. With Unity you can import an animation and apply it to any character. For example, I can import a walking animation and apply it to all my characters. Is there an equivalent with SceneKit? I would like to apply animations programmatically without having to import them for each character specifically. Thanks
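A rough equivalent exists in SceneKit as long as the characters share the same skeleton/bone names, because a loaded animation targets joints by node name rather than being retargeted. A hedged sketch (the file name, node names, and characterA/characterB are placeholders):

import SceneKit

// Pull the first animation player out of a scene file so it can be reused.
func loadAnimationPlayer(fromSceneNamed name: String) -> SCNAnimationPlayer? {
    guard let scene = SCNScene(named: name) else { return nil }
    var player: SCNAnimationPlayer?
    scene.rootNode.enumerateChildNodes { node, stop in
        if let key = node.animationKeys.first,
           let found = node.animationPlayer(forKey: key) {
            player = found
            stop.pointee = true
        }
    }
    return player
}

// Apply the same walk cycle to any character rigged with matching bone names.
if let walk = loadAnimationPlayer(fromSceneNamed: "walk_animation.scn") {
    walk.animation.repeatCount = .infinity
    characterA.addAnimationPlayer(walk, forKey: "walk")
    characterB.addAnimationPlayer(walk.copy() as! SCNAnimationPlayer, forKey: "walk")
}

As far as I know, SceneKit has no true retargeting equivalent to Unity's humanoid system, so characters rigged with different skeletons would still need their own animation files or an external retargeting step.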
Post not yet marked as solved
5 Replies
1.4k Views
After the iOS 17 update, objects rendered in SceneKit that have both a normal map and morph targets do not render correctly. The shading and lighting appear dark and without reflections. Using a normal map without morph targets, or having morph targets on an object without a normal map, works fine. However, the combination of the two breaks the rendering. (Screenshots attached: one using diffuse, normal map and a morpher; one using diffuse and normal with no morpher.)
Post not yet marked as solved
0 Replies
815 Views
I am developing a web application that leverages WebGL to display 3D content. The app would benefit from tracking headset movement when viewing the 2D page as a Window while wearing Vision Pro. This would ultimately allow me a way to convey the idea of the Window acting as a portal into a virtual environment, as the rendered perspective of the 3D environment would match that of the user wearing the headset. This is a generic request/goal, as it would be applicable to any browser and any 6dof device, but I am interested in knowing if it is currently possible with Vision Pro (and the Simulator) and its version of Safari for "spatial computing". I can track the head movement while in a WebXR XR or "immersive" session, but I would like to be able to track it without going into VR mode. Is this possible? If so, how and using which tools?
Post marked as solved
3 Replies
1k Views
Hello everyone! I have a small concern about one little thing when it comes to programming in Metal. There are some models that I wish to use along with animations and skins on them; the file format is glTF. glTF has been used in a number of projects such as Unity, Unreal Engine, Godot, and Blender. I was wondering whether Metal supports this file format or not. Does anyone here know the answer?
Post not yet marked as solved
0 Replies
608 Views
Hi, I am trying to use metal-cpp, but I get compile errors:

ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162):

template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
    if (m_pObject)
    {
        m_pObject->release();
    }
}

Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149):

template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
    return (__bridge _Dst)pObj;
#else
    return (_Dst)pObj;
#endif // __OBJC__
}

The Xcode project was generated using CMake:

target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME} PRIVATE
    "-Wgnu-anonymous-struct"
    "-Wold-style-cast"
    "-Wdtor-name"
    "-Wpedantic"
    "-Wno-gnu"
)

Maybe I need to set some CMake flags for the C++ compiler?
Post not yet marked as solved
0 Replies
605 Views
I wrote an iOS plug-in to integrate MetalFX spatial upscaling into a Unity URP project.

C# code in Unity:

namespace UnityEngine.Rendering.Universal
{
    /// Renders the post-processing effect stack.
    internal class PostProcessPass : ScriptableRenderPass
    {
        RenderTexture _dstRT = null;

        [DllImport ("__Internal")]
        private static extern void MetalFX_SpatialScaling (IntPtr srcTexture, IntPtr dstTexture, IntPtr outTexture);
    }
}

void RenderFinalPass(CommandBuffer cmd, ref RenderingData renderingData)
{
    // ......
    case ImageUpscalingFilter.MetalFX:
    {
        var upscaleRtDesc = tempRtDesc;
        upscaleRtDesc.width = cameraData.pixelWidth;
        upscaleRtDesc.height = cameraData.pixelHeight;

        RenderingUtils.ReAllocateIfNeeded(ref m_UpscaledTarget, upscaleRtDesc, FilterMode.Point,
            TextureWrapMode.Clamp, name: "_UpscaledTexture");

        var metalfxInputSize = new Vector2(cameraData.cameraTargetDescriptor.width, cameraData.cameraTargetDescriptor.height);

        if (_dstRT == null)
        {
            _dstRT = new RenderTexture(upscaleRtDesc.width, upscaleRtDesc.height, 0, RenderTextureFormat.ARGB32);
            _dstRT.Create();
        }

        // call native plugin
        cmd.SetRenderTarget(m_UpscaledTarget, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store,
            RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);
        MetalFX_SpatialScaling(sourceTex.rt.GetNativeTexturePtr(), m_UpscaledTarget.rt.GetNativeTexturePtr(), _dstRT.GetNativeTexturePtr());
        Graphics.CopyTexture(_dstRT, m_UpscaledTarget.rt);

        sourceTex = m_UpscaledTarget;
        PostProcessUtils.SetSourceSize(cmd, upscaleRtDesc);
        break;
    }
    // .....
}

Objective-C code in iOS:

Header file:

#import <Foundation/Foundation.h>
#import <MetalFX/MTLFXSpatialScaler.h>

@protocol MTLTexture;
@protocol MTLDevice;

API_AVAILABLE(ios(16.0))
@interface MetalFXDelegate : NSObject
{
    int mode;
    id _device;
    id _commandQueue;
    id _outTexture;
    id _mfxSpatialScaler;
    id _mfxSpatialEncoder;
};

- (void)SpatialScaling: (MTLTextureRef) srcTexture
            dstTexture: (MTLTextureRef) dstTexture
            outTexture: (MTLTextureRef) outTexture;

- (void)saveTexturePNG: (MTLTextureRef) texture
                   url: (CFURLRef) url;

@end

Implementation (.m) file:

#import "MetalFXOC.h"

@implementation MetalFXDelegate

- (id)init
{
    self = [super init];
    return self;
}

static MetalFXDelegate* delegateObject = nil;

- (void)SpatialScaling: (MTLTextureRef) srcTexture
            dstTexture: (MTLTextureRef) dstTexture
            outTexture: (MTLTextureRef) outTexture
{
    int width = (int)srcTexture.width;
    int height = (int)srcTexture.height;
    int dstWidth = (int)dstTexture.width;
    int dstHeight = (int)dstTexture.height;

    if (_mfxSpatialScaler == nil) {
        MTLFXSpatialScalerDescriptor* desc;
        desc = [[MTLFXSpatialScalerDescriptor alloc] init];
        desc.inputWidth = width;
        desc.inputHeight = height;
        desc.outputWidth = dstWidth;   ///_screenWidth
        desc.outputHeight = dstHeight; ///_screenHeight
        desc.colorTextureFormat = srcTexture.pixelFormat;
        desc.outputTextureFormat = dstTexture.pixelFormat;
        if (@available(iOS 16.0, *)) {
            desc.colorProcessingMode = MTLFXSpatialScalerColorProcessingModePerceptual;
        } else {
            // Fallback on earlier versions
        }

        _device = MTLCreateSystemDefaultDevice();
        _mfxSpatialScaler = [desc newSpatialScalerWithDevice:_device];
        if (_mfxSpatialScaler == nil) {
            return;
        }

        _commandQueue = [_device newCommandQueue];

        MTLTextureDescriptor *texdesc = [[MTLTextureDescriptor alloc] init];
        texdesc.width = (int)dstTexture.width;
        texdesc.height = (int)dstTexture.height;
        texdesc.storageMode = MTLStorageModePrivate;
        texdesc.usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
        texdesc.pixelFormat = dstTexture.pixelFormat;
        _outTexture = [_device newTextureWithDescriptor:texdesc];
    }

    id upscaleCommandBuffer = [_commandQueue commandBuffer];
    upscaleCommandBuffer.label = @"Upscale Command Buffer";

    _mfxSpatialScaler.colorTexture = srcTexture;
    _mfxSpatialScaler.outputTexture = _outTexture;
    [_mfxSpatialScaler encodeToCommandBuffer:upscaleCommandBuffer];
    // outTexture = _outTexture;

    id textureCommandBuffer = [_commandQueue commandBuffer];
    id _mfxSpatialEncoder = [textureCommandBuffer blitCommandEncoder];
    [_mfxSpatialEncoder copyFromTexture:_outTexture toTexture:outTexture];
    [_mfxSpatialEncoder endEncoding];

    [upscaleCommandBuffer commit];
}

@end

extern "C" {
    void MetalFX_SpatialScaling(void* srcTexturePtr, void* dstTexturePtr, void* outTexturePtr) {
        if (delegateObject == nil) {
            if (@available(iOS 16.0, *)) {
                delegateObject = [[MetalFXDelegate alloc] init];
            } else {
                // Fallback on earlier versions
            }
        }

        if (srcTexturePtr == nil || dstTexturePtr == nil || outTexturePtr == nil) {
            return;
        }

        id<MTLTexture> srcTexture = (__bridge id<MTLTexture>)(void *)srcTexturePtr;
        id<MTLTexture> dstTexture = (__bridge id<MTLTexture>)(void *)dstTexturePtr;
        id<MTLTexture> outTexture = (__bridge id<MTLTexture>)(void *)outTexturePtr;

        if (@available(iOS 16.0, *)) {
            [delegateObject SpatialScaling: srcTexture dstTexture: dstTexture outTexture: outTexture];
        } else {
            // Fallback on earlier versions
        }
        return;
    }
}

With this C# and Objective-C code, the image on screen appears black. If I save the MTLTexture to a PNG inside the iOS plug-in, the PNG looks fine (not black), so I think the copy into outTexture (the texture handed back to Unity) is failing.
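One detail worth double-checking in the code above (an observation from reading the snippet, not a confirmed diagnosis): the blit that copies _outTexture into the outTexture Unity reads is encoded on textureCommandBuffer, but only upscaleCommandBuffer is ever committed, and the copy is encoded before the upscale work has been submitted. A sketch of the ordering I would expect, in the same Objective-C style as the plug-in:

// Re-ordered tail of SpatialScaling:, committing both command buffers.
[_mfxSpatialScaler encodeToCommandBuffer:upscaleCommandBuffer];
[upscaleCommandBuffer commit];

id<MTLCommandBuffer> blitCommandBuffer = [_commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blit = [blitCommandBuffer blitCommandEncoder];
[blit copyFromTexture:_outTexture toTexture:outTexture];
[blit endEncoding];
[blitCommandBuffer commit];
// Heavy-handed, but useful while debugging to rule out Unity sampling the
// texture before the GPU work has finished:
[blitCommandBuffer waitUntilCompleted];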
Post not yet marked as solved
0 Replies
514 Views
Hey there fellas, I am a beginner on iOS trying to find a way to capture/extract depth data from a captured image in my photo gallery. I have been using Xcode to achieve this task, but I am particularly new to Swift, so I am having trouble. I need the depth data from the image so I can work on it and manipulate it.
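A minimal Swift sketch of reading depth from an image file using ImageIO and AVFoundation. It assumes you already have a file URL for a photo captured with depth (e.g. a Portrait photo); getting that URL out of the Photos library via PHAsset is a separate step.

import AVFoundation
import ImageIO

// Pull the auxiliary depth/disparity data out of an image file, if present.
func depthData(at url: URL) -> AVDepthData? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }

    // Portrait photos usually store disparity; some assets store depth instead,
    // in which case kCGImageAuxiliaryDataTypeDepth would be the type to request.
    let auxType = kCGImageAuxiliaryDataTypeDisparity
    guard let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, auxType)
            as? [AnyHashable: Any] else { return nil }

    return try? AVDepthData(fromDictionaryRepresentation: info)
}

// Usage sketch: convert to 32-bit depth and read the pixel buffer.
// if let depth = depthData(at: photoURL)?
//         .converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) {
//     let map: CVPixelBuffer = depth.depthDataMap
//     // ... process the per-pixel depth values here
// }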
Post not yet marked as solved
1 Reply
616 Views
We have a content creation application that uses SceneKit for rendering. In our application, we have a 3D view (non-AR) and an AR "mode" the user can go into. Currently we use an SCNView and an ARSCNView to achieve this. Our application targets iOS and macOS (with AR only on iOS). With visionOS on the horizon, we're trying to bring the tech stack up to date, as SceneKit no longer seems to be supported, and isn't supported at all on visionOS. We'd like to use RealityKit for 3D rendering on all platforms (macOS, iOS and visionOS), in non-AR and AR mode where appropriate. So far this hasn't been too difficult. The greatest challenge has been adding gesture support to replace the allowsCameraControl option on the SCNView, as there is no such option on ARView.

However, now that we get to controlling shading, we're hitting a bit of a roadblock. When viewing the scene in non-AR mode, we would like to add a ground plane underneath the object that only displays a shadow; in other words, its opacity would be determined by the light contribution. I've had a dig through the CustomMaterial API and it seems extremely primitive: there doesn't seem to be any way to get light information for a particular fragment, unless I'm missing something?

Additionally, we support a custom shader that we can apply as a material. This custom shader allows the properties of the material to vary depending on the light contribution, light incidence angle, etc. Looking at CustomMaterial, the API seems to be about defining a custom material, whereas I guess we want to customise the BRDF calculation. We achieve this in SceneKit using a series of shader modifiers hooked into the various SCNShaderModifierEntryPoint entry points.

On visionOS, of course, the lack of support for CustomMaterial is a shame, but I would hope something similar can be achieved with RealityComposer? We can live with the lack of custom materials, but the shadow catcher is a killer for adoption for us. I'd even accept different, limited features on visionOS, as long as we can match our existing feature set on the existing platforms. What am I missing?
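On the shadow catcher specifically, one approach that gets suggested for RealityKit is a plane with OcclusionMaterial under the model plus a directional light with shadows enabled. This is a heavily hedged sketch: I have not verified that it behaves the same in ARView's .nonAR camera mode, and arView, modelEntity and the dimensions below are placeholders.

import RealityKit

// Assumes an existing ARView named arView and a loaded modelEntity.
let anchor = AnchorEntity(world: .zero)
arView.scene.addAnchor(anchor)

// Invisible plane; OcclusionMaterial hides whatever is behind it and is the
// usual candidate for catching grounding shadows. Whether it also catches
// this directional light's shadow in non-AR mode needs verifying.
let ground = ModelEntity(mesh: .generatePlane(width: 2, depth: 2),
                         materials: [OcclusionMaterial()])

let sun = DirectionalLight()
sun.light.intensity = 5000
sun.shadow = DirectionalLightComponent.Shadow()
sun.look(at: .zero, from: [1, 2, 1], relativeTo: nil)

anchor.addChild(ground)
anchor.addChild(sun)
anchor.addChild(modelEntity)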
Post not yet marked as solved
0 Replies
537 Views
While experimenting with AR view for different products, we came across an issue with Apple's AR viewer where glass (PBR opacity) causes a black patch to appear behind the object (maybe the shadow). https://sketchfab.com/3d-models/welcome-5ba96662ba8d4774951f33fead4bf9db https://sketchfab.com/3d-models/candel-91b2059634e0478eb93777b0b2a726e9 We tried to find a workaround, but after doing multiple tests and experiments we came to the conclusion that the Apple AR viewer is not able to recognize the glass material and adjust the ground shadow as required.
Post not yet marked as solved
0 Replies
513 Views
I have a spherical HDR image that is being used for environment lighting in a SceneKit scene. I want to rotate the environment image. To set the environment lighting, I use the lightingEnvironment SCNMaterialProperty. This works fine, and my scene is lit using the IBL. As with any SCNMaterialProperty, I expect that I can use the contentsTransform property to rotate or transform the HDR. So I set it as follows:

lightingEnvironment.contentsTransform = SCNMatrix4MakeRotation((45.0).degreesAsRadians, 0.0, 1.0, 0.0)

My expectation is that the lighting environment would rotate 45 degrees around Y, but it doesn't change at all. Even if I throw in a completely random transform on all axes, there is no apparent change. To test whether there is a change, I added a chrome ball and a diffuse ball to my scene and I'm comparing reflections on the chrome ball and lighting on the diffuse ball. There is no change on either. It doesn't matter where I set the contentsTransform; it doesn't work. I had intended to set it from the renderer(_:updateAtTime:) method of the scene renderer delegate, so that I can rotate the IBL to match the point of view of the scene, but even if I transform the environment immediately after it is set, there is never a change. Is this a bug? Or am I doing something entirely wrong? Has anyone on here ever managed to get this to work?