3D Graphics

Discuss integrating three-dimensional graphics into your app.

Posts under 3D Graphics tag

30 Posts
Post not yet marked as solved
0 Replies
815 Views
I am developing a web application that leverages WebGL to display 3D content. The app would benefit from tracking headset movement while the 2D page is viewed as a Window in Vision Pro. This would let me treat the Window as a portal into a virtual environment: the rendered perspective of the 3D scene would match that of the user wearing the headset. The goal is generic, applicable to any browser and any 6DoF device, but I am interested in whether it is currently possible with Vision Pro (and the Simulator) and its version of Safari for "spatial computing". I can track head movement while in a WebXR "immersive" session, but I would like to be able to track it without going into VR mode. Is this possible? If so, how, and using which tools?
Post not yet marked as solved
4 Replies
2.6k Views
Hi, I am trying to build and run the HelloPhotogrammetry app associated with WWDC21 session 10076 (available for download here). But when I run the app, I get the following error message: A GPU with supportsRaytracing is required. I have a Mac Pro (2019) with an AMD Radeon Pro 580X 8 GB graphics card and 96 GB of RAM. According to the requirements slide in the WWDC session, this should be sufficient. Is this a configuration issue, or do I actually need a different graphics card (and if so, which one)? Thanks in advance.
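For anyone hitting the same wall: the gate here is a Metal device capability, which can be checked directly. A minimal sketch using the standard Metal API, run as a macOS command-line tool:

import Metal

// List every Metal device in this Mac and whether it reports ray tracing
// support -- the capability the photogrammetry sample is checking for.
for device in MTLCopyAllDevices() {
    print(device.name, "supportsRaytracing:", device.supportsRaytracing)
}

On a Radeon Pro 580X this is likely to print false, which would make this a hardware limitation rather than a configuration issue.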
Post not yet marked as solved
1 Reply
1k Views
Pre-planning a project that uses multiple 360 cameras set up in a grid to generate an immersive experience, hoping to use photogrammetry to generate 3D images of objects inside the grid. beeconcern.ca wants to expand their bee gardens, and theconcern.ca wants to use it to make a live immersive apiary experience. Still working out the best method for compiling, editing, and rendering; I have been leaning towards UE5, but am still seeking advice.
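If the frames end up on a Mac, the photogrammetry step itself can be prototyped with RealityKit's PhotogrammetrySession before committing to a pipeline. A minimal sketch, assuming a folder of stills extracted from the 360 footage (the paths are hypothetical; requires macOS 12+ and a supported GPU):

import Foundation
import RealityKit

// Feed a folder of still frames and request a single .usdz model.
let frames = URL(fileURLWithPath: "/tmp/apiary-frames", isDirectory: true)
let model = URL(fileURLWithPath: "/tmp/apiary.usdz")

let session = try PhotogrammetrySession(input: frames)
try session.process(requests: [.modelFile(url: model, detail: .medium)])

// Run from an async context (e.g. a command-line tool's async main).
for try await output in session.outputs {
    if case .processingComplete = output {
        print("Model written to \(model.path)")
    }
}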
Posted by g-arth.
Post marked as solved
3 Replies
2.5k Views
So I'm trying to make a simple scene with some geometry of sorts and a movable camera. So far I've been able to render basic geometry in 2D alongside transforming set geometry using matrices. Following this I moved on to the Calculating Primitive Visibility Using Depth Testing sample ... also smooth sailing. Then I had my first go at transforming positions between different coordinate spaces. I didn't get very far with my rather blurry memory of OpenGL, although when I compared my view and projection matrices with the ones from the OpenGL glm::lookAt() and glm::perspective() functions there seemed to be no fundamental differences. Figuring Metal does things differently, I browsed the Metal Sample Code library for a sample containing a first-person camera. The only one I could find was Rendering Terrain Dynamically with Argument Buffers. Luckily it contained code for calculating view and projection matrices, which seemed to differ from my code. But I still have problems.

Problem Description
When positioning the camera right in front of the geometry, the view as well as the projection matrix produce seemingly accurate results: Camera Position: (0, 0, 1); Camera Direction: (0, 0, -1). When moving further away, though, parts of the scene are wrongfully culled, notably the ones farther from the camera: Camera Position: (0, 0, 2); Camera Direction: (0, 0, -1). Rotating the camera also produces confusing results: Camera Position: (0, 0, 1); Camera Direction: (cos(250°), 0, sin(250°)) (yes, I converted to radians).

My Suspicions
The projection isn't converting the vertices from view space to normalized device coordinates correctly. Also, when comparing the first two images, the lower part of the triangle seems to get bigger as the camera moves away, which doesn't appear right either. Obviously the view matrix is also not correct, as I'm pretty sure what's described above isn't supposed to happen.
Code Samples

MainShader.metal

#include <metal_stdlib>
#include <Shared/Primitives.h>
#include <Shared/MainRendererShared.h>
using namespace metal;

struct transformed_data {
    float4 position [[position]];
    float4 color;
};

vertex transformed_data vertex_shader(uint vertex_id [[vertex_id]],
                                      constant _vertex *vertices [[buffer(0)]],
                                      constant _uniforms& uniforms [[buffer(1)]]) {
    transformed_data output;
    float3 dir = {0, 0, -1};
    float3 inEye = float3{ 0, 0, 1 }; // position
    float3 inTo = inEye + dir;        // position + direction
    float3 inUp = float3{ 0, 1, 0 };

    float3 z = normalize(inTo - inEye);
    float3 x = normalize(cross(inUp, z));
    float3 y = cross(z, x);
    float3 t = (float3){ -dot(x, inEye), -dot(y, inEye), -dot(z, inEye) };
    float4x4 viewm = float4x4(float4{ x.x, y.x, z.x, 0 },
                              float4{ x.y, y.y, z.y, 0 },
                              float4{ x.z, y.z, z.z, 0 },
                              float4{ t.x, t.y, t.z, 1 });

    float _nearPlane = 0.1f;
    float _farPlane = 100.0f;
    float _aspectRatio = uniforms.viewport_size.x / uniforms.viewport_size.y;
    float va_tan = 1.0f / tan(0.6f * 3.14f * 0.5f);
    float ys = va_tan;
    float xs = ys / _aspectRatio;
    float zs = _farPlane / (_farPlane - _nearPlane);
    float4x4 projectionm = float4x4((float4){ xs,  0,  0, 0 },
                                    (float4){  0, ys,  0, 0 },
                                    (float4){  0,  0, zs, 1 },
                                    (float4){  0,  0, -_nearPlane * zs, 0 });

    float4 projected = (projectionm * viewm) * float4(vertices[vertex_id].position, 1);
    vector_float2 viewport_dim = vector_float2(uniforms.viewport_size);
    output.position = vector_float4(0.0, 0.0, 0.0, 1.0);
    output.position.xy = projected.xy / (viewport_dim / 2);
    output.position.z = projected.z;
    output.color = vertices[vertex_id].color;
    return output;
}

fragment float4 fragment_shader(transformed_data in [[stage_in]]) { return in.color; }

These are the vertex definitions:

let triangle_vertices = [_vertex(position: [ 480.0, -270.0, 1.0], color: [1.0, 0.0, 0.0, 1.0]),
                         _vertex(position: [-480.0, -270.0, 1.0], color: [0.0, 1.0, 0.0, 1.0]),
                         _vertex(position: [   0.0,  270.0, 0.0], color: [0.0, 0.0, 1.0, 1.0])]
// TO-DO: make this use 4 vertices and 6 indices
let quad_vertices = [_vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0])]

This is the initialisation code of the depth stencil descriptor and state:

_view.depthStencilPixelFormat = MTLPixelFormat.depth32Float
_view.clearDepth = 1.0
// other render initialisation code
let depth_stencil_descriptor = MTLDepthStencilDescriptor()
depth_stencil_descriptor.depthCompareFunction = MTLCompareFunction.lessEqual
depth_stencil_descriptor.isDepthWriteEnabled = true
depth_stencil_state = _view.device!.makeDepthStencilState(descriptor: depth_stencil_descriptor)!
So if you have any idea why it's not working, have some working code of your own, or know of any public samples containing a working first-person camera, feel free to help me out. Thank you in advance! (Please ignore any spelling or similar mistakes; English is not my primary language.)
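One detail that may be at play here, offered as a hedged observation rather than a confirmed fix: the [[position]] output of a Metal vertex function is expected in clip space, and the rasterizer itself performs the divide by w and the viewport transform. Dividing projected.xy by half the viewport while leaving w at 1.0, as the shader above does, bypasses the perspective divide entirely. For comparison, a sketch of one conventional right-handed look-at and a Metal-style perspective matrix (NDC depth in [0, 1]), in Swift with simd:

import simd
import Darwin

// Right-handed view matrix: the camera looks down its local -z axis.
func lookAt(eye: SIMD3<Float>, target: SIMD3<Float>, up: SIMD3<Float>) -> float4x4 {
    let z = normalize(eye - target)
    let x = normalize(cross(up, z))
    let y = cross(z, x)
    let t = SIMD3<Float>(-dot(x, eye), -dot(y, eye), -dot(z, eye))
    return float4x4(columns: (SIMD4<Float>(x.x, y.x, z.x, 0),
                              SIMD4<Float>(x.y, y.y, z.y, 0),
                              SIMD4<Float>(x.z, y.z, z.z, 0),
                              SIMD4<Float>(t.x, t.y, t.z, 1)))
}

// Perspective projection mapping view-space z in [-near, -far] to NDC depth [0, 1].
func perspective(fovyRadians fovy: Float, aspect: Float, near: Float, far: Float) -> float4x4 {
    let ys = 1 / tanf(fovy * 0.5)
    let xs = ys / aspect
    let zs = far / (near - far)
    return float4x4(columns: (SIMD4<Float>(xs, 0, 0, 0),
                              SIMD4<Float>(0, ys, 0, 0),
                              SIMD4<Float>(0, 0, zs, -1),
                              SIMD4<Float>(0, 0, zs * near, 0)))
}

With matrices like these, the vertex function would output projection * view * position directly and let the hardware perform the perspective divide and viewport mapping; the vertex positions would then need to be in world units rather than pixel coordinates like (480, -270).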
Posted by _tally.
Post marked as solved
3 Replies
1.2k Views
I'm looking to get a GPU to use for Object Capture. The requirements are an AMD GPU with 4 GB of VRAM and ray tracing support. The RX 580 seems to be able to do ray tracing from what I've found online, but it looks like someone had an issue with a 580X here: https://developer.apple.com/forums/thread/689891
Post not yet marked as solved
0 Replies
775 Views
Hello. With Unity you can import an animation and apply it to any character: for example, I import a walking animation and I can apply it to all my characters. Is there an equivalent with SceneKit? I would like to apply animations programmatically, without having to import them for each character specifically. Thanks
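For reference, a minimal sketch of one common SceneKit approach, assuming the characters share a skeleton with matching bone names (the file name, key, and character nodes below are hypothetical): load the animation once from its own scene file, then attach a player for it to any number of characters.

import SceneKit

// Pull the first animation player found in a scene file that contains
// only the animation (e.g. a walk cycle exported on a bare skeleton).
func loadAnimationPlayer(fromSceneNamed sceneName: String) -> SCNAnimationPlayer? {
    guard let scene = SCNScene(named: sceneName) else { return nil }
    var player: SCNAnimationPlayer?
    scene.rootNode.enumerateChildNodes { node, stop in
        if let key = node.animationKeys.first {
            player = node.animationPlayer(forKey: key)
            stop.pointee = true
        }
    }
    return player
}

// Usage: one walk cycle shared by two characters.
if let walk = loadAnimationPlayer(fromSceneNamed: "walk.scn") {
    walk.animation.repeatCount = .infinity
    characterA.addAnimationPlayer(walk, forKey: "walk")
    characterB.addAnimationPlayer(SCNAnimationPlayer(animation: walk.animation), forKey: "walk")
    walk.play()
}

The retargeting is implicit: the animation's keyframes address bones by node name, so this only works when every character's rig uses the same naming.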
Post not yet marked as solved
0 Replies
548 Views
When trying to import Spatial in a project targeting macOS, I see the following error message: Cannot load underlying module for 'Spatial'. However, when I do the exact same import in an iOS project, it works fine. In the macOS project, Xcode offers to autocomplete the word 'Spatial', so it knows about it, but I can't use it. I've also tried adding the Spatial framework to the macOS project (note I never added anything to the iOS project where this works). When adding, there is no Spatial.framework listed. There is libswiftSpatial.tbd, but there is no corresponding libswiftSpatial.dylib in /usr/lib/swift as the .tbd file says there should be. I'm kind of a n00b at this, and I don't understand what incredibly obvious thing I'm missing. The docs say Spatial is OK for macOS 13.0+, and I have 13.4. I'm using Xcode 14.3.1. The doc page refers to Spatial as both a framework and a module, and gives no help on incorporating it into an Xcode project. Any help is appreciated, thanks. -C
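In case it helps the next person: one thing worth double-checking (an assumption, not a confirmed diagnosis) is the macOS target's deployment target. Spatial is gated on macOS 13.0, and a deployment target below that can produce exactly this "cannot load underlying module" symptom even though autocomplete still sees the name. Nothing needs to be added to the Frameworks list; with the deployment target at 13.0 or later, a bare import should be enough:

import Spatial

// Minimal smoke test: if this compiles and runs, the module is wired up.
let a = Point3D(x: 0, y: 0, z: 0)
let b = Point3D(x: 3, y: 4, z: 0)
print(a.distance(to: b)) // 5.0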
Posted by cjb9000.
Post not yet marked as solved
0 Replies
504 Views
Hello. I am trying to understand the movement of a character with physics. To do this, I imported Max from the fox2 sample file provided by Apple. I apply a .static physics body to him, and I have a floor with a static physics body plus static blocks to test collisions. Everything works very well, except that Max floats above the ground; he doesn't touch my floor. I couldn't understand why until I displayed the physicsShapes in the debug options. With that I can see that if Max does not touch the ground, it is because the automatically generated shape sits below Max, and it is this shape that touches the ground. So I would like to know why the shape is shifted downwards, and how to correct this problem. I did some tests, and the problem seems to come from physicsBody?.mass: if I remove the mass, the shape is correct, but then when I move my character he passes through the walls, whereas with the mass set he is properly stopped by the static boxes. Does someone have an idea of how to correct this? This is my simplified code:

import SceneKit
import PlaygroundSupport

// create a scene view with an empty scene
var sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
var scene = SCNScene()
sceneView.scene = scene

// start a live preview of that view
PlaygroundPage.current.liveView = sceneView

// default lighting
sceneView.autoenablesDefaultLighting = true
sceneView.allowsCameraControl = true
sceneView.debugOptions = [.showPhysicsShapes]

// a camera
var cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 3)
scene.rootNode.addChildNode(cameraNode)

// Make floor node
let floorNode = SCNNode()
let floor = SCNFloor()
floor.reflectivity = 0.25
floorNode.geometry = floor
floorNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
scene.rootNode.addChildNode(floorNode)

// add character
guard let fichier = SCNScene(named: "max.scn") else { fatalError("failed to load Max.scn") }
guard let character = fichier.rootNode.childNode(withName: "Max_rootNode", recursively: true) else { fatalError("Failed to find Max_rootNode") }
scene.rootNode.addChildNode(character)
character.position = SCNVector3(0, 0, 0)
character.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
character.physicsBody?.mass = 5

Thank you!
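Two hedged observations, not a confirmed diagnosis. First, a .static body is not meant to move at all; repositioning one bypasses collision resolution (which would explain walking through walls), and mass has no physical meaning for it, so a character you move in code is usually .kinematic. Second, letting SceneKit infer a shape from a skinned character can give surprising results, and building the shape explicitly takes mass and skinning out of the equation. A sketch, reusing the character node from the code above:

// Explicit bounding-box shape attached as a kinematic body.
let shape = SCNPhysicsShape(node: character,
                            options: [.type: SCNPhysicsShape.ShapeType.boundingBox])
character.physicsBody = SCNPhysicsBody(type: .kinematic, shape: shape)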
Post not yet marked as solved
0 Replies
632 Views
I'm using a Mid-2014 MacBook Pro with Intel Iris graphics. There seems to be a problem when running Metal shader programs that use global constant arrays. I've reproduced the problem by making a small modification to the Learn-Metal-CPP tutorial: specifically, I modified the MSL shader program in "01-primitive.cpp" so that each triangle vertex's position and color come from a global array defined in the shader itself. The shader's constant array values are identical to the values being passed in as vertex arrays in the original tutorial, so I'd expect the resulting image to match the original tutorial's. However, my version of the program that uses the shader global arrays produces a wrong, different result. Here is my shader source that produced the wrong result; you can replace the shader in Learn-Metal-CPP's 01-primitive.cpp with it to reproduce my result (on my hardware at least):

#include <metal_stdlib>
using namespace metal;

constant float2 myGlobalPositions[3] = {
    float2(-0.8, 0.8),
    float2( 0.0,-0.8),
    float2( 0.8, 0.8)
};
constant float3 myGlobalColors[3] = {
    float3(1.0, 0.3, 0.2),
    float3(0.8, 1.0, 0.0),
    float3(0.8, 0.0, 1.0)
};

struct v2f {
    float4 position [[position]];
    float3 color;
};

v2f vertex vertexMain( uint vertexId [[vertex_id]],
                       device const float3* positions [[buffer(0)]],
                       device const float3* colors [[buffer(1)]] )
{
    v2f o;

    // This uses neither of the global const arrays. It produces the correct result.
    // o.position = float4( positions[ vertexId ], 1.0 );
    // o.color = colors[ vertexId ];

    // This does not use myGlobalPositions. It produces the correct result.
    // o.position = float4( positions[ vertexId ], 1.0 );
    // o.color = myGlobalColors[vertexId];

    // This uses myGlobalPositions and myGlobalColors. IT PRODUCES THE WRONG RESULT.
    o.position = float4( myGlobalPositions[vertexId], 0.0, 1.0);
    o.color = myGlobalColors[vertexId];

    return o;
}

float4 fragment fragmentMain( v2f in [[stage_in]] )
{
    return float4( in.color, 1.0 );
}

I believe the issue has something to do with the alignment of the shader global array data. If I mess around with the sizes of the global arrays, I can sometimes make it produce the correct result. For example, making myGlobalColors start at a 32-byte-aligned boundary seems to produce the correct results. I've also attached my full source for 01-primitive.cpp in case that helps. Has anyone run into this issue? What would it take to get a fix for it? 01-primitive.cpp
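If this really is an alignment-sensitive driver bug on that GPU, one low-effort experiment (an untested sketch following the poster's own alignment observation, not a confirmed fix) is to pad both arrays to float4 elements so every entry sits on a 16-byte boundary:

// Hypothetical workaround: same data, padded to float4 per element.
constant float4 myGlobalPositions[3] = {
    float4(-0.8,  0.8, 0.0, 1.0),
    float4( 0.0, -0.8, 0.0, 1.0),
    float4( 0.8,  0.8, 0.0, 1.0)
};
constant float4 myGlobalColors[3] = {
    float4(1.0, 0.3, 0.2, 1.0),
    float4(0.8, 1.0, 0.0, 1.0),
    float4(0.8, 0.0, 1.0, 1.0)
};

// In vertexMain the padded positions can then be used directly:
//   o.position = myGlobalPositions[vertexId];
//   o.color    = myGlobalColors[vertexId].rgb;

If that changes the output, it would be worth filing a Feedback with both shader variants attached.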
Posted by giogadi.