simd provides types and functions for small vector and matrix computations.

simd Documentation

Posts under simd tag

3 Posts
Post not yet marked as solved
1 Reply
132 Views
I'm trying to decipher this RoomPlan API data output. Dimensions: the SIMD3 is width and height, and I think thickness, in meters, but the spatial information is not clear to me and I can't find any info on it. Any insights on this?

*** CAPTURED ROOM OBJECTS ***
Walls:
Identifier: 9F4BEEBB-990C-42F3-A0BC-912E1F770877
Dimensions: SIMD3(5.800479, 2.4299998, 0.0)
Category: wall
Transform:
SIMD4(0.25821668, 0.0, 0.966087, 0.0)
SIMD4(0.0, 1.0, 0.0, 0.0)
SIMD4(-0.966087, 0.0, 0.2582167, 0.0)
SIMD4(2.463065, -0.27346277, -0.5366996, 1.0)
Identifier: A72544F5-068D-4F19-8FA4-60C1331003E3
Dimensions: SIMD3(2.28993, 2.4299998, 0.0)
Category: wall
Transform:
SIMD4(0.966087, 0.0, -0.2582166, 0.0)
SIMD4(0.0, 1.0, 0.0, 0.0)
SIMD4(0.25821656, 0.0, 0.966087, 0.0)
SIMD4(0.608039, -0.27346277, -3.0429342, 1.0)
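(An editorial aside, not from the post: if that transform is read as a standard column-major simd_float4x4, an assumption consistent with RoomPlan's simd types, the last column is the wall's center position and the upper-left 3×3 is its orientation; the dimensions vector then appears to be (width, height, thickness) in meters, with walls reporting near-zero thickness here. A minimal Swift sketch:)

import simd

// First wall from the dump above; the four SIMD4 values are read
// here as the columns of a column-major rigid transform.
let transform = simd_float4x4(columns: (
    SIMD4<Float>( 0.25821668, 0.0,         0.966087,   0.0),
    SIMD4<Float>( 0.0,        1.0,         0.0,        0.0),
    SIMD4<Float>(-0.966087,   0.0,         0.2582167,  0.0),
    SIMD4<Float>( 2.463065,  -0.27346277, -0.5366996,  1.0)
))

// Column 3 holds the wall's center in world space, in meters.
let center = SIMD3<Float>(transform.columns.3.x,
                          transform.columns.3.y,
                          transform.columns.3.z)

// The upper-left 3x3 is the rotation; for an upright wall it is a
// rotation about +Y, so the yaw falls out of the first column.
let yawRadians = atan2(transform.columns.0.z, transform.columns.0.x)

print(center)      // SIMD3<Float>(2.463065, -0.27346277, -0.5366996)
print(yawRadians)  // ≈ 1.31 rad (≈ 75°) for this wall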
Posted · Last updated
Post not yet marked as solved
0 Replies
195 Views
let dir = Vector3D(x: 0.21074744760506547, y: 0.38871720406617527, z: -1.0819520200824684).uniformlyScaled(by: 1)
let vd = Vector3D(vector: [0, 0, -1])
let rot = Rotation3D(position: Point3D(x: 0, y: 0, z: 0), target: Point3D(dir))
print(vd.rotated(by: simd_quatd(from: vd.normalized.vector, to: dir.normalized.vector)))
let r = simd.simd_quaternion(simd_float3(vd), simd_float3(dir))
print(vd.rotated(by: Rotation3D(simd_quatd(from: vd.normalized.vector, to: dir.normalized.vector))))

Result:
(x: 0.18030816736048502, y: 0.33257288514358785, z: -0.9256804204747842)
(x: 0.18030816736048502, y: 0.33257288514358785, z: -0.9256804204747842)
(x: 0.1439359315016178, y: 0.2654854115377888, z: -0.953309993592521)
(x: 0.0, y: 0.0, z: 1.0)

Expected:
(x: 0.21074744760506547, y: 0.38871720406617527, z: -1.0819520200824684)

Why does it not match the expected value?
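(An editorial aside, not from the post: simd_quatd(from:to:) expects unit vectors, and a rotation preserves length, so rotating the unit vector (0, 0, -1) can only land on dir.normalized, never on dir itself; the first two printed results are exactly dir.normalized. To reach dir, the rotated vector has to be rescaled by dir's length. A minimal sketch in plain simd:)

import simd

let dir = SIMD3<Double>(0.21074744760506547, 0.38871720406617527, -1.0819520200824684)
let vd  = SIMD3<Double>(0, 0, -1)

// simd_quatd(from:to:) expects unit vectors, and a rotation never
// changes a vector's length.
let q = simd_quatd(from: vd, to: simd_normalize(dir))
let rotated = q.act(vd)

print(rotated)                    // ≈ (0.18031, 0.33257, -0.92568), i.e. dir normalized
print(rotated * simd_length(dir)) // ≈ (0.21075, 0.38872, -1.08195), i.e. dir itself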
Posted by koyakei · Last updated
Post marked as solved
3 Replies
2.4k Views
So I'm trying to make a simple scene with some geometry of sorts and a movable camera. So far I've been able to render basic geometry in 2D alongside transforming set geometry using matrices. Following this I moved on to the Calculating Primitive Visibility Using Depth Testing sample ... also smooth sailing. Then I had my first go at transforming positions between different coordinate spaces. I didn't get very far with my rather blurry memory from OpenGL, although when I compared my view and projection matrices with the ones from the OpenGL glm::lookAt() and glm::perspective() functions, there seemed to be no fundamental differences. Figuring Metal does things differently, I browsed the Metal Sample Code library for a sample containing a first-person camera. The only one I could find was Rendering Terrain Dynamically with Argument Buffers. Luckily it contained code for calculating view and projection matrices, which seemed to differ from my code. But I still have problems.

Problem Description
When positioning the camera right in front of the geometry, the view as well as the projection matrix produce seemingly accurate results: Camera Position: (0, 0, 1); Camera Direction: (0, 0, -1). When moving further away, though, parts of the scene are being wrongfully culled, notably the ones farther away from the camera: Camera Position: (0, 0, 2); Camera Direction: (0, 0, -1). Rotating the camera also produces confusing results: Camera Position: (0, 0, 1); Camera Direction: (cos(250°), 0, sin(250°)) (yes, I converted to radians).

My Suspicions
The projection isn't converting the vertices from view space to normalized device coordinates correctly. Also, when comparing the first two images, the lower part of the triangle seems to get bigger as the camera moves away, which also doesn't appear to be right. Obviously the view matrix is also not correct, as I'm pretty sure what's described above isn't supposed to happen.
Code Samples

MainShader.metal

#include <metal_stdlib>
#include <Shared/Primitives.h>
#include <Shared/MainRendererShared.h>
using namespace metal;

struct transformed_data {
    float4 position [[position]];
    float4 color;
};

vertex transformed_data vertex_shader(uint vertex_id [[vertex_id]],
                                      constant _vertex *vertices [[buffer(0)]],
                                      constant _uniforms& uniforms [[buffer(1)]]) {
    transformed_data output;

    // Build the view matrix from a hard-coded eye, direction, and up vector.
    float3 dir = {0, 0, -1};
    float3 inEye = float3{0, 0, 1};   // position
    float3 inTo = inEye + dir;        // position + direction
    float3 inUp = float3{0, 1, 0};

    float3 z = normalize(inTo - inEye);
    float3 x = normalize(cross(inUp, z));
    float3 y = cross(z, x);
    float3 t = (float3){-dot(x, inEye), -dot(y, inEye), -dot(z, inEye)};
    float4x4 viewm = float4x4(float4{x.x, y.x, z.x, 0},
                              float4{x.y, y.y, z.y, 0},
                              float4{x.z, y.z, z.z, 0},
                              float4{t.x, t.y, t.z, 1});

    // Build the projection matrix.
    float _nearPlane = 0.1f;
    float _farPlane = 100.0f;
    float _aspectRatio = uniforms.viewport_size.x / uniforms.viewport_size.y;
    float va_tan = 1.0f / tan(0.6f * 3.14f * 0.5f);
    float ys = va_tan;
    float xs = ys / _aspectRatio;
    float zs = _farPlane / (_farPlane - _nearPlane);
    float4x4 projectionm = float4x4((float4){xs,  0,  0, 0},
                                    (float4){ 0, ys,  0, 0},
                                    (float4){ 0,  0, zs, 1},
                                    (float4){ 0,  0, -_nearPlane * zs, 0});

    float4 projected = (projectionm * viewm) * float4(vertices[vertex_id].position, 1);
    vector_float2 viewport_dim = vector_float2(uniforms.viewport_size);
    output.position = vector_float4(0.0, 0.0, 0.0, 1.0);
    output.position.xy = projected.xy / (viewport_dim / 2);
    output.position.z = projected.z;
    output.color = vertices[vertex_id].color;
    return output;
}

fragment float4 fragment_shader(transformed_data in [[stage_in]]) { return in.color; }

These are the vertex definitions:

let triangle_vertices = [_vertex(position: [ 480.0, -270.0, 1.0], color: [1.0, 0.0, 0.0, 1.0]),
                         _vertex(position: [-480.0, -270.0, 1.0], color: [0.0, 1.0, 0.0, 1.0]),
                         _vertex(position: [   0.0,  270.0, 0.0], color: [0.0, 0.0, 1.0, 1.0])]
// TO-DO: make this use 4 vertices and 6 indices
let quad_vertices = [_vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0])]

This is the initialisation code for the depth stencil descriptor and state:

_view.depthStencilPixelFormat = MTLPixelFormat.depth32Float
_view.clearDepth = 1.0
// other render initialisation code
let depth_stencil_descriptor = MTLDepthStencilDescriptor()
depth_stencil_descriptor.depthCompareFunction = MTLCompareFunction.lessEqual
depth_stencil_descriptor.isDepthWriteEnabled = true
depth_stencil_state = _view.device!.makeDepthStencilState(descriptor: depth_stencil_descriptor)!
So if you have any idea why it's not working, or have some working code of your own, or know of any public samples containing a working first-person camera, feel free to help me out. Thank you in advance! (Please ignore any spelling or similar mistakes; English is not my primary language.)
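(An editorial aside, not part of the thread: one common right-handed setup builds the view and projection matrices on the host with simd, as sketched below, and the vertex function then returns projection * view * float4(position, 1) unchanged; the GPU performs the divide by w and the viewport transform, so there is no manual division by the viewport and vertices are authored in world units rather than pixels. Assumptions here: column-major simd_float4x4 and Metal's [0, 1] clip-space depth; this is not the code from the Rendering Terrain sample.)

import simd
import Foundation

// Right-handed look-at: camera basis vectors in the columns,
// translation in the last column.
func lookAt(eye: SIMD3<Float>, target: SIMD3<Float>, up: SIMD3<Float>) -> simd_float4x4 {
    let z = simd_normalize(eye - target)        // camera looks down -z
    let x = simd_normalize(simd_cross(up, z))
    let y = simd_cross(z, x)
    return simd_float4x4(columns: (
        SIMD4<Float>(x.x, y.x, z.x, 0),
        SIMD4<Float>(x.y, y.y, z.y, 0),
        SIMD4<Float>(x.z, y.z, z.z, 0),
        SIMD4<Float>(-simd_dot(x, eye), -simd_dot(y, eye), -simd_dot(z, eye), 1)
    ))
}

// Perspective projection mapping view-space depth to Metal's [0, 1] clip range;
// clip.w receives -z so the hardware perspective divide does the rest.
func perspective(fovyRadians: Float, aspect: Float, near: Float, far: Float) -> simd_float4x4 {
    let ys = 1 / tan(fovyRadians * 0.5)
    let xs = ys / aspect
    let zs = far / (near - far)
    return simd_float4x4(columns: (
        SIMD4<Float>(xs, 0,  0,   0),
        SIMD4<Float>(0,  ys, 0,   0),
        SIMD4<Float>(0,  0,  zs, -1),
        SIMD4<Float>(0,  0,  zs * near, 0)
    ))
}

With these, a point at z = -near in view space maps to depth 0 and a point at z = -far maps to depth 1 after the divide by w, which is what the lessEqual depth test above expects.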
Posted by _tally · Last updated