SceneKit


Create 3D games and add 3D content to apps using SceneKit's high-level scene descriptions.

SceneKit Documentation

Posts under SceneKit tag

88 Posts
Post not yet marked as solved
1 Reply
80 Views
Hello, I am currently working on a project where I am creating a bookstore visualization with racks and shelves (full immersive view). I have an array of names, each representing a USDZ object that is present in my working directory. Here's the enum I am trying to iterate over:

enum AssetName: String, Codable, Hashable, CaseIterable {
    case book1 = "B1"
    case book2 = "B2"
    case book3 = "B3"
    case book4 = "B4"
}

and the code I wrote for adding objects:

import SwiftUI
import RealityKit

struct LocalAssetRealityView: View {
    let assetName: AssetName

    var body: some View {
        RealityView { content in
            if let asset = try? await ModelEntity(named: assetName.rawValue) {
                content.add(asset)
            }
        }
    }
}

Now, when I try to add multiple objects on a button click, I get the error: "Unable to present another Immersive Space when one is already requested or connected". Please suggest any solutions. Also, is there anything I can do to set positions for the objects programmatically as well?
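A minimal sketch of one way around this (not the poster's code): keep a single immersive space open and let one RealityView load every AssetName case, setting each entity's position as it is added. The 0.3 m spacing and the 1.5 m depth below are illustrative values, not values from the post.

import SwiftUI
import RealityKit

struct BookshelfRealityView: View {
    var body: some View {
        RealityView { content in
            // Load each USDZ named in the enum; ModelEntity(named:) fails quietly with try? here.
            for (index, asset) in AssetName.allCases.enumerated() {
                if let entity = try? await ModelEntity(named: asset.rawValue) {
                    // Position programmatically: 30 cm apart along X, 1 m up, 1.5 m in front.
                    entity.position = SIMD3<Float>(Float(index) * 0.3, 1.0, -1.5)
                    content.add(entity)
                }
            }
        }
    }
}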
Posted by Code2aum. Last updated.
Post not yet marked as solved
0 Replies
165 Views
I'm trying to add dynamic shadows by adding a directional light to the scene. I implemented a POC based on the latest documentation. Basically, the way shadows are rendered in RealityKit is by adding a ModelEntity into an AnchorEntity with a target of type planes. The result is that the shadows I'm getting flicker terribly. I'd add that in SceneKit there are many more shadow-related properties that let you tweak the look and feel of the shadows, and it's not hard to get a decent shadow there. I'm wondering if accurate dynamic shadows are possible in RealityKit and, if not, whether there's a plan to fix it in the next RealityKit version.
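For reference, a hedged sketch of the directional-light setup the post describes (the arView parameter, intensity, and placement are assumptions, not the poster's code); the attached shadow component is what enables dynamic shadows in RealityKit:

import RealityKit

func addSunLight(to arView: ARView) {
    let sun = DirectionalLight()
    sun.light.intensity = 5_000
    // Attaching a Shadow component makes the light cast dynamic shadows;
    // maximumDistance / depthBias are the main knobs exposed for tuning them.
    sun.shadow = DirectionalLightComponent.Shadow(maximumDistance: 3.0, depthBias: 2.0)
    // Aim the light down toward the content; the placement here is illustrative.
    sun.look(at: .zero, from: [0.5, 2.0, 0.5], relativeTo: nil)

    let lightAnchor = AnchorEntity(world: .zero)
    lightAnchor.addChild(sun)
    arView.scene.addAnchor(lightAnchor)
}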
Posted by nativ18. Last updated.
Post not yet marked as solved
1 Reply
185 Views
When I run a modified RoomPlan app in my test environment, I get two ARSessions active, sometimes more. It appears that the first one is created by SceneKit because it is related to ARSCNView. Who controls that, and what gets processed through it? I noticed that I get a lot of Session Interruptions from Sensor Failure when I am doing World Tracking, and the first one happens almost immediately. When the room capture delegates fire up, I start getting images delivered to the delegate via a second session that is collecting images. How do I tell which session is the SceneKit session and which one is the RoomCapture session on the fly when it comes through the delegate? Is there a difference in the object descriptor that I can use as a differentiator? Relying on the address of the ARSession buffer being different is okay if you get your timing right. It wasn't clear from any of the documentation that there would be TWO or more ARSessions delivering data through the delegates. The books on the use of ARKit are not much help in determining the partition of responsibilities between the origins. The buffer arrivals at the delegate functions have no clear delineation of what is delivered through which delegate, at least none discernible from the highly fragmented documentation in the developer library. Can someone give me some guidance here? Are there sources for CLEAR documentation of what is delivered via which delegate for the various interfaces?
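One hedged way to disambiguate, sketched below: hold references to the sessions you already know about and compare them against the session passed into each delegate callback (object identity, or ARSession.identifier for logging). The property names are assumptions, not from the post.

import ARKit

final class SessionRouter: NSObject, ARSessionDelegate {
    // Assign these from wherever each session is created or owned, e.g. arSCNView.session
    // for the SceneKit-backed one and the session RoomPlan drives for capture.
    weak var sceneKitSession: ARSession?
    weak var roomCaptureSession: ARSession?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if session === sceneKitSession {
            // Frames rendered through ARSCNView / SceneKit.
        } else if session === roomCaptureSession {
            // Frames coming from the RoomPlan capture pipeline.
        } else {
            // Unknown origin; ARSession.identifier (a UUID) is handy when logging.
            print("Frame from unrecognized session \(session.identifier)")
        }
    }
}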
Posted by mfstanton. Last updated.
Post not yet marked as solved
1 Reply
139 Views
Hi, I'm an experienced developer on Apple platforms (having worked on iOS/tvOS projects for more than 10 years now). However, I've only worked on applications, or simple games which didn't require more than UIKit or SwiftUI. Now, I'd like to start a new project, recreating an old game on tvOS with 3D graphics. This is not a project I plan to release, only a simple personal challenge. I'm torn between starting this project with SceneKit or Unity. On one hand, I love Apple frameworks and tools, so I guess I could progress easily with SceneKit. Also, I don't know Unity very well, but even if it's a simple project, I've seen that there are several restrictions on free plans (no custom splash screen, etc.). On the other hand, I've read several threads (i.e. this one) suggesting that SceneKit isn't going anywhere, and clearly recommending Unity because its documentation is much better and the game would be more easily portable to other platforms. Also, if I'm going to learn something new, maybe I could learn more with Unity (using a completely different platform, software and language) than I would with SceneKit and Swift stuff. What's your opinion about this? Thanks!
Posted. Last updated.
Post not yet marked as solved
0 Replies
133 Views
I'm working on a project in Xcode where I need to use a 3D model with multiple morph targets (shape keys in Blender) for animations. The model, specifically the Wolf3D_Head node, contains dozens of morph targets which are crucial for my project. Here's what I've done so far: I verified the morph targets in Blender (I can see all the morph targets correctly when opening both the original .glb file and the converted .dae file in Blender). Given that Xcode does not support .glb file format directly, I converted the model to .dae format, aiming to use it in my Xcode project. After importing the .dae file into Xcode, I noticed that Xcode does not show any morph targets for the Wolf3D_Head node or any other node in the model. I've already attempted using tools like ColladaMorphAdjuster and another version by JakeHoldom to adjust the .dae file, hoping Xcode would recognize the morph targets, but it didn't resolve the issue. My question is: How can I make Xcode recognize and display the morph targets present in the .dae file exported from Blender? Is there a specific process or tool that I need to use to ensure Xcode properly imports all the morph target information from a .dae file? Tools tried: https://github.com/JonAllee/ColladaMorphAdjuster, https://github.com/JakeHoldom/ColladaMorphAdjuster Thanks in advance!
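As a hedged diagnostic (not a fix), it can help to check at runtime whether the Collada importer attached an SCNMorpher at all, independently of what the Xcode scene editor shows. The node name Wolf3D_Head comes from the post; the file name "model.dae" and the rest are assumptions.

import SceneKit

func inspectMorphTargets() {
    guard let scene = SCNScene(named: "model.dae"),
          let head = scene.rootNode.childNode(withName: "Wolf3D_Head", recursively: true) else {
        print("Wolf3D_Head not found in the imported scene")
        return
    }
    if let morpher = head.morpher {
        print("Imported \(morpher.targets.count) morph targets")
        morpher.setWeight(1.0, forTargetAt: 0)   // drive one target to confirm it deforms the mesh
    } else {
        print("No SCNMorpher attached - the DAE's morph data was not imported")
    }
}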
Posted by PilsenUK. Last updated.
Post not yet marked as solved
0 Replies
130 Views
I am working on a game that involves piloting a ship through a long tunnel. I want the tunnel to always be the same size as the screen, so that the four edges of the screen represent the tunnel. I'm using an SCNShapeNode to create each of the four walls, but I can't seem to figure out how to keep them pinned to the edges of the screen. I've tried converting the screen size to a location in the scene, but most often it doesn't really work at all, or at best only gets close. Any ideas on how to make this work regardless of device would be greatly appreciated. Thanks.
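A hedged sketch of one approach (not a verified fix): unproject the view's corner points at a chosen depth to find where the screen edges sit in world space, then size and place the wall nodes from those coordinates. The normalizedDepth value is an assumption to tune per camera setup.

import SceneKit

func worldCorners(of scnView: SCNView, normalizedDepth: Float = 0.9) -> (bottomLeft: SCNVector3, topRight: SCNVector3) {
    let size = scnView.bounds.size
    // unprojectPoint takes view coordinates; z runs 0...1 from the near to the far clipping plane.
    let bottomLeft = scnView.unprojectPoint(SCNVector3(Float(0), Float(size.height), normalizedDepth))
    let topRight = scnView.unprojectPoint(SCNVector3(Float(size.width), Float(0), normalizedDepth))
    return (bottomLeft, topRight)
}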
Posted. Last updated.
Post not yet marked as solved
0 Replies
139 Views
I have created an App with SceneKit where I have a series of walls (lines made with a SCNShapeNode) and a ball (SCNNode using an SCNSphere). The ball is supposed to continue with a constant speed regardless of where it goes, just changing directions whenever it hits a wall. However, the ball slows down and comes to a stop after a short while. If I start it close to a wall, it bounces off the wall in the correct direction, but then still ends up slowing down to a stop after a short while. Here is how the balls are created:

let sphere = SCNSphere(radius: 1.0)
var sphereNode1: SCNNode?
let sphereNode1 = SCNNode(geometry: sphere)
let theBody1 = SCNPhysicsBody(type: .dynamic, shape: SCNPhysicsShape(geometry: sphere))
sphereNode1.worldPosition = SCNVector3(0, 20, 0)
sphereNode1.physicsBody = theBody1
//sphereNode1.physicsBody!.velocity = SCNVector3(0.0, 2.0, 0.0)
sphereNode1.physicsBody!.applyForce(SCNVector3(0.0, 2.0, 0.0), asImpulse: true)
sphereNode1.physicsBody!.physicsShape = SCNPhysicsShape(geometry: sphere)
sphereNode1.physicsBody!.isAffectedByGravity = false
sphereNode1.physicsBody!.restitution = 1.0
sphereNode1.geometry?.materials = [sphereMaterial]

I tried both giving the ball a velocity and applying a force to it. Both resulted in the above mentioned actions. I can't seem to find anywhere in the documentation that suggests a way to resolve this. I actually did this same thing in the SpriteKit environment and everything worked fine. Hopefully someone can tell me what I'm missing. Thanks, Michael
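A hedged guess at the cause rather than a confirmed diagnosis: SCNPhysicsBody applies linear and angular damping by default (0.1 each), which bleeds speed off even with gravity disabled and restitution at 1.0. Zeroing damping and friction when configuring the body is worth trying, along the lines of:

import SceneKit

func makeConstantSpeedBody(for sphere: SCNSphere) -> SCNPhysicsBody {
    let body = SCNPhysicsBody(type: .dynamic, shape: SCNPhysicsShape(geometry: sphere))
    body.isAffectedByGravity = false
    body.restitution = 1.0
    body.damping = 0.0          // no linear drag
    body.angularDamping = 0.0   // no rotational drag
    body.friction = 0.0         // contacts with walls should not slow the ball
    body.rollingFriction = 0.0
    return body
}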
Posted. Last updated.
Post not yet marked as solved
2 Replies
249 Views
In the below code I have extracted face mesh vertices from ARKit face anchors and created a custom face mesh using SceneKit SCNGeometry. This enabled me to stretch face mesh vertices as per my requirement. Now the problem I am facing is as follows. I am trying to apply a lipstick texture material which is of type SCNMaterial. Although ARSCNFaceGeometry lets me apply different textures through SCNMaterial and SCNNode, I am not able to do the same using mu CustomFaceGeometry. When I am applying a lipstick texture which looks like the image attached below, the full face is getting colored or modified, I want only that part of the face which has texture transparency as >0 and I dont want other part of the face to be modified. Can you give me a detailed solution using code? // ViewController.swift import UIKit import ARKit import SceneKit import simd class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate{ @IBOutlet weak var sceneView: ARSCNView! let vertexIndicesOfInterest = [250] var customFaceGeometry: CustomFaceGeometry! var scnFaceGeometry: SCNGeometry! private var faceUvGenerator: FaceTextureGenerator! var faceGeometry: ARSCNFaceGeometry! override func viewDidLoad() { super.viewDidLoad() sceneView.delegate = self override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) let configuration = ARFaceTrackingConfiguration() sceneView.session.run(configuration) } } extension ViewController { func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) { guard let faceAnchor = anchor as? ARFaceAnchor else { return } customFaceGeometry = CustomFaceGeometry(fromFaceAnchor: faceAnchor) let customGeometryNode = SCNNode(geometry: customFaceGeometry.geometry) customFaceGeometry.geometry.firstMaterial?.fillMode = .lines customFaceGeometry.geometry.firstMaterial?.transparency = 0.0 customFaceGeometry.geometry.firstMaterial?.isDoubleSided = true node.addChildNode(customGeometryNode) } func renderer(_ renderer: SCNSceneRenderer, willUpdate node: SCNNode, for anchor: ARAnchor) { guard let faceAnchor = anchor as? ARFaceAnchor, let faceMeshNode = node.childNodes.first else { return } DispatchQueue.main.async { self.customFaceGeometry.update(withFaceAnchor: faceAnchor, node: faceMeshNode) } } } class CustomFaceGeometry { var geometry: SCNGeometry let lipImage = UIImage(named: "Face.scnassets/lip_arks_y7.png") init(fromFaceAnchor faceAnchor: ARFaceAnchor) { self.geometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor)! 
} static func createCustomFaceGeometry(fromVertices vertices_o: [SCNVector3]) -> SCNGeometry { var vertices = vertices_o let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size) let vertexSource = SCNGeometrySource(data: vertexData, semantic: .vertex, vectorCount: vertices.count, usesFloatComponents: true, componentsPerVector: 3, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<SCNVector3>.stride) let indices: [Int32] = Array(0..<Int32(vertices.count)) let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size) let element = SCNGeometryElement(data: indexData, primitiveType: .point, primitiveCount: vertices.count, bytesPerIndex: MemoryLayout<Int32>.size) return SCNGeometry(sources: [vertexSource], elements: [element]) } static func createGeometry(fromFaceAnchor faceAnchor: ARFaceAnchor) -> SCNGeometry let vertices = faceAnchor.geometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) } return CustomFaceGeometry.createCustomFaceGeometry(fromVertices: vertices) } func update(withFaceAnchor faceAnchor: ARFaceAnchor, node: SCNNode) { if let newGeometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor) { node.geometry = newGeometry let lipstickNode = SCNNode(geometry: newGeometry) let lipstickTextureMaterial = SCNMaterial() lipstickTextureMaterial.diffuse.contents = lipImage lipstickTextureMaterial.transparency = 1.0 lipstickNode.geometry?.firstMaterial = lipstickTextureMaterial node.geometry?.firstMaterial?.fillMode = .lines node.geometry?.firstMaterial?.transparency = 0.5 } } static func createCustomSCNGeometry(from faceAnchor: ARFaceAnchor) -> SCNGeometry? { let faceGeometry = faceAnchor.geometry var vertices: [SCNVector3] = faceGeometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) } print(vertices[250]) let ll_ratio_y = Float(0.969999) vertices[290] = SCNVector3(x: vertices[290].x, y: vertices[290].y*ll_ratio_y, z: vertices[290].z) vertices[274] = SCNVector3(x: vertices[274].x, y: vertices[274].y*ll_ratio_y, z: vertices[274].z) vertices[265] = SCNVector3(x: vertices[265].x, y: vertices[265].y*ll_ratio_y, z: vertices[265].z) vertices[700] = SCNVector3(x: vertices[700].x, y: vertices[700].y*ll_ratio_y, z: vertices[700].z) vertices[730] = SCNVector3(x: vertices[730].x, y: vertices[730].y*ll_ratio_y, z: vertices[730].z) vertices[25] = SCNVector3(x: vertices[25].x, y: vertices[25].y*ll_ratio_y, z: vertices[25].z) vertices[709] = SCNVector3(x: vertices[709].x, y: vertices[709].y*ll_ratio_y, z: vertices[709].z) vertices[725] = SCNVector3(x: vertices[725].x, y: vertices[725].y*ll_ratio_y, z: vertices[725].z) vertices[710] = SCNVector3(x: vertices[710].x, y: vertices[710].y*ll_ratio_y, z: vertices[710].z) let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size) let vertexSource = SCNGeometrySource(data: vertexData, semantic: .vertex, vectorCount: vertices.count, usesFloatComponents: true, componentsPerVector: 3, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<SCNVector3>.stride) let indices: [UInt16] = faceGeometry.triangleIndices.map(UInt16.init) let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<UInt16>.size) let element = SCNGeometryElement(data: indexData, primitiveType: .triangles, primitiveCount: indices.count / 3, bytesPerIndex: MemoryLayout<UInt16>.size) return SCNGeometry(sources: [vertexSource], elements: [element]) } }
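One hedged observation, sketched below rather than a full solution: the custom geometry in the post only carries a vertex source, so a diffuse image has no texture coordinates to map against and ends up tinting the whole face. Building the geometry with a .texcoord source taken from ARFaceGeometry.textureCoordinates lets a lipstick PNG with an alpha channel cover only the lip region, leaving the rest of the face untouched. The material settings are assumptions to experiment with.

import ARKit
import SceneKit
import UIKit

func makeFaceGeometry(from faceAnchor: ARFaceAnchor, lipTexture: UIImage?) -> SCNGeometry {
    let faceGeometry = faceAnchor.geometry
    let vertices = faceGeometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
    let vertexSource = SCNGeometrySource(vertices: vertices)

    // The missing piece: per-vertex UVs supplied by ARKit for the face mesh.
    let texcoordSource = SCNGeometrySource(textureCoordinates: faceGeometry.textureCoordinates.map {
        CGPoint(x: CGFloat($0.x), y: CGFloat($0.y))
    })

    let indices = faceGeometry.triangleIndices.map(UInt16.init)
    let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)

    let geometry = SCNGeometry(sources: [vertexSource, texcoordSource], elements: [element])
    let material = SCNMaterial()
    material.diffuse.contents = lipTexture   // the PNG's transparent area outside the lips stays clear
    material.isDoubleSided = true
    geometry.firstMaterial = material
    return geometry
}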
Posted by akash-ar. Last updated.
Post not yet marked as solved
1 Reply
252 Views
Hi, My app has a volumetric window displaying some 3D content for the user. I would like the user to be able to control the color of the material using a color picker displayed below the model in the same window, but unfortunately neither ColorPicker nor Picker is functional in volumetric scenes. Attempting to use them causes the app to crash with NSInternalInconsistencyException: "Presentations are not permitted within volumetric window scenes." This seems rather limiting. Is there a way either of these components can be used? I could build a different "control panel" window, but it would not be attached to the model window, and it would get confusing if the user has multiple 3D windows open. Thank you
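A hedged workaround sketch rather than an official answer: since anything that presents (ColorPicker, Picker menus) crashes in a volumetric scene, a flat row of preset swatch buttons sidesteps presentation entirely and can live under the model in the same window. selectedColor is assumed to feed whatever code updates the material.

import SwiftUI

struct SwatchRow: View {
    @Binding var selectedColor: Color
    private let swatches: [Color] = [.red, .orange, .yellow, .green, .blue, .purple, .white]

    var body: some View {
        HStack {
            ForEach(swatches, id: \.self) { color in
                Button {
                    selectedColor = color
                } label: {
                    Circle()
                        .fill(color)
                        .frame(width: 28, height: 28)
                        // Ring the currently selected swatch.
                        .overlay(Circle().stroke(.primary, lineWidth: selectedColor == color ? 2 : 0))
                }
                .buttonStyle(.plain)
            }
        }
    }
}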
Posted by zer0x. Last updated.
Post not yet marked as solved
0 Replies
178 Views
I experience an issue with SceneKit that is driving me crazy ;( I have severe hangs when I disable Metal API Validation (which is the default when you don't run from Xcode). So is there any way to force-enable Metal API Validation for an App Store binary? (i.e. run with MTL_DEBUG_LAYER=1 for TestFlight or App Store builds) Hangs happen on Catalyst, but also on iOS if I use lightingEnvironment...
Posted by Gil. Last updated.
Post not yet marked as solved
0 Replies
142 Views
An SCNNode is created and used for either an SCNView or an SKView. SceneKit and SpriteKit are using default values. The SCNView has an SCNScene with the SCNNode under its rootNode. The SKView has an SKScene containing an SK3DNode that has an SCNScene with the SCNNode under its rootNode. There is no other code changing or adding values. Why are the colors for the SCNView less vibrant than the colors for the SKView? Is there a default to change to make them equivalent, or another value to add? I have tried changing the default SCNMaterial but only succeeded in making the image black or dark. Any help is appreciated.
Posted by philk1701. Last updated.
Post not yet marked as solved
0 Replies
252 Views
I'm porting a SceneKit app to RealityKit, eventually offering an AR experience there. I noticed that when I run it on my iPhone 15 Pro and iPad Pro with the 120Hz screen, the framerate seems to be limited to 60fps. Is there a way to increase the target framerate to 120 like I can with SceneKit? I'm setting up my arView like so:

@IBOutlet private var arView: ARView! {
    didSet {
        arView.cameraMode = .nonAR
        arView.debugOptions = [.showStatistics]
    }
}
Posted. Last updated.
Post not yet marked as solved
0 Replies
172 Views
I have a human-like rigged 3D model in a DAE file. I want to programmatically build a scene with several instances of this model in different poses. I can extract the SCNSkinner and skeleton chain from the DAE file without problem. I have discovered that to have different poses, I need to clone the skeleton chain, and clone the SCNSkinner as well, then modify the skeletons' positions. Works fine. This is done this way:

// Read the skinner from the DAE file
let skinnerNode = daeScene.rootNode.childNode(withName: "toto-base", recursively: true)! // skinner
let skeletonNode1 = skinnerNode.skinner!.skeleton!

// Adding the skinner node as a child of the skeleton node makes it easier to
// 1) clone the whole thing
// 2) add the whole thing to a scene
skeletonNode1.addChildNode(skinnerNode)

// Clone first instance to have a second instance
var skeletonNode2 = skeletonNode1.clone()

// Position and move the first instance
skeletonNode1.position.x = -3
let skeletonNode1_rightLeg = skeletonNode1.childNode(withName: "RightLeg", recursively: true)!
skeletonNode1_rightLeg.eulerAngles.x = 0.6
scene.rootNode.addChildNode(skeletonNode1)

// Position and move the second instance
skeletonNode2.position.x = 3
let skeletonNode2_leftLeg = skeletonNode2.childNode(withName: "LeftLeg", recursively: true)!
skeletonNode2_leftLeg.eulerAngles.z = 1.3
scene.rootNode.addChildNode(skeletonNode2)

It seems the boneWeights and boneIndices sources are duplicated for each skinner, so if I have, let's say, 100 instances, I eat a huge amount of memory for something that is constant. Is there any way to avoid the duplication of the boneWeights and boneIndices?
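A hedged sketch of one thing to try (not verified against this particular asset): instead of cloning the whole skinner, build each instance's SCNSkinner with the designated initializer and pass the same boneWeights and boneIndices sources every time, so only the bone node array differs per instance.

import SceneKit

func makeInstanceSkinner(from template: SCNSkinner, clonedBones: [SCNNode]) -> SCNSkinner {
    // The weight/index sources are immutable; reusing the template's instances means the
    // per-character data is just the cloned bone nodes (assumption: SceneKit does not copy
    // the sources internally when the skinner is attached to a node).
    SCNSkinner(baseGeometry: template.baseGeometry,
               bones: clonedBones,
               boneInverseBindTransforms: template.boneInverseBindTransforms,
               boneWeights: template.boneWeights,
               boneIndices: template.boneIndices)
}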
Posted. Last updated.
Post not yet marked as solved
1 Reply
212 Views
Hello fellow developers here is something that I don t fully grasp : 1/ I have a fake SceneKit with two nodes both having light 2/ I have a small widget to explore those lights and tweak some param -> in the small widget I can t update a toggle item when a new light is selected while other params are updated ! here is a short sample that illustrate what I am trying to resolve import SwiftUI import SceneKit class ShortScene { var scene = SCNScene() var lightNodes : [SCNNode] { get {scene.rootNode.childNodes(passingTest: { current, stop in current.light != nil} ) } } init() { let light1 = SCNLight() light1.castsShadow = false light1.type = .omni light1.intensity = 100 let nodelight1 = SCNNode() nodelight1.light = light1 nodelight1.name = "nodeLight1" scene.rootNode.addChildNode(nodelight1) let light2 = SCNLight() light2.castsShadow = false light2.type = .ambient light2.intensity = 300 let nodelight2 = SCNNode() nodelight2.light = light2 nodelight2.name = "nodeLight2" scene.rootNode.addChildNode(nodelight2) } } extension SCNLight : ObservableObject {} extension SCNNode : ObservableObject {} struct LightViewEx : View { @ObservedObject var lightParam : SCNLight @ObservedObject var lightNode : SCNNode var bindCol : Binding<Color> @State var castShadows : Bool init( _ _lightNode : SCNNode) { if let _light = _lightNode.light { lightParam = _light lightNode = _lightNode bindCol = Binding<Color>( get: { if let _lightcol = _lightNode.light!.color as! NSColor? { return Color(_lightcol)} else { return Color.red } }, set: { newCol in _lightNode.light!.color = NSColor(newCol) } ) castShadows = _lightNode.light!.castsShadow print( "For \(lightNode.name!) : CShadows \(castShadows)") } else { fatalError("No Light attached to Node") } } var body : some View { VStack(alignment: .leading) { Text("Light Params") Picker("Type",selection : $lightParam.type) { Text("IES").tag(SCNLight.LightType.IES) Text("Ambient").tag(SCNLight.LightType.ambient) Text("Directionnal").tag(SCNLight.LightType.directional) Text("Directionnal").tag(SCNLight.LightType.directional) Text("Omni").tag(SCNLight.LightType.omni) Text("Probe").tag(SCNLight.LightType.probe) Text("Spot").tag(SCNLight.LightType.spot) Text("Area").tag(SCNLight.LightType.area) } ColorPicker("Light Color", selection: bindCol) Text("Intensity") TextField("Intensity", value: $lightParam.intensity, formatter: NumberFormatter()) Divider() // Toggle("shadows", isOn: $lightParam.castsShadow ).onChange(of: lightParam.castsShadow, { lightParam.castsShadow.toggle() }) Toggle("CastShadows", isOn: $castShadows ) .onChange(of: castShadows) { lightParam.castsShadow = castShadows;print("castsShadows changed to \(castShadows)") } } } } struct sceneView : View { @State var _lightIdx : Int = 0 @State var shortScene = ShortScene() var body : some View { VStack(alignment: .leading) { if shortScene.lightNodes.isEmpty == false { Picker("Lights", selection: $_lightIdx) { ForEach(0..<shortScene.lightNodes.count, id: \.self) { index in Text(shortScene.lightNodes[index].name ?? "NoName" ).tag(index) } } GridRow(alignment: .top) { LightViewEx(shortScene.lightNodes[_lightIdx]) } } } } } struct testUIView: View { var body: some View { sceneView() } } #Preview { testUIView() } Something is obviously not right ! Anyone has some idea ?
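A hedged reading of the likely cause rather than a confirmed fix: @State is kept alive as long as SwiftUI considers the view to be the same instance, so castShadows holds the value captured when LightViewEx was first built and ignores later inits for other lights. Tying the subview's identity to the selection forces the state to be rebuilt; the excerpt below is the post's GridRow with one modifier added.

GridRow(alignment: .top) {
    LightViewEx(shortScene.lightNodes[_lightIdx])
        .id(_lightIdx)   // new identity per selected light, so @State (castShadows) is reinitialized
}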
Posted by Pom73. Last updated.
Post not yet marked as solved
3 Replies
570 Views
Hi there - Where would a dev go these days to get an initial understanding of SceneKit? The WWDC videos linked in various places seem to be gone?! For example, the SceneKit page at developer.apple.com features a session-videos link that comes up without any results: https://developer.apple.com/scenekit/ Any advice..? Cheers, Jan
Posted by JayMcBee. Last updated.
Post not yet marked as solved
0 Replies
285 Views
Hi, I'm trying to display an STL model file in visionOS. I import the STL file using SceneKit's ModelIO extension, add it to an empty scene USDA and then export the finished scene into a temporary USDZ file. From there I load the USDZ file as an Entity and add it onto the content. However, the model in the resulting USDZ file has no lighting and appears as an unlit solid. Please see the screenshot below: Top one is created from directly importing a USDA scene with the model already added using Reality Composer through in an Entity and works as expected. Middle one is created from importing the STL model as an MDLAsset using ModelIO, adding onto the empty scene, exporting as USDZ. Then importing USDZ into an Entity. This is what I want to be able to do and is broken. Bottom one is just for me to debug the USDZ import/export. It was added to the empty scene using Reality Composer and works as expected, therefore the USDZ export/import is not broken as far as I can tell. Full code: import SwiftUI import ARKit import SceneKit.ModelIO import RealityKit import RealityKitContent struct ContentView: View { @State private var enlarge = false @State private var showImmersiveSpace = false @State private var immersiveSpaceIsShown = false @Environment(\.openImmersiveSpace) var openImmersiveSpace @Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace var modelUrl: URL? = { if let url = Bundle.main.url(forResource: "Trent 900 STL", withExtension: "stl") { let asset = MDLAsset(url: url) asset.loadTextures() let object = asset.object(at: 0) as! MDLMesh let emptyScene = SCNScene(named: "EmptyScene.usda")! let scene = SCNScene(mdlAsset: asset) // Position node in scene and scale let node = SCNNode(mdlObject: object) node.position = SCNVector3(0.0, 0.1, 0.0) node.scale = SCNVector3(0.02, 0.02, 0.02) // Copy materials from the test model in the empty scene to our new object (doesn't really change anything) node.geometry?.materials = emptyScene.rootNode.childNodes[0].childNodes[0].childNodes[0].childNodes[0].geometry!.materials // Add new node to our empty scene emptyScene.rootNode.addChildNode(node) let fileManager = FileManager.default let appSupportDirectory = try! fileManager.url(for: .applicationSupportDirectory, in: .userDomainMask, appropriateFor: nil, create: true) let permanentUrl = appSupportDirectory.appendingPathComponent("converted.usdz") if emptyScene.write(to: permanentUrl, delegate: nil) { // We exported, now load and display return permanentUrl } } return nil }() var body: some View { VStack { RealityView { content in // Add the initial RealityKit content if let scene = try? await Entity(contentsOf: modelUrl!) { // Displays middle and bottom models content.add(scene) } if let scene2 = try? await Entity(named: "JetScene", in: realityKitContentBundle) { // Displays top model using premade scene and exported as USDA. content.add(scene2) } } update: { content in // Update the RealityKit content when SwiftUI state changes if let scene = content.entities.first { let uniformScale: Float = enlarge ? 
1.4 : 1.0 scene.transform.scale = [uniformScale, uniformScale, uniformScale] } } .gesture(TapGesture().targetedToAnyEntity().onEnded { _ in enlarge.toggle() }) VStack (spacing: 12) { Toggle("Enlarge RealityView Content", isOn: $enlarge) .font(.title) Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace) .font(.title) } .frame(width: 360) .padding(36) .glassBackgroundEffect() } .onChange(of: showImmersiveSpace) { _, newValue in Task { if newValue { switch await openImmersiveSpace(id: "ImmersiveSpace") { case .opened: immersiveSpaceIsShown = true case .error, .userCancelled: fallthrough @unknown default: immersiveSpaceIsShown = false showImmersiveSpace = false } } else if immersiveSpaceIsShown { await dismissImmersiveSpace() immersiveSpaceIsShown = false } } } } } #Preview(windowStyle: .volumetric) { ContentView() } To test this even further, I exported the generated USDZ and opened in Reality Composer. The added model was still broken while the test model in the scene was fine. This also further proved that import/export is fine and RealityKit is not doing something weird with the imported model. I am convinced this has to be something with the way I'm using ModelIO to import the STL file. Any help is appreciated. Thank you
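A hedged guess at the cause rather than a confirmed diagnosis: STL carries no normals and no materials, so the mesh that survives the USDZ round trip can come out looking unlit. Generating normals on the MDLMesh and giving the node an explicit physically based material before writing the USDZ is worth trying; the material values below are placeholders, not from the post.

import ModelIO
import SceneKit
import SceneKit.ModelIO
import UIKit

func makeLitNode(fromSTL url: URL) -> SCNNode {
    let asset = MDLAsset(url: url)
    let mesh = asset.object(at: 0) as! MDLMesh
    // STL has no normal data; ask ModelIO to compute smooth normals.
    mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)

    let node = SCNNode(mdlObject: mesh)
    let material = SCNMaterial()
    material.lightingModel = .physicallyBased
    material.diffuse.contents = UIColor.gray
    material.metalness.contents = 0.0
    material.roughness.contents = 0.6
    node.geometry?.materials = [material]
    return node
}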
Posted by zer0x. Last updated.
Post not yet marked as solved
4 Replies
388 Views
I got a Vision Pro. I need a recording function, but when I looked it up in the developer documentation, I'm a little worried because it seems to be available only in the simulator. I'm making a recording app, and I'm asking other developers if they have any information, because I don't want to build something that Apple's guidelines don't suggest or allow.
Posted. Last updated.
Post not yet marked as solved
1 Reply
273 Views
Hi everyone I'm making a small private app for my one of my engineering projects, a part of this app shows a 3D model of what it looks like in real life based on a position value of a joint that needs to be updated in real time. I was able import a USDZ of the exact model of the project, and make the proper nodes that can rotate, however I run into a problem where SceneKit takes forever to update the node, I'm not sure if my code just needs optimizing or SceneKit is just not the framework to use when needing things in a 3D model to be updated in real time I've confirmed that the device receives the values in realtime, it is just SceneKit that doesn't update the model in time I'm not very good at explaining things so I put as much detail as I possibly can and hope my problem is clear, I'm also pretty new to swift and iOS development. Here is the code I'm using import SwiftUI import SceneKit struct ModelView2: UIViewRepresentable { @State private var eulerAngle: Float = 0.0 @StateObject var service = BluetoothService() let sceneView = SCNView() func makeUIView(context: Context) -> SCNView { if let scene = SCNScene(named: "V4.usdz") { sceneView.scene = scene if let meshInstanceNode = scene.rootNode.childNode(withName: "MeshInstance", recursively: true), let meshInstance1Node = scene.rootNode.childNode(withName: "MeshInstance_1", recursively: true), let meshInstance562Node = scene.rootNode.childNode(withName: "MeshInstance_562", recursively: true) { // Rotate mesh instance around its own axis /* meshInstance562Node.eulerAngles = SCNVector3(x: 0, y: -0.01745329 * service.posititonValue, z: 0) */ print(meshInstance562Node.eulerAngles) } } sceneView.allowsCameraControl = true sceneView.autoenablesDefaultLighting = true return sceneView } func updateUIView(_ uiView: SCNView, context: Context) { if let scene = SCNScene(named: "V4.usdz") { sceneView.scene = scene if let meshInstanceNode = scene.rootNode.childNode(withName: "MeshInstance", recursively: true), let meshInstance1Node = scene.rootNode.childNode(withName: "MeshInstance_1", recursively: true), let meshInstance562Node = scene.rootNode.childNode(withName: "MeshInstance_562", recursively: true) { let boundingBox = meshInstance562Node.boundingBox let pivot = SCNMatrix4MakeTranslation( boundingBox.min.x + (boundingBox.max.x - boundingBox.min.x) / 2, boundingBox.min.y + (boundingBox.max.y - boundingBox.min.y) / 2, boundingBox.min.z + (boundingBox.max.z - boundingBox.min.z) / 2 ) meshInstance562Node.pivot = pivot meshInstance562Node.addChildNode(meshInstanceNode) meshInstance562Node.addChildNode(meshInstance1Node) var original = SCNMatrix4Identity original = SCNMatrix4Translate(original, 182.85785, 123.54999, 17.857864) // Translate along the Y-axis meshInstance562Node.transform = original print(service.posititonValue) var buffer: Float = 0.0 if service.posititonValue != buffer { meshInstance562Node.eulerAngles = SCNVector3(x: 0, y: -0.01745329 * service.posititonValue, z: 0) buffer = service.posititonValue } } } } func rotateNodeInPlace(node: SCNNode, duration: TimeInterval, angle: Float) { // Create a rotation action let rotationAction = SCNAction.rotateBy(x: 0, y: CGFloat(angle), z: 0, duration: duration) // Repeat the rotation action indefinitely // let repeatAction = SCNAction.repeatForever(rotationAction) // Run the action on the node node.runAction(rotationAction) print(node.transform) } func rotate(node: SCNNode, angle: Float) { node.eulerAngles = SCNVector3(x: 0, y: -0.01745329 * angle, z: 0) } } #Preview { ModelView2() }
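A hedged sketch of the likely fix rather than a review of the whole app: the post's updateUIView builds a brand-new SCNScene from "V4.usdz" on every SwiftUI update, which is far too slow for real-time tracking. Loading the scene once in makeUIView, caching the joint node in a coordinator, and only touching its eulerAngles afterwards keeps each update cheap. positionValue stands in for the Bluetooth service's value.

import SwiftUI
import SceneKit

struct JointModelView: UIViewRepresentable {
    var positionValue: Float

    final class Coordinator {
        var jointNode: SCNNode?
    }

    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> SCNView {
        let sceneView = SCNView()
        if let scene = SCNScene(named: "V4.usdz") {
            sceneView.scene = scene
            // Look the node up once and cache it; doing this on every update is wasted work.
            context.coordinator.jointNode = scene.rootNode.childNode(withName: "MeshInstance_562", recursively: true)
        }
        sceneView.allowsCameraControl = true
        sceneView.autoenablesDefaultLighting = true
        return sceneView
    }

    func updateUIView(_ uiView: SCNView, context: Context) {
        // Only mutate the cached node here; no scene reloading.
        context.coordinator.jointNode?.eulerAngles = SCNVector3(0, -0.01745329 * positionValue, 0)
    }
}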
Posted. Last updated.