Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit tag

344 Posts
Post not yet marked as solved
14 Replies
7.1k Views
I'm trying to convert an .obj model to .usdz using only a color map:

xcrun usdz_converter Kudde_v03/Kudde_v03.obj ./Kudde_flower_2048.usdz -color_map Final_test_1/Textures/2048/Kudde_2048_flower_lagoon_color_map.png -normal_map Final_test_1/Textures/2048/Kudde_2048_normal_map.png -v

The model converts fine and looks OK in Quick Look on my Mac, but when I look at it in Quick Look on my iPhone the model is too dark. If I open the .obj file in Xcode and SceneKit, the model also looks fine after applying the color map to the diffuse slot. It's as if the lighting is wrong in Quick Look on iPhone; the issue appears in both Object mode and AR mode.

This is what it looks like in Quick Look on an iPhone X: https://ibb.co/MG69BVb (the preview in the Files app looks fine). Quick Look on my Mac: https://ibb.co/gM626Zf. Xcode: https://ibb.co/zPgfr7f

Here is my verbose output:

usdz_converter Version: 1.009
-v: Verbose output
Primitives:
  Transform: /Kudde_v03
  Transform: /Kudde_v03/Geom
  GeomMesh: /Kudde_v03/Geom/ZBrush_defualt_group
    bound material: /Kudde_v03/Materials/default
    Replacing material
    unbind material: /Kudde_v03/Materials/default
    Binding to material /Kudde_v03/Materials/StingrayPBS_0
  GeomScope: /Kudde_v03/Materials
  ShadeMaterial: /Kudde_v03/Materials/default
  ShadeMaterial: /Kudde_v03/Materials/StingrayPBS_0
  ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/pbr
  ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/Primvar
  ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/color_map
  ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/normal_map
  ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/ao_map
  ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/emissive_map
  ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/metallic_map
  ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/roughness_map

Any ideas what's going on here, or how I can keep my models from rendering too dark on the phone?
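One hedged thing to try: Quick Look's PBR lighting can look very dark when the metallic and roughness inputs are left at their converter defaults, so supplying explicit maps sometimes helps. The -metallic_map, -roughness_map, and -ao_map options are documented usdz_converter texture flags; black.png (uniform black metallic) and white.png (uniform white roughness/AO) below are hypothetical placeholder textures, not files from the original post.

xcrun usdz_converter Kudde_v03/Kudde_v03.obj ./Kudde_flower_2048.usdz \
    -color_map Final_test_1/Textures/2048/Kudde_2048_flower_lagoon_color_map.png \
    -normal_map Final_test_1/Textures/2048/Kudde_2048_normal_map.png \
    -metallic_map black.png \
    -roughness_map white.png \
    -ao_map white.png \
    -v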
Posted
by
Post not yet marked as solved
9 Replies
9.7k Views
Hello, I was wondering what the specs of the LiDAR sensor found in the new iPad Pro are (such as the resolution and vertical field of view). How can I get this information?
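Apple doesn't publish the LiDAR hardware specification, but you can at least read what the framework delivers at the API level. A small sketch, assuming a session already running with the .sceneDepth frame semantic; the comments about typical dimensions reflect what current LiDAR devices report, not an official spec:

import ARKit

func logDepthFormat(for frame: ARFrame) {
    guard let depthMap = frame.sceneDepth?.depthMap else { return }
    let width = CVPixelBufferGetWidth(depthMap)   // typically 256 on current LiDAR devices
    let height = CVPixelBufferGetHeight(depthMap) // typically 192 on current LiDAR devices
    print("depth map resolution: \(width)x\(height)")
    print("camera intrinsics: \(frame.camera.intrinsics)")
}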
Posted
by
Post not yet marked as solved
4 Replies
6.4k Views
Greetings everyone, I am a new developer trying to use the LiDAR on the iPad Pro for potential engineering applications. It seems the new depth map feature is produced from both the LiDAR and the camera. Is there a way to access the depth points that are detected purely by the LiDAR? My application scenario is too dark for the camera to be used, so I can only rely on the LiDAR. Thank you in advance!
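As far as I know, ARKit only exposes the fused depth (ARDepthData), not raw LiDAR returns, but the per-pixel confidence map tells you how reliable each value is. A minimal sketch of reading both; DepthReader is an illustrative class name, not an Apple API:

import ARKit

final class DepthReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // sceneDepth is only offered on LiDAR devices.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics.insert(.sceneDepth)
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depth = frame.sceneDepth else { return }
        // depthMap: Float32 metres; confidenceMap: per-pixel low/medium/high reliability.
        let depthMap = depth.depthMap
        let confidence = depth.confidenceMap
        _ = (depthMap, confidence)
    }
}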
Posted
by
Post not yet marked as solved
1 Reply
639 Views
Hello, I'm setting up an AR view using scene anchors from Reality Composer. The scenes load perfectly fine the first time I enter the AR view. When I go back to the previous screen and re-enter the AR view, the app crashes before any of the scenes appear on screen. I've tried pausing and resuming the session and am still getting the following error:

validateFunctionArguments:3536: failed assertion `Fragment Function(fsRenderShadowReceiverPlane): incorrect type of texture (MTLTextureTypeCube) bound at texture binding at index 0 (expect MTLTextureType2D) for projectiveShadowMapTexture[0].'

Any help would be very much appreciated. Thanks
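Not a confirmed fix for the Metal assertion, but a sketch of the teardown/rebuild pattern worth trying when leaving and re-entering the screen (Reality Composer scene loading omitted; ARScreenViewController is an illustrative name):

import UIKit
import RealityKit
import ARKit

final class ARScreenViewController: UIViewController {
    private let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Start from a clean session each time instead of resuming whatever
        // the previous visit left behind, then reload the Reality Composer scene.
        arView.session.run(ARWorldTrackingConfiguration(),
                           options: [.resetTracking, .removeExistingAnchors])
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause and drop the anchors so no stale render resources survive.
        arView.session.pause()
        arView.scene.anchors.removeAll()
    }
}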
Posted
by
Post not yet marked as solved
4 Replies
1.5k Views
Are there any good tutorials or suggestions on creating models in Blender and exporting with the associated materials and nodes? Specifically I'm looking to see if there is an ability to export translucency associated with an object (i.e. glass bottle). I have created a simple cube with a Principled BSDF shader, but the transmission and IOR settings are not porting over. Any tips or suggestions would be helpful.
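Transmission and IOR from Blender's Principled BSDF generally don't survive the trip into USDZ. One workaround sketch, assuming RealityKit 2 (iOS 15) and an already-loaded ModelEntity, is to approximate the glass look in code with an alpha-blended PhysicallyBasedMaterial; the function name and values below are illustrative only:

import RealityKit

func applyGlassLook(to entity: ModelEntity) {
    var glass = PhysicallyBasedMaterial()
    glass.baseColor = .init(tint: .white)
    glass.roughness = .init(floatLiteral: 0.05)
    glass.metallic = .init(floatLiteral: 0.0)
    // Approximate transmission with alpha blending; the Blender Transmission/IOR
    // nodes don't carry across, so the look is reassigned after import.
    glass.blending = .transparent(opacity: .init(floatLiteral: 0.25))
    entity.model?.materials = [glass]
}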
Posted
by
Post not yet marked as solved
5 Replies
2.0k Views
Hello, I'm working on an application that will try to take measurements of parts of a human body (hand, foot..). Now that LiDAR has been integrated into the iPhone Pro and ARKit 4 is out, I would like to know whether it seems feasible to combine features of the framework, such as model creation and geometry measurement, to get precise measurements of a body part. Thanks for your help, Tom
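On the measurement side, a minimal sketch of what ARKit 4 on a LiDAR device makes straightforward, assuming a RealityKit ARView: raycast two screen points and take the distance between the hits. Whether that is precise enough for soft, curved body parts is exactly the open question, so treat this as a starting point only.

import ARKit
import RealityKit
import simd

func distanceBetweenTaps(_ pointA: CGPoint, _ pointB: CGPoint, in arView: ARView) -> Float? {
    guard
        let queryA = arView.makeRaycastQuery(from: pointA, allowing: .estimatedPlane, alignment: .any),
        let queryB = arView.makeRaycastQuery(from: pointB, allowing: .estimatedPlane, alignment: .any),
        let hitA = arView.session.raycast(queryA).first,
        let hitB = arView.session.raycast(queryB).first
    else { return nil }
    let a = hitA.worldTransform.columns.3
    let b = hitB.worldTransform.columns.3
    // Straight-line distance in metres between the two surface hits.
    return simd_distance(SIMD3(a.x, a.y, a.z), SIMD3(b.x, b.y, b.z))
}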
Posted
by
Post not yet marked as solved
1 Reply
1.5k Views
Object tracking is not working from the Unity GitHub repo https://github.com/Unity-Technologies/arfoundation-samples. It seems ARTrackedObjectManager is not getting properly instantiated (I added a debug log to check). Tested with Unity 2019.3 with AR Foundation 4.1.1 and Unity 2021.10a2 with AR Foundation 4.1.1, using Xcode 12.1, on an iPhone 7 with iOS 14.2. Plane detection and image tracking work fine from the same GitHub sample repo. For the reference object, I was able to 3D scan and generate a reference object using the AR Foundation code; the scanning experience is good and gives good results while testing with the scanning application. The reference object, along with the other sample reference objects, is included in the reference object library as advised in the manuals. I now suspect that only the right combination of Unity version, AR Foundation version, phone model, iOS version, and potentially Xcode version works. Any help or pointers would be highly appreciated. Has anybody been able to implement this successfully?
Posted
by
Post not yet marked as solved
2 Replies
1.1k Views
Hi, ARKit is a great tool; I have my small app doing things, and it's fun! But I wanted to try migrating from ARWorldTrackingConfiguration to ARGeoTrackingConfiguration (https://developer.apple.com/documentation/arkit/argeotrackingconfiguration), and then we can see that this configuration is limited to only a couple of US cities. I can't figure out why, or whether this will be expanded worldwide in the near future.
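There's no public word on expansion, but an app can at least branch on availability at runtime using the documented checks. A minimal sketch; the function name and the world-tracking fallback are just one way to structure it:

import ARKit

func startGeoTrackingIfAvailable(on session: ARSession) {
    guard ARGeoTrackingConfiguration.isSupported else { return }
    ARGeoTrackingConfiguration.checkAvailability { available, error in
        if let error = error { print(error.localizedDescription) }
        DispatchQueue.main.async {
            if available {
                session.run(ARGeoTrackingConfiguration())
            } else {
                // Not a supported location (or location services denied): stay with world tracking.
                session.run(ARWorldTrackingConfiguration())
            }
        }
    }
}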
Posted
by
Post not yet marked as solved
1 Reply
527 Views
I am developing an app using ARKit 4's ARGeoTrackingConfiguration, following https://developer.apple.com/documentation/arkit/content_anchors/tracking_geographic_locations_in_ar. I am outside of the US, so my location is not supported. I simulate a location, but the coaching state always stays at .initializing. Is there a way to test geotracking apps outside of the US?
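One thing worth logging while experimenting: the geotracking status ARKit reports through the session delegate, which spells out why coaching never progresses. A minimal sketch; GeoStatusLogger is an illustrative name:

import ARKit

final class GeoStatusLogger: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didChange geoTrackingStatus: ARGeoTrackingStatus) {
        // Outside supported regions the reason is typically .notAvailableAtLocation,
        // which is why the coaching overlay never leaves its initial phase.
        print("geo state:", geoTrackingStatus.state.rawValue,
              "state reason:", geoTrackingStatus.stateReason.rawValue)
    }
}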
Posted
by
Post not yet marked as solved
3 Replies
5k Views
Hello, I am new to this amazing AR development world. I wanted to know: if I intend to develop an app for the AR glasses that are expected to launch in the future, should I use ARKit?
Posted
by
Post not yet marked as solved
10 Replies
2.7k Views
I'm trying to replay a recorded session, but I keep getting the following errors:

2021-10-02 14:44:57.687259-0700 arZero[11120:1137758] MOVReaderInterface - ERROR - Error Domain=com.apple.videoeng.streamreaderwarning Code=0 "Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'." UserInfo={NSLocalizedDescription=Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'.}
2021-10-02 14:44:58.798050-0700 arZero[11120:1137758] [Session] ARSession <0x111d066d0>: did fail with error: Error Domain=com.apple.arkit.error Code=101 "Required sensor unavailable." UserInfo={NSLocalizedDescription=Required sensor unavailable., NSLocalizedFailureReason=A required sensor is not available on this device.}

The same video works on the iPad Pro, but not on an iPhone 12 Pro or iPhone 13 Pro. I've tried recording the video with all of the different phones.
Posted
by
Post not yet marked as solved
3 Replies
661 Views
Hi, I am using a world tracking configuration with userFaceTrackingEnabled. I am also setting the frame semantics to include scene depth. With such a combined world and front-camera configuration, is it possible to access both the camera depth and color images? Thank you, Robert
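A sketch of the combined configuration described above (LiDAR device required for scene depth). My understanding, stated with hedging, is that such a session delivers the rear capturedImage plus sceneDepth on every ARFrame, while the front camera only surfaces as ARFaceAnchor geometry rather than as an image:

import ARKit

func makeCombinedConfiguration() -> ARWorldTrackingConfiguration {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsUserFaceTracking {
        // Front-camera face tracking while the rear camera drives world tracking.
        config.userFaceTrackingEnabled = true
    }
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        // Rear-camera LiDAR depth delivered as frame.sceneDepth.
        config.frameSemantics.insert(.sceneDepth)
    }
    return config
}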
Posted
by
Post not yet marked as solved
2 Replies
2.0k Views
I'm using ARKit to collect LiDAR data. I have read the depth map, and its format is kCVPixelFormatType_DepthFloat32. I saved the depth map to a PNG image by converting it to a UIImage, but I found that the PNG depth map is incorrect; PNG only supports up to 16-bit data.

let ciImage = CIImage(cvPixelBuffer: pixelBuf)
let cgImage = context.createCGImage(ciImage, from: ciImage.extent)
let uiImage = UIImage(cgImage: cgImage!).pngData()

So I have to save the depth map to a TIFF image instead:

let ciImage = CIImage(cvPixelBuffer: pixelBuf)
do {
    try context.writeTIFFRepresentation(of: ciImage, to: path, format: context.workingFormat, colorSpace: context.workingColorSpace!, options: [:])
} catch {
    self.showInfo += "Save TIFF failed;"
    print(error)
}

How can I convert the depth map from kCVPixelFormatType_DepthFloat32 to Float16? Is there a correct way to save the depth map to a PNG image?
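PNG has no floating-point sample type, so one workaround sketch is to quantize the Float32 metres into 16-bit millimetres before encoding. writeDepthAsPNG16 below is illustrative, not an Apple API, and it assumes iOS 14+ for UTType; it loses precision beyond 1 mm and clips depths beyond 65.535 m:

import ARKit
import ImageIO
import UniformTypeIdentifiers

func writeDepthAsPNG16(_ depthMap: CVPixelBuffer, to url: URL) {
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return }

    // Quantize Float32 metres to UInt16 millimetres, skipping non-finite values.
    var pixels = [UInt16](repeating: 0, count: width * height)
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            let metres = row[x]
            let millimetres = metres.isFinite ? metres * 1000 : 0
            pixels[y * width + x] = UInt16(min(max(millimetres, 0), 65535))
        }
    }

    let data = pixels.withUnsafeBufferPointer { Data(buffer: $0) }
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue | CGBitmapInfo.byteOrder16Little.rawValue)
    guard
        let provider = CGDataProvider(data: data as CFData),
        let image = CGImage(width: width, height: height,
                            bitsPerComponent: 16, bitsPerPixel: 16,
                            bytesPerRow: width * 2,
                            space: CGColorSpaceCreateDeviceGray(),
                            bitmapInfo: bitmapInfo,
                            provider: provider, decode: nil,
                            shouldInterpolate: false, intent: .defaultIntent),
        let destination = CGImageDestinationCreateWithURL(url as CFURL, UTType.png.identifier as CFString, 1, nil)
    else { return }

    CGImageDestinationAddImage(destination, image, nil)
    CGImageDestinationFinalize(destination)
}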
Posted
by
Post not yet marked as solved
1 Reply
973 Views
I'm making an app that captures data using ARKit and will ultimately send the images + depth + gravity to an Object Capture photogrammetry session. I need to use the depth data and produce a model with correct scale, so from what I understand I need to send the depth file and set the proper EXIF data in the image. Since I'm getting the images and depth from ARKit, I'll need to set the EXIF data manually before saving the images. Unfortunately the documentation on this is a bit light, so would you be able to tell me what EXIF data needs to be set in order for the photogrammetry to create the model at the proper scale? If I set up my PhotogrammetrySample with manual metadata like this:

var sample = PhotogrammetrySample(id: id, image: image)
var dict: [String: Any] = [:]
dict["FocalLength"] = 23.551325
dict["PixelWidth"] = 1920
dict["PixelHeight"] = 1440
sample.metadata = dict

I get the following error in the output, and the depth is ignored:

[Photogrammetry] Can't use FocalLenIn35mmFilm to produce FocalLengthInPixel! Punting...
Posted
by
Post not yet marked as solved
2 Replies
981 Views
I am trying to use ARKit to create a 3D .ply map of a room with LiDAR, similar to what apps like 3D Scanner App do. However, I would like to access the wide-angle image frames that are captured throughout the process. I've read on various forums that this is not possible because Apple has locked those images away from developers and they are only used by the algorithm that creates the depth map. Is it possible to access these images, and if not, is there a reason?
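For what it's worth, the color frames ARKit itself works from are exposed on every ARFrame as capturedImage; whether those are exactly the frames the depth algorithm consumes internally, I can't say. A small sketch of turning one into JPEG data during the scan:

import ARKit
import CoreImage

let ciContext = CIContext()

func jpegData(for frame: ARFrame) -> Data? {
    // capturedImage is the full-resolution color frame the session is rendering from.
    let image = CIImage(cvPixelBuffer: frame.capturedImage)
    guard let sRGB = CGColorSpace(name: CGColorSpace.sRGB) else { return nil }
    return ciContext.jpegRepresentation(of: image, colorSpace: sRGB, options: [:])
}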
Posted
by
Post not yet marked as solved
2 Replies
2.9k Views
Hello folks! How can I get a real-world measurement between the device (iPad Pro, 5th gen.) and an object measured with the LiDAR? Let's say I have a reticle in the middle of my camera view and want to measure precisely from my position to the point I'm aiming at, almost like Apple's Measure app. sceneDepth doesn't give me anything useful, and I also looked into the sample code "Capturing Depth Using the LiDAR Camera". Any ideas how to do that? A push in the right direction would also be very helpful. Thanks in advance!
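A sketch of one way to do this with a raycast instead of sampling sceneDepth directly, assuming a RealityKit ARView on a LiDAR device: raycast from the screen centre (the reticle) and take the distance from the current camera pose to the hit. The function name is illustrative only.

import ARKit
import RealityKit
import simd

func distanceToReticle(in arView: ARView) -> Float? {
    let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
    guard
        let query = arView.makeRaycastQuery(from: center, allowing: .estimatedPlane, alignment: .any),
        let hit = arView.session.raycast(query).first,
        let frame = arView.session.currentFrame
    else { return nil }
    let cam = frame.camera.transform.columns.3
    let point = hit.worldTransform.columns.3
    // Straight-line distance in metres from the device to the surface under the reticle.
    return simd_distance(SIMD3(cam.x, cam.y, cam.z), SIMD3(point.x, point.y, point.z))
}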
Posted
by
Post not yet marked as solved
3 Replies
1.5k Views
Hello, I created a simple cube in Blender with two animations: one moves the cube up and down, and the second rotates the cube in place. I exported this file in glb format and tried to convert it using Reality Converter; unfortunately, I can only see one animation. Is this a limitation of Reality Converter? Can I include more than one animation? The original glb file has both animations inside; as you can see from the screenshot, I checked the file using an online glb viewer and there is no problem, both animations are there. The converter unfortunately only sees the last one created. Any reason or explanation? I believe it is a limitation of Reality Converter. Regards
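One way to confirm where the second animation is lost is to load the converted USDZ in RealityKit and list what actually survived; "Cube" below is a placeholder asset name for the converted file in the app bundle:

import RealityKit

func listConvertedAnimations() {
    do {
        let entity = try Entity.load(named: "Cube")
        print("animations found:", entity.availableAnimations.count)
        entity.availableAnimations.forEach { print($0.name ?? "unnamed") }
    } catch {
        print("load failed:", error)
    }
}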
Posted
by
Post not yet marked as solved
18 Replies
11k Views
I found that almost all third-party applications cannot focus when they ask me to take a photo of an ID card. The same issue is described on Twitter: https://twitter.com/mlinzner/status/1570759943812169729?s=20&t=n_Z5lTJac3L5QVRK7fbJZg Who can fix it, Apple or the third-party developers?
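Whether the root cause sits with the system or the apps, a hedged sketch of what a capture app can do on its own side with AVFoundation: opt in to continuous autofocus and bias the focus range toward near subjects such as a card held close to the lens.

import AVFoundation

func configureCloseFocus(for device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    if device.isFocusModeSupported(.continuousAutoFocus) {
        device.focusMode = .continuousAutoFocus
    }
    if device.isAutoFocusRangeRestrictionSupported {
        // Hint that subjects (e.g. an ID card held close) are near the lens.
        device.autoFocusRangeRestriction = .near
    }
}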
Posted
by