Discuss augmented reality and virtual reality app capabilities.

Posts under AR / VR tag

118 Posts
Post not yet marked as solved
2 Replies
643 Views
Hi, I am a student from London studying app development in Arizona. I have a few ideas for apps that I believe would add to the Apple AR experience. I was wondering how I should go about getting started in the development process. Any guidance would be much appreciated :)
Post not yet marked as solved
1 Reply
736 Views
I’m trying to make a pass-through experience where there is an emissive/glow-y/bloom-y object. From everything I’ve tried and researched so far, it looks like that won’t be possible with RealityKit rendering and that I’ll have to use Unity with its URP. But would it be possible to use that renderer while still having pass-through video to ground the experience in the real world? Or is that only possible with fully immersive experiences? If there’s a completely different approach that would work better, I’m willing and wanting to learn. Thank you in advance! -Dan
Post not yet marked as solved
0 Replies
524 Views
Hello! I'm struggling with the famous lack-of-content issue in my app, and I'd like your opinion and advice on why I've been rejected for several weeks now. Here is what Apple tells me every time:

"We noticed that the usefulness of your augmented reality app is limited by the minimal features or content it includes. Apps using ARKit should provide rich and integrated augmented reality experiences. We encourage you to review your app concept and incorporate more robust AR features and functionality. Note that merely dropping a model into an AR view or replaying animation is not enough."

What do you recommend? Did you get your AR app published, and if so, how? What should I add to maximize my chances?

Here's what my AR app does. You arrive on a Collection View menu with image cells that lets you launch 4 different AR experiences and 1 form view.

The first experience is an AR marker (movie posters) that animates when scanned. You can download a valid AR marker via a link that redirects you to my website. Once you have downloaded (and printed) it, you scan the QR code that is on the marker. If the QR code is recognized, the AR experience starts; otherwise (in case someone tries to scan an unknown QR code), an error message is displayed.

The second experience is the same as the first but with another type of AR marker (advertising posters). Both the first and second experiences display interactive buttons that redirect users to external links (cinema ticket platforms, etc.).

The third experience is an avatar (a video of me filmed on a green background, then cropped) that you can place on the floor once it is detected. I've implemented the coaching overlay and a custom focus square.

The last experience is a 3D model of a sofa that you can also place on the floor and then translate, rotate and scale with gestures.

As for the form view, it displays our contact email address, a picker element to choose which experience you would like to contact us about, and a validation button that opens the Mail app with a pre-written message.

I have planned to add more text to guide the user during the avatar and 3D model experiences, background music when you launch those two experiences, and 2 more furniture models to the last experience.
Post not yet marked as solved
2 Replies
619 Views
I have been playing with RealityKit and ARKit. One thing I am not able to figure out is whether it's possible to place an object, say on the floor behind a couch, and then not be able to see it when viewing that area from the other side of the couch. If that's confusing, I apologize. Basically, I want to "hide" objects in a closet or behind other physical objects. Are we just not there yet with this stuff? Or is there a particular way to do it that I am missing? It just seems odd that when I place an object, I then see it "on top of" the couch from the other side. Thanks! Brandon
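For reference, here is a rough sketch of the usual route on LiDAR-equipped devices: turn on scene reconstruction and RealityKit's scene-understanding occlusion option, so that reconstructed real-world geometry (the couch) hides virtual content placed behind it. The function name is illustrative, not from the original post.

```swift
import ARKit
import RealityKit

// A rough sketch, assuming an ARView-based app on a LiDAR-equipped device:
// reconstruct nearby surfaces as a mesh and let RealityKit use that mesh to
// occlude virtual content sitting behind real objects such as a couch.
func enableRealWorldOcclusion(for arView: ARView) {
    let config = ARWorldTrackingConfiguration()
    config.planeDetection = [.horizontal]
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh        // build a mesh of the room
    }
    // Hide entities wherever reconstructed geometry is in front of them.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(config)
}
```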
Post not yet marked as solved
0 Replies
354 Views
I'm new to Xcode and looking at how to build a basic AR app that would allow me to place a 2D photo in the real world. Is there a template I could use to create this app? Any help is greatly appreciated! Thanks!
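As a starting point, here is a minimal sketch of one way to do this with RealityKit: load the photo as a texture, put it on an unlit plane, and anchor it to a detected horizontal surface. The asset name "photo" and the plane dimensions are placeholders.

```swift
import RealityKit
import UIKit

// A minimal sketch, assuming the photo is bundled with the app as "photo":
// the image drives the color of an unlit plane anchored to a horizontal surface.
func makePhotoAnchor() throws -> AnchorEntity {
    let texture = try TextureResource.load(named: "photo")   // image in the app bundle
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))

    let plane = ModelEntity(mesh: .generatePlane(width: 0.3, depth: 0.4),
                            materials: [material])
    let anchor = AnchorEntity(.plane(.horizontal,
                                     classification: .any,
                                     minimumBounds: [0.2, 0.2]))
    anchor.addChild(plane)
    return anchor
}

// Usage from an ARView-based app:
// arView.scene.addAnchor(try makePhotoAnchor())
```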
Post not yet marked as solved
0 Replies
884 Views
In regular Metal, I can do all sorts of tricks with texture masking to create composite objects and effects, similar to CSG. Since, for now, AR mode in visionOS requires RealityKit without the ability to use custom shaders, I'm a bit stuck. I'm pretty sure that what I want is impossible and requires a feature request, but here it goes.

Here's a 2D example: say I have some fake circular flashlights shining into the scene, depth-wise, and everything else is black except for some rectangles that are "lit" by the circles.

How it works: in Metal, my per-instance data contain a texture index for a mask texture. The mask texture has an alpha of 0 for spots where the instance should not be visible, and an alpha of 1 otherwise. So in an initial render pass, I draw the circular lights to this mask texture. In pass 2, I attach the full-screen mask texture (circular lights) to all mesh instances that I want hidden in the darkness. A custom fragment shader multiplies the alpha of the full-screen mask texture sample at the given fragment with the color that would otherwise be output, i.e. out_color *= mask.a. The way I have blending and clear colors set up, wherever the mask alpha is 0, an object will be hidden, and the background clear color is black. If I don't attach the masking texture, you can see that behind the scenes the full rectangle is there.

In visionOS AR mode, the point is for the system to apply lighting, depth, and occlusion information to the world. For my effect to work, I need to be able to generate an intermediate representation of my world (after pass 2) that shows some of that world in darkness. I know I can use Metal separately from RealityKit to prepare a texture to apply to a RealityKit mesh using DrawableQueue. However, as far as I know there is no way to supply a full-screen depth buffer for RealityKit to mix with whatever it's doing with the AR passthrough depth and occlusion behind the scenes, so my Metal texture would just be a flat quad in the scene rather than something mixed with the world. Furthermore, I don't see a way to apply a full-screen quad to the scene, period.

I think my use case is impossible in visionOS in AR mode without customizable rendering in Metal (separate issue: I still think that in single full-app mode, it should be possible to grant access to the camera and custom rendering more securely) and/or a RealityKit feature enabling the mixing of depth and occlusion textures for compositing. I love these sorts of masking/texture effects because they're simple and elegant to pull off, and I can imagine creating several useful and fun experiences using this masking and custom depth info with AR passthrough.

Please advise on how I could achieve this effect in the meantime. In any case, my specific feature request is the ability to provide full-screen depth and occlusion textures to RealityKit, so it's easier to mix Metal rendering as a pre-pass with RealityKit as a final composition step.
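As a side note for readers skimming the thread, the masking step described above boils down to a one-line multiply; here it is restated in plain Swift (not the actual Metal shader).

```swift
// A tiny Swift restatement of the masking math described above: the fragment's
// output color is scaled by the mask's alpha, so anything under mask alpha 0
// disappears into the black clear color.
func maskedColor(_ color: SIMD4<Float>, maskAlpha: Float) -> SIMD4<Float> {
    color * maskAlpha   // equivalent to out_color *= mask.a in the fragment shader
}
```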
Post not yet marked as solved
6 Replies
2.3k Views
Hi, is there a way in visionOS to anchor an entity to the POV via RealityKit? I need an entity that is always fixed to the 'camera'. I'm aware that this is discouraged from a design perspective, as it can be visually distracting; in my case, though, I want to use it to attach a fixed collider entity, so that the camera can collide with objects in the scene.

Edit: ARView on iOS has a lot of very useful helper properties and functions like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform). How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable. An example use case: I would like to add an entity to the scene at my user's eye level, which basically depends on their height.

I found https://developer.apple.com/documentation/realitykit/realityrenderer, which has an activeCamera property, but so far it's unclear to me in which context RealityRenderer is used and how I could access it. I appreciate any hints, thanks!
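For reference, a minimal sketch of one approach on visionOS: an AnchorEntity targeting .head keeps its children fixed relative to the wearer's point of view, so a collider can simply be parented to it. The entity names and collider size are illustrative.

```swift
import SwiftUI
import RealityKit

// A minimal sketch: children of an AnchorEntity(.head) stay fixed relative to
// the wearer's point of view, so a collision shape parented to it can act as a
// "camera collider".
struct POVColliderView: View {
    var body: some View {
        RealityView { content in
            let headAnchor = AnchorEntity(.head)          // follows the POV
            let collider = Entity()
            collider.components.set(
                CollisionComponent(shapes: [.generateSphere(radius: 0.15)])
            )
            headAnchor.addChild(collider)
            content.add(headAnchor)
        }
    }
}
```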
Post not yet marked as solved
1 Reply
747 Views
Related to “what you can do in visionOS”: what are all of these camera-related functionalities for? (As of yet, they are not described in the documentation.)
https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/colortextures
https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/relativeviewport
What are the intended use cases? Is this the equivalent of render-to-texture? I also see some interop with raw Metal happening here.
Post not yet marked as solved
1 Reply
924 Views
In full immersive (VR) mode on visionOS, if I want to use Compositor Services and a custom Metal renderer, can I still get the user’s hands texture so that my hands appear as they do in reality? If so, how? If not, is this a valid feature request in the short term? It’s purely for aesthetic reasons: I’d like to see my own hands, even in immersive mode.
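One avenue that may be relevant, sketched below with placeholder names: the scene-level upperLimbVisibility modifier asks the system to composite the wearer's real hands over rendered content, and it can be attached to the ImmersiveSpace that hosts a CompositorLayer. The render-loop function below is a stub, not a real API.

```swift
import SwiftUI
import CompositorServices

// A hedged sketch: request that passthrough hands stay visible over fully
// immersive Metal content rendered through Compositor Services.
@main
struct FullyImmersiveApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "metal") {
            CompositorLayer { layerRenderer in
                startRenderLoop(layerRenderer)   // your Metal frame loop goes here
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
        .upperLimbVisibility(.visible)           // keep the wearer's hands visible
    }
}

// Placeholder: spin up a thread or task that drives Metal rendering via layerRenderer.
func startRenderLoop(_ layerRenderer: LayerRenderer) { }
```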
Post not yet marked as solved
1 Reply
823 Views
For the MaterialX shader graph, the given example hard-codes two textures for blending at runtime (https://developer.apple.com/documentation/visionos/designing-realitykit-content-with-reality-composer-pro#Build-materials-in-Shader-Graph). Can I instead generate textures at runtime and set them as dynamic inputs for the material, or must all of the textures used be known when the material is created? If procedural texture-setting is possible, how is it done, given that the example shows a material with those hard-coded textures?

EDIT: It looks like the answer is "yes", since setParameter accepts textureResources: https://developer.apple.com/documentation/realitykit/materialparameters/value/textureresource(_:)?changes=l_7 However, how do you turn an MTLTexture into a TextureResource?
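This is not a direct MTLTexture conversion, but here is a hedged sketch of one runtime route, assuming the pixels can be produced as a CGImage (for example via CPU drawing or readback): generate a TextureResource from the image and hand it to setParameter. The parameter name "DynamicTexture" is a placeholder for whatever the Shader Graph actually exposes.

```swift
import RealityKit
import CoreGraphics

// A hedged sketch of one runtime route: build a TextureResource from a CGImage
// and feed it to a Shader Graph material parameter at runtime.
func apply(_ image: CGImage, to material: inout ShaderGraphMaterial) throws {
    let texture = try TextureResource.generate(from: image,
                                               options: .init(semantic: .color))
    try material.setParameter(name: "DynamicTexture",        // placeholder name
                              value: .textureResource(texture))
}
```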
Post not yet marked as solved
0 Replies
475 Views
I'm trying to scan a real-world object with the Apple ARKit Scanner app. Sometimes the scan is not perfect, so I'm wondering whether I can obtain an .arobject in other ways, for example with other scanning apps, and then merge all the scans into one single, more accurate scan. I know that merging is possible: during the ARKit Scanner session the app asks whether I want to merge multiple scans, and in that case I can select a previous scan from the Files app. In this context, though, I would like to add scans from other sources. Is that possible? And if so, are there any other options for obtaining an .arobject, and is that a practical way to improve the quality of object detection? Thanks
Post not yet marked as solved
1 Reply
777 Views
In Reality Composer Pro, I've set up a Custom Material that receives an Image File as an input. When I manually select an image and upload it in Reality Composer Pro as the input value, I can easily drive the surface of my object/scene with this image. However, I am unable to drive the value of this "cover" parameter via shaderGraphMaterial.setParameter(name: , value: ) in Swift, since there is no way to supply an Image as a value of type MaterialParameters.Value. When I print out shaderGraphMaterial.parameterNames I see both "color" and "cover", so I know this parameter is exposed. Is this a feature that will be supported soon, or is there a workaround? I assume that if something can be created as an input to a Custom Material (in this case an Image File), there should be an equivalent way to drive it via Swift. Thanks!
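For reference, a minimal sketch of the workaround implied by the .textureResource case: wrap the image in a TextureResource and pass that to setParameter. The parameter name "cover" comes from the post above; the asset name "myImage" is a placeholder.

```swift
import RealityKit

// A minimal sketch: an image input on a Shader Graph material is driven from
// Swift by wrapping it in a TextureResource, since MaterialParameters.Value has
// a .textureResource case rather than an image case.
func setCover(on material: inout ShaderGraphMaterial) throws {
    let texture = try TextureResource.load(named: "myImage")   // placeholder asset
    try material.setParameter(name: "cover", value: .textureResource(texture))
}
```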
Post marked as solved
1 Reply
727 Views
I’m still a little unsure about the various spaces and capabilities. I’d like to make full use of hand tracking, joints and all. In the mode with passthrough and a single application present (not a shared space), is that available? (I am pretty sure that the answer is “yes,” but I’d like to confirm.) What is this mode called in the system? Mixed full-space?
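For reference, a rough sketch of what that setup might look like, assuming an ImmersiveSpace using the .mixed immersion style (passthrough, with the app alone in a Full Space) and the hand-tracking permission granted; the identifiers and the empty RealityView are placeholders.

```swift
import ARKit
import RealityKit
import SwiftUI

// A rough sketch: in a mixed-immersion Full Space, ARKit's HandTrackingProvider
// delivers hand anchors with a full joint skeleton.
struct HandTrackingSpace: SwiftUI.Scene {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()

    var body: some SwiftUI.Scene {
        ImmersiveSpace(id: "hands") {
            RealityView { _ in }
                .task {
                    do {
                        try await session.run([handTracking])
                        for await update in handTracking.anchorUpdates {
                            // Every joint is available, e.g. the index fingertip.
                            _ = update.anchor.handSkeleton?.joint(.indexFingerTip)
                        }
                    } catch {
                        print("Hand tracking unavailable: \(error)")
                    }
                }
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}
```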
Post not yet marked as solved
0 Replies
499 Views
What would be the best way to go about recognizing a 3D physical object and then anchoring digital 3D assets to it? I would also like to use occlusion shaders and masks on the assets. There's a lot of info out there, but the most current practices keep changing, and I'd like to start in the right direction! If there is a tutorial or demo file that someone can point me to, that would be great!
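As one possible starting point, here is a hedged sketch using iOS ARKit's reference-object detection together with RealityKit anchoring; the resource group name "Gallery" and the sphere standing in for the digital assets are placeholders.

```swift
import ARKit
import RealityKit

// A hedged sketch for an iOS ARView app: detect a scanned reference object from
// an asset-catalog resource group, then hang RealityKit content off its anchor.
func runObjectDetection(in arView: ARView) {
    let config = ARWorldTrackingConfiguration()
    config.detectionObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "Gallery", bundle: nil) ?? []            // placeholder group
    arView.session.run(config)
}

// Call this from ARSessionDelegate's session(_:didAdd:) when an ARObjectAnchor arrives.
func attachContent(to objectAnchor: ARObjectAnchor, in arView: ARView) {
    let anchorEntity = AnchorEntity(anchor: objectAnchor)      // tracks the physical object
    anchorEntity.addChild(ModelEntity(mesh: .generateSphere(radius: 0.05),
                                      materials: [SimpleMaterial()]))
    arView.scene.addAnchor(anchorEntity)
}
```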
Post not yet marked as solved
0 Replies
373 Views
Hello everyone, I am trying to publish an augmented reality app to the App Store. I have prepared a physical book, and the app is used to scan some pages of that book and view some of the images in 3D. The app was rejected with "apps cannot require users to purchase unrelated products or engage in advertising or marketing activities to unlock app functionality" because image markers (my book) are needed for it. Should I add an in-app purchase to buy the book (it is free at the moment)? I have seen toys being operated with apps; I'm just trying to understand how they solve this issue, because, similar to my case, the toy needs to be bought/obtained separately. Thank you.
Post not yet marked as solved
0 Replies
364 Views
Hi everyone! I'm totally new to programming and trying to learn AR by working on a simple AR app. Right now I have a tap gesture for loading my cat model [link to my code] (re-tap to relocate the cat). However, I want to add a button to confirm my cat model's position and then make my tap function work only for my other models (ball/heart/fish models). I have no idea how to make that happen. Can anyone give me some guidance, please?
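As a rough sketch of the general pattern (names are placeholders, not taken from the linked project): keep a flag that a Confirm button flips, and have the tap handler branch on it so taps stop moving the cat once its position is locked in.

```swift
import SwiftUI

// A rough sketch of the confirm-then-lock pattern; the tap handler body is
// where the RealityKit placement calls from the linked project would go.
struct PlacementControls: View {
    @State private var isCatPlaced = false

    var body: some View {
        VStack {
            Text(isCatPlaced ? "Cat placed: tap to add ball/heart/fish"
                             : "Tap to place or move the cat")
            if !isCatPlaced {
                Button("Confirm cat position") { isCatPlaced = true }
            }
        }
        .contentShape(Rectangle())
        .onTapGesture { handleTap() }
    }

    private func handleTap() {
        if isCatPlaced {
            // place one of the other models (ball / heart / fish) here
        } else {
            // place or relocate the cat model here
        }
    }
}
```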