visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,229 Posts
Post not yet marked as solved
1 Reply
624 Views
I would like to know whether it will be possible to access high-resolution textures coming from ARKit scene reconstruction, or to access camera frames directly. In session 10091 it appears that ARFrame (with its camera data) is no longer available on visionOS. The use cases I have in mind are along these lines:
- Having a paper card with a QR code on a physical table, and using pixel data to recognize the code and place a corresponding virtual object on top.
- Having physical board game components recognized and used as inputs: for example, you control white chess pieces physically while your opponent's black pieces are projected virtually onto your table.
- Having a user draw a crude map on physical paper, and being able to use this as an image to be processed/recognized.
These examples all have in common that the physical objects serve directly as inputs to the application, without the user having to manipulate a virtual representation. Ideally, in a privacy-preserving way, it would be possible to ask ARKit to provide texture information for a specially defined volume in physical space, or at least for a given recognized surface (e.g. a table or a wall).
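For reference, a minimal sketch of the visionOS ARKit pattern shown at WWDC23, which is the replacement for ARFrame-based access. The session and provider types below are from the WWDC23 material; the scaffolding around them is illustrative. Note that scene reconstruction delivers mesh geometry only, so none of the texture/pixel use cases above are covered by it.

```swift
import ARKit

// Sketch: on visionOS, world sensing goes through ARKitSession and
// data providers instead of ARFrame. Requires an immersive space and
// the NSWorldSensingUsageDescription entry in Info.plist.
let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

Task {
    try await session.run([sceneReconstruction])
    for await update in sceneReconstruction.anchorUpdates {
        // MeshAnchor exposes vertex/normal/face geometry only; there
        // is no camera pixel data or surface texture to read here.
        print("Mesh anchor updated:", update.anchor.id)
    }
}
```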
Posted
by
Post marked as solved
6 Replies
2.1k Views
Hi, I have one question: you explained the Shared Space at the beginning of the session video, but I didn't really understand it. Is the Shared Space like the Dock on a Mac? Are applications placed in the Shared Space, and is the operation to launch an application that has been placed there? Why is the word "Shared" included; is there some sharing functionality? "By default, apps launch into the Shared Space." What is the default, and what is the non-default state? "People remain connected to their surroundings through passthrough." What does that mean on visionOS? By the way, is the application that starts in the Shared Space something like the Clock app, or does the Safari browser also run in the Shared Space? What kinds of applications can only run in a Full Space? I don't have a clear picture of the role each feature plays on visionOS. If possible, it would be easier to understand with an actual image of an application running, not just a diagram. Best regards. Sadao Tokuyama https://1planet.co.jp/
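For reference, a hedged sketch of how the two concepts map to SwiftUI scene types, based on the WWDC23 material: a plain WindowGroup launches into the Shared Space alongside other running apps, while opening an ImmersiveSpace moves the app into a Full Space. The app name and placeholder text are hypothetical.

```swift
import SwiftUI

@main
struct SharedSpaceDemoApp: App {   // hypothetical app name
    var body: some Scene {
        // Shared Space: shown alongside other running apps, with
        // passthrough keeping the real surroundings visible.
        WindowGroup {
            Text("Window content in the Shared Space")
        }

        // Full Space: while open, other apps' content is hidden and
        // this app has the whole field of view to itself.
        ImmersiveSpace(id: "immersive") {
            Text("Content shown in a Full Space")
        }
    }
}
```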
Posted
by
Post not yet marked as solved
1 Reply
494 Views
"Volumes allow an app to display 3D content in defined bounds, sharing the space with other apps." What does it mean for Volumes to share the space? What are the benefits of being able to do this? Is this the same as the Shared Space? I don't understand the Shared Space very well to begin with. "They can be viewed from different angles." Does this mean that because the content is 3D and has depth, I can see that depth when I change the angle? That seems obvious for 3D content, so how is it specific to Volumes?
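For reference, a minimal sketch of how a volume is declared, based on the WWDC23 APIs: a volume is a window with the volumetric style, so its 3D content stays within defined bounds and coexists with other apps in the Shared Space, and because the content is truly 3D, people can walk around it and view it from any angle. The app and asset names are hypothetical.

```swift
import SwiftUI
import RealityKit

@main
struct GlobeApp: App {                 // hypothetical app/asset names
    var body: some Scene {
        // A volume: a window with bounded 3D content that shares the
        // space with other apps and can be viewed from any angle.
        WindowGroup {
            Model3D(named: "Globe")
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
    }
}
```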
Posted
by
Post marked as solved
3 Replies
1.6k Views
How does visionOS interact with people with eye disorders? For example, has Optic ID been tested with specific conditions like nystagmus? What about lazy eye (amblyopia): would usage differ for people with a lazy eye compared to other users?
Posted
by
Post not yet marked as solved
3 Replies
1.6k Views
Will it be possible to get direct access to all the different cameras of Vision Pro via AVFoundation, or indirect access via other frameworks? Which sensors (inner and outer color and IR cameras, LiDAR, TrueDepth camera) will developers have access to on visionOS?
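For comparison, this is the standard AVFoundation discovery call on iOS/iPadOS. Whether visionOS exposes any of Vision Pro's cameras through it was not announced at WWDC23, which is exactly the open question here, so treat this purely as an illustration of the existing API.

```swift
import AVFoundation

// Standard iOS-style enumeration of capture devices. It is not
// established that visionOS returns any Vision Pro cameras here.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInLiDARDepthCamera],
    mediaType: .video,
    position: .unspecified
)
for device in discovery.devices {
    print(device.localizedName)
}
```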
Posted
by
rd1
Post marked as solved
16 Replies
1.9k Views
Apparently, shadows aren't generated for procedural geometry in RealityKit: https://codingxr.com/articles/shadows-lights-in-realitykit/ Has this been fixed? My projects tend to involve a lot of procedurally generated meshes as opposed to imported models. This will be even more important when visionOS is out. On a similar note, it used to be that ground shadows were not per-entity. I'd like to enable or disable them per entity. Is that possible? Since currently the only way to use passthrough AR on visionOS will be RealityKit, more flexibility will be required. I can't simply apply my own preferences.
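On the per-entity point, RealityKit on visionOS does add a per-entity opt-in for grounding shadows via GroundingShadowComponent; a minimal sketch with a procedurally generated mesh follows. Whether this also resolves the shadow-casting limitation described in the linked article is a separate question.

```swift
import RealityKit

// A procedurally generated box with a per-entity grounding shadow.
let mesh = MeshResource.generateBox(size: 0.2)
let material = SimpleMaterial(color: .white, isMetallic: false)
let entity = ModelEntity(mesh: mesh, materials: [material])

// Opt this entity in; leave the component off other entities (or set
// castsShadow to false) to disable the shadow per entity.
entity.components.set(GroundingShadowComponent(castsShadow: true))
```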
Posted
by
Post not yet marked as solved
1 Reply
1.6k Views
We've learned this week that Vision Pro in full immersive mode does not allow you to move around. However, many of the most exciting use cases for immersive computing require free movement: medical rehabilitation and exercise, immersive classroom lab spaces for running experiments, virtual museum galleries and exhibits, and escape rooms. It would be a shame for Vision Pro to limit full immersive mode to stand-still tasks when so many interesting and beneficial use cases exist. Many existing products allow for mobility in VR by asking the user to define a safe walkable zone in their environment, and for the use cases above, these are controlled environments in which bumping into real-world objects is not a risk. Many existing VR solutions are already used in this capacity, and I think Vision Pro would be a great platform for extending these sorts of experiences given its additional software and hardware capability. Is Apple interested in exploring potential solutions for enabling movement in full VR mode, and is this worth filing in Feedback Assistant? I understand this is a v1 of the hardware, that this problem may still be being explored, and that future iterations might see major improvements. As it happens, almost all of the projects I'd like to pursue on Vision Pro require free mobility.
Posted
by
Post not yet marked as solved
4 Replies
1.1k Views
I am interested in building some outdoor traffic/driving-related apps. Can Vision Pro be used outdoors with reliable position estimates?
Posted
by
Post not yet marked as solved
3 Replies
3.1k Views
Hi, I have some questions about the Apple Vision Pro's specifications. What are the vertical, horizontal, and diagonal FOV of Vision Pro, in degrees? At how many centimeters is Vision Pro's near clipping plane? At how many meters is its far clipping plane? Do eye tracking, authentication, etc. work properly for people wearing contact lenses? From what age can Vision Pro be worn? Best regards. Sadao Tokuyama https://1planet.co.jp/ https://twitter.com/tokufxug
Posted
by
Post not yet marked as solved
0 Replies
674 Views
As this was not explicitly stated: will the Vision Pro allow for navigation of the UI in complete darkness? I am aware that it has built-in LiDAR, but it is still unclear to me whether this allows for tracking in dark rooms. Understanding whether the headset supports this is quite important for an application I am working on. -Cheremiah
Posted
by
Post not yet marked as solved
2 Replies
1.2k Views
Is there support for using multiple UV channels in AR QuickLook on iOS 17? One important use case is putting a tiling texture in an overlapping, tiling UV set while mapping ambient occlusion to a separate, non-overlapping unwrapped UV set. This is very important for authoring 3D content that combines high-resolution surface detail with high-quality ambient occlusion data while keeping file size to a minimum.
Posted
by
Post not yet marked as solved
1 Reply
718 Views
What formats do the 3D videos and photos taken by Vision Pro use? Regarding photos, can regular photos be converted into a format that Vision Pro can display, using methods like NeRF? As for videos, can Vision Pro play videos recorded by devices like RealSense cameras, or regular videos that have had depth information added in post-production? Thanks!
Posted
by
Post not yet marked as solved
0 Replies
383 Views
After watching the 'Animate with springs' session from this past Friday: Jacob said, while talking about the new spring configurations, "we're adopting these universally across Apple's design and engineering efforts. So all of our frameworks that support springs will use them." As an aspiring visionOS developer eagerly awaiting the visionOS SDK, my question is: will these kinds of methods be available for 3D applications? Being able to apply realistic spring physics to 3D entities opens the door to some really fun game mechanics in spatial computing applications.
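For what it's worth, the Spring model type from that session can be evaluated directly, so even without built-in 3D support you could sample it each frame and write the result into an entity's transform. A hedged sketch (the parameter values and function name are arbitrary):

```swift
import SwiftUI

// Evaluate a spring curve manually; the sampled value can drive a 3D
// property (e.g. an entity's height) from a per-frame update loop.
let spring = Spring(duration: 0.5, bounce: 0.3)

func height(at time: TimeInterval, target: Float) -> Float {
    // Value of the spring `time` seconds after starting toward `target`.
    spring.value(target: target, time: time)
}
```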
Posted
by
Post not yet marked as solved
2 Replies
742 Views
Can we create apps that aren't just screen-based? For example, playing a musical instrument using visionOS. The screen could be used to show the music notes, or a play-along video (imagine a course that teaches you to play the instrument). I would like to understand the potential capabilities and limitations of Vision Pro + visionOS.
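One concrete building block for an instrument-style app, as a hedged sketch: visionOS hand tracking through ARKit, which requires a Full Space and the user's permission. Mapping fingertip poses to notes or strings would be the app's own logic.

```swift
import ARKit

// Track hands and read fingertip poses; an instrument app could map
// these to strings, keys, or drum pads.
let session = ARKitSession()
let handTracking = HandTrackingProvider()

Task {
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        let hand = update.anchor
        if let tip = hand.handSkeleton?.joint(.indexFingerTip) {
            print(hand.chirality, tip.anchorFromJointTransform)
        }
    }
}
```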
Posted
by
Post not yet marked as solved
6 Replies
2.3k Views
Hi, after watching the WWDC23 session Meet Core Location for spatial computing, I was wondering: does the Apple Vision Pro have GPS, or does it provide Core Location functionality via Wi-Fi? Also, in Unity we use Input.location to get latitude and longitude. When developing for Apple Vision Pro in Unity, do we also use Input.location to get latitude and longitude? Best regards. Sadao Tokuyama https://1planet.co.jp/ https://twitter.com/tokufxug
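Whatever the positioning hardware turns out to be, the Core Location call pattern from the session is the same; here is a minimal native-side sketch (the Unity Input.location question is separate and not addressed by it).

```swift
import CoreLocation

// Request permission and read coordinate fixes. The API does not
// reveal whether fixes come from GPS or Wi-Fi positioning.
final class LocationReader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        if let coordinate = locations.last?.coordinate {
            print("lat: \(coordinate.latitude), lon: \(coordinate.longitude)")
        }
    }
}
```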
Posted
by