Post not yet marked as solved
Environment
Apple Silicon M1 Pro
macOS 14.4
Xcode 15.3 (15E204a)
visionOS simulator 1.1
Steps
Create a new visionOS app project and compile it through xcodebuild:
xcodebuild -destination "generic/platform=visionOS"
It fails at the RealityAssetsCompile step with the log:
error: Failed to find newest available Simulator runtime
But if I open the Xcode IDE and start building, it works fine. This error only occurs in xcodebuild.
More
I noticed that in xcrun simctl list the Vision Pro simulator is in an unavailable state:
-- visionOS 1.1 --
Apple Vision Pro (6FB1310A-393E-4E82-9F7E-7F6D0548D136) (Booted) (unavailable, device type profile not found)
Also, I can't find the Vision Pro device type in xcrun simctl list devicetypes. Does that matter? I have tried completely reinstalling Xcode and the simulator runtime, but I still get the same error.
Post not yet marked as solved
I need to obtain data through MQTT subscriptions. Are there any ideas or frameworks for this?
Thank you
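A minimal sketch of one option, assuming the third-party CocoaMQTT package (there is no first-party MQTT framework); the broker host, port, and topic below are placeholders:
import CocoaMQTT

// Hypothetical broker details; replace with your own.
let mqtt = CocoaMQTT(clientID: "visionos-client", host: "broker.example.com", port: 1883)

mqtt.didConnectAck = { _, ack in
    if ack == .accept {
        // Subscribe once the broker accepts the connection.
        mqtt.subscribe("sensors/temperature", qos: .qos1)
    }
}

mqtt.didReceiveMessage = { _, message, _ in
    // The payload arrives as bytes; message.string is a UTF-8 convenience.
    print("Topic: \(message.topic), payload: \(message.string ?? "")")
}

_ = mqtt.connect()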
Post not yet marked as solved
Hi guys, I would like to ask if anyone knows the FPS of screen recording and AirPlay on Vision Pro. AirPlay here refers to mirroring the Vision Pro view to a MacBook/iPhone/iPad. Also, is there any way to record the screen at the native FPS of Vision Pro (i.e., 90)?
Post not yet marked as solved
Hi there. I've been trying to take a snapshot programmatically on Apple Vision Pro but haven't succeeded.
This is the code I am using so far:
func takeSnapshot<Content: View>(of view: Content) -> UIImage? {
    var image: UIImage?
    uiQueue.sync {
        let controller = UIHostingController(rootView: view)
        controller.view.bounds = UIScreen.main.bounds
        let renderer = UIGraphicsImageRenderer(size: controller.view.bounds.size)
        image = renderer.image { context in
            controller.view.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
    return image
}
However, UIScreen is unavailable on visionOS.
Any idea of how I can achieve this?
Thanks
Oscar
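A possible direction, sketched under the assumption that an explicit render size is acceptable: SwiftUI's ImageRenderer produces a UIImage without touching UIScreen. The 800x600 point size is an arbitrary placeholder.
import SwiftUI

// Minimal sketch: ImageRenderer instead of UIScreen-derived bounds.
@MainActor
func takeSnapshot<Content: View>(of view: Content) -> UIImage? {
    let renderer = ImageRenderer(content: view.frame(width: 800, height: 600))
    renderer.scale = 2.0  // render at 2x scale for sharper output
    return renderer.uiImage
}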
Post not yet marked as solved
Our app needs to scan QR codes (or a similar mechanism) to populate it with content the user wants to see.
Is there any update on QR code scanning availability on this platform? I asked this before, but never got any feedback.
I know that there is no way to access the camera (which is an issue in itself), but at least the system could provide an API to scan codes.
(It would also be cool if we were able to use the same kind of codes Vision Pro uses for detecting the Zeiss glasses, as long as we could create these via server-side JavaScript code.)
Post not yet marked as solved
I'm on visionOS 1.2 beta, and Instruments will capture everything except RealityKit information.
The RealityKit Frames and RealityKit Metrics instruments capture no data. This used to work, though I'm not sure in which version. Unbelievably frustrating.
Hi team,
I'm running into the following issue, for which I can't seem to find a good solution.
I would like to be able to drag and drop items from a view into empty space to open a new window that displays detailed information about this item.
Now, I know something similar has been flagged already in this post (FB13545880: Support drag and drop to create a new window on visionOS)
However, all this does is launch the app again with the SAME WindowGroup and display ContentView in a different state (showing a selected product, for example).
What I would like to do instead is launch ONLY the new WindowGroup, without a new instance of ContentView.
This is the closest I have gotten so far. It opens the desired window, but it also displays the ContentView WindowGroup:
WindowGroup {
    ContentView()
        .onContinueUserActivity(Activity.openWindow, perform: handleOpenDetail)
}

WindowGroup(id: "Detail View", for: Reminder.ID.self) { $reminderId in
    ReminderDetailView(reminderId: reminderId!)
}

// On the draggable reminder view:
.onDrag({
    let userActivity = NSUserActivity(activityType: Activity.openWindow)
    let localizedString = NSLocalizedString("DroppedReminterTitle", comment: "Activity title with reminder name")
    userActivity.title = String(format: localizedString, reminder.title)
    userActivity.targetContentIdentifier = "\(reminder.id)"
    try? userActivity.setTypedPayload(reminder.id)
    // When setting the identifier
    let encoder = JSONEncoder()
    if let jsonData = try? encoder.encode(reminder.persistentModelID),
       let jsonString = String(data: jsonData, encoding: .utf8) {
        userActivity.userInfo = ["id": jsonString]
    }
    return NSItemProvider(object: userActivity)
})
func handleOpenDetail(_ userActivity: NSUserActivity) {
    guard let idString = userActivity.userInfo?["id"] as? String else {
        print("Invalid or missing identifier in user activity")
        return
    }
    if let jsonData = idString.data(using: .utf8) {
        do {
            let decoder = JSONDecoder()
            let persistentID = try decoder.decode(PersistentIdentifier.self, from: jsonData)
            openWindow(id: "Detail View", value: persistentID)
        } catch {
            print("Failed to decode PersistentIdentifier: \(error)")
        }
    } else {
        print("Failed to convert string to data")
    }
}
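One direction that may help, sketched and unverified: scenes can declare which external events (user activities) they handle, so the drop might be routable to the detail WindowGroup instead of relaunching ContentView. Activity.openWindow is the activity type from the code above.
// Sketch, assuming the scene declarations above. The matching set is compared
// against the incoming activity; an empty set means this group declines
// external events, so the system should not spawn another ContentView.
WindowGroup {
    ContentView()
}
.handlesExternalEvents(matching: [])

WindowGroup(id: "Detail View", for: Reminder.ID.self) { $reminderId in
    ReminderDetailView(reminderId: reminderId!)
}
.handlesExternalEvents(matching: [Activity.openWindow])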
Post not yet marked as solved
Is there a maximum distance at which an entity will register a TapGesture()? I'm unable to interact with entities farther than 8 or 9 meters away. The code below generates a series of entities at progressively greater distances. After about 8 meters, the entities no longer respond to tap gestures.
var body: some View {
    RealityView { content in
        for i in 0..<10 {
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
                immersiveContentEntity.position = SIMD3<Float>(x: Float(-i * i), y: 0.75, z: Float(-1 * i) - 3)
            }
        }
    }
    .gesture(tap)
}

var tap: some Gesture {
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            AudioServicesPlaySystemSound(1057)
            print(value.entity.name)
        }
}
Post not yet marked as solved
Hi!
I was trying to port our SDK to visionOS.
I was going through the documentation and saw this video: https://developer.apple.com/videos/play/wwdc2023/10089/
Is there any working code sample for it? The same goes for the ARKit C API.
I couldn't find any links. Thanks in advance.
Sahil
Post not yet marked as solved
I am developing an immersive application featuring hands interacting with my virtual objects. When my hand passes through an object, the rendered hand color is blended with the object's color, so both look semi-transparent. I wonder if it is possible to make my hand always appear "opaque", that is, the alpha of the rendered (video see-through) hand is always 1, while the object's alpha varies depending on whether it is interacting with the hand.
(I was thinking this kind of feature might be supported by a specific component, just like HoverEffectComponent, but I didn't find one.)
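Not a per-object answer, but possibly worth trying: the upperLimbVisibility scene modifier controls whether the system draws your hands over rendered content. A minimal sketch, where "space" and ImmersiveView are placeholders for the app's own immersive scene:
// Sketch: with .visible, the passthrough hands are drawn over virtual content
// rather than blended behind it; .hidden lets content fully cover the hands.
ImmersiveSpace(id: "space") {
    ImmersiveView()
}
.upperLimbVisibility(.visible)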
Post not yet marked as solved
Good day. I'm inquiring whether there is a way to test functionality between the Apple Pencil Pro and Apple Vision Pro. I'm trying to work on an idea that would require a tool like the Pencil as an input device. Will there be an SDK for this kind of connectivity?
Post not yet marked as solved
How do I get a clear background with NavigationStack in a visionOS app?
Post not yet marked as solved
I have been trying to replicate the entity transform functionality present in the magnificent app Museum That Never Was (https://apps.apple.com/us/app/the-museum-that-never-was/id6477230794): it allows you to simultaneously rotate, magnify, and translate an entity using gestures with both hands (as opposed to the normal DragGesture(), which is a one-handed gesture). I am able to rotate and magnify simultaneously, but translating via drag does not activate while doing two-handed gestures. Any ideas? My setup is something like the following.
Gestures:
var drag: some Gesture {
    DragGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureTranslation = value.convert(value.translation3D, from: .local, to: .scene)
        }
        .onEnded { value in
            itemTranslation += gestureTranslation
            gestureTranslation = .init()
        }
}

var rotate: some Gesture {
    RotateGesture3D()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureRotation = simd_quatf(value.rotation.quaternion).inverse
        }
        .onEnded { value in
            itemRotation = gestureRotation * itemRotation
            gestureRotation = .identity
        }
}

var magnify: some Gesture {
    MagnifyGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureScale = Float(value.magnification)
        }
        .onEnded { value in
            itemScale *= gestureScale
            gestureScale = 1.0
        }
}
RealityView modifiers:
.simultaneousGesture(drag)
.simultaneousGesture(rotate)
.simultaneousGesture(magnify)
RealityView update block:
entity.position = itemTranslation + gestureTranslation + exhibitDefaultPosition
entity.orientation = gestureRotation * itemRotation
entity.scaleAll(itemScale * gestureScale)
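One thing that may be worth trying, sketched and unverified: composing the three gestures into a single simultaneous gesture instead of attaching three separate .simultaneousGesture modifiers, so SwiftUI treats them as one recognizer group. drag, rotate, and magnify are the gesture properties defined above.
// Sketch: SimultaneousGesture composition of the three gestures.
var combined: some Gesture {
    drag
        .simultaneously(with: rotate)
        .simultaneously(with: magnify)
}

// Then on the RealityView:
// .gesture(combined)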
Post not yet marked as solved
[visionOS Question]
I’m using the hierarchy of an entity loaded from a RealityKit Pro project to drive the content of a NavigationSplitView. I’d like to render any of the child entities in a RealityKitView in the detail pane when a user selects the child entity name from the list in the NavigationSplitView. I haven’t been able to render the entity in the detail view yet.
I have tried updating the position/scaling to no avail. I also tried adding an AnchorEntity and set the child entity parent to it. I’m starting to suspect that the way to do it is to create a scene for each individual child entity in the RealityKit Pro project. I’d prefer to avoid this approach as I want a data-driven approach.
Is there a way to implement my idea in RealityKit in code?
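A minimal sketch of one possible approach, assuming rootEntity is the loaded Reality Composer Pro scene and selectedName comes from the sidebar selection; cloning detaches the rendered copy from the original hierarchy:
// Detail pane: render a copy of the selected child entity.
RealityView { content in
    if let child = rootEntity.findEntity(named: selectedName) {
        let copy = child.clone(recursive: true)  // keep the original scene intact
        copy.position = .zero                    // recenter in the detail view
        content.add(copy)
    }
}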
Post not yet marked as solved
I'm trying to use a RealityView with attachments, and this error is being thrown. Am I using the RealityView wrong? I've seen other people use a RealityView with attachments in visionOS... Please let this be a bug...
RealityView { content, attachments in
    contentEntity = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(contentEntity!)
} attachments: {
    Text("Hello!")
}.task {
    await loadImage()
    await runSession()
    await processImageTrackingUpdates()
}
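In the released visionOS API, the attachments builder expects Attachment values with identifiers rather than bare views, which may be the source of the error. A hedged sketch, with "label" as a placeholder identifier:
RealityView { content, attachments in
    let plane = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(plane)
    // Look up the attachment entity by its identifier and place it in the scene.
    if let label = attachments.entity(for: "label") {
        plane.addChild(label)
    }
} attachments: {
    Attachment(id: "label") {
        Text("Hello!")
    }
}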
Post not yet marked as solved
I am trying to verify my understanding of adding a HoverEffectComponent to entities inside a scene in a RealityView.
Inside Reality Composer Pro, I have added the required Input Target and Collision components to one entity inside a node with multiple siblings, and left all options at their defaults. They appear to create appropriately sized bounding boxes, etc., for these objects.
In my RealityView I programmatically add the HoverEffectComponents to the entities, as I don't see them in Reality Composer Pro.
On device, this appears to "work" in the sense that when I gaze at the entity, it lights up, but so does every other entity in the scene, even those without Input Target and Collision components attached.
Because the documentation on these components is sparse, I am unsure whether this is behavior as designed (e.g., all entities in that node are activated), a bug, or something in between.
Has anyone encountered this, and is there an appropriate way of setting these relationships up?
Thanks
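One detail that may explain this, offered as an assumption to verify rather than a confirmed answer: an InputTargetComponent can affect an entity's descendants, so components attached higher in the hierarchy can make a whole subtree respond. A minimal per-entity setup sketch, with a placeholder collision size:
// Sketch: give each individually hoverable entity its own trio of components,
// rather than relying on components inherited from an ancestor node.
let shape = ShapeResource.generateBox(size: [0.1, 0.1, 0.1])  // placeholder size
entity.components.set(CollisionComponent(shapes: [shape]))
entity.components.set(InputTargetComponent())
entity.components.set(HoverEffectComponent())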
Post not yet marked as solved
Hi,
just generated an HDR10 MVHEVC file; mediainfo is below:
Color range : Limited
Color primaries : BT.2020
Transfer characteristics : PQ
Matrix coefficients : BT.2020 non-constant
Codec configuration box : hvcC+lhvC
then generated the segment files with the command below:
mediafilesegmenter --iso-fragmented -t 4 -f av_1 av_new_1.mov
then uploaded the segment files and prog_index.m3u8 to a web server.
I found that the HLS stream cannot be played in Safari...
The URL is http://ip/vod/prog_index.m3u8
I also checked: if I remove the Transfer characteristics : PQ tag when generating the MVHEVC file, run the same mediafilesegmenter command, and upload the files to the web server, the new version of the HLS stream does play in Safari...
Is there any way to play HLS PQ video in Safari? Thanks.
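One thing that may be missing, offered as an assumption rather than a confirmed fix: playing the media playlist directly skips the multivariant (master) playlist, and HLS authoring expects HDR streams to declare VIDEO-RANGE=PQ there. A sketch of such a playlist, where the BANDWIDTH and CODECS values are illustrative placeholders:
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-STREAM-INF:BANDWIDTH=20000000,CODECS="hvc1.2.4.L153.B0",VIDEO-RANGE=PQ
prog_index.m3u8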
Post not yet marked as solved
Hi all, I need some help debugging some code I wrote. Just as a preface, I'm an extremely new VR/AR developer and also very new to using ARKit + RealityKit. So please bear with me :) I'm just trying to make a simple program that will track an image and place an entity on it. The image is tracked correctly, but the moment the program recognizes the image and tries to place an entity on it, the program crashes. Here’s my code:
VIEWMODEL CODE:
@Observable
class ImageTrackingModel {
    var session = ARKitSession()  // ARKit session used to manage AR content
    var imageAnchors = [UUID: Bool]()  // Tracks whether specific anchors have been processed
    var entityMap = [UUID: ModelEntity]()  // Maps anchors to their corresponding ModelEntity
    var rootEntity = Entity()  // Root entity to which all other entities are added

    let imageInfo = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "referancePaper")
    )

    init() {
        setupImageTracking()
    }

    func setupImageTracking() {
        if ImageTrackingProvider.isSupported {
            Task {
                try await session.run([imageInfo])
                for await update in imageInfo.anchorUpdates {
                    updateImage(update.anchor)
                }
            }
        }
    }

    func updateImage(_ anchor: ImageAnchor) {
        let entity = ModelEntity(mesh: .generateSphere(radius: 0.05)) // THIS IS WHERE THE CODE CRASHES
        if imageAnchors[anchor.id] == nil {
            rootEntity.addChild(entity)
            imageAnchors[anchor.id] = true
            print("Added new entity for anchor \(anchor.id)")
        }
        if anchor.isTracked {
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            print("Updated transform for anchor \(anchor.id)")
        }
    }
}
APP:
@main
struct MyApp: App {
    @State var session = ARKitSession()
    @State var immersionState: ImmersionStyle = .mixed
    private var viewModel = ImageTrackingModel()

    var body: some Scene {
        WindowGroup {
            ModeSelectView()
        }
        ImmersiveSpace(id: "appSpace") {
            ModeSelectView()
        }
        .immersionStyle(selection: $immersionState, in: .mixed)
    }
}
Content View:
RealityView { content in
    Task {
        viewModel.setupImageTracking()
    }
} // I'm seriously so clueless on how to use this view
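A hedged guess at the crash rather than a confirmed diagnosis: RealityKit entity and mesh creation is generally main-actor bound, and the anchorUpdates loop above runs on a background task. One thing to try is hopping to the main actor before touching RealityKit:
func setupImageTracking() {
    if ImageTrackingProvider.isSupported {
        Task {
            try await session.run([imageInfo])
            for await update in imageInfo.anchorUpdates {
                // Hop to the main actor before creating/updating RealityKit entities.
                await MainActor.run {
                    updateImage(update.anchor)
                }
            }
        }
    }
}
Separately, note that setupImageTracking() is called both from init() and again from the RealityView task, so the session may end up being run twice; guarding against the second call is probably worthwhile.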
Post not yet marked as solved
In visionOS mixed immersion mode, I place a virtual object on the floor and a chair in front of it, but the chair does not occlude the virtual object, which makes the effect unrealistic. How can chairs and other real-world objects cover virtual objects?
I was executing some code from the sample Incorporating real-world surroundings in an immersive experience:
func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.components.set(InputTargetComponent())
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { continue }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        }
    }
}
I would like to toggle the occlusion mesh available in the developer tools, but programmatically. I would like a button that activates and deactivates it.
I was checking .showSceneUnderstanding, but it does not seem to work in visionOS: I get the error 'ARView' is unavailable in visionOS when I try what is shown in Visualizing and Interacting with a Reconstructed Scene.
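A hedged sketch of one programmatic direction, not a confirmed replacement for the debug option: render the scene-reconstruction meshes with OcclusionMaterial so real-world geometry hides virtual content behind it. The makeMeshResource(from:) helper is hypothetical (building a MeshResource from MeshAnchor.Geometry vertex data is omitted here), and meshEntities is the dictionary from the sample above.
// Toggle occlusion by adding/removing a ModelComponent with OcclusionMaterial.
func setOcclusion(_ enabled: Bool, anchors: [MeshAnchor]) async {
    for anchor in anchors {
        guard let entity = meshEntities[anchor.id] else { continue }
        if enabled {
            // Hypothetical helper: converts MeshAnchor.Geometry into a MeshResource.
            if let mesh = try? await makeMeshResource(from: anchor.geometry) {
                // OcclusionMaterial is invisible but hides content rendered behind it.
                entity.components.set(ModelComponent(mesh: mesh, materials: [OcclusionMaterial()]))
            }
        } else {
            entity.components.remove(ModelComponent.self)
        }
    }
}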