Apple Silicon


Build apps, libraries, frameworks, plug-ins, and other executable code that run natively on Apple silicon.


Posts under Apple Silicon tag

69 Posts
Post not yet marked as solved
5 Replies
1.8k Views
We recently started working on getting an iOS app to work on Macs with Apple Silicon as a "Designed for iPhone" app and are having issues with speech synthesis. Specifically, voices returned by AVSpeechSynthesisVoice.speechVoices() do not all work on the Mac. When we build an utterance and attempt to speak, the synthesizer falls back on a default voice and says some very odd text about voice parameters (that is not in the utterance speech text) before it does say the intended speech.

Here is some sample code to set up the utterance and speak:

```swift
func speak(_ text: String, _ settings: AppSettings) {
    let utterance = AVSpeechUtterance(string: text)
    if let voice = AVSpeechSynthesisVoice(identifier: settings.selectedVoiceIdentifier) {
        utterance.voice = voice
        print("speak: voice assigned \(voice.audioFileSettings)")
    } else {
        print("speak: voice error")
    }
    utterance.rate = settings.speechRate
    utterance.pitchMultiplier = settings.speechPitch
    do {
        let audioSession = AVAudioSession.sharedInstance()
        try audioSession.setCategory(.playback, mode: .default, options: .duckOthers)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        self.synthesizer.speak(utterance)
        return
    } catch let error {
        print("speak: Error setting up AVAudioSession: \(error.localizedDescription)")
    }
}
```

When running the app on the Mac, this is the kind of error we get with "com.apple.eloquence.en-US.Rocko" as the selectedVoiceIdentifier:

```
speak: voice assigned [:]
2023-05-29 18:00:14.245513-0700 A.I.[9244:240554] [aqme] AQMEIO_HAL.cpp:742  kAudioDevicePropertyMute returned err 2003332927
2023-05-29 18:00:14.410477-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
(the same "Could not retrieve voice" line repeats five more times with successive timestamps)
2023-05-29 18:00:14.416804-0700 A.I.[9244:240554] [AXTTSCommon] Audio Unit failed to start after 5 attempts.
2023-05-29 18:00:14.416974-0700 A.I.[9244:240554] [AXTTSCommon] VoiceProvider: Could not start synthesis for request SSML Length: 140, Voice: [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null), converted from tts request [TTSSpeechRequest 0x600002c29590] <speak><voice name="com.apple.eloquence.en-US.Rocko">How much wood would a woodchuck chuck if a wood chuck could chuck wood?</voice></speak> language: en-US footprint: premium rate: 0.500000 pitch: 1.000000 volume: 1.000000
2023-05-29 18:00:14.428421-0700 A.I.[9244:240360] [VOTSpeech] Failed to speak request with error: Error Domain=TTSErrorDomain Code=-4010 "(null)". Attempting to speak again with fallback identifier: com.apple.voice.compact.en-US.Samantha
```

When we run AVSpeechSynthesisVoice.speechVoices(), "com.apple.eloquence.en-US.Rocko" is absolutely in the list, but it fails to speak properly. Notice that the line:

```swift
print("speak: voice assigned \(voice.audioFileSettings)")
```

shows:

```
speak: voice assigned [:]
```

An empty .audioFileSettings seems to be a common factor for the voices that do not work properly on the Mac. For voices that do work, we see this kind of output and values in the .audioFileSettings:

```
speak: voice assigned ["AVFormatIDKey": 1819304813, "AVLinearPCMBitDepthKey": 16, "AVLinearPCMIsBigEndianKey": 0, "AVLinearPCMIsFloatKey": 0, "AVSampleRateKey": 22050, "AVLinearPCMIsNonInterleaved": 0, "AVNumberOfChannelsKey": 1]
```

So we added a function to check the .audioFileSettings for each voice returned by AVSpeechSynthesisVoice.speechVoices():

```swift
// The voices are set in init(), paired with a count so the map below typechecks:
// var voices: [(AVSpeechSynthesisVoice, Int)] = AVSpeechSynthesisVoice.speechVoices().map { ($0, 0) }

func checkVoices() {
    DispatchQueue.global().async { [weak self] in
        guard let self = self else { return }
        // Pair each voice with the size of its audioFileSettings dictionary;
        // a count of 0 flags a voice that fails to speak on the Mac.
        let checkedVoices = self.voices.map { ($0.0, $0.0.audioFileSettings.count) }
        DispatchQueue.main.async {
            self.voices = checkedVoices
        }
    }
}
```

That looks simple enough, and it does work to identify which voices have no data in their .audioFileSettings. But we have to run it asynchronously, because on a real iPhone device it takes more than 9 seconds and produces a tremendous amount of error spew to the console:

```
2023-06-02 10:56:59.805910-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
(the same query failure repeats twice more)
2023-06-02 10:57:00.144430-0700 A.I.[17186:910116] [AXTTSCommon] MauiVocalizer: 11006 (Can't compile rule): regularExpression=\Oviedo(?=, (\x1b\\pause=\d+\\)?Florida)\b, message=unrecognized character follows \, characterPosition=1
2023-06-02 10:57:00.147993-0700 A.I.[17186:910116] [AXTTSCommon] MauiVocalizer: 16038 (Resource load failed): component=ttt/re, uri=, contentType=application/x-vocalizer-rettt+text, lhError=88602000
2023-06-02 10:57:00.148036-0700 A.I.[17186:910116] [AXTTSCommon] Error loading rules: 2147483648
... This goes on and on and on ...
```

There must be a better way?
Post not yet marked as solved
0 Replies
398 Views
I was helping a developer with a gnarly issue today and, as part of that, I had to explain how Apple silicon code accesses global variables. I've done this a few times now, so I figured I might as well write it up for all. If you have questions or comments, put them in a new thread and tag it with Debugging so that I see it.

Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"

Accessing Global Variables on Apple Silicon

Consider the -[WKWebView navigationDelegate] getter method. The code for that is available in Darwin:

```objc
- (id <WKNavigationDelegate>)navigationDelegate
{
    return _navigationState->navigationDelegate().autorelease();
}
```

This reads the _navigationState ivar, whose value is a C++ object pointer of type NavigationState, and calls the navigationDelegate() method on it. Finally, it calls the autorelease() method on the result.

Disassembling this on Intel you see this:

```
(lldb) disas -f
WebKit`-[WKWebView navigationDelegate]:
->  … <+0>:  pushq  %rbp
    … <+1>:  movq   %rsp, %rbp
    … <+4>:  pushq  %rbx
    … <+5>:  pushq  %rax
    … <+6>:  movq   0x875bd2(%rip), %rax   ; WKWebView._navigationState
    … <+13>: movq   (%rdi,%rax), %rsi
    … <+17>: leaq   -0x10(%rbp), %rbx
    … <+21>: movq   %rbx, %rdi
    … <+24>: callq  0x10adce41c            ; WebKit::NavigationState::navigationDelegate()
    … <+29>: movq   (%rbx), %rdi
    … <+32>: callq  0x10b37398e            ; symbol stub for: CFMakeCollectable
    … <+37>: movq   %rax, %rdi
    … <+40>: callq  0x10b37ac0c            ; symbol stub for: objc_autorelease
    … <+45>: addq   $0x8, %rsp
    … <+49>: popq   %rbx
    … <+50>: popq   %rbp
    … <+51>: retq
```

The code starting at +29 is the autorelease() stuff, so ignore that. Rather, focus on the code from +6 through to +24:

- At +6 it reads the WKWebView._navigationState global variable. The Objective-C runtime sets this up to be the offset from the start of the object to the _navigationState ivar.
- At +13 it reads the ivar itself.
- The remaining instructions set up the call to navigationDelegate().

The instruction at +6 is a PC-relative read (rip is the PC). This is well supported on 64-bit Intel [1], so it's only one instruction.

Now consider this same disassembly on Apple silicon:

```
(lldb) disas -f
WebKit`-[WKWebView navigationDelegate]:
->  … <+0>:  sub    sp, sp, #0x20
    … <+4>:  stp    x29, x30, [sp, #0x10]
    … <+8>:  add    x29, sp, #0x10
    … <+12>: adrp   x8, 2127
    … <+16>: ldrsw  x8, [x8, #0xc24]
    … <+20>: ldr    x0, [x0, x8]
    … <+24>: add    x8, sp, #0x8
    … <+28>: bl     0x10523b620            ; WebKit::NavigationState::navigationDelegate()
    … <+32>: ldr    x0, [sp, #0x8]
    … <+36>: bl     0x1057a9d8c            ; symbol stub for: CFMakeCollectable
    … <+40>: bl     0x1057b8270            ; symbol stub for: objc_autorelease
    … <+44>: ldp    x29, x30, [sp, #0x10]
    … <+48>: add    sp, sp, #0x20
    … <+52>: ret
```

Again, the stuff from +32 onwards is uninteresting. The instructions of interest run from +12 to +28. Specifically, the two instructions at +12 and +16 represent a PC-relative read. This requires two instructions because Apple silicon instructions are of a fixed width: there's not enough space in a 32-bit instruction to encode a large PC-relative offset, so it has to be split across two instructions.

The most interesting instruction is the one at +12, adrp. I'm not sure what this mnemonic is officially, but I always think of it as add relative to page. The instruction:

- Takes an immediate value
- Shifts it left by 12 bits
- Adds it to the PC
- Masks off the bottom 12 bits
- Puts that in the target register

The instruction after the adrp can vary.
In this case the goal is to load an int from a PC-relative address, so it's a load instruction, ldrsw. This specific syntax uses an immediate offset from a base register. This offset ‘fills in’ the bits ‘missing’ in the address calculated by the preceding adrp instruction.

Now, let's say that the adrp instruction was at PC 0x1051665c4. Here's how you calculate the value it generates:

```
(lldb) p/x ((0x1051665c4+(2127<<12))&~0x0fff)
(long) $10 = 0x00000001059b5000
```

Now add the immediate from the ldrsw to calculate the address used by that instruction:

```
(lldb) p/a 0x00000001059b5000+0xc24
(long) $11 = 0x00000001059b5c24 WebKit`WKWebView._navigationState
```

Finally, load an int value from that address:

```
(lldb) p *(int *)0x00000001059b5c24
(int) $13 = 400
```

And so the value of the WKWebView._navigationState global variable, which is the offset of the _navigationState ivar within the WKWebView object, is 400. And on with the debugging!

[1] Notably, it was very poorly supported on 32-bit Intel, but fortunately we don't care about that any more.
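(Editorial aside: if you want to replay that adrp arithmetic outside of lldb, here is a minimal Python sketch using the PC, immediate, and offset values from the example above.)

```python
# Reproduce the adrp + ldrsw address calculation from the disassembly.
pc = 0x1051665C4   # address of the adrp instruction
imm = 2127         # immediate operand of the adrp
offset = 0xC24     # immediate offset of the following ldrsw

# adrp: shift the immediate left by 12 bits, add it to the PC,
# and mask off the bottom 12 bits.
page = (pc + (imm << 12)) & ~0xFFF
assert page == 0x1059B5000

# ldrsw: its immediate offset fills in the low 12 bits.
addr = page + offset
assert addr == 0x1059B5C24   # WebKit`WKWebView._navigationState
print(hex(addr))
```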
Post not yet marked as solved
3 Replies
999 Views
I've written a native app for macOS on my MacBook Air (with the Apple M2 chip). Now I need to test it on an Intel-based CPU. When I build my app in Xcode, it is supposed to cover both ARM64 and x86-64 architectures in a single Mach-O binary, but when I send it to my customer, he tells me that the app works on Apple silicon but crashes on his Intel-based Mac. So I'm looking for a way to test-run my app on an Intel-based platform and see what is wrong there. (But I obviously don't want to buy a separate Mac just for that.) I know that one can use Azure to spin up a Windows or Linux VM and open it via a web browser, but it doesn't seem to support macOS. How can I run Intel-based macOS in a virtual environment? Or do you have any other suggestions?
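(Editorial aside: two checks worth running before buying hardware — a sketch assuming a standard universal build; adjust the binary path to your app. Rosetta 2 runs the x86_64 slice in translation, so it can surface many, though not necessarily all, Intel-only crashes.)

```sh
# Confirm the binary really is a universal (fat) Mach-O:
lipo -archs MyApp.app/Contents/MacOS/MyApp   # expect: x86_64 arm64

# Run the x86_64 slice under Rosetta 2 on an Apple silicon Mac:
arch -x86_64 MyApp.app/Contents/MacOS/MyApp
```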
Post not yet marked as solved
0 Replies
566 Views
I am developing a multi-threaded instrument plug-in for Audio Unit v2. This concerns a software synthesizer that is proven to work on Intel Macs and has been converted to run natively on Apple silicon. I see a problem when using Logic Pro on Apple silicon Macs:

- Insert the software synthesizer on an instrument track.
- Make sure no track other than the one you created exists.
- Put the track into recording mode.

When the above steps are followed, the performance meter in Logic Pro shows the load concentrated on one specific core, far exceeding the total load seen when the work is spread out. This load occurs continuously and goes away when another track is created and selected. That the load concentrates on a particular core is understandable as a specification; however, the magnitude of the load is abnormal. In fact, when the peak exceeds 100%, it produces audible noise. In this situation, the Activity Monitor included with macOS does not show any increase in the usage of a specific CPU core, and the Time Profiler included with Xcode did not identify any location that takes a large amount of time. Examining various experimental programs, we found a positive correlation between the frequency of thread switches in the multi-threaded code and the peak of this CPU spike. A mutex is used for the thread switches. In summary, we suspect that performance degrades when multi-threaded processing is forced onto a single core. Is there any solution to this problem, either at the developer level or at the customer level in Logic Pro?

Environment where the symptom occurred:
- MacBook Pro 16-inch 2021; CPU: Apple M1 Max; OS: macOS 12.6.3; Memory: 32 GB
- Logic Pro 10.7.9
- Built-in speaker; audio buffer size: 32 samples

(The original post attached two screenshots: the performance meter before the symptom occurred, and the performance meter with the symptom after enabling recording.)
Post not yet marked as solved
0 Replies
425 Views
I've been trying to get the bash/script version of DeepFaceLab to work on Apple silicon Macs, but this was originally a Windows project that even now has non-existent support for macOS/Apple silicon. I am thinking of converting everything into a native macOS app using Swift, specifically optimized for Apple silicon GPUs. Here's what I got from ChatGPT. Any help/advice on how to do this would be greatly appreciated. I don't have any Swift programming experience, but I have experience with some coding and can generally figure things out. I know that this is probably not feasible for a single individual with little programming experience, but I wanted to throw this out there to see what others think. Thank you.

Here's a high-level overview of the steps involved in porting DeepFaceLab to Swift with a graphical UI:

1. Understand DeepFaceLab: Thoroughly study the DeepFaceLab project, its Python scripts, and the overall architecture to grasp its functionalities and dependencies.
2. Choose a Swift framework: Decide on the UI framework you want to use for the macOS app. SwiftUI is Apple's latest UI framework that works across all Apple platforms, including macOS. Alternatively, you can use AppKit for a more traditional approach.
3. Rewrite Python in Swift: Convert the Python code from DeepFaceLab into Swift. You'll need to rewrite all the image processing, deep learning, and video manipulation code in Swift, potentially using third-party Swift libraries or native macOS frameworks.
4. Deep learning integration: Replace the Python-based deep learning library used in DeepFaceLab with an appropriate Swift-compatible deep learning framework. TensorFlow and PyTorch both offer Swift APIs, but you may need to adapt the specific model implementation to Swift.
5. Image processing: Find equivalent Swift libraries or frameworks for the image processing tasks used in DeepFaceLab.
6. UI development: Design and implement the graphical user interface using SwiftUI or AppKit. You'll need to create views, controls, and navigation elements to interact with the underlying Swift code.
7. Integration: Connect the Swift code with the UI components, ensuring that actions in the GUI trigger the appropriate Swift functions and display results back to the user.
8. Testing and debugging: Rigorously test the Swift application and debug any issues that arise during the porting process.
9. Optimization: Ensure that the Swift app performs efficiently and effectively on macOS devices.
Post not yet marked as solved
0 Replies
586 Views
I am in the process of developing a matrix-vector multiplication kernel. While conducting performance evaluations, I've noticed that on the M1/M1 Pro/M1 Max the kernel demonstrates an impressive memory bandwidth utilization of around 90%, but when executed on the M1 Ultra/M2 Ultra this figure drops to approximately 65%. My suspicion is that the discrepancy is attributable to the dual-die architecture of the Ultra chips: the necessary data may be stored in the L2 cache of the other die. Could you kindly provide any insights or recommendations for mitigating on-die L2 cache misses on the Ultra chips? Additionally, I would greatly appreciate any general advice for improving memory load speeds on these particular chips.
Post not yet marked as solved
0 Replies
540 Views
We use an in-house OpenGL app to provide the out-the-window visuals for our flight simulators. The app is cross-platform, but until now the Mac version was only used by desktop researchers, not in our primary sim labs. Now we are attempting to replace some Windows boxes with Mac Studios. We can easily maintain a high frame rate, and visual quality is excellent, but we are finding the graphics stutter a bit during high yaw rates (which quickly force new assets into view). I've eliminated unnecessary processes, tried raising my priority via pthread_set_qos_class_self_np() or thread_policy_set(), and reduced texture quality, all of which helped, but none of it eliminated the problem. For background: we are using framebuffers, we have a very large texture database (90 GB), and the render code runs in the main thread (not a secondary thread). What might I be missing?
Post not yet marked as solved
1 Reply
1.7k Views
I just recently saw a message in the Unity forums, by a Unity staff member, that Apple requires an Apple Silicon based Mac (M1, M2) in order to build apps for the Vision Pro glasses. This confused me since the simulator works just fine on my Intel Mac. Is there any official statement from Apple on this? It would be weird to buy a new Mac just because of this.
Post not yet marked as solved
0 Replies
506 Views
Hi everyone, I'm trying to test some functionality of jax-metal and got this error. Any help please?

```python
import jax
import jax.numpy as jnp
import numpy as np

def f(x):
    y1 = x + x*x + 3
    y2 = x*x + x*x.T
    return y1 * y2

x = np.random.randn(3000, 3000).astype('float32')
jax_x_gpu = jax.device_put(jnp.array(x), jax.devices('METAL')[0])
jax_x_cpu = jax.device_put(jnp.array(x), jax.devices('cpu')[0])
jax_f_gpu = jax.jit(f, backend='METAL')
jax_f_gpu(jax_x_gpu)
```

```
---------------------------------------------------------------------------
XlaRuntimeError                           Traceback (most recent call last)
Cell In[1], line 17
     13 jax_x_cpu = jax.device_put(jnp.array(x), jax.devices('cpu')[0])
     15 jax_f_gpu = jax.jit(f, backend='METAL')
---> 17 jax_f_gpu(jax_x_gpu)

    [... skipping hidden 5 frame]

File ~/.virtualenvs/jax-metal/lib/python3.11/site-packages/jax/_src/pjit.py:817, in _create_sharding_with_device_backend(device, backend)
    814 elif backend is not None:
    815     assert device is None
    816 out = SingleDeviceSharding(
--> 817     xb.get_backend(backend).get_default_device_assignment(1)[0])
    818 return out

XlaRuntimeError: UNIMPLEMENTED: DefaultDeviceAssignment not supported for Metal Client.
```
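(Editorial aside, a hedged observation rather than a documented jax-metal fix: the traceback shows the failure comes from passing backend='METAL' to jax.jit, which routes through get_default_device_assignment — the call the Metal client reports as unimplemented. Since jax_x_gpu was already committed to the Metal device with device_put, letting jit infer placement from its input may sidestep that path.)

```python
# Sketch: omit backend= and let the committed input drive device placement.
jax_f = jax.jit(f)
out = jax_f(jax_x_gpu)  # should run on the METAL device jax_x_gpu lives on
```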
Post not yet marked as solved
1 Reply
417 Views
I have the higher-end M1 Mac Studio, and I have had a lot of success with Metal pipelines. However, I tried to compile a compute pipeline that uses the bfloat type, and it seems to have no idea what that is. Error:

```
program_source:10:55: error: unknown type name 'bfloat'; did you mean 'float'?
```

Is there an OS update that is necessary for this support?
Post not yet marked as solved
0 Replies
639 Views
I am trying to install Python from source according to the README using:

```sh
./configure
make          # <-- Error happens here
make test
sudo make altinstall
```

However, I cannot complete the make command, since it fails with:

```
Undefined symbols for architecture arm64:
  "_libintl_bindtextdomain", referenced from:
      __locale_bindtextdomain in _localemodule.o
  "_libintl_dcgettext", referenced from:
      __locale_dcgettext in _localemodule.o
  "_libintl_dgettext", referenced from:
      __locale_dgettext in _localemodule.o
  "_libintl_gettext", referenced from:
      __locale_gettext in _localemodule.o
  "_libintl_setlocale", referenced from:
      __locale_setlocale in _localemodule.o
      __locale_localeconv in _localemodule.o
  "_libintl_textdomain", referenced from:
      __locale_textdomain in _localemodule.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [Programs/_freeze_module] Error 1
```

Looks like make is somehow using the wrong architecture. I just don't know why. Does anyone have an idea?
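(Editorial aside: the _libintl_* symbols belong to GNU gettext, so one plausible cause — an assumption; config.log would confirm it — is that configure found Homebrew's gettext headers without linking its library. Pointing the build at the Homebrew prefix is a common remedy.)

```sh
# Hypothetical fix: make both the gettext headers and the library visible.
GETTEXT_PREFIX="$(brew --prefix gettext)"
./configure CPPFLAGS="-I${GETTEXT_PREFIX}/include" LDFLAGS="-L${GETTEXT_PREFIX}/lib"
make
```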
Post marked as solved
1 Reply
577 Views
Hi all, I'm working on migrating a legacy codebase to run natively on Apple Silicon macOS. The project builds and runs fine on Rosetta, but anytime I try to build it for Apple Silicon I get build errors from referencing CarbonAPI. Is it possible to resolve these issues and get CarbonAPI to build natively for Apple Silicon, or will I have to rewrite the offending pieces of code? Thanks!
Post not yet marked as solved
3 Replies
846 Views
I just transitioned to an M2 (Apple silicon) Mac and compiled some of my previous programs. However, I am running into execution problems: my code is not able to find libSystem.B.dylib. I am running an Apple M2 Max, OS 13.5.2 (22G91). I installed Xcode and the command line utilities as normal and installed gcc/gfortran using Homebrew. The resulting fault text is below:

```
dyld[13777]: dyld cache '(null)' not loaded: syscall to map cache into shared region failed
dyld[13777]: Library not loaded: /usr/lib/libSystem.B.dylib
  Referenced from: /Users/gamalakabani/Applications/TALYS_CODE/talys/bin/talys
  Reason: tried: '/usr/lib/libSystem.B.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/lib/libSystem.B.dylib' (no such file), '/usr/lib/libSystem.B.dylib' (no such file, no dyld cache), '/usr/local/lib/libSystem.B.dylib' (no such file)
./verify: line 12: 13777 Abort trap: 6 $talys < talys.inp > talys.out
```

Is this an issue with Homebrew gcc? Any help will be appreciated. Thanks
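(Editorial aside, a guess rather than a diagnosis: mixed-architecture toolchains are a common source of dyld failures after migrating to Apple silicon, so a first check is what the binary and the compiler actually target.)

```sh
# Are the executable and the compiler arm64, x86_64, or universal?
file bin/talys
file "$(brew --prefix)/bin/gfortran"
```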
Post not yet marked as solved
0 Replies
350 Views
Hi,
How can I delete macOS versions 1.1 and 1.2 of my app (both compiled for macOS, NOT iOS-based)? In fact, I want to submit my iOS version for macOS devices (with Apple silicon, ignoring Intel ones). But in App Store Connect, in the Pricing & Availability section, there is this section: Apple Silicon Mac Availability. When I click the checkbox "Make this app available" and try to save, the save does not happen (the button stays blue), and if I try to go to another section, it tells me that I have to save, otherwise changes will be ignored (which confirms that my intent is not being saved). So I assume I have to delete the app first. This link https://developer.apple.com/help/app-store-connect/create-an-app-record/remove-an-app/ says that all versions (so in my case, even the current iOS app) must not be Ready for Sale. So is the only solution to remove the app from all App Stores and fill everything in again?
Post not yet marked as solved
0 Replies
535 Views
Hi there! I have a problem running UI tests on the simulator while using an M1 Mac. After running for a couple of minutes (usually 3 minutes is enough), tests begin to fail every ±60 s on XCUIApplication().launch() with:

```
Failed to terminate com.***.***:10552: Failed to terminate com.***.***:0
```

At this point, my simulator window does not even react to manual input. This issue has been happening since Xcode 13. Unit tests run completely fine, though. Also, the issue does not appear on an Intel-based Mac. Sadly, I haven't found any information on it; there are some posts on SO mentioning the same issue, and only one of those posts has an answer, but it suggests unchecking the 'Open using Rosetta' option for Xcode, which is not possible anymore. I would appreciate any advice, as I feel like I have tried everything already :(
Post not yet marked as solved
4 Replies
573 Views
In an old project that we just moved to an ARM Mac + Sonoma + Xcode 15 (fresh install), we get this link error:

```
'_yytext' from '.../Objects-normal/arm64/html.lexer.o' not 8-byte aligned, which cannot be encoded as a target of LDR/STR in '_PHPyylex' from '.../Objects-normal/arm64/php.lexer.o'
```

We don't get this error with Xcode 15 on an Intel Mac, nor with Xcode 14 on an ARM Mac. Does anyone know what we could do to fix that? Thanks!
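(Editorial aside: one workaround that has reportedly helped with this class of Xcode 15 link errors — hedged, on the assumption that the new linker rejects misaligned LDR/STR targets the old linker tolerated — is to fall back to the classic linker by adding a flag to Other Linker Flags, e.g. in an xcconfig:)

```
// Equivalent to Build Settings ▸ Other Linker Flags in the Xcode UI.
OTHER_LDFLAGS = $(inherited) -Wl,-ld_classic
```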