Apple Silicon


Build apps, libraries, frameworks, plug-ins, and other executable code that run natively on Apple silicon.

Apple Silicon Documentation

Posts under Apple Silicon tag

69 Posts
Post not yet marked as solved
2 Replies
1.1k Views
Users can run our apps on Macs with Apple Silicon via the "iPad Apps on Mac" feature. The apps use PHPhotoLibrary.requestAuthorization(for: .addOnly, handler: callback) to request write-only access to the user's Photo Library during image export. This works as intended on macOS, but a huge problem arises when the user denies access (by accident or intentionally) and later decides that they want us to add their image to Photos: There is no way to grant this permission again. In System Preferences → Privacy & Security → Photos, the app is just not listed – in fact, none of the "iPad Apps on Mac" apps appear here. Not even tccutil reset all my.bundle.id works. It just reports tccutil: Failed to reset all approval status for my.bundle.id. Uninstalling, restarting the Mac, and reinstalling the app also doesn't work. The system seems to remember the initial decision. Is this an oversight in the integration of those apps with macOS, or are we missing something fundamental here? Is there maybe a way to prompt the user again?
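A minimal Swift sketch of the authorization flow being described (the helper name ensureAddOnlyAccess is illustrative, not from the post): the system only shows the permission prompt while the status is .notDetermined, so once the user has denied access the request completes immediately with .denied and nothing in the app can trigger a second prompt.

import Photos

func ensureAddOnlyAccess(completion: @escaping (Bool) -> Void) {
    // Check the current add-only authorization status first.
    switch PHPhotoLibrary.authorizationStatus(for: .addOnly) {
    case .authorized, .limited:
        completion(true)
    case .notDetermined:
        // Only .notDetermined produces the system prompt.
        PHPhotoLibrary.requestAuthorization(for: .addOnly) { newStatus in
            completion(newStatus == .authorized || newStatus == .limited)
        }
    default:
        // .denied / .restricted: the system will not prompt again from here;
        // the user has to change the setting outside the app.
        completion(false)
    }
}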
Posted Last updated
.
Post not yet marked as solved
0 Replies
350 Views
Hi, how can I delete macOS versions 1.1 and 1.2 (both compiled for macOS, NOT iOS-based)? I want to submit my iOS version for macOS devices (Apple Silicon only, ignoring Intel ones). But in App Store Connect, in the Pricing & Availability section, there is a section called Apple Silicon Mac Availability. When I click the checkbox "Make this app available" and try to save, the save does not happen (the button stays blue), and if I try to go to another section, it tells me I have to save or my changes will be discarded (which confirms my intent is not saved). So I assume I have to delete the app first. This link https://developer.apple.com/help/app-store-connect/create-an-app-record/remove-an-app/ says that all versions (so in my case, even the current iOS app) must not be Ready for Sale. So is the only solution to remove the app from all App Stores and fill everything in again?
Posted
by Bulltech.
Last updated
.
Post not yet marked as solved
3 Replies
846 Views
I just transitioned to an M2-based Mac and recompiled some of my previous programs. However, I am running into execution problems: my code is not able to find libSystem.B.dylib. I am running an Apple M2 Max, OS 13.5.2 (22G91). I installed Xcode and the command line tools as normal and installed gcc/gfortran using Homebrew. The resulting fault text is below:
dyld[13777]: dyld cache '(null)' not loaded: syscall to map cache into shared region failed
dyld[13777]: Library not loaded: /usr/lib/libSystem.B.dylib
Referenced from: /Users/gamalakabani/Applications/TALYS_CODE/talys/bin/talys
Reason: tried: '/usr/lib/libSystem.B.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/lib/libSystem.B.dylib' (no such file), '/usr/lib/libSystem.B.dylib' (no such file, no dyld cache), '/usr/local/lib/libSystem.B.dylib' (no such file)
./verify: line 12: 13777 Abort trap: 6 $talys < talys.inp > talys.out
Is this an issue with Homebrew gcc? Any help would be appreciated. Thanks
Posted
by gakabani.
Last updated
.
Post marked as solved
1 Reply
577 Views
Hi all, I'm working on migrating a legacy codebase to run natively on Apple Silicon Macs. The project builds and runs fine under Rosetta, but any time I try to build it for Apple Silicon I get build errors from the Carbon API references. Is it possible to resolve these issues and get the Carbon code to build natively for Apple Silicon, or will I have to rewrite the offending pieces of code? Thanks!
Posted Last updated
.
Post not yet marked as solved
5 Replies
6.0k Views
I'm now running TensorFlow models on my MacBook Air (M1, 2020), but I can't find a way to monitor the usage of the 16 Neural Engine cores to fine-tune my ML tasks. Activity Monitor only reports CPU% and GPU%, and I can't find any APIs in the Mach include files of the macOS 11.1 SDK, or any documentation, that would let me put something together from scratch in C. Could anyone point me in the right direction for getting hold of an API for Neural Engine usage? Any indicator I could grab would be a start. It looks like this has been omitted from all SDK documentation and general userland; I've only found a ledger_tag_neural_footprint attribute, which looks memory-related, and that's it.
Posted
by rgolive.
Last updated
.
Post not yet marked as solved
0 Replies
640 Views
I am trying to install Python from source according to the README using:
./configure
make <-- Error happens here
make test
sudo make altinstall
However, I cannot complete the make command, since it fails with:
Undefined symbols for architecture arm64:
"_libintl_bindtextdomain", referenced from: __locale_bindtextdomain in _localemodule.o
"_libintl_dcgettext", referenced from: __locale_dcgettext in _localemodule.o
"_libintl_dgettext", referenced from: __locale_dgettext in _localemodule.o
"_libintl_gettext", referenced from: __locale_gettext in _localemodule.o
"_libintl_setlocale", referenced from: __locale_setlocale in _localemodule.o, __locale_localeconv in _localemodule.o
"_libintl_textdomain", referenced from: __locale_textdomain in _localemodule.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [Programs/_freeze_module] Error 1
It looks like make is somehow using the wrong architecture; I just don't know why. Does anyone have an idea?
Posted
by gernophil.
Last updated
.
Post not yet marked as solved
5 Replies
1.8k Views
We recently started working on getting an iOS app to work on Macs with Apple Silicon as a "Designed for iPhone" app and are having issues with speech synthesis. Specifically, voices returned by AVSpeechSynthesisVoice.speechVoices() do not all work on the Mac. When we build an utterance and attempt to speak, the synthesizer falls back on a default voice and says some very odd text about voice parameters (that is not in the utterance speech text) before it does say the intended speech. Here is some sample code to set up the utterance and speak:
func speak(_ text: String, _ settings: AppSettings) {
    let utterance = AVSpeechUtterance(string: text)
    if let voice = AVSpeechSynthesisVoice(identifier: settings.selectedVoiceIdentifier) {
        utterance.voice = voice
        print("speak: voice assigned \(voice.audioFileSettings)")
    } else {
        print("speak: voice error")
    }
    utterance.rate = settings.speechRate
    utterance.pitchMultiplier = settings.speechPitch
    do {
        let audioSession = AVAudioSession.sharedInstance()
        try audioSession.setCategory(.playback, mode: .default, options: .duckOthers)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        self.synthesizer.speak(utterance)
        return
    } catch let error {
        print("speak: Error setting up AVAudioSession: \(error.localizedDescription)")
    }
}
When running the app on the Mac, this is the kind of error we get with "com.apple.eloquence.en-US.Rocko" as the selectedVoiceIdentifier:
speak: voice assgined [:]
2023-05-29 18:00:14.245513-0700 A.I.[9244:240554] [aqme] AQMEIO_HAL.cpp:742 kAudioDevicePropertyMute returned err 2003332927
2023-05-29 18:00:14.410477-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.412837-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.413774-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.414661-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.415544-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.416384-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.416804-0700 A.I.[9244:240554] [AXTTSCommon] Audio Unit failed to start after 5 attempts.
2023-05-29 18:00:14.416974-0700 A.I.[9244:240554] [AXTTSCommon] VoiceProvider: Could not start synthesis for request SSML Length: 140, Voice: [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null), converted from tts request [TTSSpeechRequest 0x600002c29590] <speak><voice name="com.apple.eloquence.en-US.Rocko">How much wood would a woodchuck chuck if a wood chuck could chuck wood?</voice></speak> language: en-US footprint: premium rate: 0.500000 pitch: 1.000000 volume: 1.000000
2023-05-29 18:00:14.428421-0700 A.I.[9244:240360] [VOTSpeech] Failed to speak request with error: Error Domain=TTSErrorDomain Code=-4010 "(null)". Attempting to speak again with fallback identifier: com.apple.voice.compact.en-US.Samantha
When we run AVSpeechSynthesisVoice.speechVoices(), "com.apple.eloquence.en-US.Rocko" is absolutely in the list but fails to speak properly. Notice that the line:
print("speak: voice assigned \(voice.audioFileSettings)")
shows:
speak: voice assigned [:]
An empty .audioFileSettings seems to be a common factor for the voices that do not work properly on the Mac. For voices that do work, we see this kind of output and values in the .audioFileSettings:
speak: voice assigned ["AVFormatIDKey": 1819304813, "AVLinearPCMBitDepthKey": 16, "AVLinearPCMIsBigEndianKey": 0, "AVLinearPCMIsFloatKey": 0, "AVSampleRateKey": 22050, "AVLinearPCMIsNonInterleaved": 0, "AVNumberOfChannelsKey": 1]
So we added a function to check the .audioFileSettings for each voice returned by AVSpeechSynthesisVoice.speechVoices():
//The voices are set in init():
var voices = AVSpeechSynthesisVoice.speechVoices()
...
func checkVoices() {
    DispatchQueue.global().async { [weak self] in
        guard let self = self else { return }
        let checkedVoices = self.voices.map { ($0.0, $0.0.audioFileSettings.count) }
        DispatchQueue.main.async {
            self.voices = checkedVoices
        }
    }
}
That looks simple enough, and it does work to identify which voices have no data in their .audioFileSettings. But we have to run it asynchronously, because on a real iPhone it takes more than 9 seconds and produces a tremendous amount of error spew to the console:
2023-06-02 10:56:59.805910-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:56:59.971435-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:57:00.122976-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:57:00.144430-0700 A.I.[17186:910116] [AXTTSCommon] MauiVocalizer: 11006 (Can't compile rule): regularExpression=\Oviedo(?=, (\x1b\\pause=\d+\\)?Florida)\b, message=unrecognized character follows \, characterPosition=1
2023-06-02 10:57:00.147993-0700 A.I.[17186:910116] [AXTTSCommon] MauiVocalizer: 16038 (Resource load failed): component=ttt/re, uri=, contentType=application/x-vocalizer-rettt+text, lhError=88602000
2023-06-02 10:57:00.148036-0700 A.I.[17186:910116] [AXTTSCommon] Error loading rules: 2147483648
... This goes on and on and on ...
There must be a better way?
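A hedged workaround sketch built on the observation above (the helper names are illustrative, not from the post): filter the voice list down to entries whose audioFileSettings dictionary is non-empty, and do that work once, off the main thread, since touching audioFileSettings for every voice is what takes the nine-plus seconds.

import AVFoundation

// Keep only voices whose audioFileSettings is non-empty, which the post above
// reports correlates with voices that actually speak on the Mac.
func usableVoices() -> [AVSpeechSynthesisVoice] {
    AVSpeechSynthesisVoice.speechVoices().filter { !$0.audioFileSettings.isEmpty }
}

// Reading audioFileSettings for every voice is slow, so do it once off the
// main thread and hand the filtered list back to the UI.
func loadUsableVoices(completion: @escaping ([AVSpeechSynthesisVoice]) -> Void) {
    DispatchQueue.global(qos: .utility).async {
        let voices = usableVoices()
        DispatchQueue.main.async { completion(voices) }
    }
}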
Posted Last updated
.
Post not yet marked as solved
1 Reply
1.7k Views
I just recently saw a message in the Unity forums, from a Unity staff member, saying that Apple requires an Apple Silicon Mac (M1, M2) in order to build apps for Apple Vision Pro. This confused me, since the simulator works just fine on my Intel Mac. Is there any official statement from Apple on this? It would be weird to buy a new Mac just because of this.
Posted
by waldgeist.
Last updated
.
Post not yet marked as solved
1 Reply
1.7k Views
Hi all, I would like to know whether there are any C APIs to control the Floating-Point Control Register (FPCR) on Apple Silicon. The ARM documentation does not show any C APIs for doing this; the only example code looks like VHDL, so I was wondering if any developers here knew of any. Thanks
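A hedged note while waiting for a better answer: the rounding-mode and exception-flag portions of FPCR are reachable through the standard C fenv.h interface, which is present on Apple Silicon and callable from Swift via Darwin. The sketch below only covers the rounding mode; touching other FPCR bits would need inline assembly (mrs/msr) in a C file rather than a public API.

import Darwin

// Sketch: change the rounding mode via C99 fenv.h, which the toolchain
// maps onto the AArch64 FPCR rounding-mode bits.
let previousMode = fegetround()          // remember the current rounding mode
if fesetround(FE_TOWARDZERO) != 0 {      // switch to round-toward-zero
    print("fesetround failed")
}
// ... do the floating-point work that needs the altered rounding mode ...
fesetround(previousMode)                 // restore the original mode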
Posted
by jamescjoy.
Last updated
.
Post marked as solved
3 Replies
1.2k Views
Hi. Sorry if this question has been answered in another post; if it has, I can't find it. My device is a MacBook Pro 16-inch, M1, 2021. I tried to create a VM using this guide from Apple. I followed the guide and used a Debian image. Everything worked fine until the machine appeared stuck at some point during the installation: I chose my language, then got some other prompt asking me to install something, but I can't remember precisely the step at which I thought it had frozen (I think it was the GNOME install). Because the machine was not responding for several minutes (I might have been too hurried), I quit the process by simply clicking the Quit button in the VM window. The problem is that from that point onward, I can't load any VM anymore. The build is successful in Xcode and the machine starts, but it immediately quits with this response in the Xcode logs:
Virtual machine successfully started.
Guest did stop virtual machine.
2023-02-02 22:22:45.413600+0100 GUILinux[22984:380971] [client] No error handler for XPC error: Connection invalid
I just can't understand why. I tried to delete everything and download the guide again, but it doesn't work. I will add that it's my first time using Xcode and I might have missed something obvious.
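A hedged diagnostic sketch, assuming the project mirrors Apple's GUI Linux sample (a VZVirtualMachine created and started on a dedicated queue): attaching a VZVirtualMachineDelegate and logging didStopWithError usually gives more detail than the generic "Guest did stop virtual machine" line. If the interrupted Debian install left a half-written disk image behind, recreating the VM's on-disk artifacts (the disk image and machine identifier the sample saves) is what forces a fresh installation, not re-downloading the guide.

import Virtualization

final class VMObserver: NSObject, VZVirtualMachineDelegate {
    // Called when the guest OS shuts the virtual machine down by itself.
    func guestDidStop(_ virtualMachine: VZVirtualMachine) {
        print("Guest stopped the virtual machine.")
    }

    // Called when the virtual machine stops because of a host-side error.
    func virtualMachine(_ virtualMachine: VZVirtualMachine, didStopWithError error: Error) {
        print("VM stopped with error: \(error)")
    }
}

// Usage, on the same queue the VZVirtualMachine was created on:
// let observer = VMObserver()
// vm.delegate = observer
// vm.start { result in print("start:", result) }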
Posted
by Frikilax.
Last updated
.
Post marked as solved
3 Replies
5.5k Views
M1 Macs can run iOS applications. We have an iOS application that runs a fullscreen Metal game. The game can also run across all desktop platforms via Steam. In addition to Steam, we would like to make it available through the App Store on macOS. We'd like to use our iOS builds for this so that the Apple payment (micro-transactions) and sign-in processes can be reused. While the app runs on macOS, it runs in a small iPad-shaped window that cannot be resized. We do not want to add iPad multitasking support (portrait orientation is not viable), but we would like the window on macOS to be expandable to full screen. Currently there is an option to make it full screen, but the Metal view (MTKView) delegate does not receive a drawableSizeWillChange event for this, meaning the new resolution of the window cannot be received. Is there another method of retrieving a window-size-change event in this context? What is the recommended way of enabling window resizing on macOS but not iPad for a single iOS app?
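One possible workaround, sketched under the assumption that the MTKView is owned by a UIViewController and that the app drives the drawable size itself (autoResizeDrawable = false); the controller and helper names are illustrative. UIKit still delivers viewWillTransition(to:with:) and layout passes when the window is resized or made full screen on the Mac, so the drawable size can be recomputed there even if the MTKView delegate callback never fires.

import UIKit
import MetalKit

final class GameViewController: UIViewController {
    var mtkView: MTKView { view as! MTKView }

    override func viewDidLoad() {
        super.viewDidLoad()
        // We manage the drawable size manually, so stop MTKView from doing it.
        mtkView.autoResizeDrawable = false
    }

    // Fired when the hosting window changes size (including full screen on macOS).
    override func viewWillTransition(to size: CGSize,
                                     with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
        updateDrawableSize(for: size)
    }

    // Layout passes also reflect the final window geometry.
    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        updateDrawableSize(for: view.bounds.size)
    }

    private func updateDrawableSize(for pointSize: CGSize) {
        let scale = view.window?.screen.scale ?? UIScreen.main.scale
        let newSize = CGSize(width: pointSize.width * scale,
                             height: pointSize.height * scale)
        guard mtkView.drawableSize != newSize else { return }
        mtkView.drawableSize = newSize
        // Notify the renderer here, the same way drawableSizeWillChange would.
    }
}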
Posted Last updated
.
Post not yet marked as solved
1 Reply
418 Views
I have the higher end M1 Mac Studio, and I have had a lot of success with Metal pipelines. However, I tried to compile a compute pipeline that uses the bfloat type and it seems to have no idea what that is. Error: program_source:10:55: error: unknown type name 'bfloat'; did you mean 'float'? Is there an OS update that is necessary for this support?
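For what it's worth, bfloat appears to have been introduced with Metal Shading Language 3.1 (the macOS 14 / Xcode 15 toolchain), so an OS and SDK update is likely the missing piece. Below is a hedged sketch of checking this at runtime by explicitly requesting MSL 3.1 when compiling kernel source; the kernel itself is just a stand-in.

import Metal

let device = MTLCreateSystemDefaultDevice()!
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void addOne(device bfloat *data [[buffer(0)]],
                   uint id [[thread_position_in_grid]]) {
    data[id] = data[id] + bfloat(1.0f);
}
"""

let options = MTLCompileOptions()
// .version3_1 is only available when building against a macOS 14 (or newer) SDK.
options.languageVersion = .version3_1

do {
    let library = try device.makeLibrary(source: source, options: options)
    print("bfloat compiled fine:", library.functionNames)
} catch {
    print("bfloat unavailable on this OS/toolchain:", error)
}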
Posted Last updated
.
Post not yet marked as solved
0 Replies
506 Views
Hi everyone, I'm trying to test some functionality of jax-metal and got this error. Any help please?
import jax
import jax.numpy as jnp
import numpy as np

def f(x):
    y1 = x + x*x + 3
    y2 = x*x + x*x.T
    return y1*y2

x = np.random.randn(3000, 3000).astype('float32')
jax_x_gpu = jax.device_put(jnp.array(x), jax.devices('METAL')[0])
jax_x_cpu = jax.device_put(jnp.array(x), jax.devices('cpu')[0])
jax_f_gpu = jax.jit(f, backend='METAL')
jax_f_gpu(jax_x_gpu)
---------------------------------------------------------------------------
XlaRuntimeError                           Traceback (most recent call last)
Cell In[1], line 17
     13 jax_x_cpu = jax.device_put(jnp.array(x), jax.devices('cpu')[0])
     15 jax_f_gpu = jax.jit(f, backend='METAL')
---> 17 jax_f_gpu(jax_x_gpu)
[... skipping hidden 5 frame]
File ~/.virtualenvs/jax-metal/lib/python3.11/site-packages/jax/_src/pjit.py:817, in _create_sharding_with_device_backend(device, backend)
    814 elif backend is not None:
    815     assert device is None
    816 out = SingleDeviceSharding(
--> 817     xb.get_backend(backend).get_default_device_assignment(1)[0])
    818 return out
XlaRuntimeError: UNIMPLEMENTED: DefaultDeviceAssignment not supported for Metal Client.
Posted Last updated
.
Post not yet marked as solved
0 Replies
540 Views
We use an in-house OpenGL app to provide the out-the-window visuals for our flight simulators. The app is cross-platform, but until now the Mac version was only used by desktop researchers, not in our primary sim labs. Now we are attempting to replace some Windows boxes with Mac Studios. We can easily maintain a high frame rate, and visual quality is excellent, but we are finding the graphics stutter a bit during high yaw rates (which quickly force new assets into view). I've eliminated unnecessary processes, tried raising my priority via pthread_set_qos_class_self_np() or thread_policy_set(), and reduced texture quality, all of which helped but didn't eliminate the problem. For background, we are using framebuffers, we have a very large texture database (90 GB), and the render code runs in the main thread (not a secondary thread). What might I be missing?
Posted
by Jeff Ray.
Last updated
.
Post not yet marked as solved
0 Replies
587 Views
I am in the process of developing a matrix-vector multiplication kernel. While conducting performance evaluations, I've noticed that on M1/M1 Pro/M1 Max, the kernel demonstrates an impressive memory bandwidth utilization of around 90%. However, when executed on the M1 Ultra/M2 Ultra, this figure drops to approximately 65%. My suspicion is that this discrepancy is attributed to the dual-die architecture of the M1 Ultra/M2 Ultra. It's plausible that the necessary data might be stored within the L2 cache of the alternate die. Could you kindly provide any insights or recommendations for mitigating the occurrence of on-die L2 cache misses on the Ultra chips? Additionally, I would greatly appreciate any general advice aimed at enhancing memory load speeds on these particular chips.
Posted
by lshzh.
Last updated
.
Post not yet marked as solved
0 Replies
425 Views
I've been trying to get the bash/script version of DeepFaceLab to work with Apple Silicon Macs, but this was originally a Windows project that even now has non-existent support for macOS/Apple Silicon. I am thinking of converting everything into a native macOS app using Swift, specifically optimized for Apple Silicon GPUs. Here's what I got from ChatGPT. Any help/advice on how to do this would be greatly appreciated. I don't have any Swift programming experience, but I have experience with some coding and can generally figure things out. I know that this is probably not feasible for a single individual with little programming experience, but I wanted to throw this out there to see what others think. Thank you.
Here's a high-level overview of the steps involved in porting DeepFaceLab to Swift with a graphical UI:
Understand DeepFaceLab: Thoroughly study the DeepFaceLab project, its Python scripts, and the overall architecture to grasp its functionalities and dependencies.
Choose a Swift Framework: Decide on the UI framework you want to use for the macOS app. SwiftUI is Apple's latest UI framework that works across all Apple platforms, including macOS. Alternatively, you can use AppKit for a more traditional approach.
Rewrite Python to Swift: Convert the Python code from DeepFaceLab into Swift. You'll need to rewrite all the image processing, deep learning, and video manipulation code in Swift, potentially using third-party Swift libraries or native macOS frameworks.
Deep Learning Integration: Replace the Python-based deep learning library used in DeepFaceLab with an appropriate Swift-compatible deep learning framework. TensorFlow and PyTorch both offer Swift APIs, but you may need to adapt the specific model implementation to Swift.
Image Processing: Find equivalent Swift libraries or frameworks for the image processing tasks used in DeepFaceLab.
UI Development: Design and implement the graphical user interface using SwiftUI or AppKit. You'll need to create views, controls, and navigation elements to interact with the underlying Swift code.
Integration: Connect the Swift code with the UI components, ensuring that actions in the GUI trigger the appropriate Swift functions and display results back to the user.
Testing and Debugging: Rigorously test the Swift application and debug any issues that arise during the porting process.
Optimization: Ensure that the Swift app performs efficiently and effectively on macOS devices.
Posted Last updated
.
Post marked as solved
3 Replies
8.8k Views
For all my iOS projects, only simulators running iOS 16.4 are listed as run destinations ... although I've installed the iOS 13 simulator and corresponding entries are listed under "Devices & Simulators". I've toggled "Show run destination" from "Automatic" to "Always" to no avail. The deployment target is e.g. iOS 13, and I'm running Xcode 14.3 (14E222b) on a 14" MBP with Apple Silicon. As a current bypass I'm booting the simulator manually and installing apps with "xcrun simctl install booted APP.app" to allow some basic testing, but that's no sustainable solution. Any help is much appreciated! Mattes
Posted
by MyMattes.
Last updated
.
Post not yet marked as solved
0 Replies
566 Views
I am developing a multi-threaded instrument plug-in for Audio Unit v2. This topic concerns a software synthesizer that has been proven to work on Intel Macs and has been converted to run natively on Apple Silicon. I have a problem when I use Logic Pro on Apple Silicon Macs. Steps to reproduce: plug the software synthesizer into an instrument track; make sure no track exists other than the one you created; put it in recording mode. When the above steps are followed, the performance meter in Logic Pro shows the load concentrated on one specific core, far exceeding the total load when the work is spread out. This load occurs continuously and is resolved when another track is created and selected. That the load concentrates on a particular core is understandable as a design decision; however, the magnitude of the load is abnormal. In fact, when the peak exceeds 100%, it leads to audible noise. Also, in this case, the Activity Monitor included with macOS does not show any increase in the usage of a specific CPU core, and the Time Profiler included with Xcode did not identify any location that took a large amount of time. We have examined various experimental programs and found a positive correlation between the frequency of thread switches in the multi-threaded areas and the peak of this CPU spike. A mutex is used for the thread switch. In summary, we speculate that performance seems to be worse when multi-threaded processing is done on a single core. Is there any solution to this problem at the developer level, or at the Logic Pro customer level? Environment: MacBook Pro 16-inch 2021; CPU: Apple M1 Max; OS: macOS 12.6.3; memory: 32 GB; Logic Pro 10.7.9; built-in speaker; audio buffer size: 32 samples. (Screenshots attached: the performance meter before symptoms occurred, and the performance meter with symptoms under the recording condition.)
Posted
by makotom.
Last updated
.