Integrate machine learning models into your app using Core ML.

Core ML Documentation

Posts under Core ML tag

107 Posts
Post not yet marked as solved
1 Replies
472 Views
I created a word-tagging model in Create ML and am trying to make predictions with it using the following code:

let text = "$30.00 7/1/2023"
let model = TaggingModel()
let input = TaggingModelInput(text: text)
guard let output = try? model.prediction(input: input) else {
    fatalError("Unexpected runtime error.")
}

However, the output splits "$" and "30.00" into separate tokens, as well as "7", "/", "1", "/", etc. Is there any way to make sure prices and dates get grouped together, and to simply separate tokens based on whitespace? Any help is appreciated!
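A possible post-processing workaround (a sketch only): assuming the generated TaggingModelOutput exposes parallel tokens and labels arrays (the property names in your generated interface may differ), you can walk the original string and merge consecutive model tokens that are not separated by whitespace, so "$30.00" and "7/1/2023" come back as single units.

import Foundation

/// Regroups fine-grained model tokens into whitespace-delimited chunks.
/// `tokens` and `labels` are assumed to be the parallel arrays returned by
/// the generated TaggingModelOutput (names are hypothetical).
func mergeTokensByWhitespace(text: String,
                             tokens: [String],
                             labels: [String]) -> [(token: String, label: String)] {
    var merged: [(token: String, label: String)] = []
    var searchStart = text.startIndex

    for (token, label) in zip(tokens, labels) {
        guard let range = text.range(of: token, range: searchStart..<text.endIndex) else { continue }
        // Whitespace (or the start of the string) before this token begins a new chunk.
        let startsNewChunk = range.lowerBound == text.startIndex
            || text[text.index(before: range.lowerBound)].isWhitespace
        if startsNewChunk || merged.isEmpty {
            merged.append((token, label))
        } else {
            // Same whitespace-delimited chunk: append the text, keep the first label.
            merged[merged.count - 1].token += token
        }
        searchStart = range.upperBound
    }
    return merged
}

// Example: with tokens ["$", "30.00", "7", "/", "1", "/", "2023"] this yields
// [("$30.00", ...), ("7/1/2023", ...)].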
Posted by esch. Last updated.
Post not yet marked as solved
0 Replies
426 Views
When the input size is 6,000,000 (600w), the operator runs on the ANE. But when the input size is 1,000,000 or 2,000,000 (100w or 200w), this operator can only run on the CPU. The data size has decreased, yet it does not run on the ANE. What is the reason for this, and how can it be avoided?
Posted by zhouzheng. Last updated.
Post not yet marked as solved
0 Replies
347 Views
I have converted a UIImage to an MLShapedArray, and by default this is in NCHW format. I need to permute it into NCWH to prepare it for an ML model. What is the recommended way to achieve this?
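One workable approach (a sketch, not necessarily the recommended API, since MLShapedArray has no dedicated transpose call that I know of): read the flat scalars and rebuild the array with the last two axes swapped. This assumes a Float32 array with shape [N, C, H, W].

import CoreML

/// Permutes an MLShapedArray from NCHW to NCWH by copying scalars.
/// Assumes a 4-dimensional array; for large tensors a vDSP-based
/// transpose would be faster, but this illustrates the index math.
func permuteNCHWtoNCWH(_ array: MLShapedArray<Float>) -> MLShapedArray<Float> {
    let shape = array.shape            // [N, C, H, W]
    let (n, c, h, w) = (shape[0], shape[1], shape[2], shape[3])
    let source = array.scalars         // flat, row-major NCHW
    var destination = [Float](repeating: 0, count: source.count)

    for ni in 0..<n {
        for ci in 0..<c {
            for hi in 0..<h {
                for wi in 0..<w {
                    let src = ((ni * c + ci) * h + hi) * w + wi   // NCHW index
                    let dst = ((ni * c + ci) * w + wi) * h + hi   // NCWH index
                    destination[dst] = source[src]
                }
            }
        }
    }
    return MLShapedArray(scalars: destination, shape: [n, c, w, h])
}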
Posted by yetanadur. Last updated.
Post not yet marked as solved
1 Replies
591 Views
Hello, I'm trying to train an MLImageClassifier in Swift using the function MLImageClassifier.train. The dataset size doesn't matter (I have the same problem with a smaller one): when training reaches a completedUnitCount of 9 out of 10, even though CPU usage is still high, it seems to hit a soft lock and never brings the model to completion (or to an error). The dataset is made of JPEG images, and training the same data in the Create ML app shows no problem. Is there any known issue with the Create ML training APIs around step 9 of the process? Is there any information about this part of the training job? Thank you
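If it helps with debugging, the asynchronous Create ML training API returns a job whose progress and result can be observed, which may surface the underlying error instead of hanging silently. A rough sketch below; the exact signatures are from memory and may need adjusting, and the directory paths are placeholders.

import Foundation
import Combine
import CreateML

var subscriptions = Set<AnyCancellable>()

// NOTE: the calls below are approximate; check them against the current CreateML API.
func startTraining() throws {
    let sessionDirectory = URL(fileURLWithPath: "/tmp/MySession")              // placeholder path
    let sessionParameters = MLTrainingSessionParameters(sessionDirectory: sessionDirectory)

    let job = try MLImageClassifier.train(
        trainingData: .labeledDirectories(at: URL(fileURLWithPath: "/tmp/TrainingData")), // placeholder path
        sessionParameters: sessionParameters
    )

    // Log fractional progress to see exactly where training stalls.
    job.progress.publisher(for: \.fractionCompleted)
        .sink { print("Progress: \($0)") }
        .store(in: &subscriptions)

    // Surface the trained model or the underlying error.
    job.result
        .sink(receiveCompletion: { print("Training finished: \($0)") },
              receiveValue: { model in print("Got model: \(model)") })
        .store(in: &subscriptions)
}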
Posted. Last updated.
Post not yet marked as solved
1 Replies
544 Views
I converted a toy PyTorch regression model to a Core ML mlmodel using coremltools and set it to be updatable with a mean-squared-error loss. But when testing the training, context.metrics[.lossValue] can give a negative value, which is impossible. Furthermore, the context.metrics[.lossValue] result is very different from my own computed training loss, as shown in the screenshot attached. Am I extracting the training loss from the context the wrong way? Does context.metrics[.lossValue] really give MSE if I used the coremltools function set_mean_squared_error_loss to set the loss? Any suggestion is appreciated. Since the validation loss decreases as the epochs go by, the model does seem to be updated correctly. I am using coremltools==7.0 and Xcode 15.0.1.

Here is my code to convert the PyTorch model to an updatable Core ML model:

import coremltools
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams, AdamParams
from coremltools.models import datatypes

# Load the model specification
spec = coremltools.utils.load_spec('regression.mlmodel')
builder = NeuralNetworkBuilder(spec=spec)
builder.inspect_output_features()  # Name: linear_1

# Make layers updatable
builder.make_updatable(['linear_0', 'linear_1'])

# Manually add a mean squared error loss layer
feature = ('linear_1', datatypes.Array(1))
builder.set_mean_squared_error_loss(name='lossLayer', input_feature=feature)

# define the optimizer (Adam in this example)
adam_params = AdamParams(lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, batch=16)
builder.set_adam_optimizer(adam_params)

# Set the number of epochs
builder.set_epochs(100)

# Save the updated model
updated_model = coremltools.models.MLModel(spec)
updated_model.save('updatable_regression30.mlmodel')

Here is the code I use to try to update the saved updatable_regression30.mlmodel:

import CoreML
import GameKit

func generateSampleData(numSamples: Int, seed: UInt64) -> ([MLMultiArray], [MLMultiArray]) {
    // simple regression: y = 10 * sum(x) + 1
    var inputArray = [MLMultiArray]()
    var outputArray = [MLMultiArray]()

    // Create a random number generator with a fixed seed
    let randomSource = GKLinearCongruentialRandomSource(seed: seed)
    let randomDistribution = GKRandomDistribution(randomSource: randomSource, lowestValue: 0, highestValue: 1000)

    for _ in 0..<numSamples {
        do {
            let input = try MLMultiArray(shape: [1, 2], dataType: .float32)
            let output = try MLMultiArray(shape: [1], dataType: .float32)

            var sumInput: Float = 0
            for i in 0..<input.shape[1].intValue {
                // Generate random value using the fixed seed generator
                let inputValue = Float(randomDistribution.nextInt()) / 1000.0
                input[[0, i] as [NSNumber]] = NSNumber(value: inputValue)
                sumInput += inputValue
            }
            output[0] = NSNumber(value: 10.0 * sumInput + 1.0)

            inputArray.append(input)
            outputArray.append(output)
        } catch {
            print("Error occurred while creating MLMultiArrays: \(error)")
        }
    }
    return (inputArray, outputArray)
}

func computeLoss(model: MLModel, data: ([MLMultiArray], [MLMultiArray])) -> Double {
    let (inputData, outputData) = data
    var totalLoss: Double = 0
    for (index, input) in inputData.enumerated() {
        let output = outputData[index]
        if let prediction = try? model.prediction(from: MLDictionaryFeatureProvider(dictionary: ["x": MLFeatureValue(multiArray: input)])),
           let predictedOutput = prediction.featureValue(for: "linear_1")?.multiArrayValue {
            let loss = (output[0].doubleValue - predictedOutput[0].doubleValue)
            totalLoss += loss * loss // squared error
        }
    }
    return totalLoss / Double(inputData.count) // mean of squared errors
}

func trainModel() {
    // Load the updatable model
    guard let updatableModelURL = Bundle.main.url(forResource: "updatable_regression30", withExtension: "mlmodelc") else {
        print("Failed to load the updatable model")
        return
    }

    // Generate sample data
    let (inputData, outputData) = generateSampleData(numSamples: 200, seed: 8)
    let validationData = generateSampleData(numSamples: 100, seed: 18)

    // Create an MLArrayBatchProvider from the sample data
    var featureProviders = [MLFeatureProvider]()
    for (index, input) in inputData.enumerated() {
        let output = outputData[index]
        let dataPointFeatures: [String: MLFeatureValue] = [
            "x": MLFeatureValue(multiArray: input),
            "linear_1_true": MLFeatureValue(multiArray: output)
        ]
        if let provider = try? MLDictionaryFeatureProvider(dictionary: dataPointFeatures) {
            featureProviders.append(provider)
        }
    }
    let batchProvider = MLArrayBatchProvider(array: featureProviders)

    // Define progress handlers
    let progressHandlers = MLUpdateProgressHandlers(forEvents: [.trainingBegin, .epochEnd],
        progressHandler: { context in
            switch context.event {
            case .trainingBegin:
                print("Training began.")
            case .epochEnd:
                let loss = context.metrics[.lossValue] as! Double
                let validationLoss = computeLoss(model: context.model, data: validationData)
                let computedTrainLoss = computeLoss(model: context.model, data: (inputData, outputData))
                print("Epoch \(context.metrics[.epochIndex]!) ended. Training Loss: \(loss), Computed Training Loss: \(computedTrainLoss), Validation Loss: \(validationLoss)")
            default:
                break
            }
        }
    )

    // Create an update task with progress handlers
    let updateTask = try! MLUpdateTask(forModelAt: updatableModelURL,
                                       trainingData: batchProvider,
                                       configuration: nil,
                                       progressHandlers: progressHandlers)

    // Start the update task
    updateTask.resume()
}

// call trainModel() to start training
Posted by bcxiao. Last updated.
Post not yet marked as solved
1 Replies
698 Views
I'm trying to create an updatable model, but this seems possible only by building a neural network model and then calling the make_updatable method through the NeuralNetworkBuilder. I've run into a lot of problems along the way. In this example I try to open a converted ML model (a neural network) using the NeuralNetworkBuilder:

import coremltools

model = coremltools.models.MLModel("SimpleImageClassifier.mlpackage")
spec = model.get_spec()
builder = coremltools.models.neural_network.NeuralNetworkBuilder(spec=spec)
builder.inspect_layers()

But I get this error on the line that creates the builder:

AttributeError: 'NoneType' object has no attribute 'layers'

I also tried to define a neural network from scratch with the NeuralNetworkBuilder, but then what do I do with that object? I didn't find a way to save it or convert it. What I want is simple: the ability to further train the model on the user's device to fit their needs. However, the way to obtain an updatable model seems incomprehensible. In my case, the model should be an image classifier. What approach should I follow to achieve this? Thank you
Posted. Last updated.
Post not yet marked as solved
0 Replies
407 Views
I converted a decoder model to Core ML in the following way:

input_1 = ct.TensorType(name="input_1", shape=ct.Shape((1, ct.RangeDim(lower_bound=1, upper_bound=50), 512)), dtype=np.float32)
input_2 = ct.TensorType(name="input_2", shape=ct.Shape((1, ct.RangeDim(lower_bound=1, upper_bound=50), 512)), dtype=np.float32)
decoder_iOS2 = ct.convert(decoder_layer, inputs=[input_1, input_2])

But if I load the model in Xcode, it gives me two errors.

Error 1: MLE5Engine is not currently supported for models with range shape inputs that try to utilize the Neural Engine.

Q1: Since having a flexible input shape is in the nature of a decoder, I can ignore this error message, right? Is this something that can't be fixed?

Error 2: doUnloadModel:options:qos:error:: model=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/CB2207C5-B549-4868-AEB5-FFA7A3E24397/Photo2ASCII.app/Deocder_iOS_test2.mlmodelc/model.mil : sourceURL= (null) : key={"isegment":0,"inputs":{"input_1":{"shape":[512,1,1,1,1]},"input_2":{"shape":[512,1,1,1,1]}},"outputs":{"Identity":{"shape":[512,1,1,1,1]}}} : identifierSource=0 : cacheURLIdentifier=A93CE297F87F752D426002C8D1CE79094E614BEA1C0E96113228C8D3F06831FA_F055BF0F9A381C4C6DC99CE8FCF5C98E7E8B83EA5BF7CFD0EDC15EF776B29413 : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=6885927629810 : intermediateBufferHandle=6885928772758 : queueDepth=127 } : state=3 : programHandle=6885927629810 : intermediateBufferHandle=6885928772758 : queueDepth=127 : attr={ ANEFModelDescription = { ANEFModelInput16KAlignmentArray = ( ); ANEFModelOutput16KAlignmentArray = ( ); ANEFModelProcedures = ( { ANEFModelInputSymbolIndexArray = ( 0, 1 ); ANEFModelOutputSymbolIndexArray = ( 0 ); ANEFModelProcedureID = 0; } ); kANEFModelInputSymbolsArrayKey = ( "input_1", "input_2" ); kANEFModelOutputSymbolsArrayKey = ( "Identity@output" ); kANEFModelProcedureNameToIDMapKey = { net = 0; }; }; NetworkStatusList = ( { LiveInputList = ( { BatchStride = 1024; Batches = 1; Channels = 1; Depth = 1; DepthStride = 1024; Height = 1; Interleave = 1; Name = "input_1"; PlaneCount = 1; PlaneStride = 1024; RowStride = 1024; Symbol = "input_1"; Type = Float16; Width = 512; }, { BatchStride = 1024; Batches = 1; Channels = 1; Depth = 1; DepthStride = 1024; Height = 1; Interleave = 1; Name = "input_2"; PlaneCount = 1; PlaneStride = 1024; RowStride = 1024; Symbol = "input_2"; Type = Float16; Width = 512; } ); LiveOutputList = ( { BatchStride = 1024; Batches = 1; Channels = 1; Depth = 1; DepthStride = 1024; Height = 1; Interleave = 1; Name = "Identity@output"; PlaneCount = 1; PlaneStride = 1024; RowStride = 1024; Symbol = "Identity@output"; Type = Float16; Width = 512; } ); Name = net; } ); } : perfStatsMask=0} was not loaded by the client.

Q2: Can I ignore this error message if I'm going to use the CPU/GPU when running the model?
Posted. Last updated.
Post not yet marked as solved
0 Replies
356 Views
Hi, we have several customers with stringent firewall restrictions who require the ability to designate the endpoint for downloading the model encryption key in the firewall rules. How can I obtain the necessary details to ensure a successful download of the key?
Posted by sankalpa. Last updated.
Post not yet marked as solved
1 Replies
538 Views
Hello! I would like to develop a visionOS application that tracks a single object in a user's environment. Skimming through the documentation I found out that this feature is currently unsupported in ARKit (we can only recognize images). But it seems it should be doable by combining CoreML and Vision frameworks. So I have a few questions: Is it the best approach or is there a simpler solution? What is the best way to train a CoreML model without access to the device? Will videos recorded by iPhone 15 be enough? Thank you in advance for all the answers.
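For the Core ML + Vision route, here is a minimal sketch of classifying a single frame with Vision; the model class name is a placeholder, and whether per-frame classification is enough for continuous object tracking on visionOS is exactly the open question.

import Vision
import CoreML

/// Runs a Core ML classifier on a single frame via Vision.
/// "MyObjectClassifier" is a placeholder for your compiled model class.
func detectObject(in pixelBuffer: CVPixelBuffer) throws {
    let coreMLModel = try MyObjectClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        // Take the top label; a real app would track it across frames.
        if let best = results.first {
            print("Detected \(best.identifier) with confidence \(best.confidence)")
        }
    }
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
}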
Posted by kmoczala. Last updated.
Post not yet marked as solved
0 Replies
358 Views
Hello! I have converted a single grid_sample operation from PyTorch to an mlpackage using coremltools, and opened it in Xcode for benchmarking. There is only one op, which is called resample. I ran it on my Mac (M1 Pro), but I found that it only runs on the CPU, so the latency does not meet my requirements. Can you support resample on the GPU, or can I implement it with Metal myself?
Posted. Last updated.
Post not yet marked as solved
1 Replies
586 Views
Coremltools: 6.2.0. When I run the Core ML model in Python, the result is good:

{'var_840': array([[-8.15439941e+02, 2.88793579e+02, -3.83110474e+02, -8.95208740e+02, -3.53131561e+02, -3.65339783e+02, -4.94590851e+02, 6.24686813e+01, -5.92614822e+01, -9.67470627e+01, -4.30247498e+02, -9.27047348e+01, 2.19661942e+01, -2.96691345e+02, -4.26566772e+02........

But when I run it in Xcode, the result looks like:

[-inf,inf,nan,-inf,nan,nan,nan,nan,nan,-inf,-inf,-inf,-inf,-inf,-inf,nan,-inf,-inf,nan,-inf,nan,nan,-inf,nan,-inf,-inf,-inf,nan,nan,nan,nan,nan,nan,nan,nan,nan,nan,-inf,nan,nan,nan,nan,-inf,nan,-inf .......

Step 1: Convert ResNet-50 to Core ML:

import torch
import torchvision

# Load a pre-trained version of the ResNet-50 model.
torch_model = torchvision.models.resnet50(pretrained=True)
# Set the model in evaluation mode.
torch_model.eval()

# Trace the model with random data.
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, example_input)
out = traced_model(example_input)

# Download class labels in ImageNetLabel.txt.
# Set the image scale and bias for input image preprocessing.
import coremltools as ct

image_input = ct.ImageType(shape=example_input.shape, )

# Using image_input in the inputs parameter:
# Convert to Core ML using the Unified Conversion API.
model = ct.convert(
    traced_model,
    inputs=[image_input],
    compute_units=ct.ComputeUnit.CPU_ONLY,
)

# Save the converted model.
model.save("resnettest.mlmodel")

# Print a confirmation message.
print('model converted and saved')

Step 2: Test the Core ML model in Python:

import coremltools as ct
import PIL
import numpy as np

# Load the model
model = ct.models.MLModel('/Users/ngoclh/Downloads/resnettest.mlmodel')
print(model)

img_path = "/Users/ngoclh/gitlocal/DetectCirtochApp/DetectCirtochApp/resources/image.jpg"
img = PIL.Image.open(img_path)
img = img.resize([224, 224], PIL.Image.ANTIALIAS)

coreml_out_dict = model.predict({"input_1" : img})
print(coreml_out_dict)

Step 3: Test the Core ML model in Xcode:

func getFeature() {
    do {
        let deepLab = try VGG_emb.init() //mobilenet_emb.init()//cirtorch_emb.init()
        let image = UIImage(named: "image.jpg")
        let pixBuf = image!.pixelBuffer(width: 224, height: 224)!
        guard let output = try? deepLab.prediction(input_1: pixBuf) else {
            return
        }
        let names = output.featureNames
        print("ngoc names: ", names)
        for name in names {
            let feature = output.featureValue(for: name)
            print("ngoc feature: ", feature)
        }
    } catch {
        print(error)
    }
}
Posted by HuuNgoc. Last updated.
Post not yet marked as solved
2 Replies
694 Views
My app allows the user to select different stable diffusion models, and I noticed a very strange issue concerning memory management. When using the StableDiffusionPipeline (https://github.com/apple/ml-stable-diffusion) with cpu+gpu, around 1.5 GB of memory is not properly released after generateImages is called and the pipeline is released. When generating more images with a new StableDiffusionPipeline object, memory is reused and stays stable at around 1.5 GB after inference is complete. Everything, especially the MLModels, is released properly. My guess is that MLModel creates a persistent cache. Here is the problem: when using a different MLModel afterwards, another 1.5 GB is not released and stays resident. Using a third model, this totals 4.5 GB of unreleased, persistent memory. At first I thought it was a bug in the StableDiffusionPipeline, but I was able to reproduce this behaviour in a very minimal Objective-C sample without ARC:

MLArrayBatchProvider *batchProvider = [[MLArrayBatchProvider alloc] initWithFeatureProviderArray:@[<VALID FEATURE PROVIDER>]];
MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
config.computeUnits = MLComputeUnitsCPUAndGPU;
MLModel *model = [[MLModel modelWithContentsOfURL:[NSURL fileURLWithPath:<VALID PATH TO .mlmodelc SD 1.5 FILE>] configuration:config error:&error] retain];
id<MLBatchProvider> returnProvider = [model predictionsFromBatch:batchProvider error:&error];
[model release];
[config release];
[batchProvider release];

After running this minimal code, 1.5 GB of persistent memory is present that is not released during the lifetime of the app. This only happens on macOS 14(.1) Sonoma and on iOS 17(.1), but not on macOS 13 Ventura. On Ventura, everything works as expected and the memory is released when predictionsFromBatch: is done and the model is released.

Some observations:
- This only happens using cpu+gpu, not cpu+ane (since the memory is allocated out of process) and not using cpu-only
- It does not matter which stable diffusion model is used; I tried custom SD-derived models as well as the Apple-provided SD 1.5 models
- I reproduced the issue on a MBP 16" M1 Max with macOS 14.1, an iPhone 12 mini with iOS 17.0.3 and an iPad Pro M2 with iPadOS 17.1
- The memory that "leaks" is mostly huge malloc blocks of 100-500 MB in size OR IOSurfaces
- This memory is allocated during predictionsFromBatch, not while loading the model
- Loading and unloading a model does not leak memory; only when predictionsFromBatch is called is the huge memory chunk allocated, and it is never freed during the lifetime of the app

Does anybody have any clue what is going on? I highly suspect that I am missing something crucial, but my colleagues and I looked everywhere trying to find a way to release this leaked/cached memory.
Posted by MendelK. Last updated.
Post not yet marked as solved
1 Replies
589 Views
I'm trying to convert a PyTorch forward Transformer model to Core ML but am running into several issues, like these errors:

"For mlprogram, inputs with infinite upper_bound is not allowed. Please set upper_bound to a positive value in "RangeDim()" for the "inputs" param in ct.convert()."

raise NotImplementedError("inplace_ops pass doesn't yet support append op inside conditional")

Are there any more samples besides https://developer.apple.com/videos/play/tech-talks/10154 ? In the sample in that video an ImageType is used as input, but in my model text is the input (and the output). I also get warned that converting TorchScript is experimental, but the video says a TorchScript (traced) model is required for conversion (though I know the video is a few years old).
Posted. Last updated.
Post not yet marked as solved
3 Replies
550 Views
Hi there. We use a Core ML model for image processing, and because loading the Core ML model takes a long time (~10 sec), we preload it at app start time. But on some devices, loading the Core ML model fails with an error. We download the Core ML model from a server and then load it from local storage. The loading code looks like this, quite typical:

MLModel.load(contentsOf: compliedUrl, configuration: config)

Once this error happens, it keeps failing until we restart the device.
(+) In this article, I saw that it is related to some "limitation of decrypt session": https://developer.apple.com/forums/thread/707622
But it also happens in in-house TestFlight builds that are used by fewer than 5 people. Can I know why this happens?
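Not an answer to the root cause, but as a stopgap, a retry sketch around the async load API (the retry count and delay are arbitrary choices, and this does not address any underlying decryption-session limitation):

import CoreML

/// Attempts to load a compiled model, retrying a few times before giving up.
func loadModelWithRetry(at compiledURL: URL,
                        configuration: MLModelConfiguration = MLModelConfiguration(),
                        attempts: Int = 3) async throws -> MLModel {
    var lastError: Error?
    for attempt in 1...attempts {
        do {
            return try await MLModel.load(contentsOf: compiledURL, configuration: configuration)
        } catch {
            lastError = error
            print("Model load attempt \(attempt) failed: \(error)")
            if attempt < attempts {
                try await Task.sleep(nanoseconds: 2_000_000_000) // wait 2 s before retrying
            }
        }
    }
    throw lastError!
}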
Posted. Last updated.
Post not yet marked as solved
2 Replies
439 Views
Hello, I am a machine learning engineer. Recently I needed to run PyTorch's grid_sample operation on iPhone, so I used coremltools to convert PyTorch grid_sample to the MIL resample op, which is officially supported. But when running on the phone, execution falls back to the CPU instead of the GPU or ANE (Xcode connected to the phone, running the official performance benchmark). I would like to ask why there is no efficient GPU implementation. What I am hoping for is around 2 ms, but it takes 8 ms on the CPU.
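For reference, a short sketch of explicitly requesting CPU+GPU when loading the converted model; this only restricts the allowed compute units, so an op without a GPU implementation will still fall back to the CPU (the model path handling is up to you):

import CoreML

// Request CPU+GPU execution when loading the converted model. If resample has
// no GPU kernel, Core ML will still schedule it on the CPU.
func loadWithCPUAndGPU(at compiledModelURL: URL) throws -> MLModel {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .cpuAndGPU
    return try MLModel(contentsOf: compiledModelURL, configuration: configuration)
}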
Posted. Last updated.
Post not yet marked as solved
0 Replies
546 Views
Hi everyone, wondering if you know how the device decides which compute unit (GPU, CPU, or ANE) to use when the compute units are set to ALL? I'm working on optimizing a GPT-2 model to run on the ANE. I ran the performance report for the existing model and the report showed operators not supported by the ANE. I then removed these operators and converted the model to Core ML again. This time the performance report showed that every operator is supported by the ANE, but the device still prefers the GPU when the compute units are set to ALL, and prefers the CPU when the compute units are set to CPU and ANE (performance report screenshots attached: "ALL" and "CPU and ANE"). Does anyone know why? Thank you in advance!
Posted by dcdcdc123. Last updated.
Post not yet marked as solved
0 Replies
645 Views
In the midst of developing an app, Xcode suddenly can't run it on the same device I have been using, and gives the error below. To fix it, I have reset my laptop and iPhone, deleted the files in ~/Library/Developer/Xcode, and rebuilt, multiple times. It just doesn't work. I CAN run on another handset, but that handset is older and doesn't have enough RAM, so I can't depend on it. Then I found a way to fix it: I just change to ANOTHER bundle ID, and it works. The question of this post is that I want to keep using the original bundle ID; e.g. if this happens again after I have uploaded the app to the App Store, I can't just keep changing the bundle ID. I think the problem is on the handset, maybe some settings data loaded onto the handset. Is there any way I can fix this?

An error occurred while communicating with a remote process.
Domain: com.apple.dt.CoreDeviceError
Code: 3
User Info: { DVTErrorCreationDateKey = "2023-10-15 10:51:36 +0000"; IDERunOperationFailingWorker = IDEInstallCoreDeviceWorker; }
--
The connection was interrupted.
Domain: com.apple.Mercury.error
Code: 1000
User Info: { XPCConnectionDescription = "<SystemXPCPeerConnection 0x60000338fe70> { <connection: 0x600001c6a0d0> { name = com.apple.CoreDevice.CoreDeviceService, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } }"; }
--
Event Metadata: com.apple.dt.IDERunOperationWorkerFinished : { "device_isCoreDevice" = 1; "device_model" = "iPhone15,2"; "device_osBuild" = "17.1 (21B5066a)"; "device_platform" = "com.apple.platform.iphoneos"; "dvt_coredevice_version" = "348.1"; "dvt_mobiledevice_version" = "1643.2.4"; "launchSession_schemeCommand" = Run; "launchSession_state" = 1; "launchSession_targetArch" = arm64; "operation_duration_ms" = 300973; "operation_errorCode" = 1000; "operation_errorDomain" = "com.apple.dt.CoreDeviceError.3.com.apple.Mercury.error"; "operation_errorWorker" = IDEInstallCoreDeviceWorker; "operation_name" = IDERunOperationWorkerGroup; "param_debugger_attachToExtensions" = 0; "param_debugger_attachToXPC" = 1; "param_debugger_type" = 3; "param_destination_isProxy" = 0; "param_destination_platform" = "com.apple.platform.iphoneos"; "param_diag_MainThreadChecker_stopOnIssue" = 0; "param_diag_MallocStackLogging_enableDuringAttach" = 0; "param_diag_MallocStackLogging_enableForXPC" = 1; "param_diag_allowLocationSimulation" = 1; "param_diag_checker_tpc_enable" = 0; "param_diag_gpu_frameCapture_enable" = 3; "param_diag_gpu_shaderValidation_enable" = 0; "param_diag_gpu_validation_enable" = 1; "param_diag_memoryGraphOnResourceException" = 0; "param_diag_queueDebugging_enable" = 1; "param_diag_runtimeProfile_generate" = 0; "param_diag_sanitizer_asan_enable" = 0; "param_diag_sanitizer_tsan_enable" = 0; "param_diag_sanitizer_tsan_stopOnIssue" = 0; "param_diag_sanitizer_ubsan_stopOnIssue" = 0; "param_diag_showNonLocalizedStrings" = 0; "param_diag_viewDebugging_enabled" = 1; "param_diag_viewDebugging_insertDylibOnLaunch" = 1; "param_install_style" = 0; "param_launcher_UID" = 2; "param_launcher_allowDeviceSensorReplayData" = 0; "param_launcher_kind" = 0; "param_launcher_style" = 99; "param_launcher_substyle" = 8192; "param_runnable_appExtensionHostRunMode" = 0; "param_runnable_productType" = "com.apple.product-type.application"; "param_structuredConsoleMode" = 1; "param_testing_launchedForTesting" = 0; "param_testing_suppressSimulatorApp" = 0; "param_testing_usingCLI" = 0; "sdk_canonicalName" = "iphoneos17.0"; "sdk_osVersion" = "17.0"; "sdk_variant" = iphoneos; }
--
System Information
macOS Version 14.0 (Build 23A344)
Xcode 15.0 (22265) (Build 15A240d)
Timestamp: 2023-10-15T18:51:36+08:00
Posted by ckchanhk. Last updated.
Post not yet marked as solved
8 Replies
651 Views
I'm using some CoreML models from C++. I've been trying to profile them using the CoreML Instrument in Instruments. It seems that only works when I sign my binaries with the get-task-allow entitlement. Is there an easier way? Ideally I'd like to be able to profile a Python program that calls my C++ code, and I would rather not re-sign Python.
Posted by smpanaro. Last updated.
Post marked as solved
3 Replies
1.3k Views
There seems to be a new MLE5Engine in iOS 17 and macOS 14 that causes issues with our style transfer models:
1. The output is wrong (just gray pixels) and not the same as on iOS 16.
2. There is a large memory leak. The memory consumption is increasing rapidly with each new frame.

Concerning 2): There are a lot of CVPixelBuffers leaking during prediction. Those buffers somehow have references to themselves and are not released properly. Here is a stack trace of how the buffers are created:

0 _malloc_zone_malloc_instrumented_or_legacy
1 _CFRuntimeCreateInstance
2 CVObject::alloc(unsigned long, _CFAllocator const*, unsigned long, unsigned long)
3 CVPixelBuffer::alloc(_CFAllocator const*)
4 CVPixelBufferCreate
5 +[MLMultiArray(ImageUtils) pixelBufferBGRA8FromMultiArrayCHW:channelOrderIsBGR:error:]
6 MLE5OutputPixelBufferFeatureValueByCopyingTensor
7 -[MLE5OutputPortBinder _makeFeatureValueFromPort:featureDescription:error:]
8 -[MLE5OutputPortBinder _makeFeatureValueAndReturnError:]
9 __36-[MLE5OutputPortBinder featureValue]_block_invoke
10 _dispatch_client_callout
11 _dispatch_lane_barrier_sync_invoke_and_complete
12 -[MLE5OutputPortBinder featureValue]
13 -[MLE5OutputPort featureValue]
14 -[MLE5ExecutionStreamOperation outputFeatures]
15 -[MLE5Engine _predictionFromFeatures:options:usingStream:operation:error:]
16 -[MLE5Engine _predictionFromFeatures:options:error:]
17 -[MLE5Engine predictionFromFeatures:options:error:]
18 -[MLDelegateModel predictionFromFeatures:options:error:]
19 StyleModel.prediction(input:options:)

When manually disabling the use of the MLE5Engine, the models run as expected. Is this an issue caused by our model, or is it a bug in Core ML?
Posted. Last updated.