Swift is a powerful and intuitive programming language for Apple platforms and beyond.


Posts under Swift tag

2,033 Posts
2 Replies
58 Views
As an exercise in learning Swift, I rewrote a toy C++ command line tool in Swift. After switching to an UnsafeRawBufferPointer in a critical part of the code, the Release build of the Swift version was a little faster than the Release build of the C++ version. But the Debug build took around 700 times as long. I expect a Debug build to be somewhat slower, but by that much? Here's the critical part of the code, a function that gets called many thousands of times. The two string parameters are always 5-letter words in plain ASCII (it's related to Wordle). By the way, if I change the loop ranges from 0..<5 to [0,1,2,3,4], then it runs about twice as fast in Debug, but twice as slow in Release.

    func Score( trial: String, target: String ) -> Int {
        var score = 0
        withUnsafeBytes(of: trial.utf8) { rawTrial in
            withUnsafeBytes(of: target.utf8) { rawTarget in
                for i in 0..<5 {
                    let trial_i = rawTrial[i]
                    if trial_i == rawTarget[i] { // strong hit
                        score += kStrongScore
                    } else { // check for weak hit
                        for j in 0..<5 {
                            if j != i {
                                let target_j = rawTarget[j]
                                if (trial_i == target_j) && (rawTrial[j] != target_j) {
                                    score += kWeakScore
                                    break
                                }
                            }
                        }
                    }
                }
            }
        }
        return score
    }
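For comparison, a hedged alternative sketch of the same function using utf8.withContiguousStorageIfAvailable. One caveat about the original: withUnsafeBytes(of:) exposes the in-memory representation of the UTF8View struct itself, which appears to work here only because 5-letter ASCII words fit in Swift's small-string representation; the view's own contiguous-storage API avoids relying on that. kStrongScore and kWeakScore are the poster's constants, the fixed 5-byte assumption is theirs, and whether this helps Debug-build performance is untested.

    func score(trial: String, target: String) -> Int {
        var score = 0
        // Native Swift strings expose contiguous UTF-8 storage, so these
        // closures run with direct buffer access rather than returning nil.
        _ = trial.utf8.withContiguousStorageIfAvailable { rawTrial in
            _ = target.utf8.withContiguousStorageIfAvailable { rawTarget in
                for i in 0..<5 {
                    let trialByte = rawTrial[i]
                    if trialByte == rawTarget[i] {          // strong hit
                        score += kStrongScore
                    } else {                                // check for weak hit
                        for j in 0..<5 where j != i {
                            if trialByte == rawTarget[j] && rawTrial[j] != rawTarget[j] {
                                score += kWeakScore
                                break
                            }
                        }
                    }
                }
            }
        }
        return score
    }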
Posted by JWWalker. Last updated.
0 Replies
38 Views
Hello, I have created a Neural Network → k-Nearest Neighbors classifier pipeline with Python:

    # followed by k-Nearest Neighbors for classification.
    import coremltools
    import coremltools.proto.FeatureTypes_pb2 as ft
    from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
    import copy

    # Take the SqueezeNet feature extractor from the Turi Create model.
    base_model = coremltools.models.MLModel("SqueezeNet.mlmodel")
    base_spec = base_model._spec
    layers = copy.deepcopy(base_spec.neuralNetworkClassifier.layers)

    # Delete the softmax and innerProduct layers. The new last layer is
    # a "flatten" layer that outputs a 1000-element vector.
    del layers[-1]
    del layers[-1]

    preprocessing = base_spec.neuralNetworkClassifier.preprocessing

    # The Turi Create model is a classifier, which is treated as a special
    # model type in Core ML. But we need a general-purpose neural network.
    del base_spec.neuralNetworkClassifier.layers[:]
    base_spec.neuralNetwork.layers.extend(layers)

    # Also copy over the image preprocessing options.
    base_spec.neuralNetwork.preprocessing.extend(preprocessing)

    # Remove other classifier stuff.
    base_spec.description.ClearField("metadata")
    base_spec.description.ClearField("predictedFeatureName")
    base_spec.description.ClearField("predictedProbabilitiesName")

    # Remove the old classifier outputs.
    del base_spec.description.output[:]

    # Add a new output for the feature vector.
    output = base_spec.description.output.add()
    output.name = "features"
    output.type.multiArrayType.shape.append(1000)
    output.type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32

    # Connect the last layer to this new output.
    base_spec.neuralNetwork.layers[-1].output[0] = "features"

    # Create the k-NN model.
    knn_builder = KNearestNeighborsClassifierBuilder(input_name="features",
                                                     output_name="label",
                                                     number_of_dimensions=1000,
                                                     default_class_label="???",
                                                     number_of_neighbors=3,
                                                     weighting_scheme="inverse_distance",
                                                     index_type="linear")

    knn_spec = knn_builder.spec
    knn_spec.description.input[0].shortDescription = "Input vector"
    knn_spec.description.output[0].shortDescription = "Predicted label"
    knn_spec.description.output[1].shortDescription = "Probabilities for each possible label"

    knn_builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))

    # Use the same name as in the neural network models, so that we
    # can use the same code for evaluating both types of model.
    knn_spec.description.predictedProbabilitiesName = "labelProbability"
    knn_spec.description.output[1].name = knn_spec.description.predictedProbabilitiesName

    # Put it all together into a pipeline.
    pipeline_spec = coremltools.proto.Model_pb2.Model()
    pipeline_spec.specificationVersion = coremltools._MINIMUM_UPDATABLE_SPEC_VERSION
    pipeline_spec.isUpdatable = True
    pipeline_spec.description.input.extend(base_spec.description.input[:])
    pipeline_spec.description.output.extend(knn_spec.description.output[:])
    pipeline_spec.description.predictedFeatureName = knn_spec.description.predictedFeatureName
    pipeline_spec.description.predictedProbabilitiesName = knn_spec.description.predictedProbabilitiesName

    # Add inputs for training.
    pipeline_spec.description.trainingInput.extend([base_spec.description.input[0]])
    pipeline_spec.description.trainingInput[0].shortDescription = "Example image"
    pipeline_spec.description.trainingInput.extend([knn_spec.description.trainingInput[1]])
    pipeline_spec.description.trainingInput[1].shortDescription = "True label"

    pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(base_spec)
    pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(knn_spec)
    pipeline_spec.pipelineClassifier.pipeline.names.extend(["FeatureExtractor", "kNNClassifier"])

    coremltools.utils.save_spec(pipeline_spec, "../Models/FaceDetection.mlmodel")

It is from the following tutorial: https://machinethink.net/blog/coreml-training-part3/ It works, and I was able to include it in my project. I want to train the model via MLUpdateTask:

    var batchInputs: [MLFeatureProvider] = []
    let imageConstraint = model.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint
    let imageOptions: [MLFeatureValue.ImageOption: Any] = [
        .cropAndScale: VNImageCropAndScaleOption.scaleFill.rawValue
    ]
    var featureProviders = [MLFeatureProvider]()

    // URLs where images are stored
    let trainingData = ImageManager.getImagesAndLabel()
    for data in trainingData {
        let label = data.key
        for imgURL in data.value {
            let featureValue = try MLFeatureValue(imageAt: imgURL,
                                                  constraint: imageConstraint!,
                                                  options: imageOptions)
            if let pixelBuffer = featureValue.imageBufferValue {
                let featureProvider = FaceDetectionTrainingInput(image: pixelBuffer, label: label)
                batchInputs.append(featureProvider)
            }
        }
    }
    // Renamed from trainingData to avoid redeclaring the name above.
    let trainingBatch = MLArrayBatchProvider(array: batchInputs)

When calling the MLUpdateTask as follows, the context.model from the completionHandler is null. Unfortunately there is no other information available from the compiler.

    do {
        debugPrint(context)
        try context.model.write(to: ModelManager.targetURL)
    } catch {
        debugPrint("Error saving the model \(error)")
    }
    })
    updateTask.resume()

I get the following error when I want to access context.model: Thread 5: EXC_BAD_ACCESS (code=1, address=0x0). Can someone more experienced tell me how to fix this? It seems like I am missing some parameters? I am currently not splitting the data into training and test sets, and the only preprocessing I am doing is scaling the images down to 227x227 pixels. Thanks!
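For reference, a hedged sketch of the part the post elides: constructing the update task. compiledModelURL is a placeholder. One detail worth checking: MLUpdateTask(forModelAt:) expects the URL of a compiled model directory (.mlmodelc, for example obtained once via MLModel.compileModel(at:)); pointing it at the raw .mlmodel is a common way to end up with a crash or an unusable context.

    import CoreML

    // Sketch only: compiledModelURL is a placeholder for a .mlmodelc URL.
    let updateTask = try MLUpdateTask(
        forModelAt: compiledModelURL,        // must be the *compiled* model
        trainingData: trainingBatch,
        configuration: nil,
        completionHandler: { context in
            // Check for a task error before touching context.model.
            if let error = context.task.error {
                debugPrint("Update failed: \(error)")
                return
            }
            do {
                try context.model.write(to: ModelManager.targetURL)
            } catch {
                debugPrint("Error saving the model \(error)")
            }
        })
    updateTask.resume()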
Posted. Last updated.
0 Replies
47 Views
I've found a strange leak that looks like a bug. When we open two sheets (or fullScreenCovers) and the last one has a TextField, then after closing both, the @StateObject property is not released. If you delete the TextField, there is no memory leak. Memory is released correctly on iOS 16 built with Xcode 15 (simulators); memory leaks and is never released on iOS 17 built with Xcode 15 (simulators, device on 17.4.1).

    import SwiftUI

    struct ContentView: View {
        @State var isFirstOpen: Bool = false

        var body: some View {
            Button("Open first") {
                isFirstOpen = true
            }
            .sheet(isPresented: $isFirstOpen) {
                FirstView()
            }
        }
    }

    struct FirstView: View {
        @StateObject var viewModel = LeakedViewModel()

        var body: some View {
            ZStack {
                Button("Open second") {
                    viewModel.isSecondOpen = true
                }
            }
            .sheet(isPresented: $viewModel.isSecondOpen) {
                SecondView(onClose: { viewModel.isSecondOpen = false })
            }
        }
    }

    final class LeakedViewModel: ObservableObject {
        @Published var isSecondOpen: Bool = false
        init() { print("LeakedViewModel init") }
        deinit { print("LeakedViewModel deinit") }
    }

    struct SecondView: View {
        @State private var text: String = ""
        private let onClose: () -> Void

        init(onClose: @escaping () -> Void) {
            self.onClose = onClose
        }

        var body: some View {
            Button("Close second") {
                onClose()
            }
            // Comment out this TextField and the leak disappears;
            // viewModel's deinit is called.
            TextField("text: $text", text: $text)
        }
    }

    @main
    struct LeaksApp: App {
        var body: some Scene {
            WindowGroup {
                ContentView()
            }
        }
    }

May be related to https://forums.developer.apple.com/forums/thread/738840
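Until the underlying bug is fixed, one hedged workaround sketch (an assumption, not a confirmed fix): dismiss the sheet through the environment so SecondView never stores a closure that captures the view model.

    struct SecondView: View {
        @Environment(\.dismiss) private var dismiss
        @State private var text: String = ""

        var body: some View {
            Button("Close second") {
                dismiss()   // no onClose closure capturing LeakedViewModel
            }
            TextField("Enter text", text: $text)
        }
    }

If the view model still leaks with no closure in play, that points at the TextField/iOS 17 presentation behavior itself, which is worth a Feedback Assistant report with the sample above.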
Posted by Andreynt. Last updated.
1 Reply
67 Views
Hello everyone, I want to use some classes to manage information fetched from a Firestore database. My idea is to have different classes for the different states of the information. The information that is common to all states should live in the same properties, so it made sense to me to have a superclass that stores the main information and to derive a subclass with extra properties for the rest. My question is how to define the initializers properly, so that I can store the information fetched from Firestore in one go, without any loss. The superclass (reduced to a minimum, just to show my principal problem):

    class GameInfo: Codable, Identifiable {
        @DocumentID var id: String?   // UUID of the Firestore document
        var league: String
        var homeTeam: String
        var guestTeam: String

        enum CodingKeys: String, CodingKey {
            case league
            case homeTeam
            case guestTeam
        }

        init(league: String, homeTeam: String, guestTeam: String) {
            self.league = league
            self.homeTeam = homeTeam
            self.guestTeam = guestTeam
        }
    }

The subclass should contain the GameInfo properties and some others:

    class Game: GameInfo {
        var startTime: Date?

        enum CodingKeys: String, CodingKey {
            case startTime
        }

        init(league: String, homeTeam: String, guestTeam: String, startTime: Date) {
            self.startTime = startTime
            super.init(league: league, homeTeam: homeTeam, guestTeam: guestTeam)
        }

        required init(from decoder: any Decoder) throws {
            let values = try decoder.container(keyedBy: CodingKeys.self)
            self.startTime = try values.decodeIfPresent(Date.self, forKey: .startTime)
            super.init(league: "", homeTeam: "", guestTeam: "")
        }
    }

With the required init method, the information gets decoded and stored (the Firestore data contains the league, homeTeam, guestTeam and startTime values). But the super.init() call as written leaves empty strings; what I want is for league, homeTeam and guestTeam to be decoded from the Firestore data as well, and I don't know how. If I use super.init(league: league, homeTeam: homeTeam, guestTeam: guestTeam) within the required init, I get the compiler error 'self' used in property access 'league' before 'super.init' call. What is wrong in my thinking? Any help appreciated. Thanks and best regards, Peter
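A note on the usual pattern here: because GameInfo declares only a plain designated initializer (and no custom init(from:)), the compiler still synthesizes a Decodable initializer for it, so the subclass can decode its own keys and then hand the same decoder to the superclass. A minimal sketch of the required initializer in Game:

    required init(from decoder: any Decoder) throws {
        let values = try decoder.container(keyedBy: CodingKeys.self)
        self.startTime = try values.decodeIfPresent(Date.self, forKey: .startTime)
        // Let GameInfo's synthesized init(from:) decode league, homeTeam
        // and guestTeam from the same payload.
        try super.init(from: decoder)
    }

The compiler error in the attempted super.init(league: league, ...) call arises because 'league' there refers to self.league, which cannot be read before super.init runs; decoding into local constants and passing those along would also compile, but super.init(from:) is the idiomatic route.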
Posted by Luggi71. Last updated.
0 Replies
50 Views
I'm building the Dynamic Island detail content:

    dynamicIsland: { context in
        DynamicIsland {
            expandedContent(context: context)
        } compactLeading: {
            // ....
        } compactTrailing: {
            // ...
        }

I want to show different content based on the context:

    private func expandedContent(context: ActivityViewContext<xxxx>) -> DynamicIslandExpandedContent<some View> {
        if context.state.style == 0 {
            return expandedControlContent1(context: context)
        } else if context.state.style == 1 {
            return expandedControlContent2(context: context)
        } else {
            return expandedControlContent3(context: context)
        }
    }

This fails to compile with the error: Function declares an opaque return type 'some View', but the return statements in its body do not have matching underlying types
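A hedged sketch of one way around this: an opaque return type needs a single concrete underlying type, so move the branching inside an expanded region, where the content closure is a ViewBuilder and erases the per-branch differences. This assumes WidgetKit's public DynamicIslandExpandedContentBuilder result builder; MyAttributes and the ControlContent views are placeholders.

    @DynamicIslandExpandedContentBuilder
    private func expandedContent(context: ActivityViewContext<MyAttributes>)
            -> DynamicIslandExpandedContent<some View> {
        DynamicIslandExpandedRegion(.center) {
            // A ViewBuilder closure, so heterogeneous branches are fine here.
            switch context.state.style {
            case 0: ControlContent1(context: context)
            case 1: ControlContent2(context: context)
            default: ControlContent3(context: context)
            }
        }
    }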
Posted by Highmore. Last updated.
3 Replies
127 Views
Hello, I'm looking for a way to detect with NWPathMonitor when the iOS device is connected to a router but not to the internet, for example a mobile WiFi router without a SIM. In Settings I can switch the connection to its WiFi; once connected, a label below the SSID shows Not connected to the internet. I would like to show the same thing to the user inside my app, but unfortunately I always get the satisfied answer. Am I missing something in configuring NWPathMonitor or in reading the answer?

    final class InternetConnectionMonitor {
        lazy var internetConnectionStatusPublisher: AnyPublisher<InternetConnectionStatus, Never> = {
            _internetConnectionStatusSubject
                .compactMap { $0 }
                .eraseToAnyPublisher()
        }()

        var lastInternetConnectionStatus: InternetConnectionStatus? {
            _internetConnectionStatusSubject.value
        }

        private let _internetConnectionStatusSubject = CurrentValueSubject<InternetConnectionStatus?, Never>(nil)
        private let pathMonitor = NWPathMonitor()
        private let pathMonitorQueue = DispatchQueue(label: "com.xxxxx-network-monitor", qos: .default)

        init() {
            startPathMonitoring()
        }

        private func startPathMonitoring() {
            pathMonitor.pathUpdateHandler = { [weak self] path in
                guard let self else { return }
                let networkStatus = InternetConnectionStatus(from: path)
                self._internetConnectionStatusSubject.send(networkStatus)
            }
            pathMonitor.start(queue: pathMonitorQueue)
        }
    }
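A minimal probe sketch, on the assumption that path.status == .satisfied only means a viable local route exists; true internet reachability needs an end-to-end request. The URL is Apple's captive-portal check endpoint and the 5-second timeout is an arbitrary choice:

    func probeInternet() async -> Bool {
        guard let url = URL(string: "https://captive.apple.com/hotspot-detect.html") else {
            return false
        }
        var request = URLRequest(url: url)
        request.timeoutInterval = 5
        do {
            let (_, response) = try await URLSession.shared.data(for: request)
            return (response as? HTTPURLResponse)?.statusCode == 200
        } catch {
            return false   // request failed: treat as no internet
        }
    }

Running this probe whenever NWPathMonitor reports a path change would let the app distinguish "on WiFi but offline" from "online".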
Posted by DrAma78. Last updated.
1 Reply
79 Views
Hello everyone, I will start off by saying I am a very amateur developer with some experience, mostly in C++. Over the summer I want to build an app similar to a board game and launch it on the App Store for me and my friends to play when we don't have the game's physical board. Basically, one person would host a "game" while everyone else joins through a code or something like that (maybe there's an easier way, given everyone would be playing in person with each other). Once a game begins, I want cards to show up on people's screens, and that's it; no fancy graphics or anything like that. So, to the root of my issue: I am brand new to Swift and Xcode. I began googling and tinkering with it and made a little app where a user can add names and then pick letters from the names to display (very, very basic stuff). I also figured out how to import and manipulate images a little bit. My question is about the process of making a game, connecting it to GameKit/Game Center, and then how to actually launch it on the App Store so my friends can also download it. If you have any resources you found particularly useful when starting out with Swift, please let me know. I really don't like reading straight from the documentation (although who does, honestly). Anything helps!! Thank you!
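Since the question is about where to start with GameKit, a tiny hedged sketch of step one, authenticating the local player; matchmaking (for example via GKMatchmakerViewController) and real-time GKMatch messaging hang off this. How you present the sign-in controller is up to your app.

    import GameKit
    import UIKit

    func authenticatePlayer(from presenter: UIViewController) {
        GKLocalPlayer.local.authenticateHandler = { viewController, error in
            if let viewController {
                // Game Center needs the user to sign in.
                presenter.present(viewController, animated: true)
            } else if GKLocalPlayer.local.isAuthenticated {
                // Ready to create or join matches.
            } else if let error {
                print("Game Center unavailable: \(error.localizedDescription)")
            }
        }
    }

Resource-wise, Hacking with Swift's free "100 Days of SwiftUI" course and Apple's GameKit sample code are commonly recommended starting points.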
Posted by dgswag. Last updated.
0 Replies
60 Views
Hi. I plan to use a WebView in an iOS app (Swift), and it will run a web app with WASM that uses IndexedDB for permanent credentials. I found rumors and information about Apple deleting data in IndexedDB and localStorage after 7 days (see the links below), but no official information that tells me whether this is true for a WebView in an ordinary mobile app (not a PWA). A test cycle over a week to find out is hard to do. Is there any reliable and clear information on this, and am I affected? Thank you!

Links about this topic:
https://news.ycombinator.com/item?id=28158407
https://www.reddit.com/r/javascript/comments/foqxp9/webkit_will_delete_all_local_storage_including/
https://searchengineland.com/what-safaris-7-day-cap-on-script-writeable-storage-means-for-pwa-developers-332519
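No authoritative answer on the 7-day policy for in-app WKWebViews here, but one related knob is worth ruling out while testing: storage only survives relaunches at all when the web view uses a persistent data store. A minimal sketch:

    import WebKit

    // The default store persists IndexedDB/localStorage to disk;
    // .nonPersistent() would keep everything in memory only.
    let config = WKWebViewConfiguration()
    config.websiteDataStore = WKWebsiteDataStore.default()
    let webView = WKWebView(frame: .zero, configuration: config)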
Posted by Kukulkan. Last updated.
1 Reply
90 Views
We've been working on a SwiftUI app that randomly crashes with an exception. When navigating from one view to another, a rare exception is thrown, maybe every 1 in 200 times or so:

    <SwiftUI.UIKitNavigationController: 0x109888400> is pushing the same view controller instance (<_TtGC7SwiftUI32NavigationStackHostingControllerVS_7AnyView_: 0x10792d400>) more than once which is not supported and is most likely an error in the application : com.<companyName>.<appName>

We haven't coded anything that directly pushes a view controller instance outside of what SwiftUI is doing. It seems to happen when the user taps on a NavigationLink view, and it happens both in the simulator and on device. Does anyone know what might cause this?
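No confirmed cause, but a hedged mitigation sketch: drive the stack from explicit path state so a link that fires twice (for example from a rapid double tap) cannot enqueue the same destination twice. Route and the views are placeholders:

    struct ContentView: View {
        enum Route: Hashable { case detail }
        @State private var path: [Route] = []

        var body: some View {
            NavigationStack(path: $path) {
                Button("Open detail") {
                    if path.last != .detail {   // drop a duplicate push
                        path.append(.detail)
                    }
                }
                .navigationDestination(for: Route.self) { _ in
                    Text("Detail")
                }
            }
        }
    }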
Posted by GBecker. Last updated.
0 Replies
56 Views
Hi, we are getting the error "Sign-Up not completed" with Sign in with Apple. Sign in with Apple works fine for our old apps and old bundle IDs, but it's not working in new apps with new bundle IDs. We checked with other Apple Developer team accounts, and Sign in with Apple works on the same source code, but my team account gets the error. We enabled the signing capabilities, added Sign in with Apple, and added the provisioning profile certificate as well, but I am still getting the same error.
Posted. Last updated.
0 Replies
59 Views
Hi, I'm trying to build an app that connects to both BR/EDR ("Classic") and BLE devices. For Classic, the recommended flow in the Apple docs is:

1. Start your app and initialize your CBCentralManager.
2. Pair the phone and the device manually through Settings.
3. This should automatically call connectionEventDidOccur.

From then on it's up to the developer to decide what to do with the information from the callback (peripheral, event, etc.), but the callback is simply never fired. Here's my basic implementation of it:

    func centralManager(_ central: CBCentralManager,
                        connectionEventDidOccur event: CBConnectionEvent,
                        for peripheral: CBPeripheral) {
        print("IOS: connectionEventDidOccur for peripheral: \(peripheral)")
        if event == .peerConnected {
            print("IOS: Case is peer connected")
            connectToPeripheral(peripheral.identifier.uuidString)
            if !savedPeripheralList.contains(where: { $0.identifier == peripheral.identifier }) {
                savedPeripheralList.append(peripheral)
            }
        } else if event == .peerDisconnected {
            print("IOS: Peer \(peripheral) disconnected!")
        } else {
            print("IOS: if statement didn't work")
        }
    }

I'm essentially: printing the fact that the callback was called; trying to connect the app to the peripheral (by now only the phone is connected); and saving this newly connected peripheral to a local list. For context, I've been able to scan and connect to BLE devices like earbuds normally, so my general implementation of other callbacks like didDiscover or didConnect works just fine; this is the only non-functional callback. Any ideas would be appreciated, thanks!
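One detail that may explain this: connectionEventDidOccur is only delivered after the central registers interest in connection events; without that call, even correct delegate code never fires. A minimal sketch, where the service UUID is a placeholder for whatever your accessory exposes:

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // Ask Core Bluetooth to deliver connect/disconnect events for
        // peripherals matching these options.
        central.registerForConnectionEvents(
            options: [.serviceUUIDs: [CBUUID(string: "180A")]])
    }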
Posted. Last updated.
0 Replies
48 Views
The regular case: open a sheet by clicking a button, then close the sheet using a Cancel button on it. The isPresented state variable is changed immediately, but while the dismissing animation isn't totally finished, it's impossible to click the button on the main parent screen and call the sheet presentation again. As I understand it, UIKit works differently: it lets us click the button and simply presents the new modal view exactly when the previous animation is finished.

    struct MyView: View {
        @State private var isPresented = false

        public var body: some View {
            VStack {
                Button(action: {
                    isPresented = true
                }, label: {
                    Text("Button")
                })
                Spacer()
            }
            .sheet(isPresented: $isPresented) {
                sheetContent
            }
        }

        var sheetContent: some View {
            VStack {
                Text("Cancel")
                    .onTapGesture {
                        isPresented = false
                    }
                Spacer()
            }
        }
    }

@Apple Could you please fix this in SwiftUI?
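An untested workaround sketch in the meantime: remember a tap that arrives while the dismissal is still animating and honor it from onDismiss, which fires once the animation has finished.

    struct MyView: View {
        @State private var isPresented = false
        @State private var isDismissing = false
        @State private var reopenRequested = false

        var body: some View {
            VStack {
                Button("Button") {
                    if isDismissing {
                        reopenRequested = true   // animation still running
                    } else {
                        isPresented = true
                    }
                }
                Spacer()
            }
            .sheet(isPresented: $isPresented, onDismiss: {
                isDismissing = false
                if reopenRequested {
                    reopenRequested = false
                    isPresented = true
                }
            }) {
                Text("Cancel")
                    .onTapGesture {
                        isDismissing = true
                        isPresented = false
                    }
            }
        }
    }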
Posted. Last updated.
0 Replies
44 Views
Condition: We have an existing app that runs on iPhone and iPad. We want to make it compatible with macOS, and along the way we want to leverage some macOS native components. We achieved this using Mac Catalyst, but now we want to build common components using SwiftUI for both the macOS and iOS platforms.

Challenge: using SwiftUI views for Mac development.

Approach 1: We created a Mac bundle that contained Mac-specific views (using AppKit views). This approach worked fine for creating and using components that are specific to macOS. But while developing and using SwiftUI views in the Mac bundle, we face the following error: NSHostingViewController symbol not found.

Approach 2: We tried creating a separate Mac app and making it part of the Mac Catalyst app. With this approach we were able to show an NSStatusBar and add text using a SwiftUI view, but the status bar appearance is inconsistent: sometimes the NSStatusBar icon appears, other times it just won't appear.

Can anyone help with the right approach for this scenario?
Posted. Last updated.
1 Reply
112 Views
I'm preparing an app to migrate from ObservableObject to Observable, from EnvironmentObject to Environment(MyClass.self), and so forth. That works OK and is very simple. But it forces the app to run only on macOS 14 or more recent, so I would like to have it conditionally, such as:

If macOS 14 is available:

    @Environment(ActiveConfiguration.self) var activeConfiguration: ActiveConfiguration

otherwise:

    @EnvironmentObject var activeConfiguration: ActiveConfiguration

The same for the class declaration. If macOS 14 is available:

    @Observable
    class ActiveConfiguration {
        var config = Configuration()
    }

otherwise:

    class ActiveConfiguration: ObservableObject {
        @Published var config = Configuration()
    }

Is there a way to achieve this? (I understand it is not possible through extensions of macros, as we can do for modifiers.) Could I define two classes for ActiveConfiguration, but then what about @Environment?
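For what it's worth, a hedged sketch of the two-class direction: @available can gate the @Observable class, but property wrappers on stored properties cannot be chosen at runtime, so each consuming view also ends up needing an OS-specific variant selected at a structural branch point. Names here are illustrative.

    import SwiftUI

    @available(macOS 14.0, *)
    @Observable final class ActiveConfiguration {
        var config = Configuration()
    }

    final class LegacyActiveConfiguration: ObservableObject {
        @Published var config = Configuration()
    }

    struct RootView: View {
        var body: some View {
            if #available(macOS 14.0, *) {
                ModernContent()   // uses @Environment(ActiveConfiguration.self)
            } else {
                LegacyContent()   // uses @EnvironmentObject
            }
        }
    }

In practice, many projects simply stay on ObservableObject (which still works on macOS 14) until their deployment target reaches macOS 14, since the duplication above spreads through every view that touches the model.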
Posted by Claude31. Last updated.
2 Replies
128 Views
On the shop and content views of my app, I have a shopping cart SF Symbol that I've modified with a conditional to show the number of items in the cart when the number of items is above zero. However, whenever I change tabs and come back, that icon disappears even though there should be an item in the cart. I have a video of the error, but I have no idea how to post it. Here is some of the code; let me know if you need to see more of it.

CartManager.swift:

    import Foundation
    import SwiftUI

    @Observable
    class CartManager {
        /*private(set)*/ var products: [Product] = []
        private(set) var total: Int = 0
        private(set) var numberofproducts: Int = 0

        func count() -> Int {
            numberofproducts = products.count
            return numberofproducts
        }

        func addToCart(product: Product) {
            products.append(product)
            total += product.price
            numberofproducts = products.count
        }

        func removeFromCart(product: Product) {
            products = products.filter { $0.id != product.id }
            total -= product.price
            numberofproducts = products.count
        }
    }

ShopPage.swift:

    import SwiftUI

    struct ShopPage: View {
        @Environment(CartManager.self) private var cartManager
        var columns = [GridItem(.adaptive(minimum: 135), spacing: 0)]
        @State private var searchText = ""
        let items = ["LazyHeadphoneBean", "ProperBean", "BabyBean", "RoyalBean", "SpringBean", "beanbunny", "CapBean"]

        var filteredItems: [Bean] {
            guard searchText.isEmpty else { return beans }
            return beans.filter { $0.imageName.localizedCaseInsensitiveContains(searchText) }
        }

        var body: some View {
            NavigationStack {
                ZStack(alignment: .top) {
                    Color.white
                        .ignoresSafeArea(edges: .all)
                    VStack {
                        AppBar()
                            .environment(cartManager)
                        ScrollView {
                            LazyVGrid(columns: columns, spacing: 20) {
                                ForEach(productList, id: \.id) { product in
                                    NavigationLink {
                                        beanDetail(product: product)
                                            .environment(cartManager)
                                    } label: {
                                        ProductCardView(product: product)
                                            .environment(cartManager)
                                    }
                                }
                            }
                        }
                    }
                    .searchable(text: $searchText, placement: .navigationBarDrawer(displayMode: .always))
                }
            }
            .environment(cartManager)
        }

        var searchResults: [String] {
            if searchText.isEmpty {
                return items
            } else {
                return items.filter { $0.contains(searchText) }
            }
        }
    }

    #Preview {
        ShopPage()
            .environment(CartManager())
    }

    struct AppBar: View {
        @Environment(CartManager.self) private var cartManager

        var body: some View {
            NavigationStack {
                VStack(alignment: .leading) {
                    HStack {
                        Spacer()
                        NavigationLink(destination: CartView()
                            .environment(cartManager)
                        ) {
                            CartButton(numberOfProducts: cartManager.products.count)
                        }
                    }
                    Text("Shop for Beans")
                        .font(.largeTitle.bold())
                }
            }
            .padding()
            .environment(CartManager())
        }
    }

CartButton.swift:

    import SwiftUI

    struct CartButton: View {
        var numberOfProducts: Int

        var body: some View {
            ZStack(alignment: .topTrailing) {
                Image(systemName: "cart.fill")
                    .foregroundStyle(.black)
                    .padding(5)
                if numberOfProducts > 0 {
                    Text("\(numberOfProducts)")
                        .font(.caption2).bold()
                        .foregroundStyle(.white)
                        .frame(width: 15, height: 15)
                        .background(Color(hue: 1.0, saturation: 0.89, brightness: 0.835))
                        .clipShape(RoundedRectangle(cornerRadius: 50))
                }
            }
        }
    }

    #Preview {
        CartButton(numberOfProducts: 1)
    }
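One hypothesis from the code above, offered as a guess rather than a diagnosis: AppBar ends with .environment(CartManager()), which injects a brand-new, empty manager into its subtree and can shadow the one ShopPage passes in, and AppBar also wraps its content in a second NavigationStack. A trimmed sketch of AppBar without either:

    struct AppBar: View {
        @Environment(CartManager.self) private var cartManager

        var body: some View {
            VStack(alignment: .leading) {
                HStack {
                    Spacer()
                    NavigationLink(destination: CartView()) {
                        // Reads the count from the single shared manager.
                        CartButton(numberOfProducts: cartManager.products.count)
                    }
                }
                Text("Shop for Beans")
                    .font(.largeTitle.bold())
            }
            .padding()
        }
    }

The same applies anywhere else a stray CartManager() is created outside the app root; with @Observable there should be exactly one live instance, injected once (for example at the TabView level).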
Posted by KittyCat. Last updated.
0 Replies
94 Views
Hello. I have the following question. I call the nextBatch() method on MusicItemCollection to get the next collection of an artist's albums. Here is my method:

    func allAlbums2(artist: Artist) async throws -> [Album] {
        var allAlbums: [Album] = []
        artist.albums?.forEach { allAlbums.append($0) }
        guard let albums = artist.albums else { return [] }
        var albumsCollection = albums
        while albumsCollection.hasNextBatch {
            let response = try await albumsCollection.nextBatch()
            if let response {
                albumsCollection = response
                var albums = [Album]()
                albumsCollection.forEach { albums.append($0) }
                allAlbums.append(contentsOf: albums)
            }
        }
        return allAlbums
    }

The problem is as follows: sometimes an album that is in the catalog is not returned by any batch (I noticed one; I didn't check the others). This happens about once every 50-100 requests. The number of batches returned is the same when the album is skipped, and the same albums are returned in the batch where the album was missed. My question: do I have a multi-threading problem or something else, or is it a problem in the API? The file linked below contains prints of the returned batches, one run with a missing album and one without; it is about 3000 lines. I will be grateful for a hint or an answer. Thank you! Here is the link to the file: https://files.fm/u/nj4r5wgyg3
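A small diagnostic sketch that might help narrow it down, purely an assumption that it is useful here: track IDs while accumulating so a run that loses an album can be compared with one that doesn't, and bail out if nextBatch() ever returns nil rather than looping again on the same collection (the original while loop would spin forever in that case).

    func allAlbumsDiagnostic(artist: Artist) async throws -> [Album] {
        var allAlbums: [Album] = []
        var seen = Set<MusicItemID>()
        guard var collection = artist.albums else { return [] }
        for album in collection {
            allAlbums.append(album)
            seen.insert(album.id)
        }
        while collection.hasNextBatch {
            // nil means no further batch could be fetched; avoid spinning.
            guard let next = try await collection.nextBatch() else { break }
            collection = next
            for album in next {
                if !seen.insert(album.id).inserted {
                    print("duplicate across batches: \(album.id) \(album.title)")
                }
                allAlbums.append(album)
            }
        }
        return allAlbums
    }

If the duplicate line ever prints, the "missing" album is more likely being swallowed by overlapping batch windows on the server side than dropped by your code.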
Posted by 20a. Last updated.
0 Replies
73 Views
I got this SSML from w3.org. AVSpeechUtterance(ssmlRepresentation:) is not complying with the contour: it doesn't change the pitch in Hz.

    <?xml version="1.0"?>
    <speak version="1.1"
           xmlns="http://www.w3.org/2001/10/synthesis"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
               http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
           xml:lang="en-US">
        <prosody contour="(0%,+20Hz) (10%,+30%) (40%,+10Hz)">
            good morning
        </prosody>
    </speak>

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let localUtterance = AVSpeechUtterance(ssmlRepresentation: self.speechSML) else {
            print("SML did not work.")
            return
        }
        self.utterance = localUtterance
        self.utterance.voice = self.voiceNoelle
        self.synthesizer.speak(self.utterance)
    }
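I can't say which SSML prosody attributes the engine actually honors, but if contour specifically is ignored, a coarse fallback is the utterance-level pitch control; the 1.2 value is an arbitrary example (the documented range is 0.5 to 2.0), and synthesizer/voiceNoelle are the poster's existing properties.

    let utterance = AVSpeechUtterance(string: "good morning")
    utterance.pitchMultiplier = 1.2   // whole-utterance pitch, not a contour
    utterance.voice = self.voiceNoelle
    self.synthesizer.speak(utterance)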
Posted. Last updated.