Core Graphics


Harness the power of Quartz technology to perform lightweight 2D rendering with high-fidelity output using Core Graphics.


Posts under Core Graphics tag

56 Posts
Post not yet marked as solved
1 Reply
1.4k Views
I needed an infinite canvas for my app, which is basically a drawing board where one can draw things with a pen. I thought of having a very large custom UIView inside a UIScrollView and doing all the drawing in that custom view, but I ended up with a warning like the one below and nothing drawn on screen:

[<CALayer: 0x5584190> display]: Ignoring bogus layer size (50000, 50000)

Which means I can't have such a big CALayer to draw into. The solution? An alternative: CATiledLayer. I made my large UIView backed by a CATiledLayer, and after setting suitable levelsOfDetail and levelsOfDetailBias values, things worked like a charm. Until I faced another problem: since CATiledLayer caches its drawing at different zoom levels, if I scale the view after changing the drawing content, the cached drawing appears first and only then is the new content drawn. I can't find an option to invalidate the caches at the different levels. All the solutions I've come across clear the entire contents of the CATiledLayer when the drawing content changes, which doesn't help either. Am I missing something here? Is there a way to clear the caches at different levels, or any other solution that could meet my need? Can someone help me with this?
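For context, a minimal CATiledLayer-backed view of the kind described above might look like this. This is a sketch, not the poster's code; the class name, tile size, and level-of-detail values are illustrative:

```swift
import UIKit

// A large drawing view backed by CATiledLayer, as described in the post.
final class TiledCanvasView: UIView {
    // Back this view with a CATiledLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        if let tiled = layer as? CATiledLayer {
            tiled.levelsOfDetail = 4        // cache drawing at 4 zoom levels
            tiled.levelsOfDetailBias = 2    // allow 2 levels of magnification
            tiled.tileSize = CGSize(width: 512, height: 512)
        }
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ rect: CGRect) {
        // CATiledLayer calls this once per tile; draw only what
        // intersects `rect`.
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.white.cgColor)
        ctx.fill(rect)
        // ... draw the strokes that intersect `rect` ...
    }
}
```

Since CATiledLayer exposes no per-level invalidation API, one commonly suggested workaround is to discard the cached tiles wholesale when the content changes, for example by swapping in a freshly created tiled view; setNeedsDisplay only marks tiles dirty, so stale cached levels can still flash during zooming.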
Posted Last updated
.
Post not yet marked as solved
0 Replies
437 Views
I would like to get information about the connected display, such as the vendor number, EISA ID, and so on, after connecting an external display via Screen Mirroring -> "Use As Separate Display". When the same display is connected through the HDMI port versus extend mode in Screen Mirroring, the reported information is not identical:

HDMI:
Other display found - ID: 19241XXXX, Name: YYYY (Vendor: 19ZZZ, Model: 57WWW)

Screen Mirroring, extend mode:
Other display found - ID: 41288XX, Name: AAA (Vendor: 163ZYYBBB, Model: 16ZZWWYYY)

I tried to get the display information with the method below.

```swift
func configureDisplays() {
    var onlineDisplayIDs = [CGDirectDisplayID](repeating: 0, count: 16)
    var displayCount: UInt32 = 0
    guard CGGetOnlineDisplayList(16, &onlineDisplayIDs, &displayCount) == .success else {
        os_log("Unable to get display list.", type: .info)
        return
    }
    for onlineDisplayID in onlineDisplayIDs where onlineDisplayID != 0 {
        let name = DisplayManager.getDisplayNameByID(displayID: onlineDisplayID)
        let id = onlineDisplayID
        let vendorNumber = CGDisplayVendorNumber(onlineDisplayID)
        let modelNumber = CGDisplayModelNumber(onlineDisplayID)
        let serialNumber = CGDisplaySerialNumber(onlineDisplayID)
        if !DEBUG_SW, DisplayManager.isAppleDisplay(displayID: onlineDisplayID) {
            let appleDisplay = AppleDisplay(id, name: name, vendorNumber: vendorNumber, modelNumber: modelNumber, serialNumber: serialNumber, isVirtual: isVirtual, isDummy: isDummy)
            os_log("Apple display found - %{public}@", type: .info, "ID: \(appleDisplay.identifier), Name: \(appleDisplay.name) (Vendor: \(appleDisplay.vendorNumber ?? 0), Model: \(appleDisplay.modelNumber ?? 0))")
        } else {
            let otherDisplay = OtherDisplay(id, name: name, vendorNumber: vendorNumber, modelNumber: modelNumber, serialNumber: serialNumber, isVirtual: isVirtual, isDummy: isDummy)
            os_log("Other display found - %{public}@", type: .info, "ID: \(otherDisplay.identifier), Name: \(otherDisplay.name) (Vendor: \(otherDisplay.vendorNumber ?? 0), Model: \(otherDisplay.modelNumber ?? 0))")
        }
    }
}
```

Can we get the same display information when connecting an external display via the HDMI port and via extend mode in Screen Mirroring?
Posted
by lgminh.
Last updated
.
Post marked as solved
9 Replies
909 Views
Hi all: I have a macOS application which captures mouse events:

```objc
CGEventMask eventMask = CGEventMaskBit(kCGEventMouseMoved) |
    CGEventMaskBit(kCGEventLeftMouseUp) | CGEventMaskBit(kCGEventLeftMouseDown) |
    CGEventMaskBit(kCGEventRightMouseUp) | CGEventMaskBit(kCGEventRightMouseDown) |
    CGEventMaskBit(kCGEventOtherMouseUp) | CGEventMaskBit(kCGEventOtherMouseDown) |
    CGEventMaskBit(kCGEventScrollWheel) |
    CGEventMaskBit(kCGEventLeftMouseDragged) |
    CGEventMaskBit(kCGEventRightMouseDragged) |
    CGEventMaskBit(kCGEventOtherMouseDragged);
_eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap,
                             kCGEventTapOptionDefault, eventMask, &MouseCallback, nil);
_runLoopRef = CFRunLoopGetMain();
_runLoopSourceRef = CFMachPortCreateRunLoopSource(NULL, _eventTap, 0);
CFRunLoopAddSource(_runLoopRef, _runLoopSourceRef, kCFRunLoopCommonModes);
CGEventTapEnable(_eventTap, true);

CGEventRef MouseCallback(CGEventTapProxy proxy, CGEventType type, CGEventRef event, void *refcon) {
    NSLog(@"Mouse event: %d", type);
    return event;
}
```

This mouse logger needs the Accessibility privilege granted in Privacy & Security. But I found that if Accessibility is turned off while the CGEventTap is running, left and right clicks are blocked until macOS is restarted. Replacing kCGEventTapOptionDefault with kCGEventTapOptionListenOnly fixes this issue, but I have other features which require kCGEventTapOptionDefault. So I tried to detect that Accessibility has been disabled and remove the CGEventTap:

```objc
[[NSDistributedNotificationCenter defaultCenter] addObserver:self
    selector:@selector(didToggleAccessStatus:)
    name:@"com.apple.accessibility.api"
    object:nil
    suspensionBehavior:NSNotificationSuspensionBehaviorDeliverImmediately];
```

However, the notification isn't sent if the user doesn't turn Accessibility off but instead removes the app from the list. Worse, AXIsProcessTrusted() continues to return true. Is there a way to fix the blocked mouse, or to detect that the Accessibility entry has been removed? Thanks!
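One defensive pattern worth noting here (a sketch, and not a guaranteed fix for the removed-from-list case) is to watch for the synthetic "tap disabled" event types the system delivers to the tap callback, and tear the tap down cleanly there so clicks stop being swallowed:

```swift
import CoreGraphics

// Sketch: the tap created elsewhere, as in the post above.
var eventTap: CFMachPort?

// Handle the synthetic disable events inside the tap callback.
let callback: CGEventTapCallBack = { _, type, event, _ in
    if type == .tapDisabledByTimeout {
        // The callback ran too slowly and the system disabled the tap;
        // it is safe to re-enable and keep going.
        if let tap = eventTap { CGEvent.tapEnable(tap: tap, enable: true) }
        return Unmanaged.passUnretained(event)
    }
    if type == .tapDisabledByUserInput {
        // The tap was disabled externally (for example after a privacy
        // change). Leave it disabled and clean up elsewhere so mouse
        // clicks are no longer routed through a dead tap.
        return Unmanaged.passUnretained(event)
    }
    // Normal mouse events pass through unchanged.
    return Unmanaged.passUnretained(event)
}
```

Whether the revocation scenario in the post actually delivers one of these events is an assumption on my part; the pattern is mainly cheap insurance against the tap silently dying.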
Posted Last updated
.
Post not yet marked as solved
1 Reply
906 Views
Hello there, I am building an app that's going to be keyboard oriented, meaning that the UI will be minimal and live only in the menu bar; all core functions will be performed through keyboard hotkeys that should be available from anywhere in the system. I know about a Swift library called Hotkey that does this and it seems to work; however, it uses the Carbon API, which has been deprecated for many years. Its code is also double Dutch to me, and since it relies on legacy code I wish I could at least understand it so I can maintain my own version of it in case macOS finally sheds the Carbon API completely. Is there a way to register global hotkeys in a more modern way?
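For listen-only use cases, where the app only needs to observe a key combination rather than consume it, a Carbon-free sketch might look like this. The chosen shortcut (Option+Command+K) is just an example, and the app must be trusted for Accessibility for global monitors to receive events:

```swift
import AppKit

// Sketch: observe Option+Command+K system-wide without Carbon.
// NSEvent global monitors are listen-only; they cannot swallow the event.
let monitor = NSEvent.addGlobalMonitorForEvents(matching: .keyDown) { event in
    let wanted: NSEvent.ModifierFlags = [.command, .option]
    if event.modifierFlags.intersection(.deviceIndependentFlagsMask) == wanted,
       event.charactersIgnoringModifiers == "k" {
        print("Global hotkey pressed")
    }
}

// Later, to stop observing:
// if let monitor { NSEvent.removeMonitor(monitor) }
```

If the hotkey must actually be consumed (so the frontmost app never sees it), a monitor is not enough; that needs a CGEventTap, as in other posts under this tag, or the Carbon RegisterEventHotKey call that libraries like the one mentioned wrap.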
Posted
by NunoNuno.
Last updated
.
Post not yet marked as solved
0 Replies
1.2k Views
I'm using Core Graphics and Image I/O in a macOS command-line tool. My program works fine, but after the first drawing into a bitmap context, messages like the following are output to the console:

```
2022-12-20 16:33:47.824937-0500 RandomImageGenerator[4436:90170] Metal API Validation Enabled
AVEBridge Info: AVEEncoder_CreateInstance: Received CreateInstance (from VT)
AVEBridge Info: connectHandler: Device connected (0x000000010030b520)Assert - (remoteService != NULL) - f: /AppleInternal/Library/BuildRoots/43362d89-619c-11ed-a949-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/AppleAVEBridge/AppleAVEEncoder/AppleAVEEncoder.c l: 291
AVE XPC Error: could not find remote service
Assert - (err == noErr) - f: /AppleInternal/Library/BuildRoots/43362d89-619c-11ed-a949-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/AppleAVEBridge/AppleAVEEncoder/AppleAVEEncoder.c l: 1961
AVE ERROR: XPC failed
AVEBridge Info: stopUserClient: IOServiceClose was successful.
AVEBridge Error: AVEEncoder_CreateInstance: returning err = -12908
```

These messages get in the way of my own console output. How do I stop them from being displayed? This post on Stack Overflow (https://stackoverflow.com/questions/37800790/hide-strange-unwanted-xcode-logs) does not appear to be relevant to this issue.
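One workaround sketch, under the assumption (unverified) that the framework chatter goes to stderr while the tool's own results go to stdout: silence or redirect stderr at launch so the two streams stay apart. Either run the tool as `./tool 2>/dev/null`, or do it in-process:

```swift
import Darwin

// Sketch: route stderr to /dev/null so framework log chatter does not
// interleave with the tool's own stdout output. This also hides any
// genuine errors written to stderr, so it is a blunt instrument.
let devNull = open("/dev/null", O_WRONLY)
if devNull >= 0 {
    dup2(devNull, STDERR_FILENO)
    close(devNull)
}

print("tool output stays visible on stdout")
```

A gentler variant is to dup2 stderr to a log file instead of /dev/null, keeping the messages inspectable without cluttering the terminal.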
Posted Last updated
.
Post not yet marked as solved
1 Reply
449 Views
```swift
Rectangle()
    .fill(
        RadialGradient.radialGradient(
            colors: [.blue, .yellow],
            center: UnitPoint(x: 0.5, y: 0.5),
            startRadius: 0,
            endRadius: 50)
    )
    .frame(width: 100, height: 100)
```

In the above code I have a Rectangle with a simple radial gradient. I want to apply an arbitrary transformation matrix to the gradient so I can achieve certain effects. I tried the following, but it applies the transformation matrix to the frame instead of the shader/gradient:

```swift
Rectangle()
    .overlay(
        RadialGradient.radialGradient(
            colors: [.blue, .yellow],
            center: UnitPoint(x: 0.5, y: 0.5),
            startRadius: 0,
            endRadius: 50)
        .transformEffect(
            CGAffineTransform(
                a: -0.5000000596046448, b: 0.4999999403953552,
                c: -0.972577691078186, d: -0.9725777506828308,
                tx: 0.5000000596046448, ty: 1.4725778102874756)
            .translatedBy(x: -50, y: -100)
        )
    )
    .frame(width: 100, height: 100)
```

It results in a transformation of the frame instead of the shader/gradient. Thanks in advance 🙌🏻
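One workaround sketch (my assumption about the intent, not a confirmed answer): transform an oversized gradient view, then clip the result back to the rectangle so the visible frame stays put. The matrix here is a placeholder rotation rather than the poster's values:

```swift
import SwiftUI

// Sketch: transform the gradient, then crop it to the shape's bounds so
// the frame itself is unaffected by the transform.
struct SkewedGradientRect: View {
    var body: some View {
        Rectangle()
            .fill(.clear)
            .overlay(
                RadialGradient(
                    colors: [.blue, .yellow],
                    center: .center,
                    startRadius: 0,
                    endRadius: 50)
                // Oversize the gradient so its edges still cover the
                // rectangle after the transform.
                .frame(width: 300, height: 300)
                .transformEffect(CGAffineTransform(rotationAngle: .pi / 6))
            )
            .clipShape(Rectangle()) // crop the transformed gradient
            .frame(width: 100, height: 100)
    }
}
```

The key point is that transformEffect is applied to the gradient view inside the overlay, while clipShape on the outer rectangle restores the original silhouette.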
Posted Last updated
.
Post not yet marked as solved
0 Replies
548 Views
I'm exploring using the CARemoteLayerClient/CARemoteLayerServer API to render a layer from another process, as described in the docs, but I can't seem to get a very simple example to work. Here's one I'd expect to work:

```swift
// Run with `swift file.swift`
import AppKit

let app = NSApplication.shared

class AppDelegate: NSObject, NSApplicationDelegate {
    let window = NSWindow(
        contentRect: NSMakeRect(200, 200, 400, 200),
        styleMask: [.titled, .closable, .miniaturizable, .resizable],
        backing: .buffered,
        defer: false,
        screen: nil
    )

    func applicationDidFinishLaunching(_ notification: Notification) {
        window.makeKeyAndOrderFront(nil)

        let view = NSView()
        view.frame = NSRect(x: 0, y: 0, width: 150, height: 150)
        view.layerUsesCoreImageFilters = true
        view.wantsLayer = true

        let server = CARemoteLayerServer.shared()
        let client = CARemoteLayerClient(serverPort: server.serverPort)
        print(client.clientId)
        client.layer = CALayer()
        client.layer?.backgroundColor = NSColor.red.cgColor // Expect red rectangle
        client.layer?.bounds = CGRect(x: 0, y: 0, width: 100, height: 100)

        let serverLayer = CALayer(remoteClientId: client.clientId)
        serverLayer.bounds = CGRect(x: 0, y: 0, width: 100, height: 100)
        view.layer?.addSublayer(serverLayer)
        view.layer?.backgroundColor = NSColor.blue.cgColor // Blue background to confirm the parent layer exists

        window.contentView?.addSubview(view)
    }
}

let delegate = AppDelegate()
app.delegate = delegate
app.run()
```

In this example I'd expect a red rectangle to appear as the remote layer. If I inspect the server's layer hierarchy, I see the correct CALayerHost with the correct client ID being created, but it doesn't display the contents set from the client side. After investigating this thread, https://bugs.chromium.org/p/chromium/issues/detail?id=312462, and some demo projects, I've found that the workarounds previously found to make this API work no longer seem to work on my machine (M1 Pro, Ventura). Am I missing something glaringly obvious in my simple implementation, or is this a bug?
Posted Last updated
.
Post marked as solved
1 Reply
561 Views
I am writing a tool that tracks statistics about keystrokes. For that I create an event tap using CGEventTapCreate (docs). Since my code does not alter events, I create the tap with the kCGEventTapOptionListenOnly option. Do I still need to minimize the runtime of my event-handling callback for fast processing of keyboard events? I assume that a listen-only handler does not block the OS-internal event-handling queue, but I can't find anything assertive about that in the documentation. Many thanks in advance.
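Regardless of the answer, one low-risk pattern (a sketch under my own assumptions, not the poster's code) is to keep the callback itself trivial and hand the bookkeeping to a serial queue, so the tap thread is never held up:

```swift
import CoreGraphics
import Dispatch

// Sketch: record keystroke statistics off the tap thread. The callback
// copies only the key code, then returns immediately.
let statsQueue = DispatchQueue(label: "keystroke-stats")
var keyCounts: [Int64: Int] = [:]  // keyCode -> count, touched only on statsQueue

let callback: CGEventTapCallBack = { _, type, event, _ in
    if type == .keyDown {
        let keyCode = event.getIntegerValueField(.keyboardEventKeycode)
        // Defer the counting; nothing slow happens on the tap thread.
        statsQueue.async { keyCounts[keyCode, default: 0] += 1 }
    }
    // Listen-only tap: the event is returned unchanged.
    return Unmanaged.passUnretained(event)
}
```

Even if a listen-only tap turns out not to block delivery, a slow callback can still get the tap disabled with kCGEventTapDisabledByTimeout, so a fast callback is cheap insurance either way.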
Posted
by rsto.
Last updated
.
Post not yet marked as solved
2 Replies
514 Views
Hello, there is a problem when querying flag state using the following API: CGEventSourceFlagsState(kCGEventSourceStateHIDSystemState). When there is external mouse movement, the flags get reset by a low-level OS event corresponding to the mouse move. As a result, the modifier keys don't work as expected when using Java's Robot, which in turn uses Apple's native CGEvent APIs. The issue occurs at CRobot.m#L295: the flags get reset or cleared when the mouse is moved physically in unison with Robot's key events. The difference can be seen in the logs without and with mouse movement while typing 'A'. (log attached) Logs. Due to this issue, applications that use Java's Robot on Mac don't work as expected; in particular, this behavior breaks the usability of the on-screen accessibility keyboard application TouchBoard. More details of this use case here: https://github.com/adoptium/adoptium-support/issues/710. The Robot is initialized with the following initial configuration: https://github.com/openjdk/jdk/blob/ac6af6a64099c182e982a0a718bc1b780cef616e/src/java.desktop/macosx/native/libawt_lwawt/awt/CRobot.m#L125. Are we missing anything during initialization of Robot that causes this issue? Why does an external mouse movement cause the event flags to reset? Since a low-level OS event corresponding to a mouse move is causing the flags to reset, there might be an issue within the CGEventSourceFlagsState() API. Is there a reason why an external mouse event causes the CGEventFlags state to reset to 0? Is there any known issue regarding CGEventSourceFlagsState() and a workaround for it?
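A workaround sketch sometimes suggested for synthetic events (an assumption on my part, not a confirmed fix for Robot itself) is to stamp the desired modifiers directly onto each synthesized event with the event's flags property, so that a concurrent physical mouse move cannot clear the modifier state out from under the key press:

```swift
import CoreGraphics

// Sketch: synthesize Shift+A by setting the flags on the events themselves
// rather than relying on CGEventSourceFlagsState.
let src = CGEventSource(stateID: .hidSystemState)
let aKeyCode: CGKeyCode = 0  // 'A' on a US ANSI layout (assumption)

if let down = CGEvent(keyboardEventSource: src, virtualKey: aKeyCode, keyDown: true),
   let up = CGEvent(keyboardEventSource: src, virtualKey: aKeyCode, keyDown: false) {
    down.flags = .maskShift  // the modifier travels with the event
    up.flags = .maskShift
    down.post(tap: .cghidEventTap)
    up.post(tap: .cghidEventTap)
}
```

Because the flags ride on each event, the receiving application sees Shift held for exactly these two events regardless of what the HID-system flag state reports at that instant.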
Posted
by harsh11.
Last updated
.
Post not yet marked as solved
2 Replies
1.3k Views
I used the following code to decode a PNG image into an allocated memory block imageData.

```objc
- (void)decodeImage:(UIImage *)image {
    GLubyte *imageData = (GLubyte *)malloc(image.size.width * image.size.height * 4);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(imageData,
                                                      image.size.width, image.size.height,
                                                      8, image.size.width * 4, colorSpace,
                                                      kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(imageContext,
                       CGRectMake(0.0, 0.0, image.size.width, image.size.height),
                       image.CGImage);
    CGContextRelease(imageContext);
    CGColorSpaceRelease(colorSpace);

    int bytesPerRow = image.size.width * 4;
    // Log the [330, 150] RGBA value here; the alpha value comes out wrong.
    // (Note: bytesPerRow already includes the 4 bytes per pixel, so the row
    // offset is targetRow * bytesPerRow, not targetRow * bytesPerRow * 4
    // as in the original post.)
    int targetRow = 330;
    int targetCol = 150;
    u_int32_t r = (u_int32_t)imageData[targetRow * bytesPerRow + targetCol * 4 + 0];
    u_int32_t g = (u_int32_t)imageData[targetRow * bytesPerRow + targetCol * 4 + 1];
    u_int32_t b = (u_int32_t)imageData[targetRow * bytesPerRow + targetCol * 4 + 2];
    u_int32_t a = (u_int32_t)imageData[targetRow * bytesPerRow + targetCol * 4 + 3];
    free(imageData);
}
```

The CGBitmapInfo used is kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault. The problem is that the alpha channel is lost in the decoded result. For example, the RGBA values of the target PNG at [row, col] = [330, 150] are R = 240, B = 125, G = 106, A = 80. If the decoding were correct, the expected result with premultiplied alpha would be R = 75, G = 39, B = 33, A = 80. However, after decoding the PNG on iOS 17, the result is R = 75, G = 39, B = 33, A = 255: the alpha values are all forced to 255. Xcode Version 15.0 beta (15A5160n), iPhone 14 Pro. The png file:
Posted Last updated
.
Post not yet marked as solved
3 Replies
941 Views
CGRequestScreenCaptureAccess() and CGPreflightScreenCaptureAccess() should return true or false depending on whether the app has screen-recording permission. CGRequestScreenCaptureAccess() will bring up a dialog if there is no permission; the dialog directs the user to System Settings, where the permission can be set. CGRequestScreenCaptureAccess() returns immediately in any case. If it brings up a dialog, that happens from a separate process (Finder?), and that dialog stays up even if the app quits before it is dismissed. Since CGRequestScreenCaptureAccess() returns immediately, I tried tracking the permission state by setting up a timer that repeatedly calls CGPreflightScreenCaptureAccess() until the permission is set. But if it returned false to begin with, it continues to return false even after the app has been given permission for screen capture. Why is that? When the permission to capture is set in System Settings, a dialog says, "(App Name) may not be able to record the contents of your screen until it is quit." I imagine that is related to CGPreflightScreenCaptureAccess() continuing to return false. But why "may not be able"? When does the permission change take effect immediately? And how can the change be detected?
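For reference, a polling sketch of the kind described (with my own assumed interval) looks like this; in practice the new permission appears to become visible to the process only after it relaunches, which would explain both the polling behavior and the "until it is quit" wording:

```swift
import CoreGraphics
import Foundation

// Sketch: ask for screen-capture permission, then poll for the grant.
// Caveat: the preflight result is reportedly cached per process launch,
// so this loop may never observe the change until the app restarts.
if !CGPreflightScreenCaptureAccess() {
    CGRequestScreenCaptureAccess()  // returns immediately; may show the system dialog
    Timer.scheduledTimer(withTimeInterval: 2.0, repeats: true) { timer in
        if CGPreflightScreenCaptureAccess() {
            print("Screen capture permission granted")
            timer.invalidate()
        }
    }
}
```

Given that caching, a pragmatic approach is to treat a false preflight as "prompt, then ask the user to relaunch", rather than waiting on the poll.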
Posted
by ymeroz.
Last updated
.
Post not yet marked as solved
14 Replies
14k Views
Hi there, I just bought an IIYAMA G-MASTER GB3461WQSU-B1, which has a native resolution of 3440x1440, but my MacBook Pro (Retina, 15-inch, Mid 2014) doesn't recognise the monitor and I can't run it at its full resolution. It is currently recognised as a PL3461WQ 34.5-inch (2560 x 1440). Is there anything I can do to get it sorted, or will I have to wait until this monitor's driver is added to the Big Sur list? Thanks
Posted Last updated
.
Post not yet marked as solved
5 Replies
751 Views
I'm dynamically creating a UIImage from NSData using [UIImage imageWithData:data]. The call succeeds and returns a valid UIImage, but every time I make the call, CGImageCopyImageSource:4692: *** CGImageGetImageSource: cannot get CGImageSourceRef from CGImageRef (CGImageSourceRef was already released) is printed to the Console (not the Xcode console, but the Console app streaming from the device). I haven't determined whether this is actually a problem, but I have customer reports of my app crashing after a long period of time in which this particular code path is called frequently. If I put a symbolic breakpoint at ERROR_CGImageCopyImageSource_WAS_CALLED_WITH_INVALID_CGIMAGE, it is hit. I'm not sure what I could be doing to cause this error, since I'm passing valid data in and getting what looks like valid output.
Posted
by jhndnnqsc.
Last updated
.
Post not yet marked as solved
0 Replies
552 Views
I want to rotate 10-bit-per-component, 3-component RGB CGImages (or NSImages) by 90-degree angles. The images are loaded from 10-bpc HEIF files. This is for a Mac app, so I don't have access to UIKit. My first thought was to use the Accelerate vImage framework; however, vImage does not seem to support 10-bpc images, and indeed I've tried this approach without success. I know I can do this using the CIImage.oriented() method, but I don't want the substantial overhead of creating a CIImage from a CGImage and then rendering back to a CGImage. Any suggestions? Efficiency/speed are important. Thanks.
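One fallback sketch (my assumption, not a vetted high-performance path): redraw the image into a 16-bit-per-component bitmap context with a rotation transform. A 16-bpc context is deep enough to hold 10-bit content without truncation, at the cost of a full redraw:

```swift
import CoreGraphics

// Sketch: rotate a CGImage 90° clockwise by redrawing it into a
// 16-bit-per-component RGBA context (deep enough for 10-bit content).
func rotated90Clockwise(_ image: CGImage) -> CGImage? {
    let w = image.width, h = image.height
    guard let ctx = CGContext(
        data: nil,
        width: h, height: w,              // dimensions swap for a 90° turn
        bitsPerComponent: 16,
        bytesPerRow: 0,                   // let Core Graphics pick the stride
        space: image.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!,
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue |
                    CGBitmapInfo.byteOrder16Little.rawValue
    ) else { return nil }

    // Rotate about the origin, then shift back into the visible area.
    ctx.translateBy(x: 0, y: CGFloat(w))
    ctx.rotate(by: -.pi / 2)
    ctx.draw(image, in: CGRect(x: 0, y: 0, width: w, height: h))
    return ctx.makeImage()
}
```

Whether the 16-bpc RGBA combination above is accepted depends on the supported pixel formats for bitmap contexts on macOS, so it is worth checking the returned context is non-nil for the actual color space in use.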
Posted
by mallman.
Last updated
.
Post not yet marked as solved
0 Replies
1.5k Views
In iOS 16, PDFs containing certain gradients are not displayed. A PDF displays normally if it uses a radial or linear gradient, but complex shape gradients are not displayed. The PDF fails to display inside the iOS application's resources, yet when the file is opened on an iPhone with the Files app it displays normally. Attached is an example with a regular diamond gradient, created using Figma. iOS 15 displays the example file correctly. You can open the file if you change the file format: rename Example.json to Example.pdf. P.S. I can't attach pdf or zip files. Example.json
Posted
by Tvorec.
Last updated
.
Post not yet marked as solved
1 Reply
755 Views
I'm working on a toy Swift implementation of Teamviewer where users can collaborate. To achieve this, I'm creating a secondary, remotely controlled cursor on macOS using Cocoa. My goal is to allow this secondary cursor to manipulate windows and post mouse events below it. I've managed to create the cursor and successfully made it move and animate within the window. However, I'm struggling with enabling mouse events to be fired by this secondary cursor. When I try to post synthetic mouse events, it doesn't seem to have any effect. Here's the relevant portion of my code:

```swift
func click(at point: CGPoint) {
    guard let mouseDown = CGEvent(mouseEventSource: nil, mouseType: .leftMouseDown,
                                  mouseCursorPosition: point, mouseButton: .left),
          let mouseUp = CGEvent(mouseEventSource: nil, mouseType: .leftMouseUp,
                                mouseCursorPosition: point, mouseButton: .left) else {
        return
    }
    mouseDown.post(tap: .cgSessionEventTap)
    mouseUp.post(tap: .cgSessionEventTap)
}
```

I have enabled the Accessibility features, tried posting to specific PIDs, tried posting events twice in a row (to ensure it's not a focus issue), and replaced .cgSessionEventTap with .cghidEventTap, all to no avail. Here's the full file if you'd like more context:

```swift
import Cocoa
import Foundation

class CursorView: NSView {
    let image: NSImage

    init(image: NSImage) {
        self.image = image
        super.init(frame: .zero)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ dirtyRect: NSRect) {
        super.draw(dirtyRect)
        image.draw(in: dirtyRect)
    }
}

@NSApplicationMain
class AppDelegate: NSObject, NSApplicationDelegate {
    var window: NSWindow!
    var userCursorView: CursorView?
    var remoteCursorView: CursorView?
    var timer: Timer?
    var destination: CGPoint = .zero
    var t: CGFloat = 0
    let duration: TimeInterval = 2
    let clickProbability: CGFloat = 0.01

    func applicationDidFinishLaunching(_ aNotification: Notification) {
        let screenRect = NSScreen.main!.frame
        window = NSWindow(contentRect: screenRect, styleMask: .borderless, backing: .buffered, defer: false)
        window.level = NSWindow.Level(rawValue: Int(CGWindowLevelForKey(.maximumWindow)))
        window.backgroundColor = NSColor.clear
        window.ignoresMouseEvents = true

        let maxHeight: CGFloat = 70.0
        if let userImage = NSImage(named: "userCursorImage") {
            let aspectRatio = userImage.size.width / userImage.size.height
            let newWidth = aspectRatio * maxHeight
            userCursorView = CursorView(image: userImage)
            userCursorView!.frame.size = NSSize(width: newWidth, height: maxHeight)
            window.contentView?.addSubview(userCursorView!)
        }
        if let remoteImage = NSImage(named: "remoteCursorImage") {
            let aspectRatio = remoteImage.size.width / remoteImage.size.height
            let newWidth = aspectRatio * maxHeight
            remoteCursorView = CursorView(image: remoteImage)
            remoteCursorView!.frame.size = NSSize(width: newWidth, height: maxHeight)
            window.contentView?.addSubview(remoteCursorView!)

            // Initialize remote cursor position and destination
            remoteCursorView!.frame.origin = randomPointWithinScreen()
            destination = randomPointWithinScreen()
        }

        window.makeKeyAndOrderFront(nil)
        window.orderFrontRegardless()
        NSCursor.hide()

        NSEvent.addGlobalMonitorForEvents(matching: [.mouseMoved, .leftMouseDragged, .rightMouseDragged]) { [weak self] event in
            self?.updateCursorPosition(with: event)
        }

        // Move the remote cursor every 0.01 second
        timer = Timer.scheduledTimer(withTimeInterval: 0.01, repeats: true) { [weak self] _ in
            self?.moveRemoteCursor()
        }

        // Exit the app when pressing the escape key
        NSEvent.addGlobalMonitorForEvents(matching: .keyDown) { event in
            if event.keyCode == 53 {
                NSApplication.shared.terminate(self)
            }
        }
    }

    func updateCursorPosition(with event: NSEvent) {
        var newLocation = event.locationInWindow
        newLocation.y -= userCursorView!.frame.size.height
        userCursorView?.frame.origin = newLocation
    }

    func moveRemoteCursor() {
        if remoteCursorView!.frame.origin.distance(to: destination) < 1 || t >= 1 {
            destination = randomPointWithinScreen()
            t = 0
            let windowPoint = remoteCursorView!.frame.origin
            let screenPoint = window.convertToScreen(NSRect(origin: windowPoint, size: .zero)).origin
            let screenHeight = NSScreen.main?.frame.height ?? 0
            let cgScreenPoint = CGPoint(x: screenPoint.x, y: screenHeight - screenPoint.y)
            click(at: cgScreenPoint)
        } else {
            let newPosition = cubicBezier(t: t, start: remoteCursorView!.frame.origin, end: destination)
            remoteCursorView?.frame.origin = newPosition
            t += CGFloat(0.01 / duration)
        }
    }

    func click(at point: CGPoint) {
        guard let mouseDown = CGEvent(mouseEventSource: nil, mouseType: .leftMouseDown,
                                      mouseCursorPosition: point, mouseButton: .left),
              let mouseUp = CGEvent(mouseEventSource: nil, mouseType: .leftMouseUp,
                                    mouseCursorPosition: point, mouseButton: .left) else {
            return
        }
        // Post the events to the session event tap
        mouseDown.post(tap: .cgSessionEventTap)
        mouseUp.post(tap: .cgSessionEventTap)
    }

    func randomPointWithinScreen() -> CGPoint {
        guard let screen = NSScreen.main else { return .zero }
        let randomX = CGFloat.random(in: 0...screen.frame.width / 2)
        let randomY = CGFloat.random(in: 100...screen.frame.height)
        return CGPoint(x: randomX, y: randomY)
    }

    func cubicBezier(t: CGFloat, start: CGPoint, end: CGPoint) -> CGPoint {
        let control1 = CGPoint(x: 2 * start.x / 3 + end.x / 3, y: start.y)
        let control2 = CGPoint(x: start.x / 3 + 2 * end.x / 3, y: end.y)
        let x = pow(1 - t, 3) * start.x + 3 * pow(1 - t, 2) * t * control1.x
              + 3 * (1 - t) * pow(t, 2) * control2.x + pow(t, 3) * end.x
        let y = pow(1 - t, 3) * start.y + 3 * pow(1 - t, 2) * t * control1.y
              + 3 * (1 - t) * pow(t, 2) * control2.y + pow(t, 3) * end.y
        return CGPoint(x: x, y: y)
    }

    func applicationWillTerminate(_ aNotification: Notification) {
        // Show the system cursor when the application is about to terminate
        NSCursor.unhide()
    }
}

extension CGPoint {
    func distance(to point: CGPoint) -> CGFloat {
        return hypot(point.x - x, point.y - y)
    }
}
```
Posted Last updated
.