2D Graphics


Discuss integrating two-dimensional graphics into your app.

Posts under 2D Graphics tag

76 results found
Post not yet marked as solved
88 Views

Opacity problems with usdz

Hi, I'm having problems with opacity when trying to show a USDZ file with ARKit: when I show it, it looks semi-transparent. My goal is to make it look completely opaque. Files and results are here: https://drive.google.com/drive/folders/1I_WXRiVvBzIjYjvXS-IQYHircLPM-ztS?usp=sharing

This is my .usda file:

```usda
#usda 1.0
(
    upAxis = "Y"
)

def Xform "TexModel" (
    kind = "component"
)
{
    def Mesh "card" (
        prepend apiSchemas = ["MaterialBindingAPI"]
    )
    {
        float3[] extent = [(-250, -250, 0), (250, 250, 0)]
        int[] faceVertexCounts = [4]
        int[] faceVertexIndices = [0, 1, 2, 3]
        rel material:binding = </TexModel/Eight8K_png>
        point3f[] points = [(-250, -250, 0), (250, -250, 0), (250, 250, 0), (-250, 250, 0)]
        texCoord2f[] primvars:st = [(0, 0), (1, 0), (1, 1), (0, 1)] (
            interpolation = "varying"
        )
    }

    def Material "Eight8K_png"
    {
        token outputs:surface.connect = </TexModel/Eight8K_png/PbrPreview.outputs:surface>

        def Shader "PbrPreview"
        {
            uniform token info:id = "UsdPreviewSurface"
            color3f inputs:diffuseColor.connect = </TexModel/Eight8K_png/Texture.outputs:rgb>
            float inputs:opacity.connect = </TexModel/Eight8K_png/Texture.outputs:alpha>
            float inputs:opacityThreshold = 0.01
            float inputs:metallic = 0
            float inputs:roughness = 0.4
            token outputs:surface
        }

        def Shader "PrimvarST"
        {
            uniform token info:id = "UsdPrimvarReader_float2"
            token inputs:varname = "st"
            float2 outputs:result
        }

        def Shader "Texture"
        {
            uniform token info:id = "UsdUVTexture"
            asset inputs:file = @Eight8K.png@
            float2 inputs:st.connect = </TexModel/Eight8K_png/PrimvarST.outputs:result>
            token inputs:wrapS = "repeat"
            token inputs:wrapT = "repeat"
            float outputs:alpha
            float3 outputs:rgb
        }
    }
}
```
Asked by yadisnel. Last updated.
106 Views

Error while converting MTLTexture to CVPixelBuffer

I am getting an error from the graphics driver while converting the environment texture (from ARKit's AREnvironmentProbeAnchor) to a CVPixelBuffer. The environment texture is an IMTLTexture, and I am using Xamarin.iOS. This is the code I use to convert the IMTLTexture to a CVPixelBuffer:

```csharp
buffers[i] = new CVPixelBuffer((nint)epAnchor.EnvironmentTexture.Width,
                               (nint)epAnchor.EnvironmentTexture.Height,
                               CVPixelFormatType.CV32RGBA);
GetEnvironmentTextureSlice(buffers[i], epAnchor.EnvironmentTexture, i);

public void GetEnvironmentTextureSlice(CVPixelBuffer pixelBuffer, Metal.IMTLTexture texture, int id)
{
    Metal.MTLRegion mtlRegion = Metal.MTLRegion.Create2D((nuint)0, 0, 256, 256);
    nuint bytesPerPixel = 4;
    nuint bytesPerRow = bytesPerPixel * (nuint)mtlRegion.Size.Width; // (nuint)pixelBuffer.BytesPerRow;
    nuint bytesPerImage = bytesPerRow * (nuint)mtlRegion.Size.Height;

    pixelBuffer.Lock(CVPixelBufferLock.None);
    texture.GetBytes(pixelBuffer.BaseAddress, (nuint)pixelBuffer.BytesPerRow, mtlRegion, 0);
    pixelBuffer.Unlock(CVPixelBufferLock.None);
}
```

The error I am getting from the driver is:

AGX: Texture read/write assertion failed: bytes_per_row >= used_bytes_per_row

I tried different values of pixelBuffer.BytesPerRow but am still getting the error. Can someone help me?
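For comparison, here is a minimal Swift sketch (not from the original post; the texture and all names are assumed) of copying an MTLTexture into a CVPixelBuffer while letting CoreVideo dictate the row stride. The usual cause of this assertion is passing a bytesPerRow that is smaller than the row length the texture read actually needs:

```swift
import CoreVideo
import Metal

/// Sketch: copy a BGRA MTLTexture into a new CVPixelBuffer.
/// Hypothetical helper; `texture` is assumed to be 2D and non-mipmapped.
func makePixelBuffer(from texture: MTLTexture) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     texture.width, texture.height,
                                     kCVPixelFormatType_32BGRA,
                                     nil, &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    // CoreVideo may pad rows; use *its* stride, and check that it is wide
    // enough for the region being read (4 bytes per BGRA pixel).
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    assert(bytesPerRow >= texture.width * 4)

    let region = MTLRegionMake2D(0, 0, texture.width, texture.height)
    texture.getBytes(CVPixelBufferGetBaseAddress(buffer)!,
                     bytesPerRow: bytesPerRow,
                     from: region,
                     mipmapLevel: 0)
    return buffer
}
```

Note that in the Xamarin code above the region is fixed at 256×256 while the pixel buffer is created at the texture's full size, so the stride passed to GetBytes and the stride of the destination rows can disagree.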
84 Views

SpriteKit. How to change the background speed?

I recently began to study SpriteKit and ran into a question: I created a moving background — how do I change its speed?

```swift
import SpriteKit
import GameplayKit

class BgDeceleration {
    // Background speed
    var bgDuration: Double = 5
    // Toggle
    var bgToggle = true {
        willSet { bgDuration = bgToggle ? 5.0 : 10.0 }
        didSet {}
    }
}

let course = BgDeceleration()

class GameScene: SKScene {
    // Textures
    var bgTexture: SKTexture!
    // Sprite nodes
    var bg = SKSpriteNode()
    // Sprite objects
    var bgObject = SKNode()

    override func didMove(to view: SKView) {
        // Background textures
        bgTexture = SKTexture(imageNamed: "bg.png")
        createObjects()
        createGame()
    }

    func createObjects() {
        self.addChild(bgObject)
    }

    func createGame() {
        createBg()
    }

    func createBg() {
        bgTexture = SKTexture(imageNamed: "bg.png")
        let moveBg = SKAction.moveBy(x: -bgTexture.size().width, y: 0, duration: course.bgDuration)
        let replaceBg = SKAction.moveBy(x: bgTexture.size().width, y: 0, duration: 0)
        let moveBgForever = SKAction.repeatForever(SKAction.sequence([moveBg, replaceBg]))
        for i in 0..<3 {
            bg = SKSpriteNode(texture: bgTexture)
            bg.position = CGPoint(x: bgTexture.size().width * CGFloat(i), y: size.height / 2.0)
            bg.size.height = self.frame.height
            bg.run(moveBgForever)
            bg.zPosition = -1
            bgObject.addChild(bg)
        }
    }
}

extension GameScene {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Toggle
        course.bgToggle.toggle()
    }
}
```

I found a solution, but it does not work: https://stackoverflow.com/questions/30226379/gradually-increasing-speed-of-scrolling-background-in-spritekit — possibly because of the Swift version. Please share a solution if possible.
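One common approach (a sketch, not from the original post) is to leave the action's duration alone and change the node's `speed` property instead, which scales playback of every action running on that node and can be changed at any time:

```swift
import SpriteKit

/// Sketch: scale playback of all actions on the background container node.
/// `bgObject` is assumed to be the SKNode holding the scrolling sprites,
/// as in the code above.
func setBackgroundSlow(_ slow: Bool, on bgObject: SKNode) {
    // speed = 1.0 plays actions at their authored duration (5 s here);
    // speed = 0.5 plays them half as fast, i.e. an effective 10 s scroll.
    bgObject.speed = slow ? 0.5 : 1.0
}
```

This avoids the issue that an SKAction's duration is captured when the action is created, so changing `bgDuration` afterwards has no effect on actions already running.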
Asked by SergeTu. Last updated.
88 Views

Issue accessing Buffer through Argument Buffer

I am currently trying to use Tier 2 argument buffers with an array of buffers to access an indirect buffer, but am running into some issues. I'm trying to get a basic example up and running, but I'm having trouble getting the shader to read the value in the buffer. Accessing the buffer directly, without an argument buffer, works fine and shows the expected value (12345). The argument buffer shows the buffer as well (it has the same CPU address in the debugger), but it seems to have a different device address than the direct one, and it returns 0xDEADBEEF instead of the correct value, which I assume is out-of-bounds memory or similar. The Metal debugger, however, correctly links the buffers together, so I can inspect the buffer in the debugger through the argument buffer, and it contains the correct value. I have the following (Rust) code:

```rust
// Setup
let argument_desc = mtl::ArgumentDescriptor::new();
argument_desc.set_data_type(mtl::MTLDataType::Pointer);
argument_desc.set_index(0);
argument_desc.set_array_length(1024);

let encoder = device.new_argument_encoder(mtl::Array::from_slice(&[argument_desc]));
let argument_buffer = device.new_buffer(encoder.encoded_length(), mtl::MTLResourceOptions::empty());
encoder.set_argument_buffer(&argument_buffer, 0);

let buffer = self.device.new_buffer_with_data(
    [12345u32].as_ptr() as _,
    mem::size_of::<u32>() as _,
    mtl::MTLResourceOptions::StorageModeShared | mtl::MTLResourceOptions::CPUCacheModeDefaultCache,
);
encoder.set_buffer(0, &buffer, 0);

// Command encoding
let encoder = command_buffer.new_compute_command_encoder();
// ...set pipeline state
encoder.set_buffer(0, Some(&bufferArray), 0);
encoder.use_resource(&buffer, mtl::MTLResourceUsage::Read);
encoder.set_bytes(1, mem::size_of::<u32>() as _, &0 as *const _ as _);
encoder.set_buffer(2, Some(&buffer), 0);
encoder.dispatch_thread_groups(
    mtl::MTLSize { width: 1, height: 1, depth: 1 },
    mtl::MTLSize { width: 1, height: 1, depth: 1 },
);
```

This is the compute kernel (with Xcode debug annotations):

```msl
#include <metal_stdlib>
#include <simd/simd.h>

using namespace metal;

struct Argument {
    constant uint32_t *ptr;
};

kernel void main0(
    constant Argument *bufferArray [[buffer(0)]],  // bufferArray = 0x400400000
    constant uint32_t& buffer_index [[buffer(1)]], // buffer_index = 0
    constant uint32_t *buffer [[buffer(2)]]        // buffer = 0x400024000
) {
    uint32_t x = *buffer;                                   // x = 12345
    constant uint32_t *ptr = bufferArray[buffer_index].ptr; // ptr = 0x40002000
    uint32_t y = *ptr;                                      // y = 0xDEADBEEF
}
```

If anyone has any ideas as to why the buffer access seems to be invalid, I'd greatly appreciate it.
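For reference, here is a minimal Swift sketch of the same setup (all names, the pipeline, and the command buffer are assumptions, not taken from the post). The point it illustrates is that any buffer reachable only through the argument buffer must be made resident with useResource(_:usage:), since Metal cannot otherwise see the indirect reference:

```swift
import Metal

/// Sketch: encode one data buffer into a pointer-array argument buffer
/// and bind both for a compute dispatch. `device`, `pipeline`, and
/// `commandBuffer` are assumed to exist.
func encodeArgumentBuffer(device: MTLDevice,
                          commandBuffer: MTLCommandBuffer,
                          pipeline: MTLComputePipelineState) {
    let desc = MTLArgumentDescriptor()
    desc.dataType = .pointer
    desc.index = 0
    desc.arrayLength = 1024

    let argEncoder = device.makeArgumentEncoder(arguments: [desc])!
    let argumentBuffer = device.makeBuffer(length: argEncoder.encodedLength)!
    argEncoder.setArgumentBuffer(argumentBuffer, offset: 0)

    var value: UInt32 = 12345
    let dataBuffer = device.makeBuffer(bytes: &value,
                                       length: MemoryLayout<UInt32>.size,
                                       options: .storageModeShared)!
    argEncoder.setBuffer(dataBuffer, offset: 0, index: 0)

    let compute = commandBuffer.makeComputeCommandEncoder()!
    compute.setComputePipelineState(pipeline)
    compute.setBuffer(argumentBuffer, offset: 0, index: 0)
    // Indirectly referenced resources must be declared explicitly.
    compute.useResource(dataBuffer, usage: .read)
    compute.dispatchThreadgroups(MTLSize(width: 1, height: 1, depth: 1),
                                 threadsPerThreadgroup: MTLSize(width: 1, height: 1, depth: 1))
    compute.endEncoding()
}
```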
47 Views

Animation bug closing app quickly (multitasking)

Hi, first time for me here, and I'd like to thank you in advance for your responses and support. I'm currently on an iPhone 12 Pro Max that was running the iOS 14.6 beta, and the following problem still occurs on the official iOS release. Any fix or knowledge about it? When I open an app and then close it quickly (while other apps are open in the background), the app behind it slides to the left of the screen with a blank-white, laggy animation. Thanks again in advance, have a great day.
80 Views

What am I doing wrong when trying to draw a VNC frame-buffer quickly to an NSView on macOS?

Greetings! I'm using libvncclient to create a specialized VNC viewer on macOS (developing on Mojave). I have already written this app in C++ and FLTK on Linux, *BSD, and even on macOS, but I want something that is 'native' macOS, so I chose Cocoa and Objective-C. I do not wish to use Swift right now. I'm also writing this all programmatically and NOT using Xcode; I really don't want to use Xcode due to how buggy it is on Mojave.

For the VNC viewer, I'm using a subclassed NSView with the following setup. This viewer is embedded in an NSScrollView:

```objective-c
- (id)initWithFrame:(NSRect)frame
{
    self = [super initWithFrame:frame];
    return self;
}

- (BOOL)isFlipped
{
    return YES;
}

- (BOOL)opaque
{
    return YES;
}
```

There are two major events used for drawing: invalidating the changed rectangle(s) on the NSView viewer and then actually telling the viewer to draw. Here's the invalidating part:

```objective-c
/* this fires *every* time something is changed on the vnc server's screen */
static void handleFrameBufferUpdate(rfbClient *cl, int x, int y, int w, int h)
{
    dispatch_async(dispatch_get_main_queue(), ^{
        NSRect rUp = NSMakeRect(x, y, w, h);
        [vncViewer setNeedsDisplayInRect:rUp];
    });
}
```

Here is the event that tells the view to draw itself after a certain number of invalidation calls:

```objective-c
static void handleFinishedFrameBufferUpdate(rfbClient *cl)
{
    dispatch_async(dispatch_get_main_queue(), ^{
        [vncViewer displayIfNeededIgnoringOpacity];
    });
}
```

All of the pixel data from the VNC server is written by libvncclient to a frame-buffer: an array of uint8_t (or uchar, depending on your architecture) in RGBA format. It's 32 bits per pixel, 8 bits per sample, 4 samples per pixel. For the actual drawing, here is the relevant code (the pointer to the frame-buffer array of uint8_t is referred to as vnc.vncClient->frameBuffer below):

```objective-c
- (void)drawRect:(NSRect)dirtyRect
{
    //...
    [vnc setBytesPerPixel:vnc.vncClient->format.bitsPerPixel / 8];
    [vnc setBuffSize:vnc.vncClient->width * vnc.vncClient->height * vnc.bytesPerPixel];

    NSImage *img = [[NSImage alloc] initWithSize:NSMakeSize(vnc.vncClient->width, vnc.vncClient->height)];
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:&vnc.vncClient->frameBuffer
                      pixelsWide:vnc.vncClient->width
                      pixelsHigh:vnc.vncClient->height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSDeviceRGBColorSpace
                     bytesPerRow:vnc.vncClient->width * [vnc bytesPerPixel]
                    bitsPerPixel:32];
    [img addRepresentation:rep];
    [img drawInRect:[self bounds]];
    //...
}
```

When I connect to a VNC server, the screen isn't fully filled out, but has multiple white non-image rectangles shifting around with each update. (I'd like to post a picture here, but this system doesn't allow it.) I've found out, unfortunately, that either the server or libvncclient is setting the alpha part of each pixel to 0, effectively hiding it. I added this hackish code to set every pixel to full alpha before I do any drawing:

```objective-c
/* if there's an alpha byte, set it to 255 */
if ([vnc bytesPerPixel] == 2 || [vnc bytesPerPixel] == 4) {
    for (int i = ([vnc bytesPerPixel] - 1); i < [vnc buffSize]; i += [vnc bytesPerPixel])
        vnc.vncClient->frameBuffer[i] = 255;
}
```

The viewer now fills more of the image, but I'm still getting some shifting areas of white rectangles. (Again, I'd like to post a picture, but this system won't allow it.) Is there any way I can get the NSView to 'retain' what has been drawn on it without it clearing each time there is an update? Is this subclassed NSView the right tool for this job, or should I be using something else? The VNC server updates the frame-buffer many times a second, and I need my viewer to be responsive and not 'laggy'. Thanks!
82 Views

Adjusting L16 pixel format values in custom CIFilter/CIKernel

Is there documentation describing the semantics of a Metal CIKernel function? I have image data where each pixel is a signed 16-bit integer. I need to convert that into any number of color values, starting with a simple shift from signed to unsigned (e.g. the data in one image ranges from about -8,000 to +20,000, and I want to simply add 8,000 to each pixel's value). I've got a basic filter working, but it treats the pixel values as floating point, I think. I've tried using both sample_t and sample_h types in my kernel, and simple arithmetic:

```msl
extern "C" coreimage::sample_h heightShader(coreimage::sample_h inS, coreimage::destination inDest)
{
    coreimage::sample_h r = inS + 0.1;
    return r;
}
```

This has an effect, but I don't really know what's in inS. Is it a vector of four float16? What are the minimum and maximum values? They seem to be clamped to 1.0 (and perhaps -1.0). I've told CI that my input image is CIFormat.L16, which is 16-bit luminance, so I imagine it's interpreting the bits as unsigned? Where is this documented, if anywhere (the correspondence between the input image pixel format and the actual values that get passed to a filter kernel)? Is there a type that lets me work on the integer values? This document (https://developer.apple.com/metal/MetalCIKLReference6.pdf) implies that I can only work with floating-point values, but it doesn't tell me how they're mapped. Any help would be appreciated. Thanks.
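As a point of comparison (a sketch, not from the post; all names are hypothetical): one way to sidestep the float-mapping question entirely is to remap the signed samples to unsigned on the CPU before handing the bitmap to Core Image as CIFormat.L16, so the kernel only ever sees normalized luminance:

```swift
import CoreImage
import Foundation

/// Sketch: shift signed 16-bit elevation samples into unsigned range,
/// then wrap them in a CIImage as 16-bit luminance (L16).
func makeHeightImage(samples: [Int16], width: Int, height: Int) -> CIImage {
    // Int16 range [-32768, 32767] -> UInt16 range [0, 65535].
    let shifted = samples.map { UInt16(Int($0) + 32768) }
    let data = shifted.withUnsafeBufferPointer { Data(buffer: $0) }
    return CIImage(bitmapData: data,
                   bytesPerRow: width * MemoryLayout<UInt16>.stride,
                   size: CGSize(width: width, height: height),
                   format: .L16,
                   colorSpace: nil)
}
```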
Asked by JetForMe. Last updated.
34 Views

CGAffineTransformScale doesn't work on iPhone12 or iOS 14

On iPhone X / iOS 13.x, I used CGAffineTransformScale to respond to a pinch gesture and enlarge photos in a UIViewController declared as UIViewController <UIGestureRecognizerDelegate>. The code snippet below used to work fine, but has no effect on iOS 14 / iPhone 12. Any hints?

```objective-c
static CGFloat sPreviousScale = 0.0;

if ([recognizer state] == UIGestureRecognizerStateEnded) {
    sPreviousScale = 1.0;
    return;
}

CGFloat newScale = [recognizer scale] - sPreviousScale + 1.0;
CGAffineTransform currentTransformation = self.mImageView.transform;
CGAffineTransform newTransform = CGAffineTransformScale(currentTransformation, newScale, newScale);

// perform the new transform
self.mImageView.transform = newTransform;
sPreviousScale = [recognizer scale];
```
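For comparison, the common pinch-to-zoom pattern (a Swift sketch, not from the post; `imageView` is assumed) applies the recognizer's scale incrementally and resets it to 1 on every callback, which avoids tracking a previous-scale value by hand:

```swift
import UIKit

class ZoomViewController: UIViewController {
    let imageView = UIImageView() // assumed to be in the view hierarchy

    @objc func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
        // Apply the incremental scale on top of the current transform...
        imageView.transform = imageView.transform.scaledBy(x: recognizer.scale,
                                                           y: recognizer.scale)
        // ...then reset so the next callback reports a fresh delta.
        recognizer.scale = 1.0
    }
}
```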
30 Views

App images needed

Hi, I'm a new Apple developer finishing my first app, and I need images for it: icons and other images. Where can I get those? Getty's licenses don't seem to permit this use. (Extensive developer experience, 30 years, mostly OSS Java.)
Asked by TessSnow. Last updated.
182 Views

How to correctly handle HDR10 in custom Metal Core Image Kernel?

I've got a custom Metal Core Image kernel (written with CIImageProcessorKernel) that I'm trying to make work properly with HDR video (HDR10 PQ to start). I understand that for HDR video, the RGB values coming into the shader can have values below 0.0 or above 1.0. However, I don't understand how the 10-bit integer values (i.e. 0-1023) in the video are mapped into floating point. What are the minimum and maximum values in floating point? That is, what will a 1023 (pure white) pixel be in floating point in the shader? At 11:32 in WWDC20 session 10009, Edit and play back HDR video with AVFoundation (https://developer.apple.com/videos/play/wwdc2020/10009/), there's an example of a Core Image Metal kernel that isn't HDR aware and therefore won't work. It's inverting the values that come in by subtracting them from 1.0, which clearly breaks down when 1.0 is not the maximum possible value. How should this be implemented to be HDR aware?

```msl
extern "C" float4 ColorInverter(coreimage::sample_t s, coreimage::destination dest)
{
    return float4(1.0 - s.r, 1.0 - s.g, 1.0 - s.b, 1.0);
}
```
Asked by armadsen. Last updated.
65 Views

Can someone walk me through the app launch sequence at a low level?

Sorry if this is a bad question, or if it's asked so much that answering it is practically routine, but I've been trying to get into Xcode and Swift app development for a while now, in an on-and-off cycle: I try to get into it, start working with Xcode directly and doing things the proper way, but I get frustrated by the amount of stuff that seems to go over my head, and I end up looking for a different way to make my coding projects and experiments a reality. When I make a new project, there's tons of stuff going on, and I feel like I'm expected to work around it with no idea how any of it works. My head works very much from the ground up; I need to know how things work down to the most basic level before I can incorporate any knowledge into my understanding of a subject.

When I create a new project from the game template (I use the game template because it seems closest to my graphing experiment ideas), all sorts of files are created: AppDelegate, GameScene, ViewController, Main.storyboard, etc., and when I launch, all this stuff just works. I look at the developer documentation and start to figure things out: OK, when the app launches, it finds the storyboard, but how? Where's the option that tells the app the name of the file to load as the storyboard? How does the view controller represented in the storyboard know which file implements it? The documentation says that the first step in the app launch sequence is UIApplicationMain(), but where does it actually get executed from? The app? The OS? There are arguments passed to it (that's where the app delegate comes in), but where are the actual arguments specified? I guess what I'm looking for is the answer to: how would I create all this from scratch? How would I create this without Xcode, going from an empty folder with a .app extension to a minimal working 2D graphing app, and how does all of it work? How does macOS interpret and properly execute an app?

I'm used to building apps in Java and the like from scratch: building windows with methods, with one main method specified for the compiler or fed to the jar file to be executed first when the file is opened, and so on. Ever since reading the Swift guide on the official Swift website, I've absolutely fallen in love with the language; never before have I thought a language was 'my kind of language' or that its syntax was just perfectly logical to me. One of the things I really want is to be able to use Swift like Java when it comes to apps: have everything in code, be able to trace it for myself, and work with things through objects and methods. I hope this makes sense to someone who can point me in the right direction and get me started reading at the right spot, so I can branch out my reading from there. I can't wait to get into actual app development the right way, instead of using hack solutions like working through Java, C, or C++ (and sometimes having to use the app machinery anyway), and instead of only using Swift in playgrounds while thinking 'I really wish I could just create an app that compiles and launches this like the playground does.'
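To illustrate the 'from scratch' angle of the question above (a sketch under the assumption of a plain UIKit app; all names are hypothetical): the entry point that the templates hide can be written out by hand in a main.swift, where UIApplicationMain is called explicitly with the app delegate's class name, and the UI is built entirely in code with no storyboard:

```swift
import UIKit

// Hypothetical minimal app delegate; no storyboard involved.
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Build the UI in code instead of loading Main.storyboard.
        let window = UIWindow(frame: UIScreen.main.bounds)
        window.rootViewController = UIViewController()
        window.makeKeyAndVisible()
        self.window = window
        return true
    }
}

// main.swift: this call never returns; it starts the run loop and
// instantiates the delegate class named in the fourth argument.
UIApplicationMain(CommandLine.argc, CommandLine.unsafeArgv,
                  nil, NSStringFromClass(AppDelegate.self))
```

The storyboard name the question asks about comes from the app's Info.plist (the UIMainStoryboardFile key); with this manual approach, that key is simply omitted.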
78 Views

Best practices for displaying very large images (macOS SwiftUI)

I’m writing an app that, among other things, displays very large images (e.g. 106,694 x 53,347 pixels). These are GeoTIFF images, in this case containing digital elevation data for a whole planet. I will eventually need to be able to draw polygons on the displayed image. There was a time when one would use CATiledLayer, but I wonder what is best today. I started this app in Swift/Cocoa, but I'm toying with the idea of starting over in SwiftUI (my biggest hesitation is that I have yet to upgrade to Big Sur). The image data I have is in strips, with an integral number of image rows per strip. Strips are not guaranteed to be contiguous in the file. Pixel formats vary, but in the motivating use case are 16 bits per pixel, with the values signifying meters. As a first approximation, I can simply display these values in a 16 bpp grayscale image. Is the right thing to do to set up a CoreImage pipeline? As I understand it that should give me some automatic memory management, right? I’m hoping to find out the best approach before I spend a lot of time going down the wrong path.
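For what it's worth, the CATiledLayer approach mentioned above can still be sketched like this on macOS (assumptions: AppKit rather than SwiftUI, and a hypothetical tile-drawing callback; this is a sketch, not a definitive recommendation):

```swift
import AppKit

/// Sketch: a layer-backed NSView whose backing layer is a CATiledLayer,
/// so AppKit only requests the tiles that are actually visible.
class TiledImageView: NSView {
    // Hypothetical callback that renders one tile of the huge image.
    var drawTile: ((CGContext, CGRect) -> Void)?

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        wantsLayer = true // ask AppKit for a backing layer (see makeBackingLayer)
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        wantsLayer = true
    }

    override func makeBackingLayer() -> CALayer {
        let layer = CATiledLayer()
        layer.tileSize = CGSize(width: 512, height: 512)
        layer.levelsOfDetail = 4     // coarser levels for zooming out
        layer.levelsOfDetailBias = 2 // finer levels for zooming in
        return layer
    }

    override func draw(_ dirtyRect: NSRect) {
        // With a CATiledLayer backing, this is called once per visible tile,
        // potentially on background threads.
        guard let ctx = NSGraphicsContext.current?.cgContext else { return }
        drawTile?(ctx, dirtyRect)
    }
}
```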
Asked by JetForMe. Last updated.