Explore advanced rendering with RealityKit 2
Create stunning visuals for your augmented reality experiences with cutting-edge rendering advancements in RealityKit. Learn the art of writing custom shaders, draw real-time dynamic meshes, and explore creative post-processing effects to help you stylize your AR scene.
- Building an Immersive Experience with RealityKit
- Creating a Fog Effect Using Scene Depth
- Displaying a Point Cloud Using Scene Depth
- Explore the RealityKit Developer Forums
- Have a question? Ask with tag wwdc21-10075
- Search the forums for tag wwdc21-10075
♪ Bass music playing ♪ Courtland Idstrom: Hello, my name is Courtland Idstrom, and I'm an engineer on the RealityKit team. In this video, I'm going to show you how to use the new rendering features in RealityKit 2. RealityKit is a framework designed to make building AR apps simple and intuitive. Rendering is a key piece of RealityKit, centered around highly realistic, physically based rendering. Since our first release in 2019, we've been working on your feedback and we're shipping a major update to RealityKit. In the "Dive into RealityKit 2" session, we covered the evolution of RealityKit and its many enhancements -- from updates to the ECS system and richer material and animation capabilities to generating audio and texture resources at runtime. To showcase these improvements, we built an app that turns your living room into an underwater aquarium. In this talk, we'll show some of the new rendering features that went into the app. RealityKit 2 provides control and flexibility over how objects are rendered, allowing you to create even better AR experiences. This year we bring advancements to our material system, enabling you to add your own materials by authoring custom Metal shaders. Custom post effects allow you to augment RealityKit's post effects with your own. New mesh APIs allow mesh creation, inspection, and modification at runtime. Let's start with the most requested feature in RealityKit 2, support for custom shaders. RealityKit's rendering centers around a physically based rendering model. Its built-in shaders make it easy to create models that look natural next to real objects across a range of lighting conditions. This year, we're building on these physically based shaders and exposing the ability for you to customize the geometry and surface of models using shaders. The first of our shader APIs is the geometry modifier.
A geometry modifier is a program, written in the Metal Shading Language, that gives you the opportunity to change the vertices of an object every frame as it's rendered on the GPU. This includes moving them and customizing their attributes, such as color, normal, or UVs. It's run inside of RealityKit's vertex shader, and is perfect for ambient animation, deformation, particle systems, and billboards. Our seaweed is a great example of ambient animation. The seaweed is moving slowly due to the movement of water around it. Let's take a closer look. Here you can see the wireframe of the seaweed as created by our artist; this shows the vertices and triangles comprising the mesh. We're going to write a shader program that executes on each vertex to create our motion. We'll use a sine wave, a simple periodic function, to create movement. We're simulating water currents so we want nearby vertices to behave similarly, regardless of their model's scale or orientation. For this reason, we use the vertex's world position as an input to the sine function. We include a time value as well, so that it moves over time. Our first sine wave is in the Y dimension to create up-down movement. To control the period of the motion, we'll add a spatial scale. And we can control the amount of its movement with an amplitude. We'll apply the same function to the X and Z dimensions so it moves in all three axes. Now, let's look at the model as a whole. One thing we haven't yet accounted for: vertices close to the base of the stalk have very little room for movement, while ones at the top have the highest freedom to move. To simulate this, we can use the vertex's y-coordinate relative to the object's origin as a scaling factor for all three axes, which gives us our final formula. Now that we have a plan for our shader, let's take a look at where to find these parameters. Geometry parameters are organized into a few categories. 
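The displacement formula built up above can be sketched in a few lines of Metal-style code. The names and constants here are illustrative, not the session's actual shader:

```metal
// Sketch of the seaweed displacement (illustrative names and constants).
// worldPos and modelPos are the vertex's world- and model-space positions.
float3 phase  = worldPos * spatialScale + time;   // nearby vertices move together
float3 offset = amplitude * sin(phase);           // periodic motion in X, Y, and Z
offset *= max(0.0, modelPos.y);                   // the base of the stalk barely moves
```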
The first is uniforms, values that are the same for every vertex of the object within one frame. We need time for our seaweed. Textures contain all textures authored as part of the model, plus an additional custom slot, which you can use as you see fit. Material constants hold any parameters, such as tint or opacity scale, authored with the object or set via code. Geometry contains some read-only values, such as the current vertex's model position or vertex ID. We need both model and world positions for our seaweed movement. Geometry also has read-write values, including normal, UVs, and model position offset. Once we have computed our offset, we'll store it here to move our vertices. Let's dive into the Metal shader. We start out by including RealityKit.h. Now we declare a function with the visible function attribute. This instructs the compiler to make it available separately from other functions. The function takes a single parameter, which is RealityKit's geometry_parameters. We'll retrieve all values through this object. Using the geometry member of params, we'll ask for both the world position and model position. Next we calculate a phase offset, based on the world position at the vertex and time. Then we apply our formula to calculate this vertex's offset. We store the offset on geometry, which will get added to the vertex's model position. We have our geometry modifier, but it's not yet hooked up to our seaweed. Let's switch to our ARView subclass, written in Swift. We start by loading our app's default Metal library, which contains our shader. Next we construct a geometryModifier using our shader's name and library. For each material on the seaweed, we create a new custom material. We pass the existing material as the first parameter to CustomMaterial, so that it inherits the textures and material properties from the base material while adding our geometry modifier. It looks pretty nice! Since we're underwater, we've kept the animation pretty slow.
By tweaking amplitude and phase, the same effect can be extended to grass, trees, or other foliage. Now that we've shown how to modify geometry, let's talk about shading. This is our octopus from the underwater scene, looking great with our built-in shader. As octopuses do, ours transitions between multiple looks. The second look has a reddish color. Our artist has authored two base color textures, one for each look. In addition to the color change, the red octopus has a higher roughness value, making it less reflective. And, to make our octopus even more special, we wanted to create a nice transition between looks. Here you can see the transition in action. Mesmerizing. While each look can be described as a physically based material, for the transition itself, we need to write a surface shader. So what is a surface shader? A surface shader allows you to define the appearance of an object. It runs inside the fragment shader for every visible pixel of an object. In addition to color, this includes surface properties such as normal, specular, and roughness. You can write shaders that enhance an object's appearance or replace it entirely, creating new effects. We've seen the two base-color textures for our octopus. For the transition effect, our artist has encoded a special texture for us. This texture is actually a combination of three different layers. There's a noise layer on top creating localized transition patterns. We have a transition layer, which dictates the overall movement, starting at the head and moving towards the tentacles. And there's a mask layer for areas that we don't want to change color, such as the eye and underside of the tentacles. These three layers are combined into the red, green, and blue channels of our texture, which we assign to the custom texture slot. With our textures set up, let's look at how to access these from a surface shader.
Similar to the geometry modifier, the surface shader has access to uniforms, textures, and material constants. Time is an input to our octopus transition. We'll sample textures authored with our model and read material constants, allowing our artist to make model-wide adjustments. Geometry -- such as position, normal, or UVs -- appears in a geometry structure. These are the interpolated outputs from the vertex shader. We'll use UV0 as our texture coordinate. A surface shader writes a surface structure. Properties start with default values, and we're free to calculate these values in any way we see fit. We'll be calculating base color and normal. Then, four surface parameters: roughness, metallic, ambient occlusion, and specular. Now that we know where our values live, let's start writing our shader. We'll do this in three steps. First calculate the transition value, where 0 is a fully purple octopus and 1 is fully red. Using the transition value, we'll calculate color and normal and then fine-tune by assigning material properties. Let's get started. First step: transition. We're building the octopus surface function, which takes a surface_parameters argument. Since we're using textures, we declare a sampler. On the right, you can see what our octopus looks like with an empty surface shader -- it's gray and a little bit shiny. RealityKit puts you in complete control of what does or does not contribute to your model's appearance. In order to compute color, there are a few things we need to do first. We'll store some convenience variables. We access our UV0, which we'll use as a texture coordinate. Metal and USD have different texture coordinate systems, so we'll invert the y-coordinate to match the textures loaded from USD. Now we'll sample our transition texture -- the three-layered texture our artist created. Our artist set up a small function that takes the mask value plus time, and returns 0 to 1 values for blend and colorBlend. Second step: color and normal.
With our previously computed blend variable, we can now calculate the octopus's color and see the transition. To do this, we sample two textures: the base color and the secondary base color, which we've stored in emissive_color. Then we blend between the two colors using the previously computed colorBlend. We'll multiply by base_color_tint -- a value from the material -- and set our base color on the surface. Next we'll apply the normal map, which adds surface deviations, most noticeable on the head and tentacles. We sample the normal map texture, unpack its value, and then set it on the surface object. On to material properties. Here's our octopus so far, with color and normal. Let's see how surface properties affect its look. Roughness, which you'll see on the lower body; ambient occlusion, which will darken up the lower portions; and specular, which gives us a nice reflection on the eye and some additional definition on the body. Let's add these to our shader. We sample four textures on the model, one for each property. Next we scale these values with material settings. In addition, we're also increasing the roughness as we transition from purple to red. Then we set our four values on the surface. Similar to before, we need to apply the shader to our model. We assign this material to our model in our ARView subclass. First we load our two additional textures, then load our surface shader. Like before, we're constructing new materials from the object's base material, this time with a surface shader and our two additional textures. And we're done. So to recap, we've shown the seaweed animation using geometry modifiers and how to build an octopus transition with surface shaders. While we've demonstrated them separately, you can combine the two for even more interesting effects. Moving on to another highly requested feature, support for adding custom post processing effects.
RealityKit comes with a rich suite of camera-matched post effects like motion blur, camera noise, and depth of field. These effects are all designed to make virtual and real objects feel like they're part of the same environment. These are available for you to customize on ARView. This year, we're also exposing the ability for you to create your own fullscreen effects. This allows you to leverage RealityKit for photorealism, and add new effects to tailor the result for your app. So what is a post process? A post process is a shader or series of shaders that execute after objects have been rendered and lit. It also occurs after any RealityKit post effects. Its inputs are two textures: color and a depth buffer. The depth buffer is displayed as greyscale here; it contains a distance value for each pixel relative to the camera. A post process writes its results to a target color texture. The simplest post effect would copy source color into target color. We can build these in a few ways. Apple's platforms come with a number of technologies that integrate well with post effects, such as Core Image, Metal Performance Shaders, and SpriteKit. You can also write your own with the Metal Shading Language. Let's start with some Core Image effects. Core Image is an Apple framework for image processing. It has hundreds of color-processing, stylization, and deformation effects that you can apply to images and video. Thermal is a neat effect -- something you might turn on for an underwater fish finder. Let's see how easy it is to integrate with RealityKit. All of our post effects will follow the same pattern. You set render callbacks, respond to prepare with device, and then post process will be called every frame. Render callbacks exist on RealityKit's ARView. We want both the prepareWithDevice and postProcess callbacks. Prepare with device will be called once with the MTLDevice.
This is a good opportunity to create textures, load compute or render pipelines, and check device capabilities. This is where we create our Core Image context. The postProcess callback is invoked each frame. We'll create a CIImage, referencing our source color texture. Next we create our thermal filter. If you're using a different Core Image filter, this is where you'd configure its other parameters. Then we create a render destination, which targets our output color texture and utilizes the context's command buffer. We ask Core Image to preserve the image's orientation and start the task. That's it! With Core Image, we've unlocked hundreds of prebuilt effects that we can use. Now let's see how we can use Metal Performance Shaders to build new effects. Let's talk about bloom. Bloom is a screen space technique that creates a glow around brightly lit objects, simulating a real-world lens effect. Core Image contains a bloom effect, but we're going to build our own so we can control every step of the process. We'll build the effect with Metal Performance Shaders, a collection of highly optimized compute and graphics shaders. To build this shader, we're going to construct a graph of filters using color as the source. We first want to isolate the areas that are bright. To do this, we use an operation called "threshold to zero." It converts color to luminance and sets every pixel below a certain brightness level to 0. We then blur the result using a Gaussian blur, spreading light onto adjacent areas. Efficient blurs can be challenging to implement and often require multiple stages. Metal Performance Shaders handles this for us. Then we add this blurred texture to the original color, adding a glow around bright areas. Let's implement this graph as a post effect. We start by creating an intermediate bloomTexture. Then execute our ThresholdToZero operation, reading from sourceColor and writing to bloomTexture. Then we perform a gaussianBlur in place. 
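Put together, the Core Image effect described above might look like this in an ARView subclass (a sketch; any other CIFilter could be configured the same way):

```swift
import RealityKit
import CoreImage
import CoreImage.CIFilterBuiltins
import Metal

class ThermalARView: ARView {
    private var ciContext: CIContext?

    func setupThermalEffect() {
        renderCallbacks.prepareWithDevice = { [weak self] device in
            // Called once: a good place to create contexts and resources.
            self?.ciContext = CIContext(mtlDevice: device)
        }
        renderCallbacks.postProcess = { [weak self] context in
            guard let ciContext = self?.ciContext,
                  let input = CIImage(mtlTexture: context.sourceColorTexture,
                                      options: nil) else { return }

            // Configure the filter; other filters would set parameters here.
            let filter = CIFilter.thermal()
            filter.inputImage = input

            // Target the output color texture on the frame's command buffer.
            let destination = CIRenderDestination(mtlTexture: context.targetColorTexture,
                                                  commandBuffer: context.commandBuffer)
            destination.isFlipped = false   // preserve the image's orientation
            _ = try? ciContext.startTask(toRender: filter.outputImage!, to: destination)
        }
    }
}
```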
Finally, we add our original color and this bloomed color together. That's it! Now that we've seen a couple ways to create post effects, let's talk about a way to put effects on top of our output using SpriteKit. SpriteKit is Apple's framework for high-performance, battery-efficient 2D games. It's perfect for adding some effects on top of our 3D view. We'll use it to add some bubbles on the screen as a post effect, using the same prepareWithDevice and postProcess callbacks. We have the same two steps as before. In prepareWithDevice, we'll create our SpriteKit renderer and load the scene containing our bubbles. Then in our postProcess callback, we'll copy our source color to target color, update our SpriteKit scene, and render on top of the 3D content. prepareWithDevice is pretty straightforward -- we create our renderer and load our scene from a file. We'll be drawing this over our AR scene, so we need our SpriteKit background to be transparent. In postProcess, we first blit the source color to the targetColorTexture; this will be the background that SpriteKit renders in front of. Then advance our SpriteKit scene to the new time so our bubbles move upward. Set up a RenderPassDescriptor and render onto it. And that's it! We've shown how to utilize existing frameworks to make post effects, but sometimes you really do need to make one from scratch. You can also author a full-screen effect by writing a compute shader. For our underwater demo, we needed a fog effect that applies to virtual objects and camera passthrough. Fog simulates the scattering of light through a medium; its intensity is proportional to the distance. To create this effect, we needed to know how far each pixel is from the device. Fortunately, ARKit and RealityKit both provide access to depth information. For LiDAR-enabled devices, ARKit provides access to sceneDepth, containing distances in meters from the camera. These values are extremely accurate at a lower resolution than the full screen.
We could use this depth directly but it doesn't include virtual objects, so they wouldn't fog correctly. In our postProcess, RealityKit provides access to depth for virtual content and -- when scene understanding is enabled -- approximated meshes for real-world objects. The mesh builds progressively as you move, so it contains some holes where we haven't currently scanned. These holes would show fog as if they were infinitely far away. We'll combine data from these two depth textures to resolve this discrepancy. ARKit provides depth values as a texture. Each pixel is the distance, in meters, of the sampled point. Since the sensor is at a fixed orientation on your iPhone or iPad, we'll ask ARKit to construct a conversion from the sensor's orientation to the current screen orientation, and then invert the result. To read virtual content depth, we need a little bit of info about how RealityKit packs depth. You'll notice that, unlike ARKit's sceneDepth, brighter values are nearer to the camera. Values are stored in a 0 to 1 range, using an Infinite Reverse-Z Projection. This just means that 0 means infinitely far away, and 1 is at the camera's near plane. We can easily reverse this transform by dividing the near plane depth by the sampled depth. Let's write a helper function to do this. We have a Metal function taking the sample's depth and projection matrix. Pixels with no virtual content are exactly 0. We'll clamp to a small epsilon to prevent divide by zero. To undo the perspective division, we take the last column's z value and divide by our sampled depth. Great! Now that we have our two depth values, we can use the minimum of the two as an input to our fog function. Our fog has a few parameters: a maximum distance, a maximum intensity at that distance, and a power curve exponent. The exact values were chosen experimentally. They shape our depth value to achieve our desired fog density. Now we're ready to put the pieces together. 
We have our depth value from ARKit, a linearized depth value from RealityKit, and a function for our fog. Let's write our compute shader. For each pixel, we start by sampling both linear depth values. Then we apply our fog function using our tuning parameters, which turns linear depth into a 0 to 1 value. Then we blend between source color and the fog color, depending on fogBlend's value, storing the result in outColor. To recap, RealityKit's new post process API enables a wide range of post effects. With Core Image, we've unlocked hundreds of ready-built effects. You can easily build new ones with Metal Performance Shaders, add screen overlays with SpriteKit, and write your own from scratch with Metal. For more information about Core Image or Metal Performance Shaders, see the sessions listed. Now that we've covered rendering effects, let's move on to our next topic, dynamic meshes. In RealityKit, mesh resources store mesh data. Previously, this opaque type allowed you to assign meshes to entities. This year, we're providing the ability to inspect, create, and update meshes at runtime. Let's look at how we can add special effects to the diver. In this demo, we want to show a spiral effect where the spiral contours around the diver. You can also see how the spiral is changing its mesh over time to animate its movement. Let's have a look at how to create this using our new mesh APIs. The effect boils down into three steps. We use mesh inspection to measure the model by examining its vertices. We then build a spiral, using the measurements as a guide. And finally, we can update the spiral over time. Starting with mesh inspection. To explain how meshes are stored, let's look at our diver model. In RealityKit, the Diver's mesh is represented as a mesh resource. With this year's release, MeshResource now contains a member called Contents. This is where all of the processed mesh geometry lives. Contents contains a list of instances and models.
Models contain the raw vertex data, while instances reference them and add a transform. Instances allow the same geometry to be displayed multiple times without copying the data. A model can have multiple parts. A part is a group of geometry with one material. Finally, each part contains the vertex data we're interested in, such as positions, normals, texture coordinates, and indices. Let's first look at how we would access this data in code. We'll make an extension on MeshResource.Contents, which calls a closure with the position of each vertex. We start by going through all of the instances. Each of these instances maps to a model. For each instance, we find its transform relative to the entity. We can then go into each of the model's parts and access the part's attributes. For this function, we're only interested in position. We can then transform the vertex to the entity space position and call our callback. Now that we can visit the vertices, let's look at how we want to use this data. We'll section our diver into horizontal slices, and for each slice find the bounding radius of the model. To implement this, we'll start by creating a zero-filled array with numSlices elements. We then figure out the bounds of the mesh along the y-axis to create our slices. Using the function we just created, for each vertex in the model, we figure out which slice it goes in and we update the radius with the largest radius for that slice. Finally, we return a Slices object containing the radii and bounds. Now that we've analyzed our mesh to know how big it is, let's look at how to create the spiral mesh. The spiral is a dynamically generated mesh. To create this mesh, we need to describe our data to RealityKit. We do this with a mesh descriptor. The mesh descriptor contains the positions, normals, texture coordinates, primitives, and material indices. Once you have a mesh descriptor, you can generate a mesh resource.
This invokes RealityKit's mesh processor, which optimizes your mesh. It will merge duplicate vertices, triangulate your quads and polygons, and represent the mesh in the most efficient format for rendering. The result of this processing gives us a mesh resource, which we can assign to an entity. Note that normals, texture coordinates, and materials are optional. Our mesh processor will automatically generate correct normals and populate them. As part of the optimization process, RealityKit will regenerate the topology of the mesh. If you need a specific topology, you can use MeshResource.Contents directly. Now that we know how creating a mesh works, let's look at how to create the spiral. To model the spiral, let's take a closer look at a section.
A spiral is also known as a helix. We'll build this in evenly spaced segments. We can calculate each point using the mathematical definition of a helix and the radius from our analyzed mesh. Using this function for each segment on the helix, we can define four vertices. P0 and P1 are exactly the values that p() returns. To calculate P2 and P3, we can offset P0 and P1 vertically with our given thickness. We're creating triangles, so we need a diagonal. We'll make two triangles using these points. Time to put it all together. Our generateSpiral function needs to store positions and indices. Indices reference values in positions. For each segment, we'll calculate four positions and store their indices -- i0 is the index of p0 when it's added to the array. Then we add the four positions and six indices -- for two triangles -- to their arrays. Once you have your geometry, creating a mesh is simple. First, create a new MeshDescriptor. Then assign positions and primitives. We're using triangle primitives, but we could also choose quads or polygons. Once those two fields are populated, we have enough to generate a MeshResource. You can also provide other vertex attributes like normals, textureCoordinates, or material assignments. We've covered how to create the mesh. The last thing in our spiral example is mesh updates. We use mesh updates to get the spiral to move around the diver. To update the mesh, there are two ways. We could create a new MeshResource each frame using the MeshDescriptors API. But this is not an efficient route, as it will run through the mesh optimizer each frame. A more efficient route is to update the contents in the MeshResource. You can generate a new MeshContents and use it to replace the mesh. There is one caveat, however. If we created our original mesh using MeshDescriptor, RealityKit's mesh processor will have optimized the data. Topology is also reduced to triangles.
As a result, make sure you know how your mesh is affected before applying any updates. Let's have a look at code for how you can update the spiral. We start by storing the contents of the existing spiral. Create a new model from the existing model. Then, for each part, we replace triangleIndices with a subset of indices. Finally, with the new contents, we can call replace on the existing MeshResource. And that's it for dynamic meshes. To summarize the key things about dynamic meshes, we've introduced a new Contents field in the MeshResource. This container allows you to inspect and modify a mesh's raw data. You can create new meshes using MeshDescriptor. This flexible route allows you to use triangles, quads, or even polygons, and RealityKit will generate an optimized mesh for rendering. Finally, to update meshes, we've provided the ability to update a MeshResource's contents, which is ideal for frequent updates. To wrap up, today we've shown off some of the new rendering features in RealityKit 2. Geometry modifiers let you move and modify vertices. Surface shaders allow you to define your model's surface appearance. You can use post effects to apply effects to the final frame, and dynamic meshes make it easy to create and modify meshes at runtime. To see more of this year's features, don't miss "Dive into RealityKit 2." And for more information about RealityKit, watch "Building Apps with RealityKit." We're very excited about this year's release, and can't wait to see the experiences you build with it. Thank you. ♪