LiquidFun Tutorial with Metal and Swift – Part 2
In this LiquidFun tutorial, you’ll learn how to simulate water on iOS using LiquidFun, and render it on screen with Metal and Swift. By Allen Tan.
Update 5/12/2015: Updated for Xcode 6.3 / Swift 1.2.
Welcome back to our 2-part tutorial series that teaches you how to use LiquidFun with Metal and Swift!
In the first part of the series, you learned how to integrate LiquidFun with Swift and used that knowledge to create an invisible liquid particle system.
In this second part of the series, you’ll learn how to render your LiquidFun particles onscreen using projection transformations, uniform data and shaders in Metal. You’ll also get to move them around in a simulated physics world for some water-splashing fun.
After all, you didn’t name your project LiquidMetal for nothing.
Getting Started
First, make sure you have a copy of the project from Part 1, either by going through the first tutorial or by downloading the finished project.
Before proceeding with Metal, I recommend going through the Introduction to Metal Tutorial if you haven’t already. To keep this part short, I’ll breeze through the basic setup of Metal and focus only on new concepts that aren’t in the other Metal tutorials on our site.
Create a Metal Layer
You first need to create a CAMetalLayer, which acts as the canvas upon which Metal renders content.
Inside ViewController.swift, add the following properties and new method:
var device: MTLDevice! = nil
var metalLayer: CAMetalLayer! = nil

func createMetalLayer() {
  device = MTLCreateSystemDefaultDevice()
  metalLayer = CAMetalLayer()
  metalLayer.device = device
  metalLayer.pixelFormat = .BGRA8Unorm
  metalLayer.framebufferOnly = true
  metalLayer.frame = view.layer.frame
  view.layer.addSublayer(metalLayer)
}
Now replace printParticleInfo() in viewDidLoad with a call to this new method:

createMetalLayer()
Inside createMetalLayer, you store a reference to an MTLDevice, which you'll use later to create the other Metal objects that you'll need. Next, you create a CAMetalLayer with default properties and add it as a sublayer to your current view's main layer. You call createMetalLayer from viewDidLoad to ensure your Metal layer is set up along with the view.
Create a Vertex Buffer
The next step is to prepare a buffer that contains the positions of each particle in your LiquidFun world. Metal needs this information to know where to render your particles on the screen.
Still in ViewController.swift, add the following properties and new method:
var particleCount: Int = 0
var vertexBuffer: MTLBuffer! = nil
func refreshVertexBuffer() {
  particleCount = Int(LiquidFun.particleCountForSystem(particleSystem))
  let positions = LiquidFun.particlePositionsForSystem(particleSystem)
  let bufferSize = sizeof(Float) * particleCount * 2
  vertexBuffer = device.newBufferWithBytes(positions, length: bufferSize, options: nil)
}
Here you add two new properties: particleCount to keep track of how many particles you have, and vertexBuffer to store the MTLBuffer Metal requires to access the vertex positions.

Inside refreshVertexBuffer, you call LiquidFun.particleCountForSystem to get the number of particles in the system and store the result in particleCount. Next, you use the MTLDevice to create a vertex buffer, passing in the position array directly from LiquidFun.particlePositionsForSystem. Since each position is an x- and y-coordinate pair of float types, you multiply the size in bytes of two Floats by the number of particles in the system to get the size needed to create the buffer.
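To make the byte math concrete, here's a small pure-Swift sketch of the same calculation. It uses the modern MemoryLayout spelling in place of this tutorial's Swift 1.2 sizeof, and the particle count of 100 is just an example value:

```swift
// Each particle contributes one x and one y coordinate, both 32-bit Floats.
// MemoryLayout<Float>.size is the modern spelling of sizeof(Float);
// the particle count here is a made-up example.
let particleCount = 100
let floatsPerParticle = 2                      // x and y
let bytesPerFloat = MemoryLayout<Float>.size   // 4 bytes
let bufferSize = bytesPerFloat * particleCount * floatsPerParticle
print(bufferSize)  // 800 bytes for 100 particles
```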
Call this method at the end of viewDidLoad:

refreshVertexBuffer()
Now that you’ve given Metal access to your particles, it’s time to create the vertex shader that will work with this data.
Create a Vertex Shader
The vertex shader is the program that takes in the vertex buffer you just created and determines the final position of each vertex onscreen. Since LiquidFun’s physics simulation calculates the particle positions for you, your vertex shader only needs to translate LiquidFun particle positions to Metal coordinates.
Right-click the LiquidMetal group in the Project Navigator and select New File…, then select the iOS\Source\Metal File template and click Next. Enter Shaders.metal for the filename and click Create.
First, add the following structs to Shaders.metal:
struct VertexOut {
  float4 position [[position]];
  float pointSize [[point_size]];
};

struct Uniforms {
  float4x4 ndcMatrix;
  float ptmRatio;
  float pointSize;
};
You've defined two structs:

- VertexOut contains the data needed to render each vertex. The [[position]] qualifier indicates that float4 position contains the position of the vertex onscreen, while the [[point_size]] qualifier indicates that float pointSize contains the size of each vertex. Both of these are special keywords that Metal recognizes, so it knows exactly what each property is for.
- Uniforms contains properties common to all vertices. This includes the points-to-meters ratio you used for LiquidFun (ptmRatio), the radius of each particle in the particle system (pointSize) and the matrix that translates positions from screen points to normalized device coordinates (ndcMatrix). More on this later.
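On the CPU side, one common way to feed a struct like Uniforms to Metal is to flatten its fields into raw floats. Here's a hedged pure-Swift sketch of that byte layout; the identity matrix and the two scalar values are placeholder examples, not values from this project:

```swift
// A sketch of flattening the Uniforms data into raw floats for a Metal
// buffer: 16 floats for the 4x4 matrix, then ptmRatio and pointSize.
// All values here are illustrative placeholders.
let ndcMatrix: [Float] = [1, 0, 0, 0,
                          0, 1, 0, 0,
                          0, 0, 1, 0,
                          0, 0, 0, 1]
let ptmRatio: Float = 32.0
let pointSize: Float = 9.0

let uniformsData = ndcMatrix + [ptmRatio, pointSize]
let uniformsByteCount = uniformsData.count * MemoryLayout<Float>.size
print(uniformsByteCount)  // 72 bytes: 64 for the matrix + 8 for the two floats
```

Note that Metal's alignment rules can require padding in some struct layouts; a flat float array like this sidesteps the question for simple cases, but always check the shader-side layout when in doubt.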
Next is the shader program itself. Still in Shaders.metal, add this function:
vertex VertexOut particle_vertex(const device packed_float2* vertex_array [[buffer(0)]],
                                 const device Uniforms& uniforms [[buffer(1)]],
                                 unsigned int vid [[vertex_id]]) {
  VertexOut vertexOut;
  float2 position = vertex_array[vid];
  vertexOut.position =
    uniforms.ndcMatrix * float4(position.x * uniforms.ptmRatio, position.y * uniforms.ptmRatio, 0, 1);
  vertexOut.pointSize = uniforms.pointSize;
  return vertexOut;
}
The shader's first parameter is a pointer to an array of packed_float2 data types: packed vectors of two floats, commonly containing x and y position coordinates. Packed vectors don't contain the extra bytes commonly used to align data elements in a computer's memory. You'll read more about that a bit later.
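To see what "packed" means, here's a pure-Swift analogy using Swift's SIMD types, which follow the same alignment rules as Metal's vector types:

```swift
// Metal's float2/float3 types are aligned to power-of-two boundaries, which
// can introduce padding bytes; the packed_ variants lay the floats out
// back-to-back. Swift's SIMD types show the same effect:
let paddedSize = MemoryLayout<SIMD3<Float>>.stride   // 16 bytes (padded)
let packedSize = MemoryLayout<Float>.stride * 3      // 12 bytes (tightly packed)

// For two-float vectors the sizes happen to match (8 bytes either way), but
// declaring the shader parameter as packed_float2 guarantees its layout
// matches the flat array of Floats that LiquidFun hands back.
let float2Size = MemoryLayout<SIMD2<Float>>.stride   // 8 bytes
print(paddedSize, packedSize, float2Size)
```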
The [[buffer(0)]] qualifier indicates that vertex_array will be populated by the first buffer of data that you send to your vertex shader.
The second parameter is a handle to the Uniforms structure. Similarly, the [[buffer(1)]] qualifier indicates that this parameter is populated by the second buffer of data sent to the vertex shader.
The third parameter is the index of the current vertex inside the vertex array, and you use it to retrieve that particular vertex from the array. Remember, the GPU calls the vertex shader many times, once for each vertex to render. For this app, the vertex shader will be called once per water particle to render.
Inside the shader, you get the vertex's position in LiquidFun's coordinate system, then convert it to Metal's coordinate system and output it via vertexOut.
To understand how the final position is computed, you have to be aware of the different coordinate systems with which you’re working. Between LiquidFun and Metal, there are three different coordinate systems:
- the physics world’s coordinate system;
- the regular screen coordinate system; and
- the normalized screen coordinate system.
Given a regular iPhone 5s screen (320 points wide by 568 points high), these translate to the following coordinate systems:
- The screen coordinate system (red) is the easiest to understand and is what you normally use when positioning objects onscreen. It starts from (0, 0) at the bottom-left corner and goes up to the screen's width and height in points at the upper-right corner.
- The physics world coordinate system (blue) is how LiquidFun sees things. Since LiquidFun operates in smaller numbers, you use ptmRatio to convert screen coordinates to physics world coordinates and back.
- The normalized device coordinate system (green) is Metal's default coordinate system and is the trickiest to work with. While the previous two coordinate systems both agree that the origin (0, 0) is at the lower-left corner, Metal's coordinate system places it at the center of the screen. The coordinates are device agnostic, so no matter the size of the screen, (-1, -1) is the lower-left corner and (1, 1) is the upper-right corner.
Since the vertex buffer contains vertices in LiquidFun’s coordinate system, you need to convert it to normalized device coordinates so it comes out at the right spot on the screen. This conversion happens in a single line:
vertexOut.position =
uniforms.ndcMatrix * float4(position.x * uniforms.ptmRatio, position.y * uniforms.ptmRatio, 0, 1);
You first convert the vertex to regular screen coordinates by multiplying the x- and y-positions by the points-to-meters ratio. You use these new values to create a float4 representing XYZW coordinates. Finally, you multiply the XYZW coordinates by a "mathemagical" matrix that translates your coordinates to normalized screen coordinates using an orthographic projection. You'll get acquainted with this matrix very soon.
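That "mathemagical" matrix is a standard orthographic projection. Here's a hedged pure-Swift sketch of it in column-major order (the layout Metal's float4x4 uses); the 320x568-point screen size is just an example, and the transform function mimics the matrix-times-vector multiply the shader performs:

```swift
// A sketch of the orthographic projection matrix in column-major order
// (the layout Metal's float4x4 uses), for an example 320x568-point screen.
let width: Float = 320.0, height: Float = 568.0
let ndcMatrix: [[Float]] = [
  [2 / width, 0,          0, 0],   // column 0: scales x from [0, w] to [0, 2]
  [0,         2 / height, 0, 0],   // column 1: scales y from [0, h] to [0, 2]
  [0,         0,         -1, 0],   // column 2: z (unused in this 2D setup)
  [-1,       -1,          0, 1],   // column 3: shifts [0, 2] down to [-1, 1]
]

// Multiply matrix * column vector, the same operation the shader performs.
func transform(_ m: [[Float]], _ v: [Float]) -> [Float] {
  var out: [Float] = [0, 0, 0, 0]
  for col in 0..<4 {
    for row in 0..<4 {
      out[row] += m[col][row] * v[col]
    }
  }
  return out
}

let corner = transform(ndcMatrix, [320, 568, 0, 1]) // upper-right -> (1, 1)
let origin = transform(ndcMatrix, [0, 0, 0, 1])     // lower-left  -> (-1, -1)
```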
Note: I won’t explain in depth what the z- and w-components are for. As far as this tutorial goes, you need these components to do 3D matrix math.
The z-component specifies how far or near the object is from the camera, but this doesn’t matter much when dealing with a 2D coordinate space. You need the w-component because matrix multiplication formulas work on 4×4 matrices. Long story short, the x-, y-, and z-components are divided by the w-component to get the final 3D coordinates. In this case, w is 1 so that the x-, y-, and z-components don’t change.
If you wish to learn more, you can read about homogeneous coordinates on Wikipedia.