Metal Tutorial with Swift 3 Part 3: Adding Texture

In part 3 of our Metal tutorial series, you will learn how to add textures to 3D objects using Apple’s built-in 3D graphics framework. By Andrew Kharchyshyn.

A Wild Race Condition Appears!

Things are running smoothly, but there is a problem that could cause you some major pain later.

Have a look at this graph (and the explanation below):

[Animated diagram: the CPU fills uniform buffers while the GPU is still rendering earlier frames]

Currently, the CPU gets the “next available buffer”, fills it with data, and then sends it to the GPU for processing.

But since there’s no guarantee about how long the GPU takes to render each frame, there could be a situation where you’re filling buffers on the CPU faster than the GPU can deal with them. In that case, you could find yourself in a scenario where you need a buffer on the CPU, even though it’s in use on the GPU.

In the graph above, the CPU wants to encode the third frame while the GPU is still drawing the first, but the uniform buffer the CPU needs is still in use.

So how do you fix this?

The easiest way is to increase the number of buffers in the reuse pool so that the CPU is unlikely to get ahead of the GPU. That would probably work, but it wouldn't be 100% safe.

Patience. That’s what you need to solve this problem like a real Metal ninja.

Like A Ninja

Like an undisciplined ninja, the CPU lacks patience, and that's the problem. It's great that the CPU can encode commands so quickly, but it wouldn't hurt it to wait a little to avoid this race condition.

Fortunately, it’s easy to “train” the CPU to wait when the buffer it wants is still in use.

For this task, you'll use a semaphore, a low-level synchronization primitive. A semaphore keeps a count of how many instances of a limited resource are available, and it blocks when none are left.

Here’s how you’ll use a semaphore in this example:

  • Initialize with the number of buffers. You’ll be using the semaphore to keep track of how many buffers are currently in use on the GPU, so you’ll initialize the semaphore with the number of buffers that are available (3 to start in this case).
  • Wait before accessing a buffer. Every time you need a buffer, you'll ask the semaphore to “wait”. If a buffer is available, the wait decrements the semaphore's count and execution continues as usual. If all buffers are in use, the thread blocks until one becomes available; in practice, this wait should be very short, because the GPU is fast.
  • Signal when done with a buffer. When the GPU finishes with a buffer, you'll “signal” the semaphore to record that it's available again. The sketch below shows this pattern in isolation.
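
Before wiring this into the project, here's the whole pattern in isolation. This is a minimal, Metal-free sketch with illustrative names; in the tutorial itself, the signal happens later, from the GPU's completion handler:

import Dispatch

// Start with a count equal to the number of reusable resources.
let resourceCount = 3
let semaphore = DispatchSemaphore(value: resourceCount)

func useResource() {
  // Decrements the count, or blocks if all resources are in use.
  _ = semaphore.wait(timeout: DispatchTime.distantFuture)

  // ...do work with the resource...

  // Bumps the count back up so a waiting thread can proceed.
  semaphore.signal()
}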

Note: To learn more about semaphores, check out this great explanation.

This will make more sense in code than in prose. Go to BufferProvider.swift and add the following property:

var avaliableResourcesSemaphore: DispatchSemaphore

Now add this to the top of init:

avaliableResourcesSemaphore = DispatchSemaphore(value: inflightBuffersCount)

Here you create your semaphore with an initial count equal to the number of available buffers.
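
For context, here's roughly where both additions sit inside BufferProvider. This is only a sketch: the rest of the class comes from the previous part of this series, and the elided members may be named differently in your project:

import Metal

class BufferProvider {
  let inflightBuffersCount: Int
  var avaliableResourcesSemaphore: DispatchSemaphore
  // ...the reusable uniform buffers from the previous part go here...

  init(device: MTLDevice, inflightBuffersCount: Int, sizeOfUniformsBuffer: Int) {
    avaliableResourcesSemaphore = DispatchSemaphore(value: inflightBuffersCount)
    self.inflightBuffersCount = inflightBuffersCount
    // ...create the reusable uniform buffers as before...
  }
}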

Now open Node.swift and add this at the top of render:

_ = bufferProvider.avaliableResourcesSemaphore.wait(timeout: DispatchTime.distantFuture)

This makes the CPU wait whenever bufferProvider.avaliableResourcesSemaphore has no free resources.

Now you need to signal the semaphore when the resource becomes available.

While you’re still in render, find:

let commandBuffer = commandQueue.makeCommandBuffer()

And add this below:

commandBuffer.addCompletedHandler { (_) in
  self.bufferProvider.avaliableResourcesSemaphore.signal()
}

When the GPU finishes rendering, it executes the completion handler, which signals the semaphore and bumps its count back up.
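
Putting the pieces together, the relevant parts of render now run in this order. This is a sketch: the encoding code in the middle comes from the earlier parts of the series and is elided here:

// Inside Node's render method (sketch):

// 1. Wait until one of the uniform buffers is free. May block briefly.
_ = bufferProvider.avaliableResourcesSemaphore.wait(timeout: DispatchTime.distantFuture)

let commandBuffer = commandQueue.makeCommandBuffer()

// 2. Promise to give the buffer back once the GPU finishes with it.
commandBuffer.addCompletedHandler { (_) in
  self.bufferProvider.avaliableResourcesSemaphore.signal()
}

// ...encode the draw calls and fill the next uniform buffer as before...

// 3. Hand the frame to the GPU.
commandBuffer.commit()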

Also in BufferProvider.swift, add this method:

deinit {
  // Signal once for each buffer, releasing any thread still waiting in render.
  for _ in 0..<self.inflightBuffersCount {
    self.avaliableResourcesSemaphore.signal()
  }
}

deinit simply does a little cleanup before object deletion. Without it, your app would crash whenever a BufferProvider was deleted while its semaphore was still waiting.

Build and run. Everything should work as before — ninja style!

Performance Results

You must be eager to see if there’s been any performance improvement. As you did before, open the Debug Navigator tab and select the FPS row.

These are my stats: the CPU frame time decreased from 1.7 ms to 1.2 ms. It looks like a small win, but the more objects you draw, the bigger the gain. Note that your exact results will depend on the device you're using.

Texturing

Note: If you skipped the previous section, start with this version of the project.

So, what are textures? Simply put, textures are 2D images that are typically mapped to 3D models.

Think about a real-life object, such as an orange. How would the orange's texture look in Metal? Probably something like this:

[Image: mandarin oranges]

If you wanted to render an orange, you'd first create a sphere-like 3D model, then use a texture similar to the one above, and Metal would map it onto the model.

Texture Coordinates

Unlike OpenGL textures, which originate at the bottom-left corner, Metal textures originate at the top-left. Standards: aren't they great?

Here’s a sneak peek of the texture you’ll use in this tutorial.

[Image: the tutorial's texture, with the s (horizontal) and t (vertical) coordinate axes marked]

In 3D graphics, it's typical to mark the texture coordinate axes with the letter s for horizontal and t for vertical, just like in the image above.

To differentiate between iOS device pixels and texture pixels, you’ll refer to texture pixels as texels.

Your texture is 512×512 texels. In this tutorial, you'll use normalized coordinates, which means coordinates within the texture always fall in the range 0 to 1. Therefore:

  • The top-left corner has the coordinates (0.0, 0.0)
  • The top-right corner has the coordinates (1.0, 0.0)
  • The bottom-left corner has the coordinates (0.0, 1.0)
  • The bottom-right corner has the coordinates (1.0, 1.0)

Understanding normalized coordinates will be important when you map this texture onto your cube.

Using normalized coordinates isn't mandatory, but it has some advantages. For example, say you want to swap your texture for one with a resolution of 256×256 texels. If you use normalized coordinates, it'll “just work”, as long as the new texture maps correctly.
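
To make that concrete, here's a tiny, hypothetical helper that converts a normalized (s, t) coordinate into a texel position. Notice that the call site doesn't change when the texture size does:

// Hypothetical helper, just for illustration.
func texelPosition(s: Float, t: Float, width: Int, height: Int) -> (x: Int, y: Int) {
  return (x: Int(s * Float(width - 1)), y: Int(t * Float(height - 1)))
}

texelPosition(s: 0.0, t: 1.0, width: 512, height: 512)  // (0, 511): the bottom-left texel
texelPosition(s: 0.0, t: 1.0, width: 256, height: 256)  // (0, 255): still the bottom-left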

Using Textures in Metal

In Metal, a texture is represented by any object that conforms to the MTLTexture protocol. Metal supports a number of texture types, but for now all you need is a plain 2D texture (texture type .type2D).

Another important protocol is MTLSamplerState. An object that conforms to this protocol tells the GPU how to sample the texture, for example which filtering to apply.

When you pass a texture to the GPU, you'll pass the sampler as well. When using multiple textures that need to be treated similarly, you can reuse the same sampler.
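
For instance, creating a basic sampler looks roughly like this. Treat it as a sketch: the filter and address modes below are just reasonable defaults, not values this tutorial prescribes:

import Metal

func defaultSampler(device: MTLDevice) -> MTLSamplerState {
  let descriptor = MTLSamplerDescriptor()
  descriptor.minFilter = .nearest          // how to sample when the texture is shrunk
  descriptor.magFilter = .nearest          // how to sample when it's enlarged
  descriptor.sAddressMode = .clampToEdge   // what to do outside 0...1 horizontally...
  descriptor.tAddressMode = .clampToEdge   // ...and vertically
  return device.makeSamplerState(descriptor: descriptor)
}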

Here is a small visual to help illustrate how you’ll work with textures:

[Diagram: a texture plus a sampler being handed to the GPU]

For your convenience, the project contains a special, handcrafted class named MetalTexture, which holds all the code needed to create an MTLTexture from an image file in the bundle.

Note: I’m not going to delve into it here, but if you want to learn how to create MTLTexture, refer to this post on MetalByExample.com.

Note: I’m not going to delve into it here, but if you want to learn how to create MTLTexture, refer to this post on MetalByExample.com.
