Metal Tutorial with Swift 3 Part 3: Adding Texture

In part 3 of our Metal tutorial series, you will learn how to add textures to 3D objects using Apple’s built-in 3D graphics framework. By Andrew Kharchyshyn.


Handling Texture on the GPU

At this point, you’re done working on the CPU side of things, and it’s all GPU from here.

Add this image to your project.

Open Shaders.metal and replace the entire file with the following:

#include <metal_stdlib>
using namespace metal;

// 1
struct VertexIn{
  packed_float3 position;
  packed_float4 color;
  packed_float2 texCoord;  
};

struct VertexOut{
  float4 position [[position]];
  float4 color;
  float2 texCoord; 
};

struct Uniforms{
  float4x4 modelMatrix;
  float4x4 projectionMatrix;
};

vertex VertexOut basic_vertex(
                              const device VertexIn* vertex_array [[ buffer(0) ]],
                              const device Uniforms&  uniforms    [[ buffer(1) ]],
                              unsigned int vid [[ vertex_id ]]) {
  
  float4x4 mv_Matrix = uniforms.modelMatrix;
  float4x4 proj_Matrix = uniforms.projectionMatrix;
  
  VertexIn VertexIn = vertex_array[vid];
  
  VertexOut VertexOut;
  VertexOut.position = proj_Matrix * mv_Matrix * float4(VertexIn.position,1);
  VertexOut.color = VertexIn.color;
  // 2
  VertexOut.texCoord = VertexIn.texCoord; 
  
  return VertexOut;
}

fragment float4 basic_fragment(VertexOut interpolated [[stage_in]],
                               // 3
                               texture2d<float>  tex2D     [[ texture(0) ]],
                               // 4
                               sampler           sampler2D [[ sampler(0) ]]) {
  // 5
  float4 color = tex2D.sample(sampler2D, interpolated.texCoord);
  return color;
}

Here’s what you changed:

  1. The vertex structs now contain texture coordinates.
  2. You now pass texture coordinates from VertexIn to VertexOut.
  3. Here you receive the texture you passed in.
  4. Here you receive the sampler.
  5. You call sample() on the texture to fetch the color at the interpolated texture coordinate, following the filtering rules specified by the sampler. (The CPU-side binding that supplies these two arguments is sketched below.)
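As a quick reminder of where those two shader arguments come from: the CPU side binds a texture and a sampler to the matching argument slots before issuing the draw call. The snippet below is only a sketch; renderEncoder, cubeTexture and samplerState are placeholder names rather than the exact properties from the tutorial's Node class, but the Metal calls themselves are the standard Swift 3 ones:

// Create a sampler state once, during setup.
let samplerDescriptor = MTLSamplerDescriptor()
samplerDescriptor.minFilter = .nearest  // filtering when the texture is shrunk on screen
samplerDescriptor.magFilter = .nearest  // filtering when the texture is enlarged on screen
let samplerState = device.makeSamplerState(descriptor: samplerDescriptor)

// When encoding the draw call, bind both to the slots the shader declared:
// [[ texture(0) ]] and [[ sampler(0) ]].
renderEncoder.setFragmentTexture(cubeTexture, at: 0)
renderEncoder.setFragmentSamplerState(samplerState, at: 0)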

Almost done! Open MySceneViewController.swift and replace this line:

objectToDraw = Cube(device: device)

With this:

objectToDraw = Cube(device: device, commandQ: commandQueue)

Build and run. Your cube should now be texturized!

[Image: the cube rendered with the texture applied]
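By the way, the commandQ version of the initializer is the one that loads the cube image into an MTLTexture (the tutorial does this through its MetalTexture helper class). If you ever want to load a texture with Apple's MetalKit loader instead, a rough Swift 3 sketch looks like the following; the file name "cube" and the variable names are purely illustrative:

import MetalKit

// Alternative texture loading via MetalKit (NOT the tutorial's MetalTexture helper).
// `device` is your MTLDevice; "cube.png" stands in for whatever image you added to the project.
let textureLoader = MTKTextureLoader(device: device)
if let url = Bundle.main.url(forResource: "cube", withExtension: "png") {
  let texture = try? textureLoader.newTexture(withContentsOf: url, options: nil)
  // `texture` is an MTLTexture? you could then bind with setFragmentTexture(_:at:).
}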

Colorizing a Texture (Optional)

At this point, you’re ignoring the cube’s color values and simply using the color values from the texture. But what if you want to combine the object’s colors with the texture instead of covering them up?

In the fragment shader, replace this line:

float4 color = tex2D.sample(sampler2D, interpolated.texCoord);

With:

float4 color = interpolated.color * tex2D.sample(sampler2D, interpolated.texCoord);

You should get something like this:

[Image: the textured cube tinted by the vertex colors]

You did this just to see how you can combine colors inside the fragment shader. And yes, it’s as simple as doing a little multiplication.

But don’t continue until you revert that last change — because it really doesn’t look that good. :]

Adding User Input

All this texturing is cool, but it’s rather static. Wouldn’t it be cool if you could rotate the cube with your finger and see your beautiful texturing work from every angle?

You can use UIPanGestureRecognizer to detect user interactions.

Open MySceneViewController.swift, and add these two new properties:

let panSensivity:Float = 5.0
var lastPanLocation: CGPoint!

Now add two new methods:

//MARK: - Gesture related
// 1
func setupGestures(){
  let pan = UIPanGestureRecognizer(target: self, action: #selector(MySceneViewController.pan))
  self.view.addGestureRecognizer(pan)
}
  
// 2
func pan(panGesture: UIPanGestureRecognizer){
  if panGesture.state == UIGestureRecognizerState.changed {
    let pointInView = panGesture.location(in: self.view)
    // 3
    let xDelta = Float((lastPanLocation.x - pointInView.x)/self.view.bounds.width) * panSensivity
    let yDelta = Float((lastPanLocation.y - pointInView.y)/self.view.bounds.height) * panSensivity
    // 4
    objectToDraw.rotationY -= xDelta
    objectToDraw.rotationX -= yDelta
    lastPanLocation = pointInView
  } else if panGesture.state == UIGestureRecognizerState.began {
    lastPanLocation = panGesture.location(in: self.view)
  }
}

Here’s what’s going on in the code above:

  1. Create a pan gesture recognizer and add it to your view.
  2. Check if the touch moved.
  3. When the touch moves, calculate how much it moved using normalized coordinates. You also apply panSensivity to control rotation speed.
  4. Apply the changes to the cube by setting its rotation properties; these feed into the model matrix, as sketched below.
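If you're wondering how setting rotationX and rotationY actually moves the cube: Node feeds those properties into the model matrix it rebuilds every frame, which you set up back in Part 2. Roughly, and the exact Matrix4 helper calls in your project may differ, it looks like this:

// Sketch of Node.modelMatrix() from Part 2 (the Matrix4 helper's method names are approximate).
func modelMatrix() -> Matrix4 {
  let matrix = Matrix4()
  matrix.translate(positionX, y: positionY, z: positionZ)
  matrix.rotateAroundX(rotationX, y: rotationY, z: rotationZ) // picks up the pan gesture's values
  matrix.scale(scale, y: scale, z: scale)
  return matrix
}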

Now add the following to the end of viewDidLoad():

setupGestures()

Build and run.

Hmmm, the cube spins all by itself. Why is that? Think through what you just did and see if you can identify the problem here and how to solve it. Open the spoiler to check if your assumption is correct.

[spoiler]The cube’s rotation properties are also modified inside updateWithDelta, which runs every frame and overwrites the rotation values you set when the touch moves.

To solve the problem, remove the updateWithDelta override from Cube.swift.
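For reference, the override you're deleting is the one from Part 2 that drives the automatic spin. It looks roughly like this; the constants and the time property come from Part 2, so your version may differ slightly:

// Delete this automatic-spin override from Cube.swift (approximate sketch of the Part 2 code).
override func updateWithDelta(delta: CFTimeInterval) {
  super.updateWithDelta(delta: delta)
  let secsPerMove: Float = 6.0
  rotationY = sinf(Float(time) * 2.0 * Float.pi / secsPerMove)
  rotationX = sinf(Float(time) * 2.0 * Float.pi / secsPerMove)
}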

Build and run to see if you can control the cube:

[Image: the cube being rotated with a pan gesture]

The cube should now rotate when you pan the screen. Congrats on creating a very cool effect!

[/spoiler]

Debugging Metal

Like any code, you’ll need to do a little debugging to make sure your work is free of errors. And if you look closely, you’ll notice that at some angles, the sides are a little “crispy”.

[Image: a magnified view of the cube’s jagged edges]

To fully understand the problem, you’ll need to debug. Fortunately, Metal comes with some stellar tools to help you.

While the app is running, press the Capture GPU Frame button.

[Screenshot: the Capture GPU Frame button in Xcode’s debug bar]

Pressing the button automatically pauses the app at a breakpoint, and Xcode collects all the values and state for that single frame.

Xcode may put you into assistant mode, meaning that it splits your main area into two. You don’t need all that, so feel free to return to regular mode. Also, select All MTL Objects in the debug area as shown in the screenshot:

[Screenshot: the debug area with All MTL Objects selected]

In the left sidebar, select the final line (the commit). At last, you have proof that you’re actually drawing with triangles, not squares!

In the debug area, find and open the Textures group.

[Screenshot: the Textures group in the debug area, showing two textures]

Why do you have two textures? You only passed in one, remember?

One texture is the cube image you passed in; the other is the drawable texture that the fragment shader renders into and that is shown on the screen.

The weird part is that this second texture has non-Retina resolution. Ah-ha! So the reason your cube looks a bit crispy is that the non-Retina drawable is stretched to fill the screen. You’ll fix this in a moment.

Fixing Drawable Texture Resizing

There is one more problem to debug and solve before you can officially declare your mastery of Metal. Run your app again and rotate the device into landscape mode.

[Image: the stretched cube in landscape orientation]

Not the best view, eh?

The problem here is that when the device rotates, the view’s bounds change, but the Metal layer’s frame and its drawable texture’s dimensions never change with them.

Fortunately, it’s pretty easy to fix. Open MetalViewController.swift and take a look at this setup code in viewDidLoad:

   
device = MTLCreateSystemDefaultDevice()
metalLayer = CAMetalLayer()
metalLayer.device = device
metalLayer.pixelFormat = .bgra8Unorm
metalLayer.framebufferOnly = true
metalLayer.frame = view.layer.frame
view.layer.addSublayer(metalLayer)

The important line is metalLayer.frame = view.layer.frame, which sets the layer frame just once. You just need to update it when the device rotates.

Override viewDidLayoutSubviews like so:

//1
override func viewDidLayoutSubviews() {
  super.viewDidLayoutSubviews()
    
  if let window = view.window {
    let scale = window.screen.nativeScale
    let layerSize = view.bounds.size
    //2
    view.contentScaleFactor = scale
    metalLayer.frame = CGRect(x: 0, y: 0, width: layerSize.width, height: layerSize.height)
    metalLayer.drawableSize = CGSize(width: layerSize.width * scale, height: layerSize.height * scale)
  }    
}

Here’s what the code is doing:

  1. Gets the display’s nativeScale for the device (2 for the iPhone 5s, 6, and iPads; about 2.6 for the iPhone 6 Plus, which renders at 3x and downsamples).
  2. Applies the scale to increase the drawable texture size.
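For example, on an iPhone 6 Plus in portrait the view is 414×736 points; with a nativeScale of roughly 2.6, the drawable becomes about 1080×1920 pixels instead of 414×736, which is what restores the crisp Retina edges.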

Now delete the following line in viewDidLoad:

metalLayer.frame = view.layer.frame

Build and run. Here is a classic before-and-after comparison.

[Image: before-and-after comparison of the cube’s edges]

The difference is even more obvious when you’re on an iPhone 6+.

Now rotate to landscape — does it work?

[Image: the cube in landscape after the drawable fix]

It’s rather flat now, but at least the background is a rich green and the edges look far better.

If you repeat the steps from the debug section, you’ll see that the texture’s dimensions are now correct. So, what’s the problem?

Think through what you just did and try to figure out what’s causing you pain. Then check the answer below to see if you figured it out — and how to solve it.

[spoiler]The projection matrix is created using the view’s aspect ratio. When you rotate the device, the aspect ratio changes, so you need to update the projection matrix as well.

To fix this, add the following one-liner at the end of viewDidLayoutSubviews:

projectionMatrix = Matrix4.makePerspectiveViewAngle(Matrix4.degrees(toRad: 85.0), aspectRatio: Float(self.view.bounds.size.width / self.view.bounds.size.height), nearZ: 0.01, farZ: 100.0)

Build and run.

Voila!

[Image: the cube in landscape with the corrected projection matrix]

[/spoiler]
