OpenGL ES Particle System Tutorial: Part 1/3
Learn how to develop a particle system using OpenGL ES 2.0 and GLKit! This three-part tutorial covers point sprites, particle effects, and game integration. By Ricardo Rendon Cepeda.
Contents
- What’s a Particle System?
- What are Point Sprites?
- Getting Started
- Basic Drawing
- Designing Your Particle System
- Implementing Your Particle System
- Adding Vertex and Fragment Shaders
- Creating Shaders as GLSL Programs
- Building Obj-C Bridges
- Sending Shader Data to the GPU
- Adding Particle Shader Variances
- Animating Your Polar Rose
- Using Textures and Point Sprites
- Where To Go From Here?
Creating Shaders as GLSL Programs
Next open Emitter.vsh and add the following code:
// Vertex Shader
static const char* EmitterVS = STRINGIFY
(
// Attributes
attribute float aTheta;
// Uniforms
uniform mat4 uProjectionMatrix;
uniform float uK;
void main(void)
{
float x = cos(uK*aTheta)*sin(aTheta);
float y = cos(uK*aTheta)*cos(aTheta);
gl_Position = uProjectionMatrix * vec4(x, y, 0.0, 1.0);
gl_PointSize = 16.0;
}
);
The code above simply plugs θ into the polar rose equation to obtain x and y coordinates. This position is then multiplied by the projection matrix, producing the final XYZW position required by gl_Position.
Finally, it sets a point size of 16 pixels. When rendering GL_POINTS, your vertex shader must always write a value to gl_PointSize.
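Written out, the math the shader performs is simply (this just restates the code above, with r as the radius at angle θ):
r = cos(k·θ)
x = r·sin(θ)
y = r·cos(θ)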
Note: Don't know what an XYZW coordinate or a Projection Matrix is? If you're curious, check out homogeneous coordinates on Wikipedia for more information. Without getting too deep into the math, the extra W value allows you to represent all types and any number of affine transformations — that is, a series of translations, rotations, and scales — as a single matrix multiplication.
The GPU is optimized for matrix math, so OpenGL uses XYZW coordinates. You can specify both points and vectors using XYZW values; in the case of points, the W value will always be 1, while the W value for vectors will always be 0.
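If you'd like to see the difference, here's a quick GLKit snippet (purely illustrative, not part of this project) showing that a translation moves a point with W = 1 but leaves a direction vector with W = 0 untouched:
// Purely illustrative; assumes GLKit is imported
GLKMatrix4 translation = GLKMatrix4MakeTranslation(5.0f, 0.0f, 0.0f);
GLKVector4 point  = GLKVector4Make(1.0f, 2.0f, 0.0f, 1.0f);   // W = 1: a position
GLKVector4 vector = GLKVector4Make(1.0f, 2.0f, 0.0f, 0.0f);   // W = 0: a direction
// The point moves to (6, 2, 0, 1); the vector stays at (1, 2, 0, 0)
GLKVector4 movedPoint  = GLKMatrix4MultiplyVector4(translation, point);
GLKVector4 movedVector = GLKMatrix4MultiplyVector4(translation, vector);
NSLog(@"%@ vs %@", NSStringFromGLKVector4(movedPoint), NSStringFromGLKVector4(movedVector));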
Now, add the following code to Emitter.fsh:
// Fragment Shader
static const char* EmitterFS = STRINGIFY
(
void main(void)
{
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
);
This is a one-line program that simply sets the color of all relevant fragments to red, written to gl_FragColor as a four-channel RGBA value.
In general, all shader programs have the following characteristics:
- They are very short programs written in GLSL, which is quite similar to C. Why so short? Recall that they run for every single vertex and fragment, every frame.
- They have special variable prefixes that determine the type and source of data the shader will receive from the main program:
- Attributes typically change per-vertex (variable θ). Due to their per-vertex nature, they are exclusive to the vertex shader.
- Uniforms typically change per-frame or per-object (constant k). They are accessible to both vertex and fragment shaders.
- They are wrapped in a call to something called STRINGIFY. That's a macro you will add later that just returns a pointer to a string containing the text provided to the macro (see the quick example after this list).
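For example, once you've defined the macro later on (it's just #define STRINGIFY(A) #A), a call like this:
static const char* Example = STRINGIFY( void main(void){} );
is equivalent to writing:
static const char* Example = "void main(void){}";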
Why are you just returning pointers to strings here? That's because your shader code isn't compiled by Xcode. Instead, it's compiled at runtime, when your app builds its shaders. The shader files actually define strings that your app points to and hands to the GPU, which compiles and executes them.
If your .vsh and .fsh files don't seem to be automatically highlighted with GLSL syntax, you'll need to set the file type for both files in Xcode. Look to the Utilities bar on the right; in the File Inspector, set the File Type to OpenGL Shading Language source, as shown below:
You may have to re-open your project to see the syntax highlighting change take effect.
Since shaders run on the GPU, and your app runs on the CPU, you'll need some sort of a “bridge” to feed your shaders the necessary data from the CPU.
Time to switch back to Objective-C!
Building Obj-C Bridges
Click File\New\File... and choose the iOS\Cocoa Touch\Objective-C class template. Enter EmitterShader for the Class and NSObject for the subclass. Make sure both checkboxes are unchecked, click Next, and then click Create.
Open up EmitterShader.h and replace the existing file contents with the following:
#import <GLKit/GLKit.h>
@interface EmitterShader : NSObject
// Program Handle
@property (readwrite) GLint program;
// Attribute Handles
@property (readwrite) GLint aTheta;
// Uniform Handles
@property (readwrite) GLint uProjectionMatrix;
@property (readwrite) GLint uK;
// Methods
- (void)loadShader;
@end
Here you create some shader “handles” which tell your Objective-C variables where to find their GPU counterparts. The program handle will point to the compiled vertex-fragment shader pair. The uProjectionMatrix handle will point to the view's projection matrix. The other handles correspond to the θ and k values you'll pass to the shader's attributes and uniforms.
Open up EmitterShader.m and replace the existing contents of the file with the following:
#import "EmitterShader.h"
@implementation EmitterShader
- (void)loadShader
{
// Attributes
self.aTheta = glGetAttribLocation(self.program, "aTheta");
// Uniforms
self.uProjectionMatrix = glGetUniformLocation(self.program, "uProjectionMatrix");
self.uK = glGetUniformLocation(self.program, "uK");
}
@end
In the code above, you attach the handles to the shader programs so your app knows where to store data for your shaders. For both glGetAttribLocation and glGetUniformLocation, the first parameter specifies the shader program to be queried (a vertex-fragment pair) and the second parameter points to the name of the attribute/uniform within that program.
This is why it’s a good idea to give your GPU and CPU variables the same name — it's a lot easier to keep track of them.
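To see why these handles matter, here's a rough preview of the kind of calls you'll make later when sending data to the GPU. Note that the emitterShader property, the projectionMatrix variable, and the value of k below are placeholders; the real code comes later in this tutorial:
// Placeholder preview: names and values here are illustrative only
glUseProgram(self.emitterShader.program);
glUniformMatrix4fv(self.emitterShader.uProjectionMatrix, 1, GL_FALSE, projectionMatrix.m);
glUniform1f(self.emitterShader.uK, 4.0f);
glEnableVertexAttribArray(self.emitterShader.aTheta);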
Ok, so your attributes and uniforms are set, but what about the actual program? Since your shaders run on the GPU, they can only be compiled and linked at runtime by OpenGL ES 2.0. This means that the CPU needs to give the GPU special instructions to compile and link your shaders and create the program handle.
Note: If you have an error in any of your shader code, Xcode won't warn you. Remember: your shader code isn't compiled by Xcode; it's simply passed to the GPU as strings to be compiled and linked there.
The tutorial OpenGL ES 2.0 for iPhone covers shader compilation in more detail, so give that section a read if you need a refresher. Otherwise, the necessary files are available below for a simple copy and paste into your project.
Go to File\New\File... and choose the iOS\Cocoa Touch\Objective-C class template. Enter ShaderProcessor for the Class and NSObject for the subclass. Make sure both checkboxes are unchecked, click Next, and click Create.
Replace the contents of ShaderProcessor.h with the following:
#import <GLKit/GLKit.h>
@interface ShaderProcessor : NSObject
- (GLuint)BuildProgram:(const char*)vertexShaderSource with:(const char*)fragmentShaderSource;
@end
Now, rename ShaderProcessor.m to ShaderProcessor.mm; the .mm extension enables Objective-C++, which lets you use C++ features such as std::cout. Open up ShaderProcessor.mm and replace the file contents with the following:
#import "ShaderProcessor.h"
#include <iostream>
@implementation ShaderProcessor
- (GLuint)BuildProgram:(const char*)vertexShaderSource with:(const char*)fragmentShaderSource
{
// Build shaders
GLuint vertexShader = [self BuildShader:vertexShaderSource with:GL_VERTEX_SHADER];
GLuint fragmentShader = [self BuildShader:fragmentShaderSource with:GL_FRAGMENT_SHADER];
// Create program
GLuint programHandle = glCreateProgram();
// Attach shaders
glAttachShader(programHandle, vertexShader);
glAttachShader(programHandle, fragmentShader);
// Link program
glLinkProgram(programHandle);
// Check for errors
GLint linkSuccess;
glGetProgramiv(programHandle, GL_LINK_STATUS, &linkSuccess);
if (linkSuccess == GL_FALSE)
{
NSLog(@"GLSL Program Error");
GLchar messages[1024];
glGetProgramInfoLog(programHandle, sizeof(messages), 0, &messages[0]);
std::cout << messages;
exit(1);
}
// Delete shaders
glDeleteShader(vertexShader);
glDeleteShader(fragmentShader);
return programHandle;
}
- (GLuint)BuildShader:(const char*)source with:(GLenum)shaderType
{
// Create the shader object
GLuint shaderHandle = glCreateShader(shaderType);
// Load the shader source
glShaderSource(shaderHandle, 1, &source, 0);
// Compile the shader
glCompileShader(shaderHandle);
// Check for errors
GLint compileSuccess;
glGetShaderiv(shaderHandle, GL_COMPILE_STATUS, &compileSuccess);
if (compileSuccess == GL_FALSE)
{
NSLog(@"GLSL Shader Error");
GLchar messages[1024];
glGetShaderInfoLog(shaderHandle, sizeof(messages), 0, &messages[0]);
std::cout << messages;
exit(1);
}
return shaderHandle;
}
@end
This is a straightforward class that carries out a generic process for all shaders: it compiles and links a vertex-fragment shader pair into a program and returns a handle to it, so the program can be executed when required. This class will be used to complete your shader bridge.
Open up EmitterShader.m and add the following lines to the top of the file, just after the first #import statement:
#import "ShaderProcessor.h"
// Shaders
#define STRINGIFY(A) #A
#include "Emitter.vsh"
#include "Emitter.fsh"
Again in EmitterShader.m, add the following code to the beginning of loadShader:
// Program
ShaderProcessor* shaderProcessor = [[ShaderProcessor alloc] init];
self.program = [shaderProcessor BuildProgram:EmitterVS with:EmitterFS];
This creates an instance of the ShaderProcessor class you just wrote and uses it to compile and link your shaders.
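If you'd like to double-check your work, the completed loadShader method in EmitterShader.m should now look like this:
- (void)loadShader
{
    // Program
    ShaderProcessor* shaderProcessor = [[ShaderProcessor alloc] init];
    self.program = [shaderProcessor BuildProgram:EmitterVS with:EmitterFS];

    // Attributes
    self.aTheta = glGetAttribLocation(self.program, "aTheta");

    // Uniforms
    self.uProjectionMatrix = glGetUniformLocation(self.program, "uProjectionMatrix");
    self.uK = glGetUniformLocation(self.program, "uK");
}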
That's the end of your CPU-GPU shader bridge. If you haven't already, build your program to check for errors. Once again, running your app still produces that same, lovely green screen you've been looking at since you started.
You're almost to the point where you'll actually see the graphics on the screen — there are just a few more pieces of code to add.