ARCore With Kotlin: Getting Started
Learn how to make augmented reality Android apps using ARCore! By Zahidur Rahman Faisal.
Behind the ARCore Scene
The 3D models you’ll use are in main/assets/models. Here, you can find models for a Viking, a cannon and a target.
The OpenGL shaders are in main/assets/shaders. The shaders are from the Google ARCore sample app.
You’ll see a package named common. Inside, there are two folders:
- rendering: This folder contains all the classes related to OpenGL rendering.
- helper: Here, you’ll find utility classes like CameraPermissionHelper and SnackbarHelper, so you don’t have to write boilerplate code. A typical way to use them appears in the sketch after this list.
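Here’s a hedged sketch of how these helpers are commonly wired up in onResume(), using the method names from the Google ARCore sample (hasCameraPermission and requestCameraPermission); the begin project’s version may differ slightly:
override fun onResume() {
  super.onResume()

  // Ask for the camera permission before touching the ARCore session.
  if (!CameraPermissionHelper.hasCameraPermission(this)) {
    CameraPermissionHelper.requestCameraPermission(this)
    return
  }

  // ... resume the ARCore session and the GL surface here
}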
Planes, Anchors and Poses
You have a PlaneAttachment class inside rendering to ease your job. PlaneAttachment uses a plane and an anchor as inputs and constructs a pose from that information.
For a quick recap:
- A plane is a real-world planar surface. It consists of a cluster of feature points that appears to lie on a horizontal or vertical surface, such as a floor or walls.
- An anchor points to a fixed location and orientation in physical space to describe the exact position of a virtual object in the real world.
- A pose describes a coordinate transformation from a virtual object’s local frame to the real-world coordinate frame.
Imagine the real world around you is an ocean and the planes are ports. A port can anchor many ships, or virtual objects, each with their specific pose.
So, PlaneAttachment lets you attach an anchor to a plane and retrieve the corresponding pose. ARCore uses the pose as you move around the anchor point.
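To make the idea concrete, here’s a minimal sketch of what a class like PlaneAttachment boils down to. Treat it as an illustration only; the real class in rendering has a few more details:
// Illustrative only: pairs a Plane with an Anchor and exposes the anchor's pose.
class PlaneAttachment(val plane: Plane, val anchor: Anchor) {
  val pose: Pose
    get() = anchor.pose // ARCore keeps this pose up to date as tracking improves
}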
ARCore Session
The begin project includes an ARCore session object in MainActivity.kt. The session describes the entire AR state. You’ll use it to attach anchors to planes when the user taps the screen.
In setupSession(), which you call from onResume(...), the app checks if the device supports ARCore. If not, it displays a toast and the activity finishes.
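As a rough idea of what that check involves, here’s a hedged sketch built on ARCore’s ArCoreApk availability API. The begin project’s setupSession() may handle errors differently:
private fun setupSession() {
  // Bail out early on devices that can't run ARCore.
  if (!ArCoreApk.getInstance().checkAvailability(this).isSupported) {
    Toast.makeText(this, "This device does not support AR", Toast.LENGTH_LONG).show()
    finish()
    return
  }

  try {
    session = Session(this) // the session property declared in MainActivity
  } catch (e: UnavailableException) {
    Toast.makeText(this, "Failed to create an AR session", Toast.LENGTH_LONG).show()
    finish()
  }
}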
Augmenting Your First Scene
Now that your app is running on a supported device or emulator, it’s time to set up some objects to render in the scene!
Adding Objects
Open MainActivity.kt and add the following properties:
private val vikingObject = ObjectRenderer()
private val cannonObject = ObjectRenderer()
private val targetObject = ObjectRenderer()
Here, you define each property as an ObjectRenderer from the ARCore sample app.
Also, add three PlaneAttachment properties just below those objects:
private var vikingAttachment: PlaneAttachment? = null
private var cannonAttachment: PlaneAttachment? = null
private var targetAttachment: PlaneAttachment? = null
These are Kotlin nullables, initialized as null. You’ll create them later, when the user taps the screen.
Now, you need to set up the objects, which you’ll do in onSurfaceCreated(...). Add the following inside the try-catch block, just below the // TODO comment:
// 1
vikingObject.createOnGlThread(this@MainActivity, getString(R.string.model_viking_obj), getString(R.string.model_viking_png))
cannonObject.createOnGlThread(this@MainActivity, getString(R.string.model_cannon_obj), getString(R.string.model_cannon_png))
targetObject.createOnGlThread(this@MainActivity, getString(R.string.model_target_obj), getString(R.string.model_target_png))
// 2
targetObject.setMaterialProperties(0.0f, 3.5f, 1.0f, 6.0f)
vikingObject.setMaterialProperties(0.0f, 3.5f, 1.0f, 6.0f)
cannonObject.setMaterialProperties(0.0f, 3.5f, 1.0f, 6.0f)
What you’re doing here is:
1. You use the 3D model files from the begin project to set up each of the three objects.
2. You set values for ambient, diffuse, specular and specular power on each object. These material properties are the surface characteristics of the rendered model. Changing these values changes the way you see the surface of the object.
Here’s a closer look at what each of the light values does:
- Ambient: The intensity of non-directional surface illumination.
- Diffuse: The reflectivity of the diffuse, or matte, surface.
- Specular: How reflective the specular, or shiny, surface is.
- Specular Power: The surface shininess. Larger values result in a smaller, sharper specular highlight.
Play around with these values to see how your object changes.
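For example, a duller, more evenly lit Viking might use values like these. The numbers are just an illustration to experiment with, not values from the project:
// More ambient light, a weaker specular highlight.
vikingObject.setMaterialProperties(0.3f, 2.0f, 0.3f, 6.0f)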
Attaching Anchors to the Session
Your next step is to give the user the ability to attach an anchor to the session when they tap on the screen.
To get started, find handleTap(...) in MainActivity.kt. Add the following inside the innermost if statement, just above the // TODO comment before the break statement:
when (mode) {
  Mode.VIKING -> vikingAttachment = addSessionAnchorFromAttachment(vikingAttachment, hit)
  Mode.CANNON -> cannonAttachment = addSessionAnchorFromAttachment(cannonAttachment, hit)
  Mode.TARGET -> targetAttachment = addSessionAnchorFromAttachment(targetAttachment, hit)
}
You’ll see an error because addSessionAnchorFromAttachment(...) doesn’t exist yet. You’ll address that shortly.
The radio buttons at the top of the screen control the value of mode. This is a Kotlin enum class that includes a scale factor float value for each mode. The scale factor tunes the size of the corresponding 3D model in the scene.
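If you’re curious, the enum looks roughly like this. The scale factor values below are placeholders, so check Mode in the begin project for the real ones:
enum class Mode(val scaleFactor: Float) {
  VIKING(1.0f),  // placeholder value
  CANNON(0.5f),  // placeholder value
  TARGET(1.0f)   // placeholder value
}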
For each mode, you set a new value for the corresponding PlaneAttachment in the when statement.
You use the old attachment and the hit value for the tap, which is an ARCore HitResult defining the intersection of the 3D ray for the tap and a plane.
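For context, handleTap(...) loops over the frame’s hit results and only reacts to taps that land on a tracked plane. A simplified sketch of that surrounding loop, modeled on the ARCore sample, might look like the following; the begin project’s version differs in its details:
private fun handleTap(frame: Frame, tap: MotionEvent?) {
  // Ignore taps while ARCore isn't tracking the camera.
  if (tap == null || frame.camera.trackingState != TrackingState.TRACKING) return

  for (hit in frame.hitTest(tap)) {
    val trackable = hit.trackable
    if (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) {
      // The when (mode) block you just added goes here.
      break
    }
  }
}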
Now, add addSessionAnchorFromAttachment(...) at the bottom of MainActivity.kt:
private fun addSessionAnchorFromAttachment(
  previousAttachment: PlaneAttachment?,
  hit: HitResult
): PlaneAttachment? {
  // 1
  previousAttachment?.anchor?.detach()
  // 2
  val plane = hit.trackable as Plane
  val anchor = session!!.createAnchor(hit.hitPose)
  // 3
  return PlaneAttachment(plane, anchor)
}
What you’re doing here is:
1. If the previousAttachment isn’t null, you remove its anchor from the session.
2. You take the HitResult plane and create the anchor from the HitResult pose. You then add the anchor to the session.
3. Finally, with the above information about the plane and the anchor, you return the PlaneAttachment.
You’re almost ready to see your Viking do some target practice! :]
Drawing the Objects
Your last step is to draw the objects on the screen. You create plane attachments when the user taps, but you still need to draw the objects as part of rendering each frame.
To do that, go to onDrawFrame(...) and add the following calls to the bottom of the try block:
drawObject(
  vikingObject,
  vikingAttachment,
  Mode.VIKING.scaleFactor,
  projectionMatrix,
  viewMatrix,
  lightIntensity
)
drawObject(
  cannonObject,
  cannonAttachment,
  Mode.CANNON.scaleFactor,
  projectionMatrix,
  viewMatrix,
  lightIntensity
)
drawObject(
  targetObject,
  targetAttachment,
  Mode.TARGET.scaleFactor,
  projectionMatrix,
  viewMatrix,
  lightIntensity
)
Here, you call the pre-existing drawObject(...) helper function. It takes the object, its corresponding attachment, the scale factor and the matrices and values needed for OpenGL to draw the object.
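drawObject(...) already exists in the begin project, but as a mental model, here’s roughly what such a helper does with the ARCore sample’s ObjectRenderer API (updateModelMatrix and draw). The project’s implementation may differ slightly:
private fun drawObject(
  objectRenderer: ObjectRenderer,
  attachment: PlaneAttachment?,
  scaleFactor: Float,
  projectionMatrix: FloatArray,
  viewMatrix: FloatArray,
  lightIntensity: FloatArray
) {
  // Nothing to draw until the user has tapped and created this attachment.
  if (attachment == null) return

  val modelMatrix = FloatArray(16)
  attachment.pose.toMatrix(modelMatrix, 0) // anchor pose -> model matrix
  objectRenderer.updateModelMatrix(modelMatrix, scaleFactor)
  objectRenderer.draw(viewMatrix, projectionMatrix, lightIntensity)
}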
The app computes the matrices using these already-present helper functions:
// 1
private fun computeProjectionMatrix(camera: Camera): FloatArray {
  val projectionMatrix = FloatArray(maxAllocationSize)
  camera.getProjectionMatrix(projectionMatrix, 0, 0.1f, 100.0f)
  return projectionMatrix
}

// 2
private fun computeViewMatrix(camera: Camera): FloatArray {
  val viewMatrix = FloatArray(maxAllocationSize)
  camera.getViewMatrix(viewMatrix, 0)
  return viewMatrix
}

// 3
private fun computeLightIntensity(frame: Frame): FloatArray {
  val lightIntensity = FloatArray(4)
  frame.lightEstimate.getColorCorrection(lightIntensity, 0)
  return lightIntensity
}
Here’s what’s going on in the code above:
1. ARCore uses the current session’s camera input to calculate projectionMatrix.
2. It also uses that input to calculate viewMatrix.
3. Finally, it uses the frame, which describes the AR state at a particular point in time, to calculate lightIntensity.
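In onDrawFrame(...), these helpers produce the values the drawObject(...) calls above consume, roughly like this (names follow this tutorial; the surrounding code in the begin project does more):
val camera = frame.camera
val projectionMatrix = computeProjectionMatrix(camera)
val viewMatrix = computeViewMatrix(camera)
val lightIntensity = computeLightIntensity(frame)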
Build and run, then select a radio button at the top to pick an object mode. Find a plane with your camera and tap to place an object.
The angle an object has when you place it depends on your device’s orientation and inclination. Move your device around and place your object with the angle you prefer.
Once you’ve placed all the objects, if you rotate your phone, you’ll see a scene like this:
Move around the scene and watch as your Viking prepares to fire. There’s no stopping your Viking now!