4.
Suspending Functions
Written by Filip Babić
So far, you’ve learned a lot about coroutines. You’ve seen how to launch coroutines and perform asynchronous work without the overhead of extra thread allocations or the risk of memory leaks. But the real foundation of coroutines is the ability to suspend code, control its flow at will, and return values from synchronous and asynchronous operations with the same kind of syntax and sequential code structure.
In this chapter, you’ll learn more about how suspendable functions work from the inside. You’ll see how to convert existing code that relies on callbacks into suspendable functions, which you call the same way as regular, blocking functions.
Throughout it all, you will learn what the most important piece of the coroutines puzzle is.
Suspending vs. Non-Suspending
Up until now, you’ve learned that coroutines rely on the concept of suspending code and suspending functions. Suspended code is based on the same concepts as regular code, except the system has the ability to pause its execution and continue it later on. But when you’re using two functions, a suspendable and a regular one, the calls seem pretty much the same.
If you go a step further and duplicate a function you use, but add the suspend
modifier keyword at the start, you would have two functions with the same parameters, but you’d have to wrap the suspendable function in a launch
block. This is how the Kotlin coroutines APIs are built, but the actual function call doesn’t change.
The system differentiates these two functions by the suspend modifier at compile time. But where and how do they actually behave differently, and how does each one relate to the suspension mechanism in Kotlin coroutines?
The answer can be found by analyzing the bytecode generated for each function, and by looking at how the call stack works in both of these cases. You’ll start by analyzing the non-suspendable variant first.
Analyzing a Regular Function
To follow the code in this chapter, select Open… in IntelliJ. Then navigate to the suspending-functions/projects/starter folder and select the suspending_functions project.
If you open up Main.kt, in the starter project, you’ll notice a small main
function. It’s calling a simple, regular, non-suspendable function called getUserStandard
, which doesn’t rely on callbacks or coroutines. There will be four different variants of this function. This variant is the most rudimentary, so let’s analyze it first:
fun getUserStandard(userId: String): User {
  Thread.sleep(1000)
  return User(userId, "Filip")
}
The function takes in one parameter: the userId. It puts the current thread to sleep for a second, to mimic a long-running operation. After that, it returns a User.
In reality, the function is simple, and there are no hidden mechanisms at work here. Analyze the generated bytecode by selecting Tools ▶︎ Kotlin ▶︎ Show Kotlin Bytecode. The Kotlin Bytecode window opens, and by pressing the Decompile button you can see the decompiled code, which should look something like this:
@NotNull
public static final User getUserStandard(@NotNull String userId) {
  Intrinsics.checkParameterIsNotNull(userId, "userId");
  Thread.sleep(1000L);
  return new User(userId, "Filip");
}
After inspecting it, you can see that it isn’t much different from the actual code. It’s completely straightforward and does what the code says it does.
The only additions to the code are the null checks and annotations the compiler adds, to make sure the non-null type system is followed. Once the program runs this function, it checks that the parameter is not null, and returns a user after one second.
This function is clean and simple, but the problem here lies in the Thread.sleep(1000)
call. If you call the function on the main thread, you’ll freeze your UI for a second. It would be much better to offload the long-running operation to a new thread and report the result through a callback. That’s exactly what the second example does, so see how to implement it next.
Implementing the Function With Callbacks
A better solution to this problem would be a function which takes in a callback as a parameter. The callback would serve as a means of notifying the program when the user value is ready for use. Furthermore, the function would create a separate thread of execution, to offload the main thread.
To do this, replace the getUserStandard()
with the following code:
fun getUserFromNetworkCallback(
  userId: String,
  onUserReady: (User) -> Unit) {
  thread {
    Thread.sleep(1000)
    val user = User(userId, "Filip")
    onUserReady(user)
  }
  println("end")
}
Make sure to import thread
, like so:
import kotlin.concurrent.thread
Update the main
function to the following:
fun main() {
  getUserFromNetworkCallback("101") { user ->
    println(user)
  }
  println("main end")
}
Run the bytecode analyzer again, and you should see the following output:
public static final void getUserFromNetworkCallback(
    @NotNull final String userId,
    @NotNull final Function1 onUserReady) {
  Intrinsics.checkParameterIsNotNull(userId, "userId");
  Intrinsics.checkParameterIsNotNull(onUserReady, "onUserReady");
  ThreadsKt.thread$default(
    false,
    false,
    (ClassLoader)null,
    (String)null,
    0,
    (Function0)(new Function0() {
      // $FF: synthetic method
      // $FF: bridge method
      public Object invoke() {
        this.invoke();
        return Unit.INSTANCE;
      }

      public final void invoke() {
        Thread.sleep(1000L);
        User user = new User(userId, "Filip");
        onUserReady.invoke(user);
      }
    }), 31, (Object)null);
  String var2 = "end";
  System.out.println(var2);
}
It’s quite a big change compared to the previously generated piece of code. Again, the system does a series of null checks to enforce the type system. After that, it creates a new thread, and within the thread’s public final void invoke, it calls the wrapped code. The code itself doesn’t change much from the last example, but now it relies on a thread and a callback.
Once the system runs getUserFromNetworkCallback
, it creates a thread. Once the thread is fully set up, it runs the block of code, and propagates the result back using the callback. If you run the code above, you’ll get the following result:
end
main end
User(id=101, name=Filip)
This means the main
function can indeed finish before the getUserFromNetworkCallback
does. The thread it starts outlives the main thread, and so does the code running inside it.
This function is a bit better than the last example, since it offloads the work from the main thread, using the callback to finally consume the data. But the problem here is that the code you use to build up a value can throw an exception. This means that you’d have to wrap it in a try/catch
block. But it would be best if the try/catch
block was at the actual place of creating a value. However, if you catch an exception there, how do you propagate it back to the main code?
This is usually done by giving the callback passed to the function a slightly different signature, allowing it to pass back either a value or an exception. Next, see how to handle both of the paths in which the function can end.
Handling Happy and Unhappy Paths
When programming, you usually have something called a happy path. It’s the course of action your program takes when everything goes smoothly. Opposite to that, you have an unhappy path, which is when things go wrong.
In the example above, if things went wrong, you wouldn’t have any way of handling that case from within the callback. You’d either have to wrap the entire function call in a try/catch
block, or catch exceptions from within the thread
function. The former is a bit ugly, as you’d really want to handle all possible paths in the same place. The latter isn’t much better, as all you can pass to the callback is a value, so you’d have to pass either a nullable value or an empty object, and go from there.
To make this functionality available and a bit cleaner, programmers define the callback as a two-parameter lambda, with the first parameter being the value, if there is one, and the second being the error, if one occurred.
The function and its callback would then have the following signature, so replace getUserFromNetworkCallback
in Main.kt:
fun getUserFromNetworkCallback(
  userId: String,
  onUserResponse: (User?, Throwable?) -> Unit) {
  thread {
    try {
      Thread.sleep(1000)
      val user = User(userId, "Filip")
      onUserResponse(user, null)
    } catch (error: Throwable) {
      onUserResponse(null, error)
    }
  }
}
The callback can now accept either a value or an error. Whichever path is taken, its parameter should be valid and non-null, while the remaining parameter will be null, showing you that the path it governs didn’t happen.
Change main
to the following:
fun main() {
  getUserFromNetworkCallback("101") { user, error ->
    user?.run(::println)
    error?.printStackTrace()
  }
}
If there is a non-null
user value, you can print it out or do something else with it.
When looking at the bytecode by pressing the Decompile button in the bytecode decompiler window, you should see the following:
public static final void getUserFromNetworkCallback(
    @NotNull final String userId,
    @NotNull final Function2 onUserResponse) {
  Intrinsics.checkParameterIsNotNull(userId, "userId");
  Intrinsics.checkParameterIsNotNull(onUserResponse, "onUserResponse");
  ThreadsKt.thread$default(
    false,
    false,
    (ClassLoader)null,
    (String)null,
    0,
    (Function0)(new Function0() {
      // $FF: synthetic method
      // $FF: bridge method
      public Object invoke() {
        this.invoke();
        return Unit.INSTANCE;
      }

      public final void invoke() {
        try {
          Thread.sleep(1000L);
          User user = new User(userId, "Filip");
          onUserResponse.invoke(user, (Object)null);
        } catch (Throwable var2) {
          onUserResponse.invoke((Object)null, var2);
        }
      }
    }), 31, (Object)null);
}
The code hasn’t changed much; it just wraps everything in a try/catch and passes either the pair of (value, null)
or (null, error)
, back to the user. On the other hand, if there is an error, you can print its stack trace or check the error type and so on. This approach is much better than the previous ones, but there’s still one problem with it. It relies on callbacks, so if you needed three or four different requests and values, you’d have to build that dreaded “Callback Hell” staircase. Additionally, there’s the overhead of allocating a new Thread
, every time you call a function like this.
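To make that “staircase” concrete, here’s a rough sketch of what three dependent lookups could look like with this callback style. The derived IDs are made up purely for illustration; the point is the shape of the code, and that each level allocates yet another thread:
// A sketch of the "staircase": three dependent lookups, each nested inside
// the previous callback and each allocating its own Thread. The IDs are made
// up; the point is the shape of the code, not the data.
getUserFromNetworkCallback("101") { user, error ->
  if (user != null) {
    // second request depends on the first result
    getUserFromNetworkCallback("${user.id}-manager") { manager, managerError ->
      if (manager != null) {
        // third request depends on the second result
        getUserFromNetworkCallback("${manager.id}-manager") { topManager, _ ->
          println(topManager)
        }
      } else {
        managerError?.printStackTrace()
      }
    }
  } else {
    error?.printStackTrace()
  }
}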
Analyzing a Suspendable Function
The caveats found in the callback examples can be remedied with coroutines. Review the changes you need to make to the example above, to improve it even further:
- Remove the callback and implement the example with coroutines.
- Provide efficient error handling.
- Remove the new Thread allocation overhead.
To implement these improvements, you’ll learn another function from the Coroutines API — suspendCoroutine
. This function allows you to manually create a coroutine and handle its control state and flow. This is unlike the launch
block, which defines how a coroutine is built but takes care of everything behind the scenes.
But, before we venture into suspendCoroutine
, analyze what happens when you just add the suspend
modifier to any existing function. Add another function to the Main.kt file, with the following signature:
suspend fun getUserSuspend(userId: String): User {
  delay(1000)
  return User(userId, "Filip")
}
Also make sure to import delay
, like so:
import kotlinx.coroutines.delay
This function is very similar to the first example, except you added the suspend
modifier, and you don’t sleep the thread but call delay
- a suspendable function which suspends coroutines for a given amount of time. Given these changes, you’re probably thinking the difference in bytecode cannot be that big, right?
Well, the bytecode, which you can get using the Decompile button in the Kotlin Bytecode window, is the following:
@Nullable
public static final Object getUserSuspend(
    @NotNull String userId,
    @NotNull Continuation var1) {
  Object $continuation;
  label28: {
    if (var1 instanceof <undefinedtype>) {
      $continuation = (<undefinedtype>)var1;
      if ((((<undefinedtype>)$continuation).label & Integer.MIN_VALUE) != 0) {
        ((<undefinedtype>)$continuation).label -= Integer.MIN_VALUE;
        break label28;
      }
    }

    $continuation = new ContinuationImpl(var1) {
      // $FF: synthetic field
      Object result;
      int label;
      Object L$0;

      @Nullable
      public final Object invokeSuspend(@NotNull Object result) {
        this.result = result;
        this.label |= Integer.MIN_VALUE;
        return MainKt.getUserSuspend((String)null, this);
      }
    };
  }

  Object var2 = ((<undefinedtype>)$continuation).result;
  Object var4 = IntrinsicsKt.getCOROUTINE_SUSPENDED();
  switch(((<undefinedtype>)$continuation).label) {
    case 0:
      if (var2 instanceof Failure) {
        throw ((Failure)var2).exception;
      }

      ((<undefinedtype>)$continuation).L$0 = userId;
      ((<undefinedtype>)$continuation).label = 1;
      if (DelayKt.delay(1000L, (Continuation)$continuation) == var4) {
        return var4;
      }
      break;
    case 1:
      userId = (String)((<undefinedtype>)$continuation).L$0;
      if (var2 instanceof Failure) {
        throw ((Failure)var2).exception;
      }
      break;
    default:
      throw new IllegalStateException("call to 'resume' before 'invoke' with coroutine");
  }

  return new User(userId, "Filip");
}
This massive block of code is a huge difference from the previous examples, and it’s a behemoth compared to the very first example you’ve seen. Go over the bits one step at a time, to get a sense of what’s happening here:
- One of the first things you’ll notice is the extra parameter to the function — the Continuation. It forms the entire foundation of coroutines, and it’s the most important thing by which suspendable functions differ from regular ones. Continuations allow functions to work in suspended mode. They allow the system to go back to the originating call site of a function, after it has suspended it. You could say that Continuations are just callbacks for the system or the program currently running, and that by using continuations, the system knows how to navigate the execution of functions and the call stack.
- That being said, all functions actually have a hidden, internal Continuation they are tied to. This Continuation is not tied to the Kotlin Coroutines API; it’s in fact tied to the operating system you’re using and has a different internal implementation based on that. The system uses it to navigate around the call stack and the code in general. However, suspendable functions have an additional instance which they use, so that they can be suspended and the program can continue with execution, finally using the second Continuation to navigate back to the suspendable function call site or receive its result.
- The second Continuation we’re talking about here is the one visible as a function parameter. As mentioned, the system implements a hidden, internal Continuation that’s not visible in decompiled code, as it’s implemented at a very low level.
- The rest of the code first checks which continuation instance we’re in, since each suspendable function can create multiple Continuations. Each continuation describes one flow the function can take. For example, if you call delay(1000) in a suspendable function, you’re actually creating another instance of execution, which finishes in one second and returns back to the originating point — the line at which delay was called.
- The code combines var1’s label and Int.MIN_VALUE using a bitwise & (AND) operator and breaks out of label28 if the result is non-zero. This means the initial suspend call happened and the code can proceed with the rest of the operations. Otherwise, it calls getUserSuspend from within and uses the bitwise | (OR) operator with the label and Int.MIN_VALUE as a marker.
- Once that is finished, it checks the label of the currently active continuation. If the label is zero, it means it hasn’t finished with the first suspend call — the delay. In that case it just returns the result from that execution, which is the delayed function call. It also increases the label to one, to note that it’s past delay and should continue on with the code. In the same block of code, the system uses the previous Continuation to create a new, wrapped instance that will return back to this function, with a different label.
- Finally, if the label is one, which is the largest index in the continuation-stack, so to speak, it means the function has resumed after delay and that it’s ready to serve you the value — the User. If anything went wrong up until that point, the system throws an exception.
- In this final instance of the Continuation and this function call, calling return new User(userId, "Filip"); will propagate the User value all the way back to the originating function call, which happened in Main.kt.
There’s another, default, case, which just throws an exception if the system tries to resume
with a continuation or execution flow, but hasn’t actually invoked the function call. This can sometimes happen when a child Job
finishes after its parent. It’s a default fallback mechanism for cases which are extremely rare. If you use your coroutines carefully, the way they’re supposed to be used, parent Jobs should always wait for their children and this shouldn’t happen.
Briefly, the system uses continuations as small state machines and internal callbacks, so that it knows how to navigate through the code, which execution flows exist, and at which points it should suspend and resume later on. The state is described using the label
and it can have as many states as there are suspension points in the function.
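To make the label idea concrete, here’s a small, hypothetical suspendable function, not part of the starter project, with two suspension points. For code like this, the compiler generates a state machine with one label per suspension point, plus the starting state:
import kotlinx.coroutines.delay

// Hypothetical example: two suspension points mean three states in the
// generated state machine: label 0 (start), label 1 (resumed after the
// first delay) and label 2 (resumed after the second delay). Locals like
// `user`, which are still needed after a suspension, are stored on the
// continuation between states, just like L$0 held userId above.
suspend fun getUserSlowly(userId: String): User {
  delay(500)                        // suspension point #1
  val user = User(userId, "Filip")
  delay(500)                        // suspension point #2
  return user
}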
To call the newly created function, you can use the following snippet of code:
fun main() {
  GlobalScope.launch {
    val user = getUserSuspend("101")
    println(user)
  }
  Thread.sleep(1500)
}
Also make sure to import GlobalScope
, like so:
import kotlinx.coroutines.GlobalScope
The function call is just like the first example. The difference is that it’s suspendable, so you can push it into a coroutine, offloading the main thread. You also rely on the internal threads from the Coroutines API, so there’s no additional overhead. The code is sequential, even though it could be asynchronous. And you can use try/catch
blocks, at the call site, even though the value could be produced asynchronously. All points from the previous example have been addressed!
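For example, the call from main could be guarded like this. A minimal sketch, reusing getUserSuspend and the imports from above, with both the value and any error handled in one place:
fun main() {
  GlobalScope.launch {
    try {
      val user = getUserSuspend("101") // suspends instead of blocking
      println(user)
    } catch (error: Throwable) {
      // anything thrown while producing the value ends up here
      error.printStackTrace()
    }
  }
  Thread.sleep(1500) // keep the process alive for this command-line sample
}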
Changing Code to Suspendable
Another question is when you should migrate existing code to suspendable functions and coroutines. This is a relatively subjective question, but there are still some objective guidelines you can follow to determine if you’re better off with coroutines or standard mechanisms.
Generally speaking, if your code is filled with complex threading and keeps allocating new threads to do the work you need, instead of using a fixed pool of threads, you should migrate to coroutines.
The performance benefits are visible immediately, as the Coroutines API already has predefined threading mechanisms which make it easy to switch between threads and distribute multiple pieces of work among them.
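As a rough sketch of what that buys you, the hypothetical function below fans several lookups out over the shared pool behind Dispatchers.Default, reusing getUserSuspend from earlier, instead of allocating a fresh Thread per request:
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.launch

// Hypothetical helper: each coroutine is distributed over the shared
// Dispatchers.Default pool rather than a newly allocated Thread.
fun printUsersInBackground(userIds: List<String>) {
  userIds.forEach { id ->
    GlobalScope.launch(Dispatchers.Default) {
      println(getUserSuspend(id))
    }
  }
}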
This often coincides with the first reason to switch: if you’re creating new threads for asynchronous or long-running operations, you’re probably also abusing callbacks heavily, because the easiest way to communicate between threads is through callbacks. And if you’re using callbacks, you probably have problems with code style, readability and the cognitive load needed to understand the business logic behind the functions. In that case, you should try to migrate your code to coroutines as well.
The problem comes when there’s some API which isn’t yours to change. In those cases you cannot change the source code. Let’s say you have the following code, but it’s coming from an external library:
fun readFile(path: String, onReady: (File) -> Unit) {
  Thread.sleep(1000)
  // some heavy operation
  onReady(File(path))
}
This function forces you to use a callback, even though you might have a better way to handle the long-running or asynchronous operation. But you could easily wrap this function with a suspendCoroutine()
:
suspend fun readFileSuspend(path: String): File =
  suspendCoroutine {
    readFile(path) { file ->
      it.resume(file)
    }
  }
This code is perfectly fine, because if it manages to read a file, it will pass it to the coroutine as a result. If something goes wrong, it will throw an exception, which you can catch at the call site. Having the ability to completely wrap asynchronous operations with coroutines is extremely powerful. But if your functions rely on callbacks to constantly produce values, like when subscribing to sockets, then coroutines such as these don’t really make sense. You’re better off implementing such mechanisms with the Flow API, which you’ll learn about in Chapter 11, “Beginning With Coroutine Flow”.
Elaborating Continuations
Having first-class continuations is the key concept which differentiates a standard function from a suspendable one. But what is a continuation after all? Every time a program calls a function, it is added to the program’s call-stack. This is a stack of all the functions, in the order they were called, which are currently held in memory and haven’t yet finished. Continuations manipulate this execution flow and in turn help handle the call-stack.
You’ve already learned that a Continuation
is a callback, but implemented at a very low system level. A more precise explanation would be that it’s an abstract wrapper around the program’s control state. It has the means to control how and when the program will execute further and what its result will be — an exception or a value.
Once a function finishes, the program takes it off the stack, and proceeds with the next function. The trick is how the system knows where to return after each function is executed. This information is held within the aforementioned Continuation
.
Each continuation holds a little information about the context in which the function was called, like the local variables, the parameters the function was passed, the thread it was called on and so on. Using that information, the system can simply rely on the continuation to tell it where it needs to be when a function ends.
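For reference, the Continuation type you’ll be working with is tiny. At the time of writing, the interface in the kotlin.coroutines package looks roughly like this, with resume and resumeWithException provided as extension functions on top of resumeWith:
// Simplified reproduction of kotlin.coroutines.Continuation: a context that
// describes where the caller lives, plus a single way to resume it, either
// with a successful value or with a failure wrapped in a Result.
public interface Continuation<in T> {
  public val context: CoroutineContext
  public fun resumeWith(result: Result<T>)
}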
Try and see what the lifecycle of a function and a Continuation
is, from the function call, to the end.
Living in the Stack
When a program first starts, its call-stack has only one entry — the initial function, usually called main
. This is because within it, no other functions have been called yet. The initial function is important, because when the program reaches its end, it calls back to the continuation of main
, which completes the program, and notifies the system to release it from memory.
As the program lives, it calls other functions, adding them to the stack.
So if you had this code fun main() {}
, the lifecycle of the program-level continuation is contained within the brackets of the main
function. But when another function is called, the first thing the system does is create a new Continuation
for the new function. It adds some information to the new continuation, like which function is the parent and what its Continuation object is — in this case, main. It passes along which line of code the function was called at, with which arguments and what its return type should be.
Examine what happens with the following code snippet:
fun main() {
  val numbers = listOf(1, 2, 5)
}
- The system creates a continuation, which will live within listOf.
- Initially, it knows that it’s been called at the first line of main, so it can return to the appropriate position in code when finished.
- Next, it knows that its parent is main. This gives listOf a way to finish the entire program, propagating calls all the way up to the initial Continuation. For example, this can happen when an exception occurs. Finally, it knows that the parameter passed to listOf is a variable-argument array, with the values 1, 2, 5, and that at the end of the function, we should receive back a List<Int>.
- With all of this information, it navigates the function execution and lifecycle from the calling point to the return statement.
At a deeper level, it’s just like having a local variable declared, calling an initializer function with a pointer to that variable, and setting that value elsewhere — in listOf
and then using a goto
statement to return to a line after the initializer call, having prepared the variable for usage.
Another analogy which could be used to explain continuations is video games. In most video games, you have things which are called checkpoints. When you go on an adventure and pursue a quest, this is like calling a function. You have to go some distance and complete a smaller set of tasks.
When you’re done, you can go back to your checkpoint and finish your quest. On the other hand, something bad might happen — you fail the mission in the game, which is similar to throwing an exception in computing. You always have the ability to reload the game and restart from the checkpoint. You can achieve similar behavior if you wrap a function in a try/catch
block, as you can effectively return back to the checkpoint and start over.
Handling the Continuation
In the last version of getUser
, you’ll use suspendCoroutine
from the Coroutines API. It’s a top-level function which allows you to create coroutines, just like launch
, but specifically for returning values, rather than launching work. Another distinct thing about suspendCoroutine
is that it takes in a lambda as an argument, which is of the type block: (Continuation<T>) -> Unit
. This means that you can handle a Continuation
as a first-class citizen, calling functions on the object as you please. This allows for manual control-state and control-flow manipulation.
The functions available on Continuation
s are resume
, resumeWith
and resumeWithException
. You also have access to the CoroutineContext
, through the continuation.context property
. You’ll learn about contexts later on in “Chapter 6: Coroutine Context”.
Analyzing the Continuation
more, resume
passes down a successful value of type T
, whichever type you’re trying to return from a coroutine. You use this when you deem the conditions in the coroutine valid, and want to go back to the rest of the code. resumeWithException
takes in a Throwable
, in case something goes awry. This allows you to finish the coroutine with an error, which you can later catch and handle.
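As a sketch, here’s how you could wrap the two-parameter callback from earlier in the chapter, assuming the (User?, Throwable?) version of getUserFromNetworkCallback is still in Main.kt. The happy path resumes with the value, the unhappy one with the exception:
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlin.coroutines.suspendCoroutine

suspend fun getUserFromNetworkSuspend(userId: String): User =
  suspendCoroutine { continuation ->
    getUserFromNetworkCallback(userId) { user, error ->
      when {
        user != null -> continuation.resume(user)                 // happy path
        error != null -> continuation.resumeWithException(error)  // unhappy path
        else -> continuation.resumeWithException(
          IllegalStateException("Received neither a value nor an error")
        )
      }
    }
  }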
This gives the amazing ability to return values from functions, which might be asynchronous, without knowing what’s behind them. Just like an API should be. You’re probably thinking: But what if the function doesn’t end?
In that case, once again, you’ll be waiting for a value, which isn’t coming, resulting yet again in another halting problem, where your code is suspended infinitely.
To remedy this, it’s best to be aggressive with continuations. No matter what, try to always produce a result back, even if it’s only an exception. At least in that case your function will end, and you will have something to handle. Conveniently enough, the Continuation
has a function to do just that. It’s called resumeWith
, and it takes in the aforementioned Result
monad. The Result
can only be in one of two states at any given time: either a Success
, holding the value you need, or a Failure
, holding the exception.
It also comes with some utility functions, like runCatching, which receives a lambda and tries to run it to produce the Success
case with some value. In case something goes wrong it catches the exception with the help of a try/catch
block and returns a Failure
result in the end. After the continuation receives the Result
, it unwraps it and you get the value or the exception, so that you can handle it yourself.
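As a quick, self-contained illustration of the Result type on its own, separate from the user example, you could write something like this inside a function:
// runCatching wraps the outcome in a Result: a Success holding the parsed
// number, or a Failure holding the NumberFormatException.
val result: Result<Int> = runCatching { "101".toInt() }

result.fold(
  onSuccess = { value -> println("Parsed: $value") },
  onFailure = { error -> error.printStackTrace() }
)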
Whenever you’re using suspendCoroutine
, or any other way of resuming values with continuations, it’s strongly recommended to enforce this approach so you don’t end up with coroutines that never finish.
Creating Your Own Suspendable API
One of the things we mentioned JetBrains had in mind for the Coroutines API was extensibility. You’ve seen how you can turn your own functions into suspendable ones, but another thing you can do is create an API-like set of utilities which hide the thread and context switching ceremony.
We’ve prepared some examples for you in Api.kt. Open it up, and you should see a few functions. Let’s go over them one by one.
The first one is a convenience method, which uses suspendCoroutine
, and the Result
’s runCatching
to try and process a value for you.
suspend fun <T : Any> getValue(provider: () -> T): T =
  suspendCoroutine { continuation ->
    continuation.resumeWith(Result.runCatching { provider() })
  }
If you were to call this function somewhere in your code, it would look something like this:
GlobalScope.launch {
  val user = getValue { getUserFromNetwork("101") }
  println(user)
}
This lets you abstract away all of the functions which fetch data through the network, file reading or database lookups, and push them to a background thread. The main thread then only has to worry about rendering the data, while the rest of the code fetches it.
The next two examples are extremely simple, and are useful for thread-switching:
fun executeBackground(action: suspend () -> Unit) {
  GlobalScope.launch { action() }
}

fun executeMain(action: suspend () -> Unit) {
  GlobalScope.launch(context = Dispatchers.Main) { action() }
}
The first one takes in an action
lambda block, and runs it in the background, using the default launch
context. The second one also takes in the action
block, but runs it using the Dispatchers.Main
context, so you can easily switch to the main thread, without knowing the details of the implementation.
Using them, you’d have code similar to this:
executeBackground {
  val user = getValue { getUserFromNetwork("101") }
  executeMain { println(user) }
}
The naming could be a bit better, but you get the idea behind this. Now you have the same behavior as with GlobalScope.launch
blocks, but you don’t rely upon knowing which scope and which functions are used behind the scenes.
This is great when you’re building the base business logic layer, as you could provide both the main and background contexts, and scopes in which you’d run the functions. And in the concrete implementations, or subclasses of the base presenter, view model or controller, you’d simply call these functions, and let the core part of the layer worry about threading.
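As a rough sketch of that idea, with made-up class names that aren’t part of the starter project, a base presenter could expose the two helpers so its subclasses never touch coroutine builders directly. It assumes the getUserFromNetwork function used in the earlier snippets:
// Hypothetical base class: subclasses call runInBackground/runOnMain and
// never see GlobalScope, launch or Dispatchers directly.
abstract class BasePresenter {
  protected fun runInBackground(action: suspend () -> Unit) = executeBackground(action)
  protected fun runOnMain(action: suspend () -> Unit) = executeMain(action)
}

class UserPresenter : BasePresenter() {
  fun loadUser(userId: String) = runInBackground {
    val user = getValue { getUserFromNetwork(userId) }
    runOnMain { println(user) }
  }
}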
Play around with these more and build even more utility functions on top of them, according to your needs.
Returning Values Using withContext
Now that you have a way to switch between threads and Dispatcher
s, using a nice abstraction API, it’s time to learn about another amazing coroutine builder, called withContext
.
It allows you to return a value from another CoroutineContext
, through the means of suspension. As mentioned before, you’ll learn more about the context
later in the book, but this is a good point to learn about another useful suspending function.
One of the most important things to understand about suspending functions and coroutines is that they don’t block the running thread. This means that whenever you’re in a coroutine and you call a suspend function, it will pause the coroutine, rather than block the thread. We’ve gone over this a few times, but let’s see how it works in practice.
Replace the contents of Main.kt with the following code:
fun main() {
  GlobalScope.launch(Dispatchers.Main) { // 1
    val user = getUserSuspend("101") // 2
    println(user) // 4
  }
}

// 3
suspend fun getUserSuspend(userId: String): User = withContext(Dispatchers.Default) {
  delay(1000)
  User(userId, "Filip")
}
This snippet is very similar to the previous approach you’ve used, but it has one very important distinction. The way you handle threading and the Dispatcher
s is reversed!
Instead of launching a coroutine in the background and then pushing the data to the main thread, you do the opposite — you launch a coroutine on the main thread and push the data fetch to the background. This is much more intuitive, but also showcases how easy it is to bridge the main and background threads using withContext
.
The snippet above runs the following four steps:
- You launch a coroutine on the main thread, scoping the work of the coroutine to be in the same thread as main.
- By calling a suspend function, you release the main thread for other work until the data is ready.
- Within getUserSuspend you fetch the user using withContext(Dispatchers.Default), ensuring the operation runs in the background, on a different thread.
- You print the user, once the data is ready.
This really showcases how easy it is to have nice and sequential code with coroutines that still runs smoothly and doesn’t cause any UI freezes.
Key Points
- Having callbacks as a means of providing result values can be pretty ugly and cognitively heavy.
- Coroutines and suspendable functions remove the need for callbacks and excessive thread allocation.
- What separates a regular function from a suspendable one is the first-class continuation support, which the Coroutines API uses internally.
- Continuations are already present in the system, and are used to handle function lifecycles — returning values, jumping to statements in code, and updating the call stack.
- You can think of continuations as low-level callbacks, which the system calls when it needs to navigate through the call stack.
- Continuations always persist a batch of information about the context in which the function is called — the parameters passed, the call site and the return type.
- There are three main ways in which a continuation can resolve: on the happy path, returning the value the function is expected to return; by throwing an exception in case something goes wrong; or by blocking infinitely because of flawed business logic.
- Utilizing the suspend modifier, and functions like launch and suspendCoroutine, you can create your own API, which abstracts away the threading used for executing code.
- withContext is a great way to bridge between the main and background threads while still writing clean and sequential code.
Where to Go From Here?
In this chapter, you’ve learned a lot about the foundation of coroutines. Through an extensive overview of the differences between suspendable and non-suspendable functions, you’ve seen how suspendable functions utilize Continuation
s to navigate around and return values as results.
The next chapter, “Chapter 5: Async/Await”, relies heavily on the usage of functions which leverage continuations and suspendable functions to return values from code which may or may not be asynchronous and long-running. Similar to what you did using withContext
. So read on to learn more about how you can process values from functions which used to require a ton of callbacks!