Unfortunately, for all the benefits provided by dispatch queues, they’re not a panacea for all performance issues. There are three well-known problems that you can run into when implementing concurrency in your app if you’re not careful:
Race conditions
Deadlock
Priority inversion
Race conditions
Threads within the same process, and that includes your app itself, share the same address space, which means they all read from and write to the same shared resources. If you aren’t careful, you can run into race conditions, in which multiple threads try to write to the same variable at the same time.
Consider the example where you have two threads executing, and they’re both trying to update your object’s count variable. Reads and writes are separate tasks that the computer cannot execute as a single operation. Computers work on clock cycles in which each tick of the clock allows a single operation to execute.
Note: Do not confuse a computer’s clock cycle with the clock on your watch. An iPhone XS has a 2.49 GHz processor, meaning it can perform 2,490,000,000 clock cycles per second!
Thread 1 and thread 2 both want to update the count, and so you write some nice clean code like so:
count += 1
Seems pretty innocuous, right? Break that statement down into its component parts, add a bit of hand-waving, and what you end up with is something like this:
Load the value of the variable count from memory into a register.
Increment the value of count by one in the register.
Write the newly updated count back to memory.
Here’s how those steps can interleave, clock cycle by clock cycle:
Thread 1 kicked off a clock cycle before thread 2 and read the value 1 from count.
On the second clock cycle, thread 1 updates the value in its register to 2, while thread 2 reads the value 1 from count.
On the third clock cycle, thread 1 now writes the value 2 back to the count variable. However, thread 2 is just now updating its own copy from 1 to 2.
On the fourth clock cycle, thread 2 now also writes the value 2 to count… except you expected to see the value 3 because two separate threads both updated the value.
This type of race condition leads to incredibly complicated debugging due to the non-deterministic nature of these scenarios.
If thread 1 had started just two clock cycles earlier you’d have the value 3 as expected, but don’t forget how many of these clock cycles happen per second.
You might run the program 20 times and get the correct result, then deploy it and start getting bug reports.
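If you want to watch the race happen, here’s a minimal sketch you could run in a playground. The Counter type and the loop counts are purely illustrative, and newer strict-concurrency checking would rightly reject this code:
import Foundation

// A deliberately unsafe counter, so the race is observable.
final class Counter {
  var count = 0
}

let counter = Counter()
let group = DispatchGroup()

// Two concurrent tasks each perform 1,000 unsynchronized
// read-increment-write cycles against the same variable.
for _ in 1...2 {
  DispatchQueue.global().async(group: group) {
    for _ in 1...1_000 {
      counter.count += 1 // load, add, store: three separate steps
    }
  }
}

group.wait()
print(counter.count) // frequently prints less than the expected 2000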
You can usually solve race conditions with a serial queue, as long as you know they are happening. If your program has a variable that needs to be accessed concurrently, you can wrap the reads and writes with a private queue, like this:
private let threadSafeCountQueue = DispatchQueue(label: "...")
private var _count = 0
public var count: Int {
get {
return threadSafeCountQueue.sync {
_count
}
}
set {
threadSafeCountQueue.sync {
_count = newValue
}
}
}
Because you’ve not stated otherwise, threadSafeCountQueue is a serial queue. Remember, that means only a single task can run at a time, so you’re ensuring that only one thread at a time can access the variable. If you’re doing a simple read/write like the above, this is the best solution.
Note: You can implement the same private queue sync for lazy variables, which might be accessed from multiple threads. If you don’t, you could end up with the lazy variable’s initializer running twice. Much like the variable assignment from before, two threads could access the same lazy variable at nearly the same time: the second thread sees the variable as not yet initialized, even though the first thread’s access is already in the middle of creating it. A classic race condition.
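A sketch of that idea, using a hypothetical Stats type, looks just like the counter from before:
import Foundation

final class Stats {
  private let serialQueue = DispatchQueue(label: "...")

  // Without the queue, two threads could both see this as uninitialized
  // and each trigger the initializer.
  private lazy var formatter = NumberFormatter()

  // Funnel every access through the serial queue so the lazy
  // initializer can only ever run once.
  var sharedFormatter: NumberFormatter {
    return serialQueue.sync { formatter }
  }
}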
Thread barrier
Sometimes, your shared resource requires more complex logic in its getters and setters than a simple variable modification. You’ll frequently see questions related to this online, and often they come with solutions related to locks and semaphores. Locking is very hard to implement properly. Instead, you can use Apple’s dispatch barrier solution from GCD.
If you create a concurrent queue, you can process as many read-type tasks as you want, since they can all run at the same time. When the variable needs to be written to, you need to lock down the queue so that everything already submitted completes, but no new submissions run until the update completes.
You implement a dispatch barrier like so:
private let threadSafeCountQueue = DispatchQueue(label: "...",
attributes: .concurrent)
private var _count = 0
public var count: Int {
get {
return threadSafeCountQueue.sync {
return _count
}
}
set {
threadSafeCountQueue.async(flags: .barrier) { [unowned self] in
self._count = newValue
}
}
}
Notice how you’re now specifying that you want a concurrent queue and that the setter should be implemented with a barrier. The barrier task won’t occur until all of the previous reads have completed.
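To convince yourself of that behavior, you could exercise the property like this (a sketch, assuming the count property above is in scope):
// Reads run concurrently with one another on the concurrent queue.
DispatchQueue.concurrentPerform(iterations: 10) { _ in
  _ = count
}

// The barrier write waits for any in-flight reads, and any read
// submitted after it waits for the write, so this prints 1.
count = 1
print(count)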
Deadlock
Imagine you’re driving down a two-lane road on a bright sunny day and you arrive at your destination. Your destination is on the other side of the road, so you turn on the car’s turn signal. You wait as tons of traffic drives in the other direction.
While waiting, more cars pile up behind you. You then notice a car coming the other way stop and put on its turn signal as well; it wants to turn across your lane, but it’s blocked by your car.
Unfortunately, there are more cars behind the oncoming car, and so the other lane backs up as well, blocking your ability to turn into your destination and clear the lane.
You’ve now reached deadlock, as you are both waiting on another task that can never complete. Neither of you can move, as both are blocking the entrance to your destinations.
Deadlock is a pretty rare occurrence in Swift programming, unless you are using semaphores or other explicit locking mechanisms. Accidentally calling sync against the current dispatch queue is the most common occurrence of this that you’ll run into.
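That mistake is easy to reproduce. A minimal sketch:
let deadlockQueue = DispatchQueue(label: "...")
deadlockQueue.async {
  // The outer block can't finish until the inner block runs, but the
  // inner block can't start until the outer block finishes. Deadlock.
  deadlockQueue.sync {
    print("This never prints")
  }
}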
If you’re using semaphores to control access to multiple resources, be sure that you ask for the resources in the same order. If thread 1 requests a hammer and then a saw, whereas thread 2 requests a saw and then a hammer, you can deadlock. Thread 1 requests and receives a hammer at the same time that thread 2 requests and receives a saw. Then thread 1 asks for a saw (without releasing the hammer), but thread 2 owns the resource, so thread 1 must wait. Thread 2 asks for the hammer, but thread 1 still owns the resource, so thread 2 must wait for the hammer to become available. Both threads are now in deadlock: Neither can progress until its requested resource is freed, which will never happen.
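Here’s a sketch of that hammer-and-saw scenario using two semaphores; making both blocks acquire the tools in the same order is what would avoid the deadlock:
import Foundation

let hammer = DispatchSemaphore(value: 1)
let saw = DispatchSemaphore(value: 1)

DispatchQueue.global().async {
  hammer.wait()                      // Thread 1 takes the hammer first.
  Thread.sleep(forTimeInterval: 0.1) // Give thread 2 time to take the saw.
  saw.wait()                         // Blocks forever: thread 2 holds the saw.
  print("Thread 1 has both tools")
  saw.signal()
  hammer.signal()
}

DispatchQueue.global().async {
  saw.wait()                         // Thread 2 takes the saw first.
  Thread.sleep(forTimeInterval: 0.1)
  hammer.wait()                      // Blocks forever: thread 1 holds the hammer.
  print("Thread 2 has both tools")
  hammer.signal()
  saw.signal()
}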
Priority inversion
Technically speaking, priority inversion occurs when a queue with a lower quality of service, or QoS, is given higher system priority than a queue with a higher QoS. If you’ve been playing around with submitting tasks to queues, you’ve probably noticed that async can take a qos parameter.
Back in Chapter 3, “Queues & Threads,” it was mentioned that the QoS of a queue is able to change based on the work submitted to it. Normally, when you submit work to a queue, it takes on the priority of the queue itself. If you need it to, however, you can specify that a specific task should have a higher or lower priority than normal.
If you’re using a .userInitiated queue and a .utility queue, and you submit multiple tasks to the latter queue with a .userInteractive quality of service (a higher priority than .userInitiated), you can end up in a situation in which the latter queue is assigned a higher priority by the operating system. Suddenly, all the tasks in that queue, most of which are really .utility quality of service, will end up running before the tasks from the .userInitiated queue. This is simple to avoid: If you need a higher quality of service, use a different queue!
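For example, this kind of submission is what triggers the boost described above (a sketch; the label is a placeholder):
let utilityQueue = DispatchQueue(label: "...", qos: .utility)

// A single high-QoS task raises the priority of the whole queue until
// it completes, dragging the queue's .utility work along with it.
utilityQueue.async(qos: .userInteractive) {
  print("Running at a higher priority than the queue itself")
}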
The more common situation where priority inversion occurs is when a higher quality of service queue shares a resource with a lower quality of service queue. When the lower queue gets a lock on the object, the higher queue now has to wait. Until the lock is released, the high-priority queue is effectively stuck doing nothing while low-priority tasks run.
To see priority inversion in practice, open up the playground called PriorityInversion.playground from the starter project folder in this chapter’s materials.
In the code, you’ll see three queues with differing QoS values, as well as a semaphore:
let high = DispatchQueue.global(qos: .userInteractive)
let medium = DispatchQueue.global(qos: .userInitiated)
let low = DispatchQueue.global(qos: .background)
let semaphore = DispatchSemaphore(value: 1)
Then, various tasks are started on all queues:
high.async {
// Wait 2 seconds just to be sure all the other tasks have enqueued
Thread.sleep(forTimeInterval: 2)
semaphore.wait()
defer { semaphore.signal() }
print("High priority task is now running")
}
for i in 1 ... 10 {
medium.async {
let waitTime = Double(exactly: arc4random_uniform(7))!
print("Running medium task \(i)")
Thread.sleep(forTimeInterval: waitTime)
}
}
low.async {
semaphore.wait()
defer { semaphore.signal() }
print("Running long, lowest priority task")
Thread.sleep(forTimeInterval: 5)
}
Run the playground and you’ll see output similar to the following:
Running medium task 7
Running medium task 6
Running medium task 1
Running medium task 4
Running medium task 2
Running medium task 8
Running medium task 5
Running medium task 3
Running medium task 9
Running medium task 10
Running long, lowest priority task
High priority task is now running
The end result is always the same. The high-priority task is always run after the medium and low-priority tasks due to priority inversion.
Where to go from here?
Throughout this chapter, you explored some common ways in which concurrent code can go wrong. While deadlock and priority inversion are much less common on iOS than other platforms, race conditions are definitely a concern you should be ready for.