Instruction 02


New Names

Years ago, all of Apple's frameworks used two-letter prefixes to help with namespacing and to avoid collisions. For example, many frameworks might have wanted a type called Image; without the prefix, the compiler couldn't know whether code referred to a CGImage, a UIImage, or a CIImage. With the updates in iOS 18, the Vision framework drops the VN prefix from its types, because Swift uses modules as namespaces. As Apple mentioned a few times during WWDC 2024, there will be a transition period when you can use either name, but new code should drop the VN. For example, VNImageRequestHandler corresponds to the new ImageRequestHandler. If your app supports versions of iOS before 18, you'll have to keep using the VN-prefixed types for now, and you'll need to decide how you want to migrate going forward.

let request: VNRequest

if #available(iOS 18, *) {
  request = RecognizeAnimalsRequest()
} else {
  request = VNRecognizeAnimalsRequest()
}

try? handler.perform([request])
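If your deployment target predates iOS 18, one way to keep availability checks out of call sites is to wrap the two code paths in a small helper. The sketch below is only illustrative: the function name recognizedAnimalLabels(in:) is made up, and it assumes the new RecognizeAnimalsRequest behaves as described above, returning typed observations from an async perform(on:) call.

```swift
import Vision
import CoreGraphics

// Hypothetical helper: hides the availability branching behind one function.
func recognizedAnimalLabels(in image: CGImage) async throws -> [String] {
  if #available(iOS 18, *) {
    // New API (assumed): the request performs itself and returns observations.
    let request = RecognizeAnimalsRequest()
    let observations = try await request.perform(on: image)
    return observations.flatMap { $0.labels.map(\.identifier) }
  } else {
    // Legacy API: a handler performs the request synchronously.
    let request = VNRecognizeAnimalsRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return (request.results ?? []).flatMap { $0.labels.map(\.identifier) }
  }
}
```

Callers then just write `let labels = try await recognizedAnimalLabels(in: image)` and never see the version check.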

Concurrency With Async/Await

Using the async/await pattern instead of completion handlers makes code more readable, and Apple has been gradually introducing it across all the frameworks, replacing block syntax. One difficulty when reasoning about Vision code is that completion blocks can grow long and complex, and each block introduces its own parameters.

let request = VNRecognizeTextRequest { request, error in
  guard let observations = request.results as? [VNRecognizedTextObservation],
        error == nil else {
    print("Error: \(error?.localizedDescription ?? "Unknown error")")
    return
  }

  let recognizedText = observations.compactMap { $0.topCandidates(1).first?.string }
  print("Recognized Text: \(recognizedText)")
}

let handler = VNImageRequestHandler(cgImage: image, options: [:])
do {
  try handler.perform([request])
} catch {
  print("Failed to perform request: \(error.localizedDescription)")
}
Task {
  let request = RecognizeTextRequest()
  let handler = ImageRequestHandler(image)

  do {
    let observations = try await handler.perform(request)

    let recognizedText = observations.compactMap { $0.topCandidates(1).first?.string }
    print("Recognized Text: \(recognizedText)")
  } catch {
    print("Error recognizing text: \(error.localizedDescription)")
  }
}
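Assuming the iOS 18 API's conveniences, a request can even perform itself directly on an image, with no handler at all. This is a sketch based on the new perform(on:) method; check the current Vision documentation for the exact overloads available.

```swift
// Sketch: the request runs itself on a CGImage and returns typed observations.
let observations = try await RecognizeTextRequest().perform(on: image)
let recognizedText = observations.compactMap { $0.topCandidates(1).first?.string }
print("Recognized Text: \(recognizedText)")
```

Because perform(on:) returns the observations directly, there's no optional results array to unwrap and no casting from [VNObservation].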

Simulator Compute Devices

You’ll remember that in the earlier examples, you added the following snippet to get any Vision framework code to work with the simulator:

#if targetEnvironment(simulator)
  request.usesCPUOnly = true
#endif

usesCPUOnly is now deprecated. Instead, you query the request's supported compute devices for each stage and explicitly select the CPU:

#if targetEnvironment(simulator)
if let supportedDevices = try? request.supportedComputeStageDevices,
   let mainStage = supportedDevices[.main],
   let cpuDevice = mainStage.first(where: { $0.description.contains("CPU") }) {
  request.setComputeDevice(cpuDevice, for: .main)
}
#endif
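Since you'll likely need this CPU fallback for every request you run in the simulator, you could factor it into a helper. This sketch assumes the new Vision requests conform to a common protocol (here written as VisionRequest) and that the supportedComputeStageDevices and setComputeDevice(_:for:) APIs shown above work as described; treat the protocol name and mutating behavior as assumptions.

```swift
// Hypothetical helper: applies the simulator CPU fallback to any request.
func preferCPUOnSimulator<R: VisionRequest>(_ request: inout R) {
#if targetEnvironment(simulator)
  if let supportedDevices = try? request.supportedComputeStageDevices,
     let mainStage = supportedDevices[.main],
     let cpuDevice = mainStage.first(where: { $0.description.contains("CPU") }) {
    request.setComputeDevice(cpuDevice, for: .main)
  }
#endif
}
```

On device builds the body compiles away, so call sites stay the same everywhere.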