Introduction

You might be thinking: there’s actually no AI in this module. As far as we know, there’s no Genmoji API that lets an app generate a Genmoji from user input. When entering text in your app, a user can switch to the new emoji keyboard and either select a custom emoji or type a description of an image for the AI to generate. The user types something like “cat wearing a Santa suit”, the AI generates a few options to choose from, and the chosen image then appears inline with the rest of their text, just like emoji do now.

You, the developer, don’t need to implement any of this: it just happens, as part of the new emoji keyboard.

Your responsibility is to ensure your app can work with text that includes inline images. This means letting the user enter rich text and storing the result as an NSAttributedString, in which custom emoji appear as NSAdaptiveImageGlyph attachments.
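
Here’s a minimal sketch of what that looks like in practice, assuming iOS 18 and a UIKit text view (the variable names are just for illustration): you opt the text view in to Genmoji input, then enumerate the .adaptiveImageGlyph attribute to find any glyphs the user inserted.

    import UIKit

    // Opt a text view in to Genmoji input (iOS 18+).
    let textView = UITextView()
    textView.supportsAdaptiveImageGlyph = true

    // Later (when saving, for example), walk the attributed string
    // to find any adaptive image glyphs the user inserted.
    let storage = textView.textStorage
    storage.enumerateAttribute(
      .adaptiveImageGlyph,
      in: NSRange(location: 0, length: storage.length)
    ) { value, range, _ in
      if let glyph = value as? NSAdaptiveImageGlyph {
        print("Found Genmoji “\(glyph.contentDescription)” at \(range)")
      }
    }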

Apple’s documentation indicates that you need to work with NSAttributedString to handle image glyphs. Although there’s evidence of an AdaptiveImageGlyph API, it’s very bare-bones. This means you’ll need to refresh your UIKit-fu and/or wrap UIKit views or view controllers in UIViewRepresentable or UIViewControllerRepresentable.
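
As a starting point, here’s a bare-bones sketch of that wrapping, again assuming iOS 18. GenmojiTextView and its Coordinator are hypothetical names, and the binding wiring is simplified; you’ll build a fuller version later in the lesson.

    import SwiftUI
    import UIKit

    struct GenmojiTextView: UIViewRepresentable {
      @Binding var text: NSAttributedString

      func makeUIView(context: Context) -> UITextView {
        let textView = UITextView()
        textView.supportsAdaptiveImageGlyph = true  // Allow Genmoji input.
        textView.delegate = context.coordinator
        return textView
      }

      func updateUIView(_ uiView: UITextView, context: Context) {
        // Avoid resetting the text (and the cursor) on every update.
        if uiView.attributedText != text {
          uiView.attributedText = text
        }
      }

      func makeCoordinator() -> Coordinator { Coordinator(self) }

      final class Coordinator: NSObject, UITextViewDelegate {
        var parent: GenmojiTextView
        init(_ parent: GenmojiTextView) { self.parent = parent }

        // Push edits, including inserted Genmoji, back to the binding.
        func textViewDidChange(_ textView: UITextView) {
          parent.text = textView.attributedText
        }
      }
    }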

In this lesson, you’ll learn how to:

  • Implement Genmoji support in rich text view and plain text view configurations.
  • Explain how NSAdaptiveImageGlyph enables cross-platform Genmoji display.