Localizer Project


You’ve learned about memory, structured output, and human-in-the-loop interactions. These are all things you can apply to your app string localizer project. This segment will summarize the changes, and in the next section, you’ll implement them.

Memory

Your app state is already handled by your custom State object and the StateGraph, but you'll still need to add a checkpointer so you can set a breakpoint for human-in-the-loop interaction.

Structured Output

You eventually want the translated app strings to be in a format appropriate for Android, iOS, Flutter, or whatever your framework requires. For Android, the strings should be in XML format in a strings.xml file. And for iOS, the format should be key-value pairs in a .strings file. Forcing the output to be a Pydantic model or a TypedDict isn’t super useful for either of those formats, so you won’t go that route. However, you can still write a custom LLM prompt to get your formatter node to output the right format.
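For reference, both target formats are simple enough to express in a few lines of Python. This sketch just illustrates the shapes your formatter prompt should ask the LLM to produce — it isn't the project's implementation, and the example translation keys are made up:

```python
from xml.sax.saxutils import escape


def to_android_xml(strings: dict[str, str]) -> str:
    """Render translations as an Android strings.xml resource file."""
    entries = "\n".join(
        f'    <string name="{name}">{escape(value)}</string>'
        for name, value in strings.items()
    )
    return (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        f"<resources>\n{entries}\n</resources>"
    )


def to_ios_strings(strings: dict[str, str]) -> str:
    """Render translations as iOS .strings key-value pairs."""
    return "\n".join(f'"{name}" = "{value}";' for name, value in strings.items())
```

For example, `to_ios_strings({"greeting": "Bonjour"})` yields the single line `"greeting" = "Bonjour";`, while `to_android_xml` wraps each entry in a `<string>` element inside `<resources>`.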

Human-in-the-Loop

While the translator-checker cycle from Lesson 3 was interesting, you'll replace that workflow with a human-in-the-loop step: the checker will pass any dubious translations to a human to verify or improve. The image below shows the old architecture on the left and the new architecture on the right:

[Diagram: the old translator-checker architecture (left) and the new human-in-the-loop architecture (right)]
