Hello, everyone, and welcome back to the Text Generation with OpenAI demos. This demo follows the lesson, “Building a Non-Streaming Chat App”. In this video, you’ll add a chat-like interface to bulk-generate JSON data.
To make a chat-like interface for bulk-generating JSON data, you'll use snippets of code from the previous demo and a main loop like the one from the instruction section.
Start with a fresh, empty file.
Then, in JupyterLab, again make sure that you've exported the API key in your environment.
Like in the previous demo, add the following code in the first cell of your notebook file:
import os
import openai
from openai import OpenAI

openai.api_key = os.environ["OPENAI_API_KEY"]
model = "gpt-4o-mini"
client = OpenAI()
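As a side note, OpenAI() reads OPENAI_API_KEY from the environment on its own, so exporting the key before launching JupyterLab is all the setup you need. A minimal sketch of that export, assuming a Unix-like shell and a placeholder key value:

```shell
# Make the key visible to JupyterLab and any other child processes.
export OPENAI_API_KEY="your-key-here"
# Verify that the variable is set in the current session:
echo "$OPENAI_API_KEY"
```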
You should know this code already because you've been using it for a couple of lessons. :]
Also in the previous demo, you used this code to generate data for unit tests:
SYSTEM_PROMPT = (
    "You generate sample JSON data for unit tests. "
    "Generate variants that are as diverse as possible. "
    # You insert from here
    "If the expected type is a number, generate negative, zero, or extremely "
    "large numbers, or other unexpected inputs like a string. "
    "If the expected type is an enum, generate non-enum values. "
    "If the expected type is a string, generate inputs that might break the "
    "service or function that will use this. "
    # You end the insert here
    "You must return a response in JSON format: "
    "{ "
    "  fullName: <name of person who ordered>, "
    "  itemName: <name of the item ordered>, "
    "  quantity: <number of items ordered>, "
    "  type: <pickup or delivery> "
    "}"
)
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
]

response = client.chat.completions.create(
    model=model,
    messages=messages,
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
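Because response_format={"type": "json_object"} guarantees that the reply parses as JSON, you can load and check it with the standard json module. A minimal sketch, using a hypothetical hard-coded reply in place of a live API call:

```python
import json

# Hypothetical model reply shaped like the template in SYSTEM_PROMPT.
content = (
    '{"fullName": "Ada Lovelace", "itemName": "Espresso", '
    '"quantity": -3, "type": "pickup"}'
)

order = json.loads(content)
# Check that every field the prompt asks for is present.
expected_keys = {"fullName", "itemName", "quantity", "type"}
print(expected_keys <= order.keys())  # prints True
```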
If you need more details on what these lines do, please refer to the previous demo.
Now use an infinite loop, like the one from the instruction section, to make a chat-like interface. Combine the chat-completion call with the main loop:
# 1
while True:
    # 2
    user_input = input("Please enter your input: ")
    # 3
    if user_input.lower() == 'exit':
        break
    # 4
    messages.append({"role": "user", "content": user_input})
    # 5
    try:
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            response_format={"type": "json_object"},
        )
        print(response.choices[0].message.content)
    except openai.RateLimitError as e:
        print(f"Rate limit exceeded: {e}")
This code snippet implements a simple chat-like interface using an infinite loop. In it:
1. The loop prompts for input until exit is typed.
2. Capture the user's input.
3. If exit is entered, break leaves the loop.
4. The input is appended to messages as a user message.
5. The try block calls the chat completion. On success, it prints the assistant's response. If a RateLimitError occurs, the except block displays a rate-limit message.
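If you want to check the loop's control flow without network access or an API key, you can script the inputs and stub out the completion call. A minimal sketch, where fake_completion is a hypothetical stand-in for client.chat.completions.create and a list of strings replaces input():

```python
# fake_completion is a hypothetical stand-in for
# client.chat.completions.create; it ignores the conversation and
# always returns one canned JSON order matching the expected shape.
def fake_completion(messages):
    return ('{"fullName": "Test User", "itemName": "Tea", '
            '"quantity": 0, "type": "delivery"}')

messages = [{"role": "system", "content": "You generate sample JSON data."}]
replies = []
for user_input in ["one pickup order", "exit"]:
    # 'exit' ends the loop, just like in the real app.
    if user_input.lower() == "exit":
        break
    messages.append({"role": "user", "content": user_input})
    replies.append(fake_completion(messages))

print(len(replies), len(messages))  # prints: 1 2
```

Because the first scripted input goes through the whole body and "exit" stops the loop, you end up with one reply and two entries in messages, mirroring what a single real exchange would produce.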