Text Generation with OpenAI

Nov 14 2024 · Python 3.12, OpenAI 1.52, JupyterLab

Lesson 01: Introduction to OpenAI & API Setup

OpenAI Platform - Demo

Transcript

Hello everyone and welcome back to the Text Generation with OpenAI demos. This follows Lesson 1, Introduction to OpenAI & API Setup. In this video, you'll get to know the OpenAI platform dashboard better.

Demo

From the previous chapter, you had the chance to sign up and see OpenAI's offerings. From that, you should understand how important model choice and API usage are to keeping your costs in check.

In this demo, you’ll see how to check your usage and drill down to what your usage and bill consist of. You’ll also explore more of the platform pages from OpenAI, such as rate limits and the playground. Lastly, you’ll play more with the tokenizer tool in order to get a better grasp of how tokens are counted.

To start, go to Usage. It'll be pretty empty for you since you haven't used the APIs yet. Nevertheless, here you'll see an example of a project that used the API extensively within a month.

You can see the total bill on the right, including the billing limit and an Increase Limit button for raising it. On the left, you'll see a breakdown of each model used, including the fine-tuned models. You'll also see the usage for each day of the month, broken down by model.

Once you have more usage of the APIs, you should see more data on your Usage page.
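By the way, each API response also reports the tokens it consumed, so you can track usage from your own code as well. Here's a minimal sketch using the Python SDK; the model name and prompt are just placeholders, and it assumes your OPENAI_API_KEY environment variable is set.

from openai import OpenAI

# The client picks up the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# Placeholder model and prompt, purely for illustration.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in five words."}],
)

# Every response carries a usage breakdown you can log or aggregate.
usage = response.usage
print("Prompt tokens:    ", usage.prompt_tokens)
print("Completion tokens:", usage.completion_tokens)
print("Total tokens:     ", usage.total_tokens)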

OpenAI provides a tool to try out the APIs without writing a single line of code: the Playground. To access it, go to this link. You should see a chat-like interface. Here, you can enter a system prompt, which you'll learn about in a later lesson.

Go ahead and write some prompts.
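Under the hood, the Playground mirrors what you'd send through the Chat Completions API. Here's a rough sketch of the equivalent call with the Python SDK; the system prompt, user prompt, and model name are made-up examples, not values from the video.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The system message steers the model's behavior, just like the system
# prompt field in the Playground. The wording here is only an example.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise writing assistant."},
        {"role": "user", "content": "Write a haiku about APIs."},
    ],
)

print(response.choices[0].message.content)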

Each user is assigned a certain level of rate limits, such as the number of allowed API calls per minute. You start at Level 1. To check your limits, go here.

In this demo, you'll see an account with Level 4 limits. At the bottom of the page, you'll see how much you need to spend on OpenAI bills to reach the next level of limits.
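You can also check your current rate limits from code: the API returns them as HTTP response headers. Here's a sketch using the SDK's raw-response helper; the header names are the ones OpenAI documents for rate limits, but treat the call details as illustrative.

from openai import OpenAI

client = OpenAI()

# with_raw_response exposes the underlying HTTP response, headers included.
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)

# These headers reflect the limits of your account's current level.
print("Requests remaining:", raw.headers.get("x-ratelimit-remaining-requests"))
print("Tokens remaining:  ", raw.headers.get("x-ratelimit-remaining-tokens"))

completion = raw.parse()  # the usual parsed completion object
print(completion.choices[0].message.content)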

In the Instruction section, you saw OpenAI's tokenizer tool. It lets you see how inputs are split into tokens and count them. You'll input a quote, a block of code, and some short stories to get a sense of how many tokens these texts contain.

Go ahead and enter:

“The Answer to the Great Question… Of Life, the Universe and Everything… Is… Forty-two,” said Deep Thought, with infinite majesty and calm.

How many tokens was it? 42? It’s 31. :]
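You can reproduce counts like this offline with the tiktoken library, which implements the tokenizers OpenAI's models use. Here's a small sketch; o200k_base is the encoding for the GPT-4o family, and the exact count can differ slightly from the web tool depending on which encoding it's set to.

import tiktoken

# Pick the encoding that matches the model you care about.
enc = tiktoken.get_encoding("o200k_base")

quote = (
    '"The Answer to the Great Question… Of Life, the Universe and '
    'Everything… Is… Forty-two," said Deep Thought, with infinite '
    "majesty and calm."
)

tokens = enc.encode(quote)
print(len(tokens))   # token count for the quote
print(tokens[:10])   # the first few token IDs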

How about the famous fast inverse square root code from Quake III Arena, often attributed to John Carmack?

float Q_rsqrt(float number)
{
  long i;
  float x2, y;
  const float threehalfs = 1.5F;

  x2 = number * 0.5F;
  y  = number;
  i  = * ( long * ) &y;                       // evil floating point bit level hacking
  i  = 0x5f3759df - ( i >> 1 );               // ??
  y  = * ( float * ) &i;
  y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
  // y  = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration, this can be removed

  return y;
}

That’s 171 tokens. Notice how symbols are tokenized.

Try to input an entire Kodeco article. How many tokens do you see?

Can you guess how many books you can fit in 1M tokens?
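Here's a rough back-of-the-envelope way to answer that, assuming a typical novel runs about 90,000 words and English prose averages roughly 0.75 words per token; both numbers are ballpark assumptions, not measurements.

# Ballpark estimate only: both constants are assumptions.
WORDS_PER_BOOK = 90_000     # a typical novel
WORDS_PER_TOKEN = 0.75      # rough average for English prose

tokens_per_book = WORDS_PER_BOOK / WORDS_PER_TOKEN   # ≈ 120,000 tokens
books_per_million = 1_000_000 / tokens_per_book      # ≈ 8 books

print(f"≈ {tokens_per_book:,.0f} tokens per book")
print(f"≈ {books_per_million:.1f} books per 1M tokens")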

To summarize, OpenAI provides tools for you to monitor your usage, and it sets your rate limits according to your previous usage. With the tokenizer tool, you can also predict how many tokens you'll consume.

Hope that helps! Soon, you’ll be using this knowledge in practice.
