Instruction 02

Understanding How Effective Azure AI Search Is at Building RAG Apps

How do the differences between Azure AI Search and LangChain contribute to building better RAG apps? A key factor in building a successful RAG app is understanding your use case. Then, you need to know about existing techniques for indexing, prompt engineering, embedding, querying, testing, and tuning to enhance performance. Azure AI Search supports these techniques and more, as does LangChain, but the effort involved differs.

LangChain offers a more straightforward approach to applying these techniques: You simply find the right APIs and use them. Azure AI Search often requires consideration of other factors, like resource regions, subscription tiers, and permissions. However, Azure AI Search may also have certain features already enabled, potentially reducing the effort needed to enhance your app.
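For example, here's a minimal sketch of the LangChain route, assuming you already have an Azure AI Search service and an Azure OpenAI embeddings deployment. The endpoints, keys, index name, and deployment name are placeholders, not values from this course:

```python
# A minimal sketch of the LangChain approach: find the right classes and wire them up.
# Endpoints, keys, and names below are placeholders for your own resources.
from langchain_openai import AzureOpenAIEmbeddings
from langchain_community.vectorstores.azuresearch import AzureSearch

# Embedding model hosted on Azure OpenAI (the deployment name is an assumption).
embeddings = AzureOpenAIEmbeddings(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<your-azure-openai-key>",
    azure_deployment="text-embedding-ada-002",
    openai_api_version="2024-02-01",
)

# LangChain's Azure AI Search vector store creates the index if it doesn't exist.
vector_store = AzureSearch(
    azure_search_endpoint="https://<your-search-service>.search.windows.net",
    azure_search_key="<your-search-admin-key>",
    index_name="rag-demo-index",
    embedding_function=embeddings.embed_query,
)

# Index a couple of documents and run a similarity search.
vector_store.add_texts([
    "Azure AI Search supports vector queries.",
    "LangChain wraps many vector stores behind one interface.",
])
results = vector_store.similarity_search("How do I run vector queries?", k=2)
for doc in results:
    print(doc.page_content)
```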

Understanding Indexing

While AI search is at the core of this module, indexing comes first. Before performing search queries, the data needs to be indexed. Indexing involves storing data in a way that’s optimized for search. You’ll give your index a suitable name, specify the searchable attributes, and create the index.
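To make those steps concrete, here's a minimal sketch using the azure-search-documents Python SDK. The service endpoint, admin key, index name, and field names are placeholders for your own resources:

```python
# A sketch of creating an index with the azure-search-documents SDK.
# Endpoint, admin key, and index name are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchIndex, SimpleField, SearchableField, SearchFieldDataType,
)

index_client = SearchIndexClient(
    endpoint="https://<your-search-service>.search.windows.net",
    credential=AzureKeyCredential("<your-search-admin-key>"),
)

# Give the index a suitable name and specify which attributes are searchable.
index = SearchIndex(
    name="rag-demo-index",
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SearchableField(name="title", type=SearchFieldDataType.String),
        SearchableField(name="content", type=SearchFieldDataType.String),
    ],
)

# Create the index on the service (or update it if it already exists).
index_client.create_or_update_index(index)
```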

Note: On the free tier of Azure AI Search, you can index up to 50 MB of data. Indexing more than that requires upgrading to a paid tier.

A vector is a representation of data across multiple dimensions. Think of a vector space as a coordinate system: in a two-dimensional space, an item is identified by two properties, and in a three-dimensional space, by three unique properties. The vectors used with LLMs often have thousands of dimensions.

Indexing for generative AI uses vectors: when you create an index, Azure AI Search can embed the data for you as part of indexing.
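Here's a sketch of what that looks like when you extend the index definition with a vector field, assuming azure-search-documents 11.4 or later and 1536-dimensional embeddings (the size produced by the text-embedding-ada-002 model). All names are placeholders:

```python
# A sketch of an index definition that includes a vector field.
# Endpoint, key, index name, and field names are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchIndex, SimpleField, SearchableField, SearchField, SearchFieldDataType,
    VectorSearch, VectorSearchProfile, HnswAlgorithmConfiguration,
)

fields = [
    SimpleField(name="id", type=SearchFieldDataType.String, key=True),
    SearchableField(name="title", type=SearchFieldDataType.String),
    SearchableField(name="content", type=SearchFieldDataType.String),
    # The vector field holds one embedding per document.
    SearchField(
        name="content_vector",
        type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
        searchable=True,
        vector_search_dimensions=1536,
        vector_search_profile_name="default-profile",
    ),
]

# The vector search configuration tells the service how to query those vectors.
vector_search = VectorSearch(
    algorithms=[HnswAlgorithmConfiguration(name="default-hnsw")],
    profiles=[VectorSearchProfile(
        name="default-profile",
        algorithm_configuration_name="default-hnsw",
    )],
)

index_client = SearchIndexClient(
    endpoint="https://<your-search-service>.search.windows.net",
    credential=AzureKeyCredential("<your-search-admin-key>"),
)
index_client.create_or_update_index(
    SearchIndex(name="rag-demo-index", fields=fields, vector_search=vector_search)
)
```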

Understanding Embedding

Embedding transforms data into a numerical vector representation that can be placed in a vector space. Embedding works with many kinds of data besides text, including other media formats. Documents are arranged in the vector space based on conceptual similarity: similar items sit close together, while unrelated items are farther apart.
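As an illustration, here's a sketch that embeds a piece of text with the openai package against an Azure OpenAI resource. The endpoint, key, API version, and deployment name are assumptions you'd replace with your own:

```python
# A sketch of turning text into an embedding with an Azure OpenAI deployment.
# Endpoint, key, API version, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<your-azure-openai-key>",
    api_version="2024-02-01",
)

text = "Azure AI Search stores documents in a vector space."
response = client.embeddings.create(input=[text], model="text-embedding-ada-002")

embedding = response.data[0].embedding  # a list of floats
print(len(embedding))                   # 1536 dimensions for this model
```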

Azure AI Search can index data uploaded in multiple formats, including JSON and plain text. It can also use an indexer to pull content from a supported data source, such as Azure Blob Storage, and extract searchable text from it before indexing.
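For data you push yourself, uploading JSON documents might look like this sketch, which reuses the field names from the earlier index sketch. The endpoint and key are placeholders:

```python
# A sketch of pushing JSON documents into the index with the SDK.
# In a vector setup you would also include a "content_vector" embedding per document.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="rag-demo-index",
    credential=AzureKeyCredential("<your-search-admin-key>"),
)

documents = [
    {"id": "1", "title": "Indexing", "content": "Indexing stores data in a search-optimized way."},
    {"id": "2", "title": "Embedding", "content": "Embedding maps data into a vector space."},
]

result = search_client.upload_documents(documents=documents)
print([r.succeeded for r in result])
```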

When it comes to queries, a vector index also provides features such as relevance scoring and ranking, which let you apply many techniques to improve search results.
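A vector query against that index might look like the following sketch: the query text is embedded with the same model used at indexing time, and each result carries a relevance score. All endpoints, keys, and deployment names are placeholders:

```python
# A sketch of a vector query that returns scored, ranked results.
# Endpoints, keys, and the deployment name are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery
from openai import AzureOpenAI

# Embed the query text with the same model used when indexing.
openai_client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<your-azure-openai-key>",
    api_version="2024-02-01",
)
query = "How does indexing work?"
query_vector = openai_client.embeddings.create(
    input=[query], model="text-embedding-ada-002"
).data[0].embedding

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="rag-demo-index",
    credential=AzureKeyCredential("<your-search-admin-key>"),
)

results = search_client.search(
    search_text=None,  # pure vector query; pass the query text too for hybrid search
    vector_queries=[VectorizedQuery(
        vector=query_vector,
        k_nearest_neighbors=3,
        fields="content_vector",
    )],
    select=["id", "title", "content"],
)

# Each result includes the document fields plus a relevance score.
for result in results:
    print(result["@search.score"], result["title"])
```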

In the next segment, you'll see a demo of RAG in action with Azure AI Search.
