Building a Complete RAG Application in Azure with No Code
March 28, 2025
Retrieval-Augmented Generation (RAG) is a hot item in the AI world right now, as organisations are finding it a useful pattern for building LLM-based chat applications over an easily updateable knowledge store, without the expense of re-training the LLM. The pattern grounds AI-generated responses so they are as reliable, context-bounded, and current as the data in the knowledge store (which can be as simple as a collection of documents). Even better, RAG provides a means for the LLM to respond with citations, so you can be confident of where each answer is sourced from.
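To make the pattern concrete, here is a minimal sketch of the chat side of RAG in Python, assuming an existing Azure AI Search index and an Azure OpenAI chat deployment. The index name, field names (content, sourcefile), and deployment names here are placeholders for illustration; in the no-code approach this post describes, Azure services wire these same steps together for you.

```python
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

# Assumed names -- replace with your own index, fields, and deployments.
search_client = SearchClient(
    endpoint=os.environ["SEARCH_ENDPOINT"],
    index_name="knowledge-store",
    credential=AzureKeyCredential(os.environ["SEARCH_KEY"]),
)
llm = AzureOpenAI(
    azure_endpoint=os.environ["OPENAI_ENDPOINT"],
    api_key=os.environ["OPENAI_KEY"],
    api_version="2024-02-01",
)

def answer(question: str) -> str:
    # 1. Retrieve: pull the most relevant chunks from the knowledge store.
    hits = search_client.search(search_text=question, top=3)
    context = "\n\n".join(
        f"[{doc['sourcefile']}] {doc['content']}" for doc in hits
    )
    # 2. Augment and generate: ask the LLM to answer only from the
    #    retrieved context, citing the bracketed source names.
    response = llm.chat.completions.create(
        model="gpt-4o",  # the name of your chat deployment
        messages=[
            {"role": "system",
             "content": "Answer using only the sources below and cite them "
                        "in brackets. If the sources don't help, say so.\n\n"
                        + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```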
Two things are critical to making a RAG application possible:
- Reliable and high-quality components (especially the LLM and the search capability over the knowledge store)
- Carefully constructed workflows handling both data ingestion and the chat interface
Both of these requirements can be met using Microsoft Azure services – and best of all, with no coding required!
For the ingestion workflow, Stephen W. Thomas has already provided an excellent video guide to building it with Azure OpenAI, Azure AI Search, and Azure Logic Apps. He takes you through the process step by step, including provisioning all the necessary services and setting up the required permissions.
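If you want a feel for what that ingestion pipeline does conceptually, here is a rough sketch in Python. This is not the Logic Apps workflow from the video, just the same steps expressed in code; the deployment name, index name, and document schema are assumptions for illustration.

```python
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

# Assumed index with id/sourcefile/content/embedding fields.
search_client = SearchClient(
    endpoint=os.environ["SEARCH_ENDPOINT"],
    index_name="knowledge-store",
    credential=AzureKeyCredential(os.environ["SEARCH_KEY"]),
)
llm = AzureOpenAI(
    azure_endpoint=os.environ["OPENAI_ENDPOINT"],
    api_key=os.environ["OPENAI_KEY"],
    api_version="2024-02-01",
)

def ingest(filename: str, text: str, chunk_size: int = 1000) -> None:
    # Split the document into fixed-size chunks (real pipelines
    # usually add some overlap between chunks).
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    # Embed each chunk with an Azure OpenAI embedding deployment.
    vectors = llm.embeddings.create(
        model="text-embedding-3-small",  # your embedding deployment name
        input=chunks,
    )
    # Upload the chunks and their vectors as documents in the search
    # index. Document keys must be URL-safe strings.
    search_client.upload_documents(documents=[
        {
            "id": f"{filename}-{n}",
            "sourcefile": filename,
            "content": chunk,
            "embedding": item.embedding,
        }
        for n, (chunk, item) in enumerate(zip(chunks, vectors.data))
    ])
```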
Because Stephen's guide is so thorough, I don't need to repeat any of it here. However, the video does not cover how to build the chat workflow, which is worth discussing, especially because I found a couple of potential traps in the Microsoft-provided template.

