

📚 Creating AI Apps Using RAG & LangChain: A Step-by-Step Developer Guide!


💡 News category: Programming
🔗 Source: dev.to

Today, large language models (LLMs) have emerged as one of the core building blocks of modern AI/ML applications. Gone are the days when AI was considered fiction rather than reality. Organizations everywhere are embracing the power of these LLMs to build personalized applications. The advantages LLMs provide are enormous, so it is no surprise that demand for such applications is high.

Companies such as Google, Meta, OpenAI and Anthropic, to name a few, have contributed tremendously to the growth of generative AI. But LLMs alone are not enough to build LLM-powered applications; you also need tools, a framework and an approach to make sure the applications are robust and work as expected.

In this article, we are going to discuss one such approach, known as retrieval augmented generation (RAG), along with some tools and a framework called LangChain.

What is Retrieval Augmented Generation?

Large language models are great, but they have limitations too, such as producing fake, biased or made-up responses that are inaccurate; these are referred to as LLM hallucinations. Such responses hurt an application's authenticity and reputation. To mitigate these unwanted responses, several techniques have gained popularity. One such approach is retrieval augmented generation (RAG).

RAG Approach

In RAG, the LLM application is augmented with an external knowledge base to mitigate the effects of hallucination. For any user query, the system searches the knowledge base for relevant information and retrieves the most accurate matches. Because a trusted custom knowledge source is always consulted, the room for hallucination shrinks dramatically.

See the above image for example: the PDF is our external knowledge base, stored in a vector database in the form of vector embeddings (vector data). The PDF document gets split into small chunks of text, and each chunk is converted into a numerical representation known as a vector embedding. You need an embedding model to convert text, images, audio or video into embeddings.

The user query goes through the same embedding model to convert it into an embedding, which is then matched against the vector database to find the most relevant content. Once the relevant content is found, it is passed to the LLM as additional context, and finally the response is generated. This is how RAG has become the bread and butter of most LLM-powered applications for retrieving accurate, relevant responses. Notable AI frameworks such as LangChain and LlamaIndex help make these LLM applications robust by providing all the toolkit required. Let's understand LangChain, since we will be using it in our tutorial.
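The retrieve step described above can be sketched in a few lines of plain Python. This is only a toy illustration: a real system uses a trained embedding model and a vector database, while here a simple bag-of-words vector and cosine similarity stand in for them.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words "vector".
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    # Rank all chunks by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Tourism arrivals recovered strongly in 2023.",
    "Inflation is expected to fall gradually in 2023.",
    "The weather was sunny in Madrid.",
]
print(retrieve("What is expected of inflation in 2023?", chunks))
```

Swapping `embed` for a real embedding model and the sorted list for an indexed vector store gives you the retrieval half of a RAG pipeline.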

What is LangChain?

LangChain is an open-source AI framework created by Harrison Chase that helps developers build robust AI applications by providing all the components required. LangChain comes with memory capabilities, integrations with vector databases, tools to connect with external data sources, logic and APIs. This makes it a powerful framework for building LLM-powered applications.

LangChain modules Image credits: Upstash

LangChain consists of modules such as Model I/O, Retrieval, Chains and Agents, each with its own strengths to help developers build seamless AI applications. The Model I/O module handles prompts, LLM interaction, chat models and output parsers. The Retrieval module handles everything related to data management, from loading and transforming documents to splitting text and embedding the data with embedding models. Then comes the Chains module which, as the name suggests, links tasks together to make sure they happen in a sequential fashion.

Agents act as the brain of the system and handle decision making. They determine the sequence of actions to take to complete a task and are capable of choosing the tools required for it. LangChain has many agent toolkit libraries that can be used to build powerful LLM-powered applications.
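The chaining idea can be illustrated with plain Python functions. This is a conceptual sketch, not LangChain's actual API: `fake_llm` is a hypothetical stand-in for a real model call, and each step's output simply becomes the next step's input.

```python
def prompt_template(question: str) -> str:
    # Step 1: wrap the raw question in a prompt.
    return f"Answer concisely: {question}"

def fake_llm(prompt: str) -> str:
    # Step 2: hypothetical stand-in for a real LLM call.
    return f"LLM RESPONSE to [{prompt}]  "

def output_parser(raw: str) -> str:
    # Step 3: clean up the raw model output.
    return raw.strip()

def chain(question: str, steps) -> str:
    # Run the steps sequentially, feeding each output into the next step.
    value = question
    for step in steps:
        value = step(value)
    return value

result = chain("What is RAG?", [prompt_template, fake_llm, output_parser])
print(result)
```

LangChain's Chains module packages this "output of one step feeds the next" pattern, plus memory and error handling, into reusable components.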

You can install LangChain using the following pip command:

pip install langchain

What is SingleStore?

SingleStore is a modern, cloud-based, distributed relational database management system that specializes in high-performance, real-time data processing. It is not just for OLAP and OLTP workloads; you can also build real-time GenAI applications on it seamlessly.

singlestore database

SingleStore has supported vector search and storage since as early as 2017. It has some amazing integrations with today's popular AI frameworks such as LangChain and LlamaIndex. It supports both SQL and Python along with a wide range of data types, which lets an organization use one database instead of maintaining different databases for different types of workloads.

RAG with LangChain and SingleStore: Hands-on Tutorial!

Letโ€™s build a simple AI application that can fetch the contextually relevant information from our own custom data for any given user query.

Sign up for the SingleStore database to use it as our vector database.

Once you sign up, you need to create a workspace. It is easy and free, so go ahead and do it.

singlestore workspace

Once you create your workspace, create a database with any name you wish.

workspace

As you can see from the above screenshot, you can create the database from the 'Create Database' tab on the right side.

Now, let's go to 'Develop' to use the Notebooks feature [just like Jupyter Notebooks].

singlestore database

Create a new Notebook and name it as you wish.
new notebook

Before doing anything, select your workspace and database from the dropdown on the Notebook.

singlestore db workspace

Now, add each of the code snippets shown below to the Notebook you just created.

Install the required libraries & dependencies

!pip install langchain --quiet
!pip install --upgrade openai==0.28.1 --quiet
!pip install pdf2image --quiet
!pip install pdfminer.six --quiet
!pip install singlestoredb --quiet
!pip install tiktoken --quiet
!pip install --upgrade unstructured==0.10.14 --quiet

Import the libraries

from langchain.document_loaders import PyPDFLoader
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA, ConversationalRetrievalChain
import os

Load your custom document

from langchain.document_loaders import OnlinePDFLoader
loader = OnlinePDFLoader("example.pdf")
data = loader.load()

I am using this publicly available PDF about the world tourism barometer.

[If you'd like to use the same one, replace example.pdf with its complete URL]

Using LangChain framework to split the document into chunks

from langchain.text_splitter import RecursiveCharacterTextSplitter

print(f"You have {len(data)} document(s) in your data")
print(f"There are {len(data[0].page_content)} characters in your document")

text_splitter = RecursiveCharacterTextSplitter(chunk_size = 2000, chunk_overlap = 0)
texts = text_splitter.split_documents(data)

print(f"You have {len(texts)} pages")
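To see what chunk_size and chunk_overlap actually control, here is a deliberately simplified character splitter. The real RecursiveCharacterTextSplitter also tries to break on natural separators such as newlines and spaces, which this sketch ignores:

```python
def split_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    # Fixed-size windows that advance by chunk_size - chunk_overlap,
    # so consecutive chunks share chunk_overlap characters of text.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

text = "abcdefghij"
print(split_text(text, chunk_size=4, chunk_overlap=2))
# With overlap, neighboring chunks repeat some characters; with
# chunk_overlap=0 (as in the tutorial above) the chunks are disjoint.
print(split_text(text, chunk_size=4, chunk_overlap=0))
```

Overlap trades extra storage for context continuity: a sentence cut at a chunk boundary still appears whole in one of the two overlapping chunks.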

Use OpenAI API to generate embeddings for the document chunks

import os
import getpass
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key: ")

When you run the above cell, it prompts you to enter your OpenAI API key.

Letโ€™s store our document chunks into SingleStore database table

Action required: Make sure you have selected the workspace and the database where you want to store your data.

from langchain.embeddings import OpenAIEmbeddings

embedding = OpenAIEmbeddings()

from langchain.vectorstores import SingleStoreDB

docsearch = SingleStoreDB.from_documents(
    texts,
    embedding,
    table_name = "tourism_pdf",
)

You can change the table name as per your wish.

Let us check the text chunks and associated embeddings stored inside our database.

select * from tourism_pdf limit 1;

Ask a query against your custom data (the PDF that you loaded) using plain similarity search to retrieve the top-k closest chunks.

query = "Is global inflation expected to fall or rise in 2023?"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)

The answer you should see is a big raw paragraph: relevant, but not an accurate, direct answer.

Here is how to generate an augmented response to the user query:

import openai

prompt = f"The user asked: {query}. The most similar text from the document is: {docs[0].page_content}"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt}
    ]
)
print(response['choices'][0]['message']['content'])

The response you receive will be to the point and much more useful, thanks to the added context.

Let's test what happens when the knowledge base (custom documents such as the PDF) is not provided

from langchain.llms import OpenAI
llm = OpenAI(temperature=0.8)

llm.predict("your query?")

In this case, it won't provide specific information and might say that it doesn't have enough data to answer your question. This way, the RAG approach mitigates the hallucination effects of LLMs and improves response accuracy.

Finally, you can go to your database and verify that the provided PDF is stored chunk by chunk. You should see the data as below.

vector data store

I hope you now understand how we used the RAG approach, combined with the LangChain framework and SingleStore, to store and retrieve data efficiently. If you'd like to try the above tutorial, you need a free SingleStore account, an OpenAI API key and a publicly available PDF.

SingleStore is giving away $600 worth of free credits to every new signup.
Try the tutorial and let me know what you think :)
