
Update base.py to remove ValueError #8050

Closed
wants to merge 2 commits

Conversation

epinoia-au

  • Description: This update replaces the ValueError raised in the self.memory and _chain_type methods with warnings. These errors were causing issues when users attempted to use chains without intending to save them; with warnings, chains now function as intended when not being saved.
  • Issue: Not applicable. This is an improvement rather than a bug fix.
  • Dependencies: No new dependencies introduced.
  • Tag maintainer: @hwchase17, as this change affects the Memory functionality of LangChain.

All checks for linting and testing are passing. No new integration is added so there are no additional tests or example notebooks included.


Removal of the ValueError for the self.memory and _chain_type methods, as it breaks chains when the user is not trying to save.
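For concreteness, the kind of change the PR describes can be sketched as below. This is a simplified, standalone stand-in, not the actual langchain base.py; the class and method bodies are illustrative only:

```python
import warnings

# Simplified stand-in for langchain's Chain base class, illustrating the
# described change: hard ValueErrors on the save path become warnings.
class Chain:
    memory = None  # chains with memory previously refused to serialize

    @property
    def _chain_type(self) -> str:
        # Before the change: raise ValueError("Saving not supported ...")
        warnings.warn("Saving of this chain type is not yet supported.")
        return "unknown"

    def dict(self, **kwargs) -> dict:
        if self.memory is not None:
            # Before the change:
            # raise ValueError("Saving of memory is not yet supported.")
            warnings.warn("Saving of memory is not yet supported.")
        return {"_type": self._chain_type, **kwargs}
```

With warnings instead of errors, a chain that is never actually saved can still be constructed and used without blowing up.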

@dosubot dosubot bot added Ɑ: memory Related to memory module 🤖:improvement Medium size change to existing code to handle new use-cases labels Jul 21, 2023
@hwchase17
Contributor

This raised an error for a reason - these chains are not JSON serializable and not ready to be saved. Are you changing this because you want to save them? If so, we should prioritize making them savable.

@epinoia-au
Author

epinoia-au commented Jul 21, 2023

This raised an error for a reason - these chains are not JSON serializable and not ready to be saved. Are you changing this because you want to save them? If so, we should prioritize making them savable.

I apologise, as I made a mistake based on the type hinting, and have since pushed a commit that should now pass all checks. I agree that making them savable is the solution, but I am encountering issues when the Chain class is called, which breaks the code. I'm not trying to save them, just creating my chains in a local utils.py file as a class.

@baskaryan
Collaborator

Could you provide some more info about the steps to reproduce the error?

@epinoia-au
Author

Sure thing. I have a utils.py file as per below:

import re
import os

import gradio as gr
from langchain.chains import (
    ConversationalRetrievalChain,
    RetrievalQAWithSourcesChain,
)
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import HuggingFaceTextGenInference
from langchain.memory import ConversationBufferMemory
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.vectorstores import Chroma

os.environ["OPENAI_API_KEY"] = "OPENAI_API_KEY"


EMBEDDINGS = OpenAIEmbeddings()
# EMBEDDINGS = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")

VECTOR_DB = Chroma(
    persist_directory="./chroma_db", embedding_function=EMBEDDINGS
)


class ChainManager:
    def __init__(self, llm_name=None, chain_type=None):
        if llm_name:
            self.llm = self.select_model(llm_name)
        else:
            self.llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

        # TODO: ValueError: Saving of memory is not yet supported.
        self.chain = None  # Initialize chain as None

        if chain_type:
            self.select_chain(
                chain_type
            )  # Update chain if chain_type is provided

    def select_model(self, evt: gr.SelectData):
        if evt.value == "OpenAI GPT-4":
            self.llm = ChatOpenAI(model="gpt-4", temperature=0)
        elif evt.value == "OpenAI GPT-3.5-turbo":
            self.llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
        elif evt.value == "OpenAI GPT-3.5-turbo-16k":
            self.llm = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0)
        elif evt.value == "Falcon40B-Instruct":
            inference_server_url = (
                ""  # f'https://{pod["id"]}-80.proxy.runpod.net'
            )
            self.llm = HuggingFaceTextGenInference(
                inference_server_url=inference_server_url,
                max_new_tokens=1000,
                top_k=10,
                top_p=0.95,
                typical_p=0.95,
                temperature=0,
                repetition_penalty=1.03,
            )

        return self.llm

    def select_chain(self, evt: gr.SelectData):
        if evt.value == "Chat":
            self.chain = self.conversational_chain()
        elif evt.value == "Q&A":
            self.chain = self.qa_chain()

    def conversational_chain(self, vector_db=VECTOR_DB, **kwargs):
        conversational_chain = ConversationalRetrievalChain.from_llm(
            llm=self.llm,
            retriever=vector_db.as_retriever(),
            # memory=ConversationBufferMemory(),
            condense_question_llm=self.llm,
            return_source_documents=True,
            verbose=False,  # Set to true for debugging
            # TODO: ValueError: Saving of memory is not yet supported.
        )
        return conversational_chain

    def qa_chain(self, vector_db=VECTOR_DB):
        retriever_from_llm = MultiQueryRetriever.from_llm(
            retriever=vector_db.as_retriever(), llm=self.llm
        )
        qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
            self.llm,
            retriever=retriever_from_llm,
            return_source_documents=True,
        )
        return qa_chain

and a dummy app.py file as per below:

import gradio as gr

from utils import ChainManager

INIT_LLM_EVT = gr.SelectData(
    None, {"index": 0, "value": "OpenAI GPT-3.5-turbo", "selected": True}
)
INIT_CHAIN_EVT = gr.SelectData(
    None, {"index": 0, "value": "Q&A", "selected": True}
)
CHAIN_MANAGER = ChainManager(llm_name=INIT_LLM_EVT, chain_type=INIT_CHAIN_EVT)


def response(message: str, chat_history: list, chain=CHAIN_MANAGER.chain):
    return None


with gr.Blocks() as demo:
    with gr.Row():
        chatbot = gr.Chatbot(scale=3, show_label=False)
        with gr.Accordion(label="sources", open=False):
            chunk_source = gr.JSON()  # Need to show chunk, content, page
    with gr.Row():
        user_message = gr.Textbox(
            show_label=False,
            placeholder="Enter your message and press enter",
            scale=3,
        )

    user_message.submit(
        fn=response,
        inputs=[user_message, chatbot],
        outputs=[user_message, chatbot, chunk_source],
    )


demo.queue()
if __name__ == "__main__":
    demo.launch()

When I run gradio app.py I get the saving-not-supported error. As an FYI, this code was from when I was familiarising myself with langchain and gradio, and since then there has been a lot of refactoring which seems to have eliminated the issue, but I still think it's interesting.

@hwchase17
Contributor

I looked into this - this actually doesn't have anything to do with memory, but rather with RetrievalQAWithSourcesChain.

Also, I think this is caused by the following lines:

def response(message: str, chat_history: list, chain=CHAIN_MANAGER.chain):
    return None

The issue arises when you're using the chain as a partialed variable value, because then it's trying to get serialized. I would just do something like

def response(message: str, chat_history: list):
    chain = CHAIN_MANAGER.chain

or something
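The failure mode described here can be reproduced without gradio or langchain at all. The sketch below (all names are illustrative) shows how a default parameter value gets captured in the function's signature, where anything that introspects or serializes the handler will run into it, while a lookup inside the body stays out of the signature:

```python
import inspect
import json

# Illustrative stand-in for a chain object that is not JSON-serializable.
class FakeChain:
    pass

CHAIN = FakeChain()

# Binding the object as a default captures it in the function's signature,
# so a framework that inspects or serializes its event handlers will try
# to serialize the chain itself.
def bad_response(message: str, chat_history: list, chain=CHAIN):
    return None

# Fetching the object inside the body keeps it out of the signature.
def good_response(message: str, chat_history: list):
    chain = CHAIN
    return None

bad_default = inspect.signature(bad_response).parameters["chain"].default
good_params = inspect.signature(good_response).parameters
```

Note that json.dumps(CHAIN) raises a TypeError, which is the same class of failure the chain hit once it appeared as a default value.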

@hwchase17 hwchase17 closed this Aug 4, 2023