Building Essence Towards Personalized Knowledge Model - PKM
Updated Sep 16, 2024 - Jupyter Notebook
Gemma2 (9B) and Llama3-8B fine-tuning and RAG: sample code base, implemented on the Kaggle platform.
An AI agent that writes SEO-optimised blog posts and outputs a properly formatted markdown document.
Webapp to answer questions about my resume leveraging Langchain, OpenAI, Streamlit
Q&A System using BERT and Faiss Vector Database
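The core retrieval step in such a Q&A system can be sketched as follows. This is a minimal illustration, not the project's code: the hard-coded vectors stand in for BERT sentence embeddings, and the brute-force loop stands in for the nearest-neighbour search that FAISS would perform at scale.

```python
import math

# Toy stand-ins for BERT sentence embeddings; a real system would
# encode passages with a BERT model and index the vectors in FAISS.
passages = {
    "Paris is the capital of France.": [0.9, 0.1, 0.0],
    "The Nile is a river in Africa.": [0.1, 0.8, 0.1],
    "Python is a programming language.": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def answer_passage(query_vec):
    # FAISS would run this nearest-neighbour search over millions of
    # vectors; brute force is enough to show the idea.
    return max(passages, key=lambda p: cosine(passages[p], query_vec))

print(answer_passage([0.85, 0.15, 0.05]))  # -> "Paris is the capital of France."
```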
Generative AI project using LangChain for similarity search. Input three article URLs and ask questions about the topic.
Contexi lets you interact with the context of an entire codebase using a local LLM on your system.
ChatPDF leverages Retrieval Augmented Generation (RAG) to let users chat with their PDF documents using natural language. Simply upload a PDF, and interactively query its content with ease. Perfect for extracting information, summarizing text, and enhancing document accessibility.
An end-to-end advanced RAG project built with the open-source Mistral LLM, using the Groq inference engine.
FAISS with SQLite
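A common way to pair a vector index with SQLite is to keep only integer ids and embeddings in the index, and look the ids up in a SQLite table to recover the documents. A minimal sketch of that layout, with brute-force L2 distance standing in for a FAISS `index.search` call:

```python
import sqlite3

# Metadata lives in SQLite; the vector index holds only ids + vectors.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, text TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?)",
                 [(0, "intro chapter"), (1, "methods chapter")])

vectors = {0: [1.0, 0.0], 1: [0.0, 1.0]}  # id -> embedding

def search(query, k=1):
    # Brute-force L2 distance stands in for the FAISS search step.
    ranked = sorted(vectors,
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(vectors[i], query)))
    ids = ranked[:k]
    placeholders = ",".join("?" * len(ids))
    rows = conn.execute(
        f"SELECT text FROM docs WHERE id IN ({placeholders})", ids
    ).fetchall()
    return [r[0] for r in rows]

print(search([0.9, 0.1]))  # -> ['intro chapter']
```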
Advanced RAG pipeline using Re-Ranking after initial retrieval
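The two-stage retrieve-then-re-rank pattern can be sketched as below. Both scoring functions are illustrative stand-ins: a real pipeline would use an embedding retriever for the first pass and a cross-encoder (or LLM) for the re-ranking pass.

```python
docs = ["faiss index tutorial", "cooking with spices", "faiss gpu setup guide"]

def cheap_score(query, doc):
    # First stage: a fast, coarse signal (here, keyword overlap).
    return len(set(query.split()) & set(doc.split()))

def fine_score(query, doc):
    # Second stage: a slower, finer signal (here, rewarding exact phrases)
    # standing in for a cross-encoder relevance model.
    return cheap_score(query, doc) + (2 if query in doc else 0)

def retrieve_and_rerank(query, k=2):
    # Retrieve k candidates cheaply, then re-rank only those candidates.
    candidates = sorted(docs, key=lambda d: cheap_score(query, d),
                        reverse=True)[:k]
    return sorted(candidates, key=lambda d: fine_score(query, d),
                  reverse=True)

print(retrieve_and_rerank("faiss gpu"))
```

Re-ranking only the retrieved candidates keeps the expensive scorer off the full corpus, which is the point of the two-stage design.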
GPU constrained? No more. Microsoft released Phi-3, specially designed for memory- and compute-constrained environments. The model supports the ONNX CPU runtime, which offers impressive inference speed even on mobile CPUs.
This Python library provides a suite of advanced methods for aggregating multiple embeddings associated with a single document or entity into a single representative embedding.
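Two of the simplest aggregation strategies such a library might offer can be sketched as follows; the function names here are illustrative, not the library's actual API.

```python
import math

def mean_pool(embeddings):
    # Unweighted average of the chunk embeddings, dimension by dimension.
    n = len(embeddings)
    dim = len(embeddings[0])
    return [sum(vec[i] for vec in embeddings) / n for i in range(dim)]

def norm_weighted_pool(embeddings):
    # Weight each embedding by its L2 norm, so stronger signals dominate.
    norms = [math.sqrt(sum(x * x for x in vec)) for vec in embeddings]
    total = sum(norms)
    dim = len(embeddings[0])
    return [sum(w * vec[i] for w, vec in zip(norms, embeddings)) / total
            for i in range(dim)]

# Example: three chunk embeddings collapsed into one document embedding.
chunks = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(mean_pool(chunks))
```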
It allows users to upload PDFs and ask questions about the content within these documents.
A RAG project to chat with your uploaded PDF, made using LangChain with Anthropic Claude 3 as the LLM, hosted on Streamlit.
An advanced AI-powered solution that enhances network diagnostics by leveraging large language models (LLMs). It parses various logs to identify patterns and anomalies, providing actionable insights for diagnosing and resolving network issues efficiently. This simplifies analysis, enabling quicker and more accurate problem detection and resolution.
📚 RAG in Memory (Streamlit - Langchain - FAISS - OpenAI)
A Streamlit-based application that empowers equity research analysts to conduct in-depth research using news articles. It leverages the power of large language models (LLMs) like Google Gemini Pro to analyze vast amounts of news data and generate insightful answers to your queries.