Welcome to the Advanced RAG Comparison and Analysis Lab!
This Streamlit-based sandbox lets AI enthusiasts, students, and researchers explore the internals of Retrieval-Augmented Generation (RAG).
Stay tuned! More advanced RAG techniques & features are underway and will be added soon.
https://rag-by-mk.streamlit.app/
- Interactive Playground to tweak and test RAG components.
- Side-by-Side Comparison of various indexing, retrieval, and LLM configurations.
- Detailed Metadata Inspection to understand each processing stage.
- Indexing Backbones: Vector Store & Knowledge Graph indexes
- LLM Selection: Choose models such as Llama3 or DeepSeek-r1-distill-llama-70b (via the Groq API)
- Retrieval Techniques: Vector-only, Sparse (BM25), and Hybrid search
- Query Optimization: Enable Query Transformations like HyDE
- Re-ranking: Use sentence transformer models to refine results
- LLM-as-a-Judge: Evaluate output coherence and relevance
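The hybrid option above merges the dense (vector) and sparse (BM25) result lists into one ranking. A common way to do this is reciprocal rank fusion (RRF); the sketch below is illustrative only — the document IDs and orderings are made up, and the app's actual fusion logic (in workflow/rag_workflow.py) may differ.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked lists of doc IDs into one fused ranking.

    Each document earns 1 / (k + rank) per list it appears in;
    k = 60 is the constant from the original RRF paper.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked results from the two retrievers
dense_hits = ["doc3", "doc1", "doc2"]   # vector-search order
sparse_hits = ["doc1", "doc4", "doc3"]  # BM25 order

fused = reciprocal_rank_fusion([dense_hits, sparse_hits])
```

Documents ranked highly by both retrievers rise to the top even though the raw dense and sparse scores live on incomparable scales — that is the appeal of rank-based fusion.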
- Frontend: Streamlit
- Framework: LlamaIndex
- LLMs: Groq API (Llama3, DeepSeek-r1-distill-llama-70b)
- Embeddings: FastEmbed
- Rerankers: Sentence Transformers
- Sparse Retrieval: BM25
- Document Parsing: PyMuPDF, python-docx, openpyxl, Markdown
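For intuition on the sparse-retrieval piece of the stack: Okapi BM25 scores a document by combining term-frequency saturation with inverse document frequency. A minimal self-contained sketch on a toy corpus (the app itself uses LlamaIndex's BM25 retriever, not this code):

```python
import math

def bm25_scores(query_terms, corpus, k1=1.5, b=0.75):
    """Score each tokenized document in `corpus` against the query terms."""
    n = len(corpus)
    avgdl = sum(len(doc) for doc in corpus) / n  # average document length
    scores = []
    for doc in corpus:
        score = 0.0
        for term in query_terms:
            df = sum(1 for d in corpus if term in d)         # document frequency
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
            tf = doc.count(term)
            # Term-frequency saturation with document-length normalization
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

corpus = [
    ["retrieval", "augmented", "generation"],
    ["vector", "search", "retrieval"],
    ["knowledge", "graph", "index"],
]
scores = bm25_scores(["retrieval", "vector"], corpus)
```

The second document wins because it matches both query terms, and the rarer term ("vector") contributes more via its higher IDF.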
LLM/
├── data_for_rag/ # Upload documents here
├── persisted_index/ # Stores indexes and document states
├── ui/ # Streamlit UI components
├── utils/ # Helper functions
├── workflow/ # RAG logic (rag_workflow.py)
├── config.py # App configuration
├── main.py # Entry-point script
└── requirements.txt # Dependencies
- Python 3.9+
- Groq API Key
git clone https://github.com/MohanKrishnaGR/LLM.git
cd LLM
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
pip install -r LLM/requirements.txt
streamlit run LLM/main.py
- Add Documents: Place files in LLM/data_for_rag/.
- Configure Sidebar: Choose LLM, index type, chunk size, etc.
- Build Index: Click 'Create / Update Index'.
- Experiment: Adjust retrieval & transformation settings.
- Query & Analyze: Run queries and view detailed metadata.
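The chunk size set in the sidebar controls how documents are split before embedding and indexing. A rough illustration of sliding-window chunking with overlap — the sizes below are hypothetical defaults, and the app's real splitting is handled by LlamaIndex:

```python
def chunk_text(tokens, chunk_size=512, overlap=64):
    """Split a token list into overlapping windows.

    Each chunk starts `chunk_size - overlap` tokens after the previous one,
    so neighbouring chunks share `overlap` tokens of context.
    """
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break  # final window already covers the end of the document
    return chunks

tokens = [f"tok{i}" for i in range(1000)]
chunks = chunk_text(tokens, chunk_size=512, overlap=64)
```

Larger chunks preserve more context per retrieved passage but dilute embedding precision; the overlap keeps sentences that straddle a boundary retrievable from either side.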
Built by Mohan Krishna G R, who is passionate about GenAI and RAG systems.