The RAG Microservice is an intelligent assistant designed to provide detailed, structured, and professional responses about Kushagra Sikka's professional profile. It uses Retrieval-Augmented Generation (RAG) to retrieve relevant information and generate well-formatted, concise answers tailored for recruiters and collaborators.
It serves Kushagra's recruiters and professional network, allowing them to:
- Fetch structured professional summaries.
- Retrieve details about technical skills with usage examples.
- Explore academic achievements, work experience, and projects.
- Gain insights into recent contributions and research focus.
- Access contact information and links to professional profiles.
The assistant ensures:
- Accurate and verified information retrieval from curated documents.
- Well-formatted responses using bullet points for readability.
- A professional tone to enhance user experience and utility.
Example queries:
- "Who is Kushagra Sikka?" Provides a professional overview, recent impact, and technical expertise.
- "What are Kushagra's technical skills?" Details programming languages, cloud platforms, and tools with usage examples.
- "Tell me about Kushagra's achievements." Highlights teaching impact, technical accomplishments, and academic recognition.
The system integrates:
- Backend: FastAPI + Haystack for RAG implementation.
- Frontend: React + TailwindCSS for a user-friendly interface.
- Deployment: Docker, Jenkins, and AWS EC2 for production readiness.
- Document Store: Stores structured professional data (e.g., skills, projects, achievements).
- Retriever: Fetches relevant documents based on the query.
- Prompt Builder: Constructs dynamic prompts tailored to the query type.
- Generator: Uses a text generation model to produce well-formatted, bullet-pointed responses.
- Interactive UI for querying Kushagra's profile.
- Real-time response rendering with React.
- Mobile-responsive design using TailwindCSS.
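The document store and retriever described above can be sketched in plain Python. This is a toy term-overlap retriever standing in for the service's actual Haystack components; the class names, sample passages, and scoring here are illustrative assumptions, not the project's real implementation.

```python
from dataclasses import dataclass

@dataclass
class Document:
    content: str

class InMemoryStore:
    """Minimal stand-in for a document store holding profile passages."""
    def __init__(self, texts):
        self.docs = [Document(t) for t in texts]

def _terms(text):
    # Lowercase and strip trailing punctuation so "skills?" matches "skills".
    return {w.strip(".,?!'\"") for w in text.lower().split()}

def retrieve(store, query, top_k=2):
    """Score each document by shared terms with the query and keep the best."""
    q = _terms(query)
    scored = [(len(q & _terms(d.content)), d) for d in store.docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]

store = InMemoryStore([
    "Kushagra's technical skills include Python, FastAPI, and AWS.",
    "Recent projects span RAG systems and CI/CD pipelines.",
    "Contact details and professional profile links.",
])
hits = retrieve(store, "What are Kushagra's technical skills?")
```

In the real service, a Haystack retriever replaces the keyword overlap with proper ranking, but the store-then-retrieve shape is the same.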
Follow these steps to set up the project locally:
```bash
git clone https://github.com/YourUsername/RAG_Microservice.git
cd RAG_Microservice
python -m venv venv
source venv/bin/activate  # On Windows: .\venv\Scripts\activate
pip install -r requirements.txt
```

Then install the frontend dependencies:

```bash
cd rag-frontend
npm install
```
Copy the example `.env` file and update it with your configuration:

```bash
cp .env.example .env
```

Start the backend:

```bash
uvicorn rag_microservice.app:app --reload
```
In a new terminal, start the frontend:

```bash
cd rag-frontend
npm start
```
To deploy the application using Docker:
```bash
docker-compose up -d
```
- Configure an AWS EC2 instance with the necessary environment.
- Set up Docker and Jenkins for continuous integration and deployment.
- Refer to the `deployment/README.md` file for step-by-step instructions.
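For orientation, a minimal `docker-compose.yml` for this layout might look like the sketch below; the service names, ports, and build paths are assumptions for illustration, not the project's actual configuration (see `docker-compose.yml` in the repository for the real one).

```yaml
services:
  backend:
    build: .
    ports:
      - "8000:8000"
    env_file: .env
  frontend:
    build: ./rag-frontend
    ports:
      - "3000:3000"
    environment:
      - REACT_APP_API_URL=http://localhost:8000
```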
```
RAG_Microservice/
├── rag_microservice/      # Backend code
│   └── app.py             # Main FastAPI application
├── rag-frontend/          # React frontend
├── data/                  # Data directory
├── docker-compose.yml     # Docker Compose configuration
├── Jenkinsfile            # CI/CD pipeline
└── deployment/            # Deployment documentation and scripts
```
| Variable | Description |
|---|---|
| `CORPUS_DOCUMENTS_PATH` | Path to the document corpus (e.g., professional details). |
| `TEXT_EMBEDDING_MODEL` | Model for text embeddings (e.g., `sentence-transformers/all-MiniLM-L6-v2`). |
| `GENERATOR_MODEL` | Model for text generation (e.g., `google/flan-t5-large`). |
| Variable | Description |
|---|---|
| `REACT_APP_API_URL` | Backend API URL (e.g., `http://localhost:8000`). |
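On the backend, these variables would typically be read once at startup. The sketch below uses only the standard library; the fallback values are the examples from the table above plus an assumed `data/` corpus path, not guaranteed project defaults.

```python
import os

def load_settings():
    """Read RAG configuration from the environment, falling back to the
    example values documented in the table above."""
    return {
        "corpus_path": os.getenv("CORPUS_DOCUMENTS_PATH", "data/"),
        "embedding_model": os.getenv(
            "TEXT_EMBEDDING_MODEL", "sentence-transformers/all-MiniLM-L6-v2"
        ),
        "generator_model": os.getenv("GENERATOR_MODEL", "google/flan-t5-large"),
    }

settings = load_settings()
```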
- Implements RAG using Haystack components:
- Document Splitter: Chunks documents for better retrieval.
- Retriever: Retrieves relevant documents based on queries.
- Prompt Builder: Constructs dynamic prompts for generation.
- Generator: Produces well-structured responses.
- Pre-processed corpus includes:
- Professional profile, achievements, and technical skills.
- Work experience and key projects.
- Contact information.
- Query interface with a clean, professional design.
- Supports real-time query results.
- Mobile-friendly and intuitive layout.
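The Document Splitter step above can be illustrated with a simple overlapping word-window chunker. The chunk size and overlap values are arbitrary choices for this sketch; Haystack's real splitter offers more strategies (by sentence, by passage, and so on).

```python
def split_document(text, chunk_size=50, overlap=10):
    """Split text into overlapping word windows so the retriever can match
    queries against small, focused passages instead of whole documents."""
    words = text.split()
    if len(words) <= chunk_size:
        return [text]
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # final window already covers the tail of the document
    return chunks

# 120 words with a 50-word window and 10-word overlap -> 3 chunks.
chunks = split_document("skill " * 120, chunk_size=50, overlap=10)
```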
- Ask a Question: The user asks a question, such as "What are Kushagra's skills?".
- Document Retrieval: The system retrieves relevant sections from the document store.
- Dynamic Prompt Creation: A prompt is generated based on the question type.
- Answer Generation: The model generates a structured, bullet-pointed response.
- Response Display: The frontend displays the response in a user-friendly format.
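The Dynamic Prompt Creation step might look like the sketch below; the query-type keywords and prompt templates are invented for illustration and are not the service's actual prompts.

```python
def build_prompt(question, documents):
    """Pick an instruction template by query type, then splice in the
    retrieved context before handing the prompt to the generator."""
    q = question.lower()
    if "skill" in q:
        instruction = "List the technical skills as bullet points with usage examples."
    elif "achievement" in q:
        instruction = "Summarize the achievements as concise bullet points."
    else:
        instruction = "Give a structured professional overview in bullet points."
    context = "\n".join(f"- {doc}" for doc in documents)
    return f"{instruction}\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt(
    "What are Kushagra's skills?",
    ["Python, FastAPI, and AWS", "React and TailwindCSS"],
)
```

Routing on query type is what lets the same pipeline answer "Who is…", "What skills…", and "What achievements…" questions with differently structured responses.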
To provide a quick, reliable, and structured way for recruiters to:
- Understand Kushagra's professional profile.
- Explore technical expertise and projects.
- Gain insights into recent achievements and research focus.
- Time-Saving: Direct answers without the need to parse lengthy resumes.
- Structured Format: Responses are concise and categorized for better readability.
- Accurate Data: Fetches verified information only.
- "Who is Kushagra Sikka?"
- Learn about Kushagra's current roles, educational background, and recent impact.
- "What are Kushagra's achievements?"
- Understand his teaching impact, technical accomplishments, and academic recognition.
- "Tell me about Kushagra's skills."
- Explore his technical expertise with usage examples.
- Fork the repository.
- Create a feature branch: `git checkout -b feature/AmazingFeature`
- Commit your changes: `git commit -m 'Add AmazingFeature'`
- Push to the branch: `git push origin feature/AmazingFeature`
- Open a Pull Request.
For further queries or contributions, reach out to Kushagra Sikka:
- Email: [email protected]
- Portfolio: kushagrasikka.com
- GitHub: github.com/KushagraSikka
- LinkedIn: linkedin.com/in/kushagrasikka