Welcome to the Techietalks AI repository! This project is a multi-model AI-powered chat assistant built using Streamlit, Pydantic_AI, and multiple AI models like OpenAI, DeepSeek, and Gemini. It allows users to interact with different AI models, switch between them, and enjoy a conversational experience. Additionally, this version includes Retrieval-Augmented Generation (RAG) functionality, enabling the assistant to answer questions based on uploaded PDF documents.
This repository contains all the necessary files to set up and run the chat assistant locally or in a Docker container. Below, you'll find a detailed explanation of the project and instructions to get started.
Here's a breakdown of the files in this repository:
- `.DS_Store`: A macOS-specific file that stores folder attributes (e.g., icon positions). You can ignore this file.
- `.env`: A file to store environment variables, such as API keys. Important: add your `OPENAI_API_KEY`, `DEEPSEEK_API_KEY`, and `GEMINI_API_KEY` here to authenticate with the respective APIs.
- `.gitignore`: Specifies files and folders that Git should ignore (e.g., `.env`, to avoid exposing sensitive information).
- `Dockerfile`: Contains instructions to build a Docker image for the Streamlit application. It sets up a Python environment, installs dependencies, and runs the Streamlit app.
- `app.py`: The main application file. It contains the code for the Streamlit-based chat interface, integrates with multiple AI models (OpenAI, DeepSeek, and Gemini), and includes the RAG functionality for processing PDF documents.
- `docker-compose.yml`: A configuration file to run the application with Docker Compose. It sets up two services: the chatbot (Streamlit app) and the webserver (NGINX, for serving static content).
- `requirements.txt`: Lists all Python dependencies required to run the application.
- `sree.txt`: A placeholder text file (likely for personal notes or testing).
- `data/`: A directory containing:
  - `pdfs/`: Stores uploaded PDF documents.
  - `vector_store/`: Stores vector embeddings of the PDF documents for the RAG functionality.
- `web/static/index.html`: A static HTML file served by the NGINX web server. You can customize this file to add additional content or documentation.

This project uses the following technologies:

- **Streamlit** for the chat interface
- **Pydantic AI** for integrating the OpenAI, DeepSeek, and Gemini models (see the sketch below)
- **ChromaDB** for storing and querying vector embeddings (RAG)
- **Docker** and **Docker Compose** for containerized deployment
- **NGINX** for serving static content
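For illustration, here is a minimal sketch of how Pydantic AI can be pointed at different model backends. This is a hedged example, not the exact code in `app.py`: the model identifier strings and the result attribute can vary across `pydantic_ai` versions, and DeepSeek is omitted because its wiring depends on the provider configuration.

```python
from pydantic_ai import Agent

# Illustrative model identifiers -- check the Pydantic AI docs for the
# exact strings supported by your installed version.
MODELS = {
    "OpenAI": "openai:gpt-4o",
    "Gemini": "google-gla:gemini-1.5-flash",
}

def ask(model_choice: str, prompt: str) -> str:
    # A fresh agent per call keeps the sketch simple; a real app would
    # likely construct agents once and reuse them.
    agent = Agent(MODELS[model_choice])
    result = agent.run_sync(prompt)
    # Older pydantic_ai releases expose the text as result.data instead.
    return result.output
```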
When you type a question into the chat interface, the app sends it to the selected AI model (OpenAI, DeepSeek, or Gemini), processes the response, and displays it in a conversational format. It also maintains a history of the conversation, allowing the AI to provide more context-aware answers. The "New Chat" button resets the conversation, clearing the history and starting fresh.
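As a rough sketch of this loop in Streamlit (names like `ask_model` are hypothetical stand-ins, not the actual functions in `app.py`):

```python
import streamlit as st

# Hypothetical helper: the real app forwards the prompt and history to
# the selected backend; this stub just echoes the question.
def ask_model(model_name: str, history: list, prompt: str) -> str:
    return f"[{model_name}] You asked: {prompt}"

model = st.sidebar.selectbox("Model", ["OpenAI", "DeepSeek", "Gemini"])

# Keep the conversation in session state; "New Chat" clears it.
if "history" not in st.session_state or st.sidebar.button("New Chat"):
    st.session_state.history = []

# Replay the conversation so far.
for msg in st.session_state.history:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask a question"):
    st.session_state.history.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)
    answer = ask_model(model, st.session_state.history, prompt)
    st.session_state.history.append({"role": "assistant", "content": answer})
    st.chat_message("assistant").write(answer)
```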
The RAG functionality allows users to upload PDF documents, which are processed and used to provide context-aware answers. The app uses ChromaDB to store vector embeddings of the documents and retrieve relevant information when answering questions.
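A minimal sketch of that retrieval step with ChromaDB (the collection name and chunking are assumptions; `app.py` may organize this differently):

```python
import chromadb

# Persist embeddings under the repository's vector store directory.
client = chromadb.PersistentClient(path="data/vector_store")
collection = client.get_or_create_collection("pdf_chunks")

# Index: add text chunks extracted from an uploaded PDF. ChromaDB embeds
# the documents with its default embedding function.
chunks = ["First chunk of PDF text...", "Second chunk of PDF text..."]
collection.add(documents=chunks, ids=[f"doc1-{i}" for i in range(len(chunks))])

# Retrieve: fetch the chunks most relevant to the user's question and
# prepend them as context to the prompt sent to the selected model.
results = collection.query(query_texts=["What does the document cover?"], n_results=2)
context = "\n".join(results["documents"][0])
```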
Follow these steps to set up and run the project:
1. **Clone the Repository:**

   ```bash
   git clone https://github.com/schogini/techietalksai.git
   cd techietalksai/0007
   ```
2. **Set Up Environment Variables:**

   Create a `.env` file in the project directory and add your API keys (see the sketch after these steps for one way to load them):

   ```plaintext
   OPENAI_API_KEY=your_openai_api_key_here
   DEEPSEEK_API_KEY=your_deepseek_api_key_here
   GEMINI_API_KEY=your_gemini_api_key_here
   ```
3. **Run the Application:**

   ```bash
   docker-compose up --build
   ```

   - The chat assistant is available at http://localhost:8502.
   - The NGINX web server (static content) is available at http://localhost:8503.

4. **Upload PDFs:**

   Use the PDF uploader in the chat interface to add documents. Uploaded files are stored in `data/pdfs/`, their embeddings in `data/vector_store/`, and the assistant uses them as context when answering questions.
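As referenced in step 2, the app reads these keys from the environment. A minimal sketch, assuming python-dotenv is used to load the `.env` file (when running under Docker Compose, the variables are typically injected from `.env` automatically, so `app.py` may simply read the environment):

```python
import os
from dotenv import load_dotenv

# Read key-value pairs from .env into the process environment.
load_dotenv()

openai_key = os.getenv("OPENAI_API_KEY")
deepseek_key = os.getenv("DEEPSEEK_API_KEY")
gemini_key = os.getenv("GEMINI_API_KEY")

if not all([openai_key, deepseek_key, gemini_key]):
    raise RuntimeError("One or more API keys are missing from the environment.")
```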
This project is open-source and available under the MIT License. Feel free to use, modify, and distribute it as needed.
For AI consultancy, training, and development, contact Schogini Systems Private Limited at https://www.schogini.com.
Sreeprakash Neelakantan
- Website: https://www.schogini.com
- GitHub: https://github.com/schogini/techietalksai.git
Enjoy using the Techietalks AI Multi-Model Chat Assistant with RAG! If you have any questions or feedback, feel free to reach out.