
Llama MCP Streamlit

This project is an interactive AI assistant built with Streamlit, the NVIDIA NIM API (LLaMa 3.3 70B) or Ollama, and the Model Context Protocol (MCP). It provides a conversational interface where the LLM can call external tools in real time via MCP to retrieve data and perform actions on your behalf.

The assistant supports:

  • Custom model selection (NVIDIA NIM / Ollama)
  • API configuration for different backends
  • Tool integration via MCP for real-time data access and actions (see the sketch after this list)
  • A user-friendly chat-based experience with Streamlit
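Under the hood, utils/agent.py and utils/mcp_client.py bridge the LLM and MCP. The snippet below is only an illustration of the idea, not the project's actual code (the helper name and structure are assumptions): each MCP tool listing carries a name, description, and JSON Schema, which map directly onto the function-calling format both backends understand.

# Hypothetical helper, not the repo's API: translate MCP tool definitions
# into the "tools" format accepted by OpenAI-compatible chat completions.
from typing import Any


def mcp_tools_to_openai(mcp_tools: list[Any]) -> list[dict]:
    return [
        {
            "type": "function",
            "function": {
                "name": tool.name,
                "description": tool.description or "",
                "parameters": tool.inputSchema,  # MCP already provides JSON Schema
            },
        }
        for tool in mcp_tools
    ]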

📸 Screenshots

Homepage Screenshot

Tools Screenshot

Chat Screenshot

Chat (What can you do?) Screenshot

📁 Project Structure

llama_mcp_streamlit/
│── ui/
│   ├── sidebar.py       # UI components for Streamlit sidebar
│   ├── chat_ui.py       # Chat interface components
│── utils/
│   ├── agent.py         # Handles interaction with LLM and tools
│   ├── mcp_client.py    # MCP client for connecting to external tools
│   ├── mcp_server.py    # Configuration for MCP server selection
│── config.py            # Configuration settings
│── main.py              # Entry point for the Streamlit app
.env                      # Environment variables
Dockerfile                # Docker configuration
pyproject.toml            # Poetry dependency management

🔧 Environment Variables

Before running the project, configure the .env file with your API endpoint and key. Use whichever pair matches your backend (hosted NVIDIA NIM or a local Ollama server); only one API_ENDPOINT / API_KEY pair should be active at a time:

# Endpoint for the NVIDIA Integrate API
API_ENDPOINT=https://integrate.api.nvidia.com/v1
API_KEY=your_api_key_here

# Endpoint for the Ollama API
API_ENDPOINT=http://localhost:11434/v1/
API_KEY=ollama
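Both backends expose OpenAI-compatible endpoints, which is why a single pair of variables is enough. As a minimal sketch of how they are typically consumed (assuming the standard openai and python-dotenv packages; the project's actual config.py may be wired differently):

import os

from dotenv import load_dotenv  # pip install python-dotenv
from openai import OpenAI       # pip install openai

load_dotenv()  # pulls API_ENDPOINT / API_KEY from .env into the environment

client = OpenAI(
    base_url=os.environ["API_ENDPOINT"],
    api_key=os.environ["API_KEY"],
)

# Model name is an example: NIM expects e.g. "meta/llama-3.3-70b-instruct",
# while Ollama expects a locally pulled tag such as "llama3.3".
response = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "What tools can you use?"}],
)
print(response.choices[0].message.content)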

🚀 Running the Project

Using Poetry

  1. Install dependencies:
    poetry install
    
  2. Run the Streamlit app:
    poetry run streamlit run llama_mcp_streamlit/main.py
    

Using Docker

  1. Build the Docker image:
    docker build -t llama-mcp-assistant .
    
  2. Run the container:
    docker compose up
    

🔄 Changing MCP Server Configuration

To change which MCP server the assistant launches, edit the utils/mcp_server.py file. The filesystem server can be run either via NPX or via Docker:

NPX Server

from mcp import StdioServerParameters

# The directories listed after the package name are the only paths the
# filesystem server may access; replace them with paths on your machine.
server_params = StdioServerParameters(
    command="npx",
    args=[
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop",
        "/path/to/other/allowed/dir"
    ],
    env=None,
)

Docker Server

# Each --mount maps a host path into the container; the server is then
# pointed at /projects, so only the mounted paths are accessible.
server_params = StdioServerParameters(
    command="docker",
    args=[
        "run",
        "-i",
        "--rm",
        "--mount", "type=bind,src=/Users/username/Desktop,dst=/projects/Desktop",
        "--mount", "type=bind,src=/path/to/other/allowed/dir,dst=/projects/other/allowed/dir,ro",
        "--mount", "type=bind,src=/path/to/file.txt,dst=/projects/path/to/file.txt",
        "mcp/filesystem",
        "/projects"
    ],
    env=None,
)

Modify the server_params configuration as needed to fit your setup.
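The configured server_params are consumed by the MCP client in utils/mcp_client.py. That file is not reproduced here; the sketch below shows the standard MCP Python SDK pattern for connecting with these parameters (the SDK calls are real, the import path for server_params is an assumption):

# Sketch of the standard MCP Python SDK flow for the server_params above;
# the repo's utils/mcp_client.py may wrap this differently.
import asyncio

from mcp import ClientSession
from mcp.client.stdio import stdio_client

from llama_mcp_streamlit.utils.mcp_server import server_params  # assumed import path


async def main() -> None:
    # Launch the configured server as a subprocess and talk to it over stdio.
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server exposes (e.g. read_file, list_directory).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invoke one of them; arguments must match the tool's inputSchema.
            result = await session.call_tool(
                "list_directory", arguments={"path": "/projects"}
            )
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())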


📌 Features

  • Real-time tool execution via MCP
  • LLM-powered chat interface
  • Streamlit UI with interactive chat elements (see the sketch after this list)
  • Support for multiple LLM backends (NVIDIA NIM & Ollama)
  • Docker support for easy deployment
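The chat surface itself is plain Streamlit. A minimal sketch of the kind of chat loop it provides is shown below (the widget calls are standard Streamlit APIs; the wiring to the agent is an assumption, since ui/chat_ui.py is not reproduced here):

# Minimal Streamlit chat loop sketch; the repo's ui/chat_ui.py will differ.
import streamlit as st

st.title("Llama MCP Streamlit")

# Keep the conversation across reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the history on every rerun.
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Read new input, call the backend (agent + MCP tools), and render the reply.
if prompt := st.chat_input("Ask the assistant..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    reply = "..."  # placeholder: in the real app this comes from utils/agent.py
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.markdown(reply)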

🛠 Dependencies

  • Python 3.11+
  • Streamlit
  • OpenAI Python client (used to call the OpenAI-compatible NVIDIA NIM / Ollama endpoints)
  • MCP (Model Context Protocol) Python SDK
  • Poetry (for dependency management)
  • Docker (optional, for containerized deployment)

📜 License

This project is licensed under the MIT License.


🤝 Contributing

Feel free to submit pull requests or report issues!


📬 Contact

For any questions, reach out via GitHub Issues.

