
OllamaAssist

A Streamlit interface for Ollama models with full MCP (Model Context Protocol) integration. Works with any model that supports tool calling, such as deepseek-r1-tool-calling:14b or llama3.1:latest.

Key Features

  • Local LLM Execution: Run models locally using Ollama (e.g., deepseek-r1)
  • MCP Integration: Universal tool protocol support
  • Streamlit Interface: Real-time streaming chat interface
  • Dynamic Tool Support: Automatic capability detection

What is MCP (Model Context Protocol)?

MCP is a universal protocol that standardizes how AI models interact with tools and services. It provides:

  • Universal Tool Interface: Common protocol for all AI tools
  • Standardized Messages: Consistent communication format
  • Discoverable Capabilities: Self-describing tools and services
  • Language Agnostic: Works with any programming language
  • Growing Ecosystem: Many tools available

Learn more at the official MCP documentation: https://modelcontextprotocol.io
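
To make "discoverable capabilities" concrete, here is a minimal client sketch built on the official mcp Python SDK. The SDK calls and the filesystem server used here are assumptions for illustration, not part of OllamaAssist's own code:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def discover_tools():
    # Launch an MCP server as a subprocess speaking stdio
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The server describes its own tools - no hardcoding needed
            result = await session.list_tools()
            for tool in result.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(discover_tools())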

Prerequisites

  • Python 3.9+
  • Ollama desktop app installed and running
  • MCP-compatible tools
  • python-dotenv
  • An Ollama-compatible model with tool-calling support

Installation

  1. Prerequisites:

    # Install Ollama desktop app from https://ollama.ai/download
    
    # Make sure Ollama is running
    # Then pull the recommended model (or choose another tool-calling capable model)
    ollama pull MFDoom/deepseek-r1-tool-calling:14b
    
    # Alternative models that support tool calling:
    # ollama pull llama3.1:latest
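    
    # Verify the model is available locally
    ollama list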
    
  2. Setup:

    git clone https://github.com/madtank/OllamaAssist.git
    cd OllamaAssist
    python -m venv venv
    source venv/bin/activate
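    # On Windows, activate with: venv\Scripts\activate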
    pip install -r requirements.txt
    

Environment Configuration

OllamaAssist uses environment variables for configuration. Create a .env file:

# Brave Search Configuration
BRAVE_API_KEY=your_api_key_here

# Optional: Override default commands
#BRAVE_COMMAND=docker
#BRAVE_ARGS=run -i --rm -e BRAVE_API_KEY mcp/brave-search

# Filesystem Configuration
#FILESYSTEM_PATHS=/path1:/path2:/path3

Variables can be:

  • Set in a .env file
  • Commented out to fall back to defaults
  • Overridden by environment variables at runtime
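
As a rough sketch of how these settings are typically read (python-dotenv is already listed in the prerequisites; OllamaAssist's actual loading code may differ):

import os

from dotenv import load_dotenv

# Reads .env into the process environment; by default it does NOT
# override variables that are already set, so real environment
# variables take precedence over .env entries.
load_dotenv()

brave_api_key = os.getenv("BRAVE_API_KEY")
# The fallback command is an assumption for illustration only
brave_command = os.getenv("BRAVE_COMMAND", "npx")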

MCP Configuration

OllamaAssist uses MCP to provide powerful capabilities through standardized tools. Configure available tools in mcp_config.json:

{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "your-api-key-here"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"]
    }
  }
}
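
To show how such a file maps onto actual server launches, here is a hedged sketch using the mcp Python SDK's StdioServerParameters; OllamaAssist's own loader may be structured differently:

import json
import os

from mcp import StdioServerParameters

with open("mcp_config.json") as f:
    config = json.load(f)

# Build launch parameters for every configured server
servers = {
    name: StdioServerParameters(
        command=entry["command"],
        args=entry.get("args", []),
        # Merge per-server env (e.g., API keys) into the inherited environment
        env={**os.environ, **entry.get("env", {})},
    )
    for name, entry in config["mcpServers"].items()
}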

Available MCP Servers

The project supports various MCP servers:

Core Functionality

  • brave-search - Web and local search capabilities
  • filesystem - Secure file operations
  • chromadb - Vector database operations
  • postgres - SQL database integration
  • mcp-memory - Long-term context persistence
  • sqlite - Lightweight database operations

AI & Development

  • huggingface - Model and dataset access
  • langchain - AI workflow integration
  • git - Repository operations
  • jupyter - Notebook integration

Check out Awesome MCP Servers for more.

Adding MCP Servers

  1. Each server entry needs:

    • command: The MCP tool executable
    • args: Optional command line arguments
    • env: Environment variables (like API keys)
  2. Common MCP servers:

    • brave-search: Web search (requires Brave API key)
    • filesystem: Local file operations
    • sequential-thinking: Self-reflection capabilities (see the example entry below)
    • Add your own MCP-compatible tools!
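
For instance, enabling sequential-thinking is just another entry in mcp_config.json (the package name below follows the @modelcontextprotocol naming convention used above; verify it before relying on it):

{
  "mcpServers": {
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    }
  }
}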

Configuring API Keys

For services requiring authentication:

  1. Get your API key (e.g., Brave Search API)
  2. Add it to the appropriate server's env section
  3. Never commit API keys to version control (e.g., add .env to your .gitignore)

Using MCP Tools

Example tool implementation:

from typing import Any

async def brave(action: str, query: str = "", count: int = 5) -> Any:
    """Brave Search wrapper; `action` is "web" or "local"."""
    server_name = "brave-search"
    # Forward the call to the brave-search MCP server via the
    # project's `mcp` dispatch helper
    return await mcp(
        server=server_name,
        tool=f"brave_{action}_search",  # e.g. brave_web_search
        arguments={"query": query, "count": count}
    )
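
Continuing the example, a call like the following (hypothetical values; brave_web_search is the tool name the Brave MCP server exposes for action="web") would run a web search:

import asyncio

async def main():
    # Dispatches to the brave_web_search tool on the brave-search server
    results = await brave(action="web", query="model context protocol", count=3)
    print(results)

asyncio.run(main())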

Adding Custom MCP Tools

  1. Create an MCP-compatible tool
  2. Add it to mcp_config.json (see the entry sketch below)
  3. The tool will be automatically available to the chatbot
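
A registration entry for a locally written Python server might look like this (my-tool and your_server.py are placeholders):

{
  "mcpServers": {
    "my-tool": {
      "command": "python",
      "args": ["path/to/your_server.py"]
    }
  }
}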

Running the Application

  1. Ensure the Ollama desktop app is running
  2. Launch OllamaAssist:
    streamlit run streamlit_app.py
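
The app opens at http://localhost:8501 by default (Streamlit's standard port).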
    

Testing

Run tests:

python -m pytest tests/test_tools.py -v

Development

Creating MCP Tools

Want to create your own MCP tool? The official MCP documentation at https://modelcontextprotocol.io covers server development and the available SDKs.
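
As a minimal starting point, the official Python SDK's FastMCP helper can define a working server in a few lines (a sketch assuming the mcp package is installed; the tool name and logic are placeholders):

from mcp.server.fastmcp import FastMCP

# The name clients will see when they connect
mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default

Saved as your_server.py, this is exactly the kind of file the Inspector commands below operate on.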

Testing MCP Tools

Use the MCP Inspector to test your tools:

mcp dev your_server.py

Or install in Claude Desktop:

mcp install your_server.py

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Test your changes
  4. Submit a pull request

Roadmap

  • Additional MCP server integrations
  • Enhanced model capability detection
  • Advanced tool chaining
  • UI improvements for tool interactions

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • MCP for the universal tool protocol
  • Ollama for local LLM execution
  • Streamlit for the web interface