Overview
🧠 Ollama Client – Chat with Local LLMs Inside Your Browser
Ollama Client is a lightweight, privacy-first Chrome extension that brings the power of locally hosted large language models (LLMs) directly to your browser. No cloud dependencies. No API keys. No data sent externally.
Just fast, secure, offline-first AI chat powered by open-source models like LLaMA 3, Mistral, Gemma, CodeLLaMA, and more — all running on your own machine using the Ollama backend.
✨ Works on all Chromium-based browsers (Chrome, Edge, Brave) and Firefox (with additional setup). 100% open-source.
🚀 Key Features
🔌 Local Ollama Integration – Connect to your own Ollama server, no API keys required (see the request sketch after this feature list).
💬 In-Browser Chat UI – Lightweight, minimal chat interface.
⚙️ Custom Settings Panel – Configure base URL, default model, themes, excluded URLs, and prompt templates.
🔄 Model Switcher – Switch between any installed Ollama models on the fly.
🧭 Model Search & Add – Search, pull, and add new Ollama models and track download progress directly from the options page. (Known issue: pressing Stop during model pull may cause some glitches.)
🎛️ Model Parameter Tuning – Adjust temperature, top_k, top_p, repeat penalty, and stop sequences.
✂️ Content Parsing – Automatically extract and summarize page content with Mozilla Readability (parsing sketch below).
📜 Transcript Parsing – Supports transcripts from YouTube, Udemy, Coursera.
🔊 Text-to-Speech – Click the “Speak” button to have the browser read chat responses or page summaries aloud using the Web Speech API (speech sketch below).
📋 Regenerate / Copy Response – Easily rerun AI responses or copy results to clipboard.
🗂️ Multi-Chat Sessions – Manage multiple chat sessions locally with save, load, and delete.
🛡️ Privacy-First – All data processing and storage stays local on your machine.
🧯 Declarative Net Request (DNR) – Handles CORS automatically since v0.1.3, with no manual configuration needed (rule sketch below).
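For orientation, everything above rides on Ollama's plain HTTP API. The TypeScript sketch below, which assumes Ollama's default port 11434 and a pulled `llama3:8b` model, shows a single non-streaming request to the `/api/chat` endpoint with the sampling options the tuning panel exposes; the extension's internal code may be organized differently.

```ts
// Minimal sketch: one non-streaming chat request to a local Ollama server.
// Assumes Ollama is listening on its default port (11434) and llama3:8b is pulled.
async function chatOnce(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3:8b",
      messages: [{ role: "user", content: prompt }],
      stream: false, // one JSON response instead of a token stream
      options: {
        temperature: 0.7,    // sampling randomness
        top_k: 40,           // consider only the 40 most likely tokens
        top_p: 0.9,          // nucleus-sampling cutoff
        repeat_penalty: 1.1, // discourage verbatim repetition
        stop: ["User:"],     // example stop sequence
      },
    }),
  });
  const data = await res.json();
  return data.message.content;
}

chatOnce("Summarize this page in two sentences.").then(console.log);
```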
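Content parsing follows Mozilla Readability's documented usage: hand the library a clone of the DOM so its mutations never touch the live page. A minimal sketch:

```ts
import { Readability } from "@mozilla/readability";

// Minimal sketch: extract readable article text from the current page.
// Readability mutates the DOM it is given, so parse a deep clone.
const documentClone = document.cloneNode(true) as Document;
const article = new Readability(documentClone).parse();

if (article) {
  console.log(article.title);       // extracted title
  console.log(article.textContent); // plain-text body, ready to hand to the model
}
```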
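Text-to-speech relies on the standard Web Speech API that ships in Chromium and Firefox, so no extra permissions or services are involved. A minimal sketch of the underlying call:

```ts
// Minimal sketch: read a response aloud with the Web Speech API.
function speak(text: string): void {
  window.speechSynthesis.cancel(); // stop anything already speaking
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 1.0;  // normal speed
  utterance.pitch = 1.0; // default pitch
  window.speechSynthesis.speak(utterance);
}

speak("Here is the page summary you asked for.");
```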
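And the CORS handling boils down to a declarativeNetRequest rule that rewrites response headers from the local server. The sketch below is illustrative only: the rule ID, URL filter, and wildcard origin are examples rather than the extension's actual rule, and the action/operation values are written as the plain string literals used in MV3 rule JSON.

```ts
// Illustrative sketch: a dynamic DNR rule that sets a permissive CORS header
// on responses from the local Ollama server. ID, filter, and origin are examples.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1], // drop any previous version of this rule first
  addRules: [
    {
      id: 1,
      priority: 1,
      action: {
        type: "modifyHeaders",
        responseHeaders: [
          { header: "Access-Control-Allow-Origin", operation: "set", value: "*" },
        ],
      },
      condition: {
        urlFilter: "localhost:11434", // only touch traffic to the local server
        resourceTypes: ["xmlhttprequest"],
      },
    },
  ],
});
```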
🧭 Tab Access (Optional)
Want your LLM to understand the content of a page you're viewing? Enable Tab Access in the settings to fetch page content or transcripts for better contextual answers.
✔️ Fully opt-in
✔️ You choose which tabs to share
✔️ Customizable exclude list with regex support (see the example after this list)
✔️ No tab data ever leaves your device
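In practice, each exclude entry is a regular expression tested against a tab's URL, and any match keeps that tab out of the conversation. A hypothetical illustration (the patterns and helper name here are examples, not the extension's actual code):

```ts
// Hypothetical illustration of a regex exclude list; patterns are examples only.
const excludePatterns: string[] = [
  "^https://mail\\.google\\.com/", // never share Gmail
  "bank|checkout|payment",         // skip likely-sensitive pages
];

function isExcluded(url: string): boolean {
  return excludePatterns.some((pattern) => new RegExp(pattern).test(url));
}

console.log(isExcluded("https://mail.google.com/mail/u/0/")); // true
console.log(isExcluded("https://example.com/blog"));          // false
```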
⚙️ Installation & Setup
1️⃣ Install Ollama Client from the Chrome Web Store
2️⃣ Install Ollama on your machine from https://ollama.com and run `ollama serve`
3️⃣ Pull your favorite models (e.g., `ollama pull llama3:8b` or `ollama pull gemma:2b`) and start chatting!
Advanced users can customize themes, model parameters, prompt templates, and excluded URLs from the Options page.
🎯 Who Should Use Ollama Client?
👩‍💻 Developers building with or debugging LLMs
📚 Researchers who want local, private LLM interfaces
🎓 Students using AI as study aids on local hardware
🔐 Privacy advocates avoiding cloud AI and APIs
🤖 AI tinkerers and open-source model enthusiasts
⚡ Performance & Hardware Recommendations
💻 8 GB RAM (no GPU): gemma:2b, mistral:7b-q4
💻 16 GB RAM (no GPU): gemma:3b-q4, gemma:2b
🚀 16 GB+ with GPU (6 GB VRAM): llama3:8b-q4, gemma:3b
💥 32 GB+ or high-end GPU: llama3:8b, codellama:13b
🔥 RTX 3090+, Apple M3 Max: llama3:70b, mixtral
Note: Ollama Client is a frontend interface only. All LLM generation happens via your local Ollama install; response speed and output quality depend on your hardware and the model you run.
🔗 Useful Links
🌐 Chrome Web Store: https://chromewebstore.google.com/detail/ollama-client/bfaoaaogfcgomkjfbmfepbiijmciinjl
📘 Setup Guide: https://shishir435.github.io/ollama-client/ollama-setup-guide
💻 Landing Page: https://shishir435.github.io/ollama-client/ollama-client
🧑‍💻 GitHub: https://github.com/Shishir435/ollama-client
🧳 Portfolio: https://www.shishirchaurasiya.in
🚀 Start chatting in seconds — private, fast, and fully local AI conversations on your own machine.
Built for developers, researchers, and anyone who values speed, privacy, and full control.