A not-yet-public Chromium browser extension that generates and configures realistic emails for Gmail and Proton Mail, including decoy recipients, using Ollama (local AI models) or randomized templates when Ollama is unavailable. The extension also generates random encrypted attachments.
This extension does not collect or share any data with IA Defensa or third parties. The source code is available for auditing. The extension is free for personal use.
Download and install Ollama on your machine.
Any model works. Pull one, for example:
```shell
ollama pull llama3.2:latest
```

Essential: Browser extensions require CORS, so you must start Ollama from the command line.
To use the extension, run this command:
```shell
env OLLAMA_ORIGINS="*" nohup ollama serve < /dev/null > /tmp/ollama.log 2>&1 & disown
```

Verify it’s working:
```shell
curl -s http://localhost:11434/api/tags | head -1
```

Should return `{"models":[…` (JSON data, not an error).
Note: Don’t start the Ollama app at login—it doesn’t support CORS and will conflict with the CLI version.
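Beyond checking that the API answers, you can confirm that CORS specifically took effect by sending an Origin header and looking for the allow header in the reply. This is a sketch; the extension origin below is a placeholder, not a real extension ID:

```shell
# Probe Ollama with a cross-origin request; the Origin value is a placeholder.
# If OLLAMA_ORIGINS="*" took effect, the response includes an
# Access-Control-Allow-Origin header; if grep prints nothing, CORS
# likely isn't enabled and the extension will fall back to basic mode.
curl -s -i -H "Origin: chrome-extension://placeholder" \
  http://localhost:11434/api/tags | grep -i "access-control-allow-origin"
```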
Open chrome://extensions/ (or the equivalent in other Chromium browsers, such as edge://extensions/ or vivaldi://extensions/).

Afterwards, configure the extension according to your preferences and consider pinning it to your toolbar for easy access.
Tip: Enable the extension in private mode (“Allow in Incognito”). The extension does not share any information with IA Defensa or third parties.
If Ollama is available, content is AI-generated (10–20 seconds). If not, the extension falls back to template-based drafts.
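The AI-versus-template decision boils down to whether anything answers on Ollama's default port. The sketch below reimplements that check in shell (it is not the extension's actual code); it uses bash's `/dev/tcp` pseudo-device so no extra tools are required:

```shell
#!/usr/bin/env bash
# Sketch of the AI-vs-template fallback decision as a TCP probe.
# 11434 is Ollama's default port; the messages are illustrative only.
ollama_mode() {
  local host=${1:-localhost} port=${2:-11434}
  # The redirect succeeds only if a TCP connection to host:port can be
  # opened; /dev/tcp is a bash builtin pseudo-device, not a real file.
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "AI mode: Ollama reachable"
  else
    echo "basic mode: falling back to template drafts"
  fi
}

ollama_mode "$@"
```

Running this with Ollama stopped prints the fallback message; with `ollama serve` up it reports AI mode.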
When Ollama is available, topic and custom prompt fields appear in the popup:
The extension popup shows real-time Ollama status:
In both “not available” states, the extension still works using template-based drafts.
Issue: Extension shows “❌ Ollama not available” after restarting your computer.
Why: Either Ollama isn’t running, or it started without CORS support (which blocks the extension).
Do this after every restart:
Kill any running Ollama processes:
```shell
pkill -f ollama
```

Start Ollama with CORS support:
```shell
env OLLAMA_ORIGINS="*" nohup ollama serve < /dev/null > /tmp/ollama.log 2>&1 & disown
```

Test it’s working:
```shell
curl -s http://localhost:11434/api/tags | head -1
```

Should return `{"models":[…` (not an error).
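The kill/start/verify sequence above can be collected into one small script. This is a sketch, not part of the extension; the log path and endpoint match the commands in this README, and `pkill -x` is used instead of `pkill -f` so the script cannot match its own command line:

```shell
#!/usr/bin/env bash
# Sketch: restart Ollama with CORS enabled, then verify the API answers.
set -u

# 1. Stop any running Ollama processes (exact-name match so this script
#    itself is never caught; the capitalized form covers the macOS app).
pkill -x ollama 2>/dev/null || true
pkill -x Ollama 2>/dev/null || true
sleep 1

# 2. Start the CLI server with CORS enabled for browser extensions.
env OLLAMA_ORIGINS="*" nohup ollama serve < /dev/null > /tmp/ollama.log 2>&1 & disown

# 3. Verify the API answers before declaring success.
sleep 2
if curl -s http://localhost:11434/api/tags | head -1 | grep -q '"models"'; then
  echo "Ollama is up with CORS enabled"
else
  echo "Ollama did not start; check /tmp/ollama.log" >&2
  exit 1
fi
```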
Check Ollama is running:
```shell
tail -f /tmp/ollama.log
```

Test the extension:
chrome://extensions/ (or equivalent)

| Problem | Solution |
|---|---|
| “Port occupied, basic mode” | The Ollama app is running instead of the CLI; kill it and start the CLI: `pkill -f Ollama.app && env OLLAMA_ORIGINS="*" nohup ollama serve < /dev/null > /tmp/ollama.log 2>&1 & disown` |
| “Basic mode” (no port occupied) | Ollama isn’t running; use step 2 above |
| “Address already in use” | Multiple Ollama processes: `lsof -ti:11434 \| xargs kill && env OLLAMA_ORIGINS="*" ollama serve` |
| Extension icon grayed out | Reload the extension in Chrome Extensions (chrome://extensions/ or equivalent) |
| No models available | Download a model: `ollama pull llama3.2:latest` |
| Only cloud models installed | Cloud models require authentication and can’t be used locally; pull a local model: `ollama pull llama3.2:latest` |
| Slow generation (>30s) | Normal for larger models; try a smaller/faster model if needed |
| Empty compose window | Check browser console (F12) for error messages, ensure you’re on Gmail/Proton Mail |
If problems persist:
- Reload the extension at chrome://extensions/ (or equivalent).
- Check your Ollama version with `ollama --version` (requires v0.1.20+).

The Ollama app and the CLI cannot both run simultaneously, because they use the same port (11434).
If you accidentally start the app, the extension detects it and switches to basic mode. Kill the app and start the CLI with CORS to restore AI generation.
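To see which process currently holds the port, and therefore whether the app or the CLI is running, one option is `lsof`; the exact output format varies by OS:

```shell
# Show the process listening on Ollama's default port 11434.
# On macOS the GUI app typically appears as "Ollama" and the CLI as "ollama";
# an empty result means nothing is bound to the port at all.
lsof -nP -iTCP:11434 -sTCP:LISTEN
```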