IA Defensa

Random Email Generator Chromium (Beta)

A not-yet-public Chromium browser extension that generates and configures realistic emails for Gmail and Proton Mail, including decoy recipients, using Ollama (local AI models) or, when Ollama is unavailable, randomized templates. The extension also generates random encrypted attachments.

Ask about availability

This extension does not collect or share any data with IA Defensa or third parties. The source code is available for auditing. The extension is free for personal use.

Features

Setup

1. Install Ollama

Download and install Ollama on your machine.

2. Download a Model

Any model works. Pull one, for example:

ollama pull llama3.2:latest
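To confirm the pull succeeded, you can list installed models. The wrapper below is a convenience sketch of our own (model_installed is not part of the extension); "llama3.2" matches the example pull above:

```shell
# Check whether a given model appears in Ollama's local model list.
# model_installed is a hypothetical helper name.
model_installed() {
  ollama list 2>/dev/null | grep -q "$1"
}

if model_installed "llama3.2"; then
  echo "model ready"
else
  echo "model missing: run 'ollama pull llama3.2:latest'"
fi
```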

3. Start Ollama With CORS Support

Essential: Browser extensions make cross-origin requests, which Ollama only accepts when the OLLAMA_ORIGINS environment variable is set, so you must start Ollama from the command line rather than via the desktop app.

To use the extension, run this command:

env OLLAMA_ORIGINS="*" nohup ollama serve < /dev/null > /tmp/ollama.log 2>&1 & disown

Verify it’s working:

curl -s http://localhost:11434/api/tags | head -1

Should return {"models":[… (JSON data, not an error)

Note: Don’t set the Ollama app to start at login; it doesn’t enable CORS and will conflict with the CLI server.
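You can also confirm from the command line that CORS is actually on. The sketch below (cors_enabled is our own helper name, and the Origin value is an arbitrary placeholder) looks for an Access-Control-Allow-Origin header in the response, which Ollama should send when started with OLLAMA_ORIGINS="*":

```shell
# Send a request with an Origin header and check for the CORS response header.
# cors_enabled is a hypothetical helper; the extension ID is a placeholder.
cors_enabled() {
  curl -s -D - -o /dev/null \
       -H "Origin: chrome-extension://placeholder" \
       http://localhost:11434/api/tags 2>/dev/null \
    | grep -qi "access-control-allow-origin"
}

if cors_enabled; then
  echo "CORS enabled: the extension can reach Ollama"
else
  echo "CORS missing: restart Ollama with OLLAMA_ORIGINS=\"*\""
fi
```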

4. Install Extension

  1. Download or clone the extension repository
  2. Open Chrome and navigate to chrome://extensions/ (or equivalent in other Chromium browsers, like edge://extensions/ or vivaldi://extensions/)
  3. Enable developer mode (top right toggle)
  4. Click “Load unpacked” and select the extension directory

Afterwards, configure the extension according to your preferences and consider pinning it to your toolbar for easy access.

Tip: Enable the extension in private mode (“Allow in Incognito”). The extension does not share any information with IA Defensa or third parties.

5. Test Setup

  1. Click the extension icon
  2. You should see: “🦙 Ollama available” (green background)
  3. If you see “❌ Ollama not available”, check troubleshooting below
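The popup's check can be approximated from the command line. This sketch (ollama_status is our own name, not part of the extension) distinguishes the states described under Troubleshooting: not running, running without CORS, and fully available:

```shell
# Hypothetical helper mirroring what the extension popup reports.
ollama_status() {
  if ! pgrep -f "ollama serve" >/dev/null 2>&1; then
    echo "not running"
  elif curl -s -D - -o /dev/null -H "Origin: chrome-extension://x" \
         http://localhost:11434/api/tags 2>/dev/null \
       | grep -qi "access-control-allow-origin"; then
    echo "available"
  else
    echo "running without CORS"
  fi
}

ollama_status
```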

Usage

Basic Usage

  1. Open Gmail or Proton Mail in Chrome
  2. Click the “Random Email Generator” extension icon in the toolbar
  3. Click “Prepare draft”
  4. A compose window opens with generated subject and content

If Ollama is available, content is AI-generated (10–20 seconds). If not, the extension falls back to template-based drafts.

Custom Content (AI Mode)

When Ollama is available, topic and custom prompt fields appear in the popup:

  1. Enter a specific topic (e.g., “meeting follow-up,” “vacation request”)
  2. Optionally add a custom prompt for specific requirements
  3. Uncheck “Include attachment” if you don’t want one
  4. Click “Prepare draft”

Tips for Best Results

Troubleshooting

Quick Status Check

The extension pop-up shows real-time Ollama status:

  - “🦙 Ollama available” (green): AI generation is active
  - “❌ Ollama not available” with “Basic mode”: Ollama isn’t running
  - “❌ Ollama not available” with “Port occupied, basic mode”: the Ollama app holds the port without CORS

In both “not available” states, the extension still works using template-based drafts.

After Machine Restart

Issue: Extension shows “❌ Ollama not available” after restarting your computer.

Why: Either Ollama isn’t running, or it started without CORS support (which blocks the extension).

Do this after every restart:

  1. Kill any running Ollama processes:

    pkill -f ollama
  2. Start Ollama with CORS support:

    env OLLAMA_ORIGINS="*" nohup ollama serve < /dev/null > /tmp/ollama.log 2>&1 & disown
  3. Test it’s working:

    curl -s http://localhost:11434/api/tags | head -1

    Should return {"models":[… (not an error)

  4. Inspect the server log if anything looks wrong (press Ctrl-C to stop following):

    tail -f /tmp/ollama.log
  5. Test the extension: click its icon; it should now show “🦙 Ollama available” (green background)
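If you restart often, the routine above can be wrapped in a shell function and added to your ~/.bashrc or ~/.zshrc. This is a sketch using the same commands as steps 1–2; restart_ollama is our own name:

```shell
# Wraps the kill-then-serve restart routine; restart_ollama is hypothetical.
restart_ollama() {
  pkill -f "ollama serve" 2>/dev/null   # stop any running CLI server
  sleep 1
  env OLLAMA_ORIGINS="*" nohup ollama serve < /dev/null > /tmp/ollama.log 2>&1 & disown
  echo "Ollama restarted with CORS; log at /tmp/ollama.log"
}
```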

Common Problems and Solutions

  - “Port occupied, basic mode”: the Ollama app is running instead of the CLI. Kill it and start the CLI: pkill -f Ollama.app && env OLLAMA_ORIGINS="*" nohup ollama serve < /dev/null > /tmp/ollama.log 2>&1 & disown
  - “Basic mode” (no port occupied): Ollama isn’t running; use step 2 above.
  - “Address already in use”: multiple Ollama processes. Run: lsof -ti:11434 | xargs kill && env OLLAMA_ORIGINS="*" ollama serve
  - Extension icon grayed out: reload the extension in chrome://extensions/ (or equivalent).
  - No models available: download a model: ollama pull llama3.2:latest
  - Only cloud models installed: cloud models require authentication and can’t be used locally; pull a local model: ollama pull llama3.2:latest
  - Slow generation (>30s): normal for larger models; try a smaller/faster model if needed.
  - Empty compose window: check the browser console (F12) for error messages and make sure you’re on Gmail or Proton Mail.

Getting Help

If problems persist:

  1. Check browser console (F12 → Console) for detailed error messages
  2. Verify extension has localhost permissions in chrome://extensions/ (or equivalent)
  3. Test with a fresh browser profile to rule out conflicts
  4. Check Ollama version: ollama --version (requires v0.1.20+)

App vs. CLI

The Ollama app and the CLI cannot run at the same time because both bind the same port (11434).

If you accidentally start the app, the extension detects it and switches to basic mode. Kill the app and start the CLI with CORS to restore AI generation.
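To see which of the two currently owns the port, you can ask the OS directly. A sketch (port_owner is our own helper name; lsof flags as commonly available on macOS/Linux):

```shell
# Print the PID of whatever is listening on Ollama's port, then its command name.
# port_owner is a hypothetical helper.
port_owner() {
  lsof -nP -iTCP:11434 -sTCP:LISTEN -t 2>/dev/null | head -1
}

pid=$(port_owner)
if [ -n "$pid" ]; then
  ps -p "$pid" -o comm=   # the CLI and the app report different command names
else
  echo "nothing is listening on port 11434"
fi
```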