
AI Agent

An embedded AI assistant that understands your schemas, enrichments, and governance rules. Ask questions in natural language, run analysis commands, and perform write operations — all from a terminal-style interface with real-time streaming.

How it works

The AI Agent connects to Ollama (local or cloud) and has access to live data from all your connected registries. When you ask a question or run a command, event7 fetches real-time context (schemas, enrichments, AsyncAPI specs, governance scores) and injects it into the system prompt.
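The context-injection step above can be sketched as follows. This is an illustrative simplification, not event7's actual code; the function name, prompt wording, and context keys are hypothetical.

```python
# Hypothetical sketch: fetch live registry context, then inject it into the
# system prompt before the request is sent to the Ollama-compatible endpoint.

def build_system_prompt(command: str, context: dict) -> str:
    """Prepend fetched registry context to the system prompt."""
    context_lines = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "You are event7's AI Agent. Answer using only the live context below.\n"
        f"Command: {command}\n"
        "Live registry context:\n"
        f"{context_lines}"
    )

# Example: the kind of context a /health command might fetch
context = {"registries": 2, "subjects": 147, "avg_response_ms": 38}
prompt = build_system_prompt("/health", context)
print(prompt)
```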

Responses are streamed in real time via Server-Sent Events (SSE). The agent can also perform write operations (enrich schemas, generate AsyncAPI specs, delete subjects) with explicit user confirmation.
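On the client side, an SSE stream is just a series of `data:` lines separated by blank lines, which the UI concatenates into the full response. A minimal sketch (the `data:` framing follows the SSE spec; the token payloads are made up):

```python
# Assemble an SSE-streamed response: each event carries a "data: <token>"
# line, and events are separated by blank lines.

def parse_sse(stream_text: str) -> str:
    """Concatenate the data fields of each SSE event into the full response."""
    chunks = []
    for line in stream_text.splitlines():
        if line.startswith("data: "):
            chunks.append(line[len("data: "):])
    return "".join(chunks)

raw = "data: All 2 regis\n\ndata: tries are healthy.\n\n"
print(parse_sse(raw))  # → All 2 registries are healthy.
```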

Commands

Type a command to fetch live context from your registries. The agent analyzes the data and responds with insights.

/health: Health check all connected registries — response times, subject counts, connectivity status
/schemas: Schema overview — total count, format distribution (Avro/JSON/Protobuf), version statistics
/drift: Detect breaking changes in the last 2 schema versions across all subjects
/catalog: Enrichment coverage analysis — missing owners, descriptions, tags, classification gaps
/refs: Reference graph analysis — orphan schemas, most depended-on subjects, dependency chains
/asyncapi: AsyncAPI spec status — documented vs. undocumented subjects, coverage percentage
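To make the /drift check concrete, here is a naive sketch of comparing the last two versions of a subject. Real Avro compatibility rules are richer (defaults, type promotions, aliases); this simplification flags any removed or type-changed field as a potential break.

```python
# Naive breaking-change detector between two schema versions, each reduced
# to a {field_name: type} map. Not event7's actual algorithm.

def find_breaking_changes(prev_fields: dict, latest_fields: dict) -> list:
    """List fields that were removed or whose type changed in the latest version."""
    breaks = []
    for name, ftype in prev_fields.items():
        if name not in latest_fields:
            breaks.append(f"field removed: {name}")
        elif latest_fields[name] != ftype:
            breaks.append(f"type changed: {name} ({ftype} -> {latest_fields[name]})")
    return breaks

prev = {"id": "string", "amount": "double", "currency": "string"}
latest = {"id": "string", "amount": "long"}
print(find_breaking_changes(prev, latest))
# → ['type changed: amount (double -> long)', 'field removed: currency']
```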

Write actions

The agent can perform write operations when you use action keywords like “set owner”, “add tag”, “generate asyncapi”, or “delete subject”. Write actions always require explicit confirmation — the agent shows an action card with parameters and you must click Confirm.

Enrich Schema

Set owner, description, tags, or classification on a subject

Generate AsyncAPI

Generate or regenerate an AsyncAPI spec from schema + enrichments

Delete Subject

Delete a schema subject from the registry (destructive, requires confirmation)
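The confirm-before-write pattern described above can be sketched like this. The class and method names are hypothetical; the point is that the agent only *proposes* an action, and nothing touches the registry until the user confirms the card.

```python
# Sketch of a confirmation-gated write action: proposing never writes;
# only an explicit confirm() triggers execution.

class PendingAction:
    def __init__(self, name: str, params: dict):
        self.name = name
        self.params = params
        self.executed = False

    def confirm(self) -> str:
        # In event7 this is where the write (enrich/generate/delete) would run.
        self.executed = True
        return f"executed {self.name} with {self.params}"

    def cancel(self) -> str:
        return f"cancelled {self.name}"

action = PendingAction("enrich_schema", {"subject": "orders-value", "owner": "payments-team"})
assert not action.executed  # proposing the action card does not write anything
print(action.confirm())
```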

Interface

Terminal style

Dark terminal UI with timestamps, role labels (YOU, EVENT7), and a command prompt

SSE streaming

Responses appear character by character in real time, with a blinking cursor

Quick commands

Command buttons at the top for one-click access to /health, /schemas, /drift

Action cards

Write operations show an orange confirmation card — click Confirm or Cancel

Configuration

The AI Agent requires an Ollama-compatible endpoint. Set these environment variables on the backend:

# Local Ollama
OLLAMA_HOST=http://ollama:11434
OLLAMA_MODEL=llama3.1:8b

# Cloud (Ollama.com)
OLLAMA_HOST=https://ollama.com
OLLAMA_MODEL=kimi-k2.5:cloud
OLLAMA_API_KEY=sk-your-key

If OLLAMA_HOST is empty, the AI Agent is disabled and the page shows a configuration message.

Status bar

The top bar shows the agent state (ONLINE or THINKING), the detected provider (ollama, claude, openai, gemini), and the model name. The provider is auto-detected from the OLLAMA_HOST hostname.
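Hostname-based auto-detection could look roughly like the sketch below. This is an illustrative guess; the exact hostname rules event7 applies are not documented here.

```python
# Hypothetical provider detection from the OLLAMA_HOST hostname.
from urllib.parse import urlparse

def detect_provider(host_url: str) -> str:
    """Map an endpoint hostname to a provider label shown in the status bar."""
    hostname = urlparse(host_url).hostname or ""
    if "anthropic" in hostname:
        return "claude"
    if "openai" in hostname:
        return "openai"
    if "googleapis" in hostname or "gemini" in hostname:
        return "gemini"
    return "ollama"  # default: local or Ollama.com endpoints

print(detect_provider("http://ollama:11434"))        # → ollama
print(detect_provider("https://api.openai.com/v1"))  # → openai
```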