Installation Guide#
This guide covers all installation methods for The Edge Agent (tea).
Quick Install#
# Linux/macOS - Download and install Rust binary
curl -L https://github.com/fabceolin/the_edge_agent/releases/latest/download/tea-rust-linux-x86_64 -o tea
chmod +x tea
sudo mv tea /usr/local/bin/
# Verify installation
tea --version
Pre-built Binaries#
Pre-built binaries are available for all major platforms. No Python or Rust installation required!
Latest Release: GitHub Releases
Platform Matrix#
| Platform | Python CLI | Rust CLI |
|---|---|---|
| Linux x86_64 | | |
| Linux ARM64 | | |
| macOS Intel | | |
| macOS Apple Silicon | | |
| Windows | | |
Python Binary Variants#
The Python implementation offers several binary variants optimized for different use cases:
| Binary | Prolog | Deps | Size (est.) | Description |
|---|---|---|---|---|
| | No | Core | ~15MB | Core features only (networkx, pyyaml, jinja2) |
| | No | Full | ~80MB | All optional deps (openai, numpy, chromadb, pandas, etc.) |
| | Yes* | Core | ~25MB | Core + janus-swi (Prolog support) |
| | Yes* | Full | ~90MB | All deps + janus-swi (Prolog support) |
| | Yes | Full | ~150MB | Self-contained with all libs + SWI-Prolog runtime |
| | No | Core | ~15MB | Core features only |
| | No | Full | ~80MB | All optional deps |
| | Yes* | Core | ~25MB | Core + janus-swi |
| | Yes* | Full | ~90MB | All deps + janus-swi |
| | Yes | Full | ~150MB | Self-contained with all libs + SWI-Prolog runtime |
*Requires SWI-Prolog 9.1+ installed on the system (apt install swi-prolog-nox)
Full deps include: openai, numpy, chromadb, requests, RestrictedPython, pycozo, pandas, s3fs, gcsfs, adlfs, lupa, kuzu
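If you plan to use one of the starred (Yes*) variants, you can confirm the SWI-Prolog prerequisite up front; a minimal check, assuming a Debian/Ubuntu system:
# Install SWI-Prolog and confirm the version is 9.1 or newer
sudo apt install swi-prolog-nox
swipl --version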
Rust Binary Variants (Prolog Support)#
For neurosymbolic AI with Prolog inference, additional binary variants are available:
| Binary | Prolog | Size (est.) | Description |
|---|---|---|---|
| | No | ~15MB | Core features, statically linked (musl) |
| | Yes* | ~18MB | With Prolog support; requires system SWI-Prolog |
| | Yes | ~50MB | Self-contained Rust binary with all libs bundled |
| | No | ~15MB | Core features, statically linked (musl) |
| | Yes* | ~18MB | With Prolog support; requires system SWI-Prolog |
| | Yes | ~50MB | Self-contained Rust binary with all libs bundled |
*Requires SWI-Prolog installed on the system (apt install swi-prolog-nox)
LLM-Bundled Distributions#
For offline LLM inference without network connectivity, LLM-bundled distributions include a pre-bundled GGUF model:
| Binary | Model | Size (est.) | Description |
|---|---|---|---|
| | Gemma 3n E4B | ~5GB | Rust + Gemma model (best quality) |
| | Gemma 3n E4B | ~5GB | Rust ARM64 + Gemma model |
| | Phi-4-mini Q3_K_S | ~2GB | Rust + Phi-4-mini (compact) |
| | Phi-4-mini Q3_K_S | ~2GB | Rust ARM64 + Phi-4-mini |
| | Gemma 3n E4B | ~5GB | Python + Gemma model |
| | Gemma 3n E4B | ~5GB | Python ARM64 + Gemma model |
| | Phi-4-mini Q3_K_S | ~2GB | Python + Phi-4-mini |
| | Phi-4-mini Q3_K_S | ~2GB | Python ARM64 + Phi-4-mini |
Model Comparison:
| Model | Context | Quality | Size | Best For |
|---|---|---|---|---|
| Gemma 3n E4B | 128K tokens | Higher | ~4.5GB | Complex reasoning, longer conversations |
| Phi-4-mini Q3_K_S | 32K tokens | Good | ~1.9GB | Quick responses, constrained storage |
Download and Run LLM AppImage#
# Download Gemma variant (recommended for quality)
curl -L https://github.com/fabceolin/the_edge_agent/releases/download/v0.9.5/tea-rust-llm-gemma-0.9.5-x86_64.AppImage -o tea-llm.AppImage
chmod +x tea-llm.AppImage
# Or download Phi-4-mini variant (smaller size)
curl -L https://github.com/fabceolin/the_edge_agent/releases/download/v0.9.5/tea-rust-llm-phi4-0.9.5-x86_64.AppImage -o tea-llm.AppImage
chmod +x tea-llm.AppImage
# Run offline chat workflow
./tea-llm.AppImage run examples/llm/local-chat.yaml --input '{"question": "What is 2+2?"}'
Model Path Configuration#
The LLM backend searches for models in this order:
| Priority | Source | Example |
|---|---|---|
| 1 | | |
| 2 | | |
| 3 | | See example below |
| 4 | Bundled model ($APPDIR/usr/share/models/) | AppImage bundled model (auto) |
| 5 | ~/.cache/tea/models/ | Default cache location |
Using Environment Variable:
# Set custom model path
export TEA_MODEL_PATH=/path/to/my-model.gguf
./tea-llm.AppImage run examples/llm/local-chat.yaml
Using YAML Settings:
settings:
  llm:
    backend: local
    model_path: ~/.cache/tea/models/gemma-3n-E4B-it-Q4_K_M.gguf
    n_gpu_layers: 0  # 0 = CPU only, -1 = all layers on GPU
Auto-Detection in AppImage:
When using LLM-bundled AppImages, the model path is automatically detected from the bundled location ($APPDIR/usr/share/models/). No configuration needed.
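For example, an LLM-bundled AppImage can run the chat workflow from the earlier example with no model configuration at all; a quick check, assuming the AppImage was saved as tea-llm.AppImage as above:
# No TEA_MODEL_PATH or model_path needed; the bundled model is detected automatically
./tea-llm.AppImage run examples/llm/local-chat.yaml --input '{"question": "What is 2+2?"}'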
Which Binary Should I Use?#
Distribution Selection Flowchart#
flowchart TD
A[Need LLM capabilities?] -->|No| B[Standard AppImage<br/>tea-version-arch.AppImage]
A -->|Yes| C{Need offline/bundled model?}
C -->|No| D[Standard + API LLM<br/>Configure with OPENAI_API_KEY]
C -->|Yes| E{Prefer quality or size?}
E -->|Quality| F[tea-rust-llm-gemma-version-arch.AppImage<br/>~5GB, 128K context]
E -->|Smaller| G[tea-rust-llm-phi4-version-arch.AppImage<br/>~2GB, 32K context]
B --> H[~50MB download]
D --> H
F --> I[~5GB download]
G --> J[~2GB download]
style F fill:#90EE90
style G fill:#87CEEB
Python Implementation:
flowchart TD
A[Need Prolog support?] -->|No| B{Need full deps?}
B -->|No| C[tea-python-linux-arch<br/>smallest, core deps]
B -->|Yes| D[tea-python-linux-arch-full<br/>all optional deps]
A -->|Yes| E{SWI-Prolog 9.1+ installed?}
E -->|No| F[tea-python-version-arch.AppImage<br/>RECOMMENDED: batteries-included]
E -->|Yes| G{Need full deps?}
G -->|No| H[tea-python-linux-arch-prolog<br/>core + Prolog]
G -->|Yes| I[tea-python-linux-arch-full-prolog<br/>all deps + Prolog]
style F fill:#90EE90
Rust Implementation:
flowchart TD
A[Need Prolog support?] -->|No| B[tea-rust-linux-arch<br/>smallest, static]
A -->|Yes| C{SWI-Prolog installed?}
C -->|Yes| D[tea-rust-linux-arch-prolog<br/>smaller]
C -->|No| E[tea-version-arch.AppImage<br/>self-contained]
style E fill:#90EE90
AppImage Installation#
AppImages are self-contained executables that bundle the tea binary, SWI-Prolog runtime, and all dependencies. No installation required!
Python AppImage (Recommended for Full Features)#
The Python AppImage includes ALL optional dependencies (openai, numpy, chromadb, pandas, etc.) plus janus-swi and the complete SWI-Prolog runtime. This is the batteries-included choice for neurosymbolic AI.
# Download the Python AppImage
curl -L https://github.com/fabceolin/the_edge_agent/releases/latest/download/tea-python-0.8.1-x86_64.AppImage -o tea-python.AppImage
# Make executable and run
chmod +x tea-python.AppImage
./tea-python.AppImage --version
# Verify it's the Python implementation
./tea-python.AppImage --impl
# Output: python
# Run a Prolog-enabled agent
./tea-python.AppImage run examples/prolog/simple-prolog-agent.yaml --input '{"value": 21}'
Rust AppImage (Recommended for Performance)#
The Rust AppImage includes the Rust binary with SWI-Prolog support, optimized for performance and smaller size.
# Download the Rust AppImage
curl -L https://github.com/fabceolin/the_edge_agent/releases/latest/download/tea-0.8.1-x86_64.AppImage -o tea.AppImage
# Make executable and run
chmod +x tea.AppImage
./tea.AppImage --version
# Verify it's the Rust implementation
./tea.AppImage --impl
# Output: rust
# Run a Prolog-enabled agent
./tea.AppImage run examples/prolog/simple-prolog-agent.yaml
AppImages work on any Linux distribution (Ubuntu, Fedora, Arch, Alpine, etc.) without installing SWI-Prolog or Python system-wide.
AppImage Requirements#
janus-swi: Bundled (no system installation needed)
SWI-Prolog: Bundled (no system installation needed)
Python: Bundled in Python AppImage (no system installation needed)
FUSE: Required to run AppImages natively. Most systems have it. If not:
# Use the --appimage-extract-and-run flag as a workaround
./tea-python.AppImage --appimage-extract-and-run --version
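If you prefer to run AppImages natively instead, installing FUSE is usually enough; a sketch for Debian/Ubuntu (the package name varies by distribution and release):
# Debian/Ubuntu: AppImages need the FUSE 2 runtime library
sudo apt install libfuse2
./tea-python.AppImage --version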
WASM LLM Package (Browser)#
For running TEA YAML workflows with LLM inference directly in the browser, use the WASM LLM package. This enables completely offline AI inference without any server backend.
WASM Release Assets#
| Asset | Size | Description |
|---|---|---|
| tea-wasm-llm-${VERSION}.tar.gz | ~50 MB | WASM package + TypeScript wrapper |
| microsoft_Phi-4-mini-instruct-Q3_K_S.gguf | ~1.9 GB | Bundled LLM model |
| manifest.json | <1 KB | Model metadata |
| | <1 KB | Checksums for WASM assets |
Installation#
# Download from GitHub Releases
VERSION="0.9.5"
wget https://github.com/fabceolin/the_edge_agent/releases/download/v${VERSION}/tea-wasm-llm-${VERSION}.tar.gz
wget https://github.com/fabceolin/the_edge_agent/releases/download/v${VERSION}/microsoft_Phi-4-mini-instruct-Q3_K_S.gguf
wget https://github.com/fabceolin/the_edge_agent/releases/download/v${VERSION}/manifest.json
# Extract and organize
tar -xzf tea-wasm-llm-${VERSION}.tar.gz
mkdir -p models
mv microsoft_Phi-4-mini-instruct-Q3_K_S.gguf manifest.json models/
Server Requirements#
CRITICAL: Your web server MUST set these headers for SharedArrayBuffer support:
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
See rust/tea-wasm-llm/README.md for Nginx, Apache, and Caddy configuration examples.
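Before loading the page, you can confirm the headers are actually being sent; a quick check with curl, assuming the site is served at http://localhost:8080 (adjust the URL to your setup):
# Inspect response headers for COOP/COEP
curl -sI http://localhost:8080/ | grep -i "cross-origin"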
Basic Usage#
<script type="module">
import { initTeaLlm, executeLlmYaml, loadBundledModel } from './pkg/index.js';
// Load model (cached in IndexedDB after first download)
const modelData = await loadBundledModel({
modelBasePath: './models',
useCache: true,
onProgress: (loaded, total) => console.log(`${(loaded/total*100).toFixed(1)}%`)
});
// Initialize with your LLM backend (e.g., wllama)
await initTeaLlm({}, async (paramsJson) => {
const params = JSON.parse(paramsJson);
// Call your LLM backend here
return JSON.stringify({ content: '...' });
});
// Execute YAML workflow
const result = await executeLlmYaml(yamlString, { input: 'hello' });
</script>
For complete documentation, see rust/tea-wasm-llm/README.md.
Python Installation#
From Source#
cd python && pip install -e .
python -c "import the_edge_agent as tea; print(tea.__version__)"
From Git#
pip install git+https://github.com/fabceolin/the_edge_agent.git
After installation, the tea command will be available globally.
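A quick sanity check after either install method:
# Confirm the CLI is on PATH
tea --version
tea --help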
Rust Installation#
From Source#
cd rust && cargo build --release
./target/release/tea --help
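If you want the freshly built binary on your PATH, copying it works the same way as in the Quick Install step; a minimal sketch, run from inside the rust/ directory:
# Copy the release binary onto PATH and verify
sudo cp target/release/tea /usr/local/bin/
tea --version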
Verify Downloads#
Each release includes SHA256SUMS.txt for verification:
# Download checksum file and binary
curl -L https://github.com/fabceolin/the_edge_agent/releases/latest/download/SHA256SUMS.txt -o SHA256SUMS.txt
curl -L https://github.com/fabceolin/the_edge_agent/releases/latest/download/tea-rust-linux-x86_64 -o tea-rust-linux-x86_64
# Verify (Linux)
sha256sum -c SHA256SUMS.txt --ignore-missing
# Verify (macOS)
shasum -a 256 -c SHA256SUMS.txt --ignore-missing
Implementations#
This is a polyglot monorepo with two implementations:
| Implementation | Status | Best For |
|---|---|---|
| Python | Production-ready | Online edge computing, full feature set, 20+ built-in actions |
| Rust | Active development | Embedded offline systems, resource-constrained environments |
The Python implementation is optimized for online edge computing scenarios where network connectivity enables access to external APIs, LLM services, and cloud resources. The Rust implementation is designed for embedded offline systems where minimal footprint, deterministic execution, and operation without network dependencies are critical.
Both implementations share the same YAML agent syntax and can run the same agent configurations from the examples/ directory.
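For instance, the Prolog example used earlier in this guide runs under either implementation; a sketch, assuming both AppImages were downloaded into the current directory:
# Same YAML agent, two implementations
./tea-python.AppImage run examples/prolog/simple-prolog-agent.yaml --input '{"value": 21}'
./tea.AppImage run examples/prolog/simple-prolog-agent.yaml --input '{"value": 21}'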
Repository Structure#
the_edge_agent/
+-- python/       # Python implementation (full features)
+-- rust/         # Rust implementation (performance)
+-- examples/     # Shared YAML agents (works with both)
+-- docs/
    +-- shared/   # Language-agnostic docs (YAML reference)
    +-- python/   # Python-specific guides
    +-- rust/     # Rust-specific guides
Next Steps#
CLI Reference - Command-line usage
YAML Reference - Agent configuration syntax